Hi everyone, I have a couple of noob questions…
I did manage to set up a single instance of Cadence on OpenShift, but I want to prepare for the future, so here we go:
• Can I later change the configuration and add clusters across multiple availability zones, or do I have to start from scratch?
• I created a domain as non-global, and I understand this cannot be changed, but I also can’t delete the domain to start over… is there no delete-domain action?
• For our use case, the best-suited setup would be to spin up a dynamic number of child workflows based on the current template from the database. So my question is: if the main workflow replays, does it try to recreate the already existing child workflows and run into a deadlock because duplicates are not allowed? Or do I have to fail the child workflows to allow replay after a catastrophe?
• And my last question: can I provide a folder in the archival s3:// path, or do I need a separate bucket for the default and possibly more buckets for domains that deviate from the default?
Welcome to the community!
Can I later change the configuration and add clusters across multiple availability zones, or do I have to start from scratch?
Yes, it is possible to upgrade clusters and change their configuration without downtime; that includes adding a multi-cluster setup later on.
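For a concrete picture, the multi-cluster (cross-DC) setup is driven by the cluster-metadata section of the server’s static config, so a second cluster can be introduced later by extending that section and rolling it out. A rough, illustrative sketch only: the cluster names and addresses below are made up, and the exact field names differ between Cadence versions (newer releases use clusterGroupMetadata/primaryClusterName instead of clusterMetadata/masterClusterName):

```yaml
clusterMetadata:
  enableGlobalDomain: true        # required for domains that span clusters
  failoverVersionIncrement: 10
  masterClusterName: "cluster0"   # the cluster you already run
  currentClusterName: "cluster0"
  clusterInformation:
    cluster0:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "cadence-frontend.zone-a:7933"
    cluster1:                     # added later, e.g. in another zone or region
      enabled: true
      initialFailoverVersion: 2
      rpcName: "cadence-frontend"
      rpcAddress: "cadence-frontend.zone-b:7933"
```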
I created a domain as non-global, and I understand this cannot be changed, but I also can’t delete the domain to start over… is there no delete-domain action?
There is no delete-domain action yet. Closed workflows are deleted after the configured retention period, so an unused domain stops consuming resources once it has no open workflows left.
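So in practice you can just register a fresh domain with the settings you want and leave the old one idle. Retention (and whether the domain is global) is set at registration time; an illustrative CLI invocation (domain name and values are made up):

```
cadence --domain my-new-domain domain register --retention 7 --global_domain false
```

`--global_domain true` is the flag you would use instead if you later run a multi-cluster setup with global domains enabled.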
For our use case, the best-suited setup would be to spin up a dynamic number of child workflows based on the current template from the database. So my question is: if the main workflow replays, does it try to recreate the already existing child workflows and run into a deadlock because duplicates are not allowed? Or do I have to fail the child workflows to allow replay after a catastrophe?
I’m not sure what you mean by “replay on a catastrophe”. If you mean the replay that Cadence uses to restore workflow state, then you shouldn’t really have to think about it: it automatically does the right thing, and your code doesn’t even notice that recovery happened.
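To make that concrete for the child-workflow case: replay re-runs the workflow function against the recorded history, and a child workflow that was already started is resolved from history rather than started again, so there is no duplicate and no deadlock. A minimal Go sketch (ChildTemplate and ChildWorkflow are made-up names for illustration); the one real constraint is that the list of children must be derived deterministically, e.g. passed as workflow input or fetched via an activity, not read from the database inside the workflow function:

```go
package sample

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// ChildTemplate stands in for whatever you load from your database
// (hypothetical type, for illustration only).
type ChildTemplate struct {
	ID string
}

// ParentWorkflow starts one child workflow per template. On replay the
// client SDK re-runs this function against the recorded history, so an
// ExecuteChildWorkflow call whose child was already started is resolved
// from history instead of starting a duplicate.
func ParentWorkflow(ctx workflow.Context, templates []ChildTemplate) error {
	cwo := workflow.ChildWorkflowOptions{
		ExecutionStartToCloseTimeout: time.Hour,
	}
	ctx = workflow.WithChildOptions(ctx, cwo)

	var futures []workflow.Future
	for _, t := range templates {
		// The template list must come from workflow input or an activity
		// result so that replay sees exactly the same children again.
		futures = append(futures, workflow.ExecuteChildWorkflow(ctx, ChildWorkflow, t))
	}
	for _, f := range futures {
		if err := f.Get(ctx, nil); err != nil {
			return err
		}
	}
	return nil
}

// ChildWorkflow is a placeholder child workflow for illustration.
func ChildWorkflow(ctx workflow.Context, t ChildTemplate) error {
	return nil
}
```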
And my last question: can I provide a folder in the archival s3:// path, or do I need a separate bucket for the default and possibly more buckets for domains that deviate from the default?
Looking at the code and documentation of the S3 archival provider, it isn’t clear whether it will work with a folder in the path. AFAIK S3 doesn’t really have folders; they are just key prefixes.
In any case it wouldn’t be hard to add such a feature if it is not already supported.
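If you want to experiment, the place to try a prefix is the archival URI in the static config. A rough sketch of the relevant sections (bucket name and region are made up, and whether the s3://bucket/prefix form is honored by the s3store provider is exactly the open question above, so treat it as untested):

```yaml
archival:
  history:
    status: "enabled"
    enableRead: true
    provider:
      s3store:
        region: "eu-west-1"

domainDefaults:
  archival:
    history:
      status: "enabled"
      URI: "s3://my-archival-bucket/cadence/history"   # prefix form - may or may not be accepted
```

Individual domains can also be given their own archival URI when they are registered, if they need to deviate from the default.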