We have plans to support Spring Boot natively. Until then, I would not recommend using dependency injection for workflows. Workflows should not directly depend on any external objects, so injecting them as dependencies is not a good idea. See the documentation for the other limitations that workflow code must obey.
IoC autowiring should work for activities without any problems, as activities are normal code with practically no limitations.
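To illustrate, here is a hedged sketch of a Spring-autowired activity implementation. The names (`OcrActivities`, `StorageClient`, `extractText`) are illustrative and not from the thread; the point is that an activity implementation is a plain object, so ordinary constructor injection works.

```java
import io.temporal.activity.ActivityInterface;
import org.springframework.stereotype.Component;

@ActivityInterface
interface OcrActivities {
  String extractText(String documentKey);
}

// Any Spring-managed dependency; a single abstract method so it can be
// stubbed with a lambda in tests.
interface StorageClient {
  byte[] download(String key);
}

// Activity implementations are normal code, so constructor injection
// (or @Autowired) works without restrictions.
@Component
class OcrActivitiesImpl implements OcrActivities {
  private final StorageClient storage;

  OcrActivitiesImpl(StorageClient storage) {
    this.storage = storage;
  }

  @Override
  public String extractText(String documentKey) {
    byte[] pdf = storage.download(documentKey); // side effects are fine in activities
    return "text extracted from " + documentKey;
  }
}
```

The wired bean can then be handed to the worker, e.g. `worker.registerActivitiesImplementation(applicationContext.getBean(OcrActivitiesImpl.class))`.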
Thanks Maxim, I understand that the workflow needs to be deterministic. I was using dependency injection to pass a configuration object that included the task queue name. Perhaps overkill.
For future reference, I solved this use case by following http://blog.jdevelop.eu/?p=154 . It is a simple way of obtaining the application context inside the parameterless constructor and getting access to the properties.
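The pattern described in that post can be sketched roughly like this: a Spring-managed holder implements `ApplicationContextAware` and exposes the context statically, so a parameterless constructor can reach configured properties. The class and property names here are illustrative assumptions, not taken from the linked post.

```java
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class SpringContextHolder implements ApplicationContextAware {
  private static ApplicationContext context;

  // Spring calls this once at startup, letting us capture the context.
  @Override
  public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
    context = applicationContext;
  }

  public static ApplicationContext getContext() {
    return context;
  }
}

// Inside a parameterless constructor one could then do something like:
//   String taskQueue = SpringContextHolder.getContext()
//       .getEnvironment().getProperty("app.task-queue");
```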
Thank you for the quick response, wishing you a good day!
Michel,
I still think that accessing any configuration inside the workflow code is a bad idea. The reason is that configuration, by definition, can change, and such a change can break determinism. While changing an activity task queue name is indeed a backward-compatible change, it is too easy to use this approach for something that is not backward compatible.
I would recommend injecting the configuration into an activity implementation object and then returning it through a call to a GetConfiguration activity. This way it is recorded into the workflow execution history and will not break any running workflows if it is updated later.
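A minimal sketch of that pattern, assuming the Temporal Java SDK; `AppConfig`, `ConfigActivities`, and the field names are illustrative, not from the thread:

```java
import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityOptions;
import io.temporal.workflow.Workflow;
import java.time.Duration;

public class ConfigExample {

  // Plain serializable config object; its value gets recorded in the
  // workflow execution history when returned from the activity.
  public static class AppConfig {
    public String ocrTaskQueue;
  }

  @ActivityInterface
  public interface ConfigActivities {
    AppConfig getConfiguration();
  }

  // The activity implementation is normal code, so the configuration
  // can be injected here (e.g. by Spring).
  public static class ConfigActivitiesImpl implements ConfigActivities {
    private final AppConfig config;

    public ConfigActivitiesImpl(AppConfig config) {
      this.config = config;
    }

    @Override
    public AppConfig getConfiguration() {
      return config;
    }
  }

  // Workflow side: on replay the result comes from history, so later
  // configuration changes cannot break determinism of running workflows.
  public static class MyWorkflowImpl {
    private final ConfigActivities configActivities =
        Workflow.newActivityStub(
            ConfigActivities.class,
            ActivityOptions.newBuilder()
                .setStartToCloseTimeout(Duration.ofSeconds(10))
                .build());

    public void run() {
      AppConfig config = configActivities.getConfiguration();
      // use config.ocrTaskQueue to build further activity stubs, etc.
    }
  }
}
```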
Thanks Maxim, I like that idea. I also completely agree that it is much more in line with the design. I got a small PoC working with MinIO; now onto a Kubernetes deployment.
To connect to a remote Temporal deployment, do I only need to set the target to the IP/hostname through WorkflowServiceStubsOptions.Builder.setTarget(java.lang.String target)?
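For reference, a minimal connection sketch with the Java SDK; the hostname is a placeholder, and 7233 is the default frontend port. For a plain (non-TLS) connection, setting the target is indeed all that is needed; TLS setups additionally require SSL configuration on the stubs options.

```java
import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

public class RemoteConnection {
  public static void main(String[] args) {
    // Point the gRPC stubs at the remote Temporal frontend.
    WorkflowServiceStubs service =
        WorkflowServiceStubs.newServiceStubs(
            WorkflowServiceStubsOptions.newBuilder()
                .setTarget("temporal.example.com:7233") // placeholder host:port
                .build());

    // The client is then used to create workflow stubs as usual.
    WorkflowClient client = WorkflowClient.newInstance(service);
  }
}
```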
The workflow will OCR PDF documents and store the results in S3 so we can further process them later (entity recognition, etc.) and send them to Solr. The documents will come from a web scraper in Python and will need to be stored somewhere; NFS or S3 makes sense as intermediate storage, as files can be >10 MB.
A solution like Kafka might work as well, but we would lose the benefits of a centralized history, retry logic, job cancellation, etc. These are all things Temporal seems to offer in a way that is easier to reason about, letting us focus more on the business logic than on the plumbing. I also trust you have already given these things considerable thought, so why reinvent the wheel.
Yes, I agree; I'm pretty enthusiastic about the project. I checked out some other projects, but this one and the Argo project seem to be the most promising. Just hoping there isn't hell after the simple hello world.
@arpijoy - I did not. My original implementation was somewhat non-standard (we have to support both Temporal and internal orchestration at the same time), so I ended up doing a bit of my own Spring Boot integration.
But from what I see, applicaai's approach looks reasonable for the standard use case. Once we are clear to move to a pure-Temporal implementation, I may circle back to this and give it a try.