Hi Maxim,
Thanks for the insights! The reason I was pushing for the submarine to have its own workflows is that it's a fairly autonomous device doing a lot of complex processing. In other words, it could benefit from Durable Execution capabilities.
What I had in mind is an Entity Workflow running in the cloud (let's call it MissionPlanner) that receives ExplorationRequests (waypoints to visit). It would break them down into convenient-sized missions and schedule them for the submarines by spawning a child workflow per mission (let's call it ExecuteMission) and waiting for its completion (success, timeout, failure, etc.). From these results, it can compose the ExplorationResult that the end user will browse.
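To make the question concrete, here is a minimal sketch of the mission-splitting step I have in mind. This is plain Python, not Temporal SDK code; the names (Waypoint, Mission, split_into_missions) and the "convenient size" threshold are all hypothetical:

```python
# Sketch of how MissionPlanner might break an ExplorationRequest into
# convenient-sized missions. In the real system this logic would live
# inside the MissionPlanner Entity Workflow, and each chunk would become
# one ExecuteMission child workflow. All names here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    lat: float
    lon: float
    depth_m: float

@dataclass
class Mission:
    mission_id: str
    waypoints: List[Waypoint]

MAX_WAYPOINTS_PER_MISSION = 3  # "convenient size" is an assumption

def split_into_missions(request_id: str, waypoints: List[Waypoint]) -> List[Mission]:
    """Chunk the requested waypoints into missions of bounded size."""
    missions: List[Mission] = []
    for i in range(0, len(waypoints), MAX_WAYPOINTS_PER_MISSION):
        chunk = waypoints[i:i + MAX_WAYPOINTS_PER_MISSION]
        missions.append(Mission(f"{request_id}-{len(missions)}", chunk))
    return missions

wps = [Waypoint(0.0, float(i), 100.0) for i in range(7)]
missions = split_into_missions("req-42", wps)
print([len(m.waypoints) for m in missions])  # → [3, 3, 1]
```

MissionPlanner would then await one ExecuteMission child workflow per Mission and aggregate the outcomes.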
The ExecuteMission workflow would run on the submarine, and some time after it starts, the submarine will lose connectivity to the server. Based on the documentation about local activities (Local Activity | Temporal Platform Documentation), I had the impression that if the ExecuteMission workflow only spawns local activities while disconnected, they would be executed by the worker without needing a server connection. Before ExecuteMission finishes, connectivity would be available again, so the results could be communicated back.
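To spell out the assumption I am making here, this is the control flow I imagine, as a plain-Python simulation. This is not Temporal SDK code; SimulatedSubmarineWorker, run_local_activity, and the connectivity flag are just illustrations of my mental model, which may be exactly where the weak spot is:

```python
# Plain-Python simulation of my mental model of ExecuteMission on the
# submarine, NOT Temporal SDK code. The assumption under question:
# while disconnected, local activities run entirely inside the worker
# process, and the server is only needed again to report results.
from typing import Callable, List

class SimulatedSubmarineWorker:
    def __init__(self) -> None:
        self.connected = True          # link to the Temporal server
        self.pending_results: List[str] = []

    def run_local_activity(self, activity: Callable[[], str]) -> None:
        # Assumed to execute in-process, with no server round-trip.
        self.pending_results.append(activity())

    def report_results(self) -> List[str]:
        if not self.connected:
            raise ConnectionError("no link to the Temporal server")
        results, self.pending_results = self.pending_results, []
        return results

worker = SimulatedSubmarineWorker()
worker.connected = False               # submarine dives, link is lost
worker.run_local_activity(lambda: "scanned waypoint 1")
worker.run_local_activity(lambda: "scanned waypoint 2")
worker.connected = True                # submarine surfaces
print(worker.report_results())         # → ['scanned waypoint 1', 'scanned waypoint 2']
```

If real local activities do need server contact at some point during a long disconnection (e.g. for workflow task heartbeating or timeouts), that would break this picture, and I would like to understand where.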
Based on your comment, I have the feeling that my line of thinking has a weak spot. Could you please help me understand why exactly this approach would run into issues?
You mentioned another interesting alternative: the submarine could run its own Temporal instance, so disconnection would not be an issue. Is this a use case for Temporal Nexus, to integrate the submarine logic with the cloud logic?
Best
Andras