This is mere curiosity and ignorance on my part.
Let's say I have a small Temporal application developed with the Go SDK, comprising some workflows, activities, and related Go functions.
How does Temporal application assembly and deployment work? Does it follow typical Go development practices?
How is the Docker image that contains the Temporal service deployed?
How are the Temporal service and the Temporal client application (activities and workflows) deployed in production?
What exactly is the client tier for a Temporal server and the Temporal application developed against it?
How do mobile/web clients communicate with the services in a deployed Temporal application?
Will there be a REST API application between mobile/web clients and the Temporal service, since the CLI is more of a developer tool?
How do third-party applications communicate with a Temporal application?
As a developer, what do I need to know or take care of regarding the database, such as Apache Cassandra or MySQL?
How are authentication and authorization handled? What about data security?
Can someone help answer the above questions? They may overlap, but a general briefing will help me understand better.
Temporal is not opinionated about the way an application is deployed. Both workflow and activity code is deployed and operated outside of the control of the Temporal gRPC service. So we recommend whatever deployment infrastructure you are already using for your Go or Java applications.
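For illustration, a worker process in Go is just a regular binary that you build and run yourself with whatever tooling you already use; a minimal sketch (the frontend address, task queue, and function names here are made up for the example):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
	"go.temporal.io/sdk/workflow"
)

// ExampleWorkflow and ExampleActivity stand in for your application code.
func ExampleWorkflow(ctx workflow.Context, name string) (string, error) {
	ao := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var result string
	err := workflow.ExecuteActivity(ctx, ExampleActivity, name).Get(ctx, &result)
	return result, err
}

func ExampleActivity(ctx context.Context, name string) (string, error) {
	return "Hello, " + name, nil
}

func main() {
	// The worker connects out to the Temporal service; the service never
	// reaches into this process, so you deploy it like any other Go binary
	// (container, VM, bare metal, ...).
	c, err := client.Dial(client.Options{HostPort: "temporal-frontend:7233"}) // hypothetical address
	if err != nil {
		log.Fatalln("unable to create Temporal client", err)
	}
	defer c.Close()

	w := worker.New(c, "example-task-queue", worker.Options{})
	w.RegisterWorkflow(ExampleWorkflow)
	w.RegisterActivity(ExampleActivity)

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited", err)
	}
}
```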
The service itself is just a Go binary that needs some environment variables and configuration files to start, so it can be deployed in a variety of ways. Docker is the most common way to package the binary, and Kubernetes is the most common way to deploy it to production. You can use the supported helm-chart. For development, we recommend the docker-compose deployment of the service on a local machine. Mark provided more details on this one.
No hard requirements here. The most common at this point is K8s, but there are many other ways to deploy them.
Use the approach you are already using. Temporal doesn’t impose any requirements on the client tier deployment besides that it should be able to reach the service gRPC frontends through some sort of load balancer.
They usually don’t communicate with Temporal service directly but through some sort of application-specific services.
We might add support for a REST API at some point in the future, but currently gRPC is the only option. The CLI is mostly for development.
They usually communicate through some sort of application-specific proxy. If all the applications are part of the same organization they can communicate through the service gRPC API directly or through the Java or Go SDKs.
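As a rough sketch of that pattern, an application-specific service could expose its own HTTP endpoint to web/mobile or third-party clients and use the Go SDK internally to start workflows. Everything below (endpoint, workflow name, task queue, frontend address) is hypothetical:

```go
package main

import (
	"log"
	"net/http"

	"go.temporal.io/sdk/client"
)

func main() {
	// The application service reaches the Temporal frontend through a load
	// balancer / service address; external clients only ever talk to this
	// HTTP service, never to Temporal directly.
	c, err := client.Dial(client.Options{HostPort: "temporal-frontend.example.internal:7233"})
	if err != nil {
		log.Fatalln("unable to create Temporal client", err)
	}
	defer c.Close()

	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		orderID := r.URL.Query().Get("id")

		// Start a workflow on behalf of the external caller.
		run, err := c.ExecuteWorkflow(r.Context(), client.StartWorkflowOptions{
			ID:        "order-" + orderID,
			TaskQueue: "order-task-queue",
		}, "OrderWorkflow", orderID)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte(run.GetID()))
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```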
Currently, the Temporal MySQL binding doesn’t support sharding. So if you need a highly scalable deployment, you want to go with Cassandra.
TLS is supported to secure the connections. We are going to add more authentication and authorization features in the future.
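For example, with the Go SDK a TLS connection can be configured through the client’s connection options. This is only a sketch; the certificate paths and frontend address are placeholders:

```go
package main

import (
	"crypto/tls"
	"log"

	"go.temporal.io/sdk/client"
)

func main() {
	// Load a client certificate issued for this worker or service
	// (hypothetical file paths).
	cert, err := tls.LoadX509KeyPair("client.pem", "client.key")
	if err != nil {
		log.Fatalln("failed to load client certificate", err)
	}

	c, err := client.Dial(client.Options{
		HostPort: "temporal-frontend.example.internal:7233", // hypothetical address
		ConnectionOptions: client.ConnectionOptions{
			TLS: &tls.Config{
				Certificates: []tls.Certificate{cert},
				ServerName:   "temporal-frontend.example.internal",
			},
		},
	})
	if err != nil {
		log.Fatalln("unable to create TLS Temporal client", err)
	}
	defer c.Close()
}
```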
Speaking of activity worker deployment, I wonder if you have any thoughts on a way to run those as on-demand serverless functions instead of long-running daemon processes.
The motivation is that on a public cloud, keeping such components always on costs money. If we instead had a way of triggering/spinning up such components as part of an activity invocation, we could save by running them only when there is actual work to do, and we could manage capacity elastically, growing it in line with demand and shutting it down when demand drops.
Have you seen or thought about integrating the task queueing that takes place in the Temporal engine with Function-as-a-Service frameworks that could be used to run workers on demand?
If not, that’s OK; I am just starting a project where I might try to build that type of integration myself.
The idea is effectively to wrap the activity invocation with a call to a FaaS proxy that would first start an activity worker, which would in turn claim the activity task from the corresponding queue.
If instead we could get Temporal itself to post a message to the FaaS proxy or queue off the back of the task being queued, the whole thing would be more seamless and robust/reliable.
Do you think the above is at all of value? More than happy to discuss further if you do.
I don’t think you need anything complicated to support your requirements. Just implement an activity that invokes your lambda function when invoked. You still need to host that particular activity, but as you are already hosting the whole service it shouldn’t be a problem.
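A rough sketch of such an activity in Go, assuming AWS Lambda and the aws-sdk-go package (the function name is a placeholder, and the payload format is up to you):

```go
package activities

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

// InvokeLambdaActivity forwards the activity input to an AWS Lambda function
// and returns its response payload. The worker hosting this activity stays
// long-running, but the actual work happens on demand inside the function.
func InvokeLambdaActivity(ctx context.Context, payload []byte) ([]byte, error) {
	sess := session.Must(session.NewSession())
	svc := lambda.New(sess)

	out, err := svc.InvokeWithContext(ctx, &lambda.InvokeInput{
		FunctionName: aws.String("my-activity-function"), // hypothetical function name
		Payload:      payload,
	})
	if err != nil {
		return nil, err
	}
	if out.FunctionError != nil {
		return nil, fmt.Errorf("lambda error: %s", *out.FunctionError)
	}
	return out.Payload, nil
}
```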
Hi @maxim, are there any updates on configurable sharding for MySQL? Right now I understand we’re limited to the fixed history-shard behavior; are there plans to shard workflows across multiple logical databases?
I tried to install it on GKE using helm as described above, but it times out after some time: Error: failed post-install: timed out waiting for the condition
I think the problem is with Cassandra readiness, based on the output below:
```
temporal-helm-charts % kubectl get pod
NAME                                      READY   STATUS     RESTARTS   AGE
temporaltest-admintools-7c756b677-bmwch   1/1     Running    0          24m
temporaltest-cassandra-0                  0/1     Running    0          24m
temporaltest-frontend-5669459b8b-5642g    0/1     Init:0/4   0          24m
temporaltest-history-7fb6b75f9c-6hbkz     0/1     Init:0/4   0          24m
temporaltest-matching-78c8d69db4-tpfkg    0/1     Init:0/4   0          24m
temporaltest-web-57745b7465-6dv8j         1/1     Running    0          24m
temporaltest-worker-59877c5cc4-lx4wf      0/1     Init:0/4   0          24m
```
How do I find the problem with Cassandra?
Thanks in advance!
I further found that ‘nodetool status’ on the Cassandra pod is taking too long (>20s), which is causing the liveness/readiness probes to fail. Any idea why that could be taking so long?