Long-running workflows, memory exhaustion, and parameter passing

So either I misunderstand this, or when there are too many running workflows the worker will evict some of them from memory and restore them later. Or is there something hidden I’m missing?

Workflows are not immediately removed from the worker’s memory. For efficiency, they are kept in an LRU cache, so once the cache is full the least recently used workflows are evicted from memory. The size of the cache is controlled by the worker.SetStickyWorkflowCacheSize global setting. An evicted workflow is not lost: its state is rebuilt by replaying its event history when it next receives an event. You can restart the worker at any time to confirm that workflows are not loaded on restart, then go to the UI and open the stack trace tab (while the worker is running) to see the stack trace of the blocked workflow.
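
A minimal sketch of setting the cache size with the Temporal Go SDK (the import paths, task queue name, and cache value are illustrative; the Cadence Go client exposes an equivalent function):

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Process-wide setting; must be called before any worker is started.
	worker.SetStickyWorkflowCacheSize(2048)

	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client:", err)
	}
	defer c.Close()

	w := worker.New(c, "example-task-queue", worker.Options{})
	// Workflow/activity registration omitted for brevity.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited:", err)
	}
}
```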

So is it safe to pass any kind of argument? Like a gRPC connection (more on that below), a DB connection, etc.? What about passing structs with unexported fields? Are those OK as well?
And what about a context that carries values? Is that also OK?

No, it is not safe to pass such objects. Arguments should be value types that can be serialized to a binary payload. The serialization is pluggable through the DataConverter, with JSON and Protobuf supported out of the box. Note that the default JSON converter ignores unexported struct fields, so they would be silently dropped. Here is the default DataConverter.
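
As an illustration of the kind of arguments that are safe with the default JSON-based DataConverter, here is a sketch; the struct, workflow, and activity names are made up for the example:

```go
package app

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// GreetingInput is safe to pass: only exported, serializable value fields.
// An unexported field such as a *grpc.ClientConn would be silently dropped by
// the default JSON DataConverter, and a connection handle would be meaningless
// on another worker anyway.
type GreetingInput struct {
	Name     string
	Language string
}

// GreetingWorkflow receives a plain value-type argument.
func GreetingWorkflow(ctx workflow.Context, input GreetingInput) (string, error) {
	ao := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var greeting string
	err := workflow.ExecuteActivity(ctx, "ComposeGreeting", input).Get(ctx, &greeting)
	return greeting, err
}
```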

Passing connections into an Activity

Connections are passed to the activity struct when it is initialized, and the activity methods then access them as struct fields; they are never passed through workflow or activity arguments. See the greetings sample.
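
A minimal sketch of that pattern with the Temporal Go SDK (the Activities struct, the SQL query, and the task queue name are illustrative and not taken from the greetings sample):

```go
package app

import (
	"context"
	"database/sql"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

// Activities holds process-local dependencies such as DB or gRPC connections.
// These are never serialized; each worker process constructs its own instance.
type Activities struct {
	DB *sql.DB
}

// GetUserName uses the shared connection held by the struct.
func (a *Activities) GetUserName(ctx context.Context, userID int64) (string, error) {
	var name string
	err := a.DB.QueryRowContext(ctx, "SELECT name FROM users WHERE id = ?", userID).Scan(&name)
	return name, err
}

func RunWorker(c client.Client, db *sql.DB) error {
	w := worker.New(c, "example-task-queue", worker.Options{})
	// Register the struct instance; its exported methods become activities.
	w.RegisterActivity(&Activities{DB: db})
	return w.Run(worker.InterruptCh())
}
```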