It’s my understanding that one worker entity is sufficient for even a large number of workflows
Yes, a single worker is capable of executing a large number of concurrently running workflows. Depending on your load and expected performance, you would need to use SDK and server metrics to fine-tune your worker(s), set the correct worker capacity, and determine the right number of pollers for your workflow and activity workers.
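As a rough illustration, here is a minimal sketch of where those knobs live, assuming the Go SDK; the task queue name and all numeric values are placeholders you would tune from metrics, not recommendations:

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Connect to the Temporal server (defaults to localhost:7233).
	c, err := client.Dial(client.Options{})
	if err != nil {
		log.Fatalln("unable to create Temporal client", err)
	}
	defer c.Close()

	// Worker capacity and poller counts, tuned from SDK/server metrics.
	w := worker.New(c, "your-task-queue", worker.Options{
		MaxConcurrentWorkflowTaskExecutionSize: 100, // workflow task slots
		MaxConcurrentActivityExecutionSize:     200, // activity task slots
		MaxConcurrentWorkflowTaskPollers:       4,   // workflow task pollers
		MaxConcurrentActivityTaskPollers:       4,   // activity task pollers
	})

	// w.RegisterWorkflow(...) and w.RegisterActivity(...) would go here.

	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited", err)
	}
}
```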
For deployment, you want to run replicas of your worker processes for resilience, and size the resources you give each worker pod based on whether your activities are memory- and/or CPU-intensive.
One such workflow will have an activity that is essentially a mapreduce function
To my knowledge, the server team is working on improved batch execution support that will be added in the future. For best practices and ideas on implementing this, see the forum posts here (look for the iterator pattern) and here (doing batching inside a long-running activity).
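To give a feel for the iterator pattern, here is a hedged sketch in Go: each iteration runs one page of the "map" work as an activity, and the workflow calls continue-as-new periodically so the event history stays bounded. `ProcessPage`, `pagesPerRun`, and the termination condition are all hypothetical placeholders:

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// ProcessPage is a placeholder activity that would process one page of
// input and report whether the input is exhausted.
func ProcessPage(ctx context.Context, page int) (bool, error) {
	// ... fetch and process records for this page ...
	return page >= 10, nil // placeholder termination condition
}

// BatchWorkflow sketches the iterator pattern: one activity per page,
// with continue-as-new after a fixed number of pages per run.
func BatchWorkflow(ctx workflow.Context, startPage int) error {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: 10 * time.Minute,
		HeartbeatTimeout:    30 * time.Second,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	const pagesPerRun = 100 // continue-as-new after this many pages

	for i := 0; i < pagesPerRun; i++ {
		var done bool
		if err := workflow.ExecuteActivity(ctx, ProcessPage, startPage+i).Get(ctx, &done); err != nil {
			return err
		}
		if done {
			return nil
		}
	}

	// Start a fresh run that picks up where this one left off.
	return workflow.NewContinueAsNewError(ctx, BatchWorkflow, startPage+pagesPerRun)
}
```

The alternative mentioned above, batching inside a single long-running activity, would instead loop over the records in activity code and heartbeat progress so the activity can resume from the last processed offset after a retry.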
Would it make sense for each workflow to have its own worker entity for its group of activities?
Workflow executions do not depend on a specific worker. Having a worker per workflow type could lead to more complex deployment scenarios for your worker processes (as you could end up with a large number of them). For activities, look at your rate-limiting requirements and decide whether any of your activities should run on their own task queue with a dedicated rate limit (see the forum post here for more info).
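For example, a hedged sketch of the dedicated-task-queue approach in Go might look like the following; the task queue name, the activity `CallThirdPartyAPI`, and the limit of 10 activities per second are assumptions for illustration only:

```go
package app

import (
	"context"
	"time"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
	"go.temporal.io/sdk/workflow"
)

// CallThirdPartyAPI is a placeholder for an activity that must be rate limited.
func CallThirdPartyAPI(ctx context.Context, input string) (string, error) {
	// ... call the external service ...
	return "ok", nil
}

// StartRateLimitedWorker runs an activity worker on a dedicated task queue
// with a rate limit shared by all workers polling that queue.
func StartRateLimitedWorker(c client.Client) error {
	w := worker.New(c, "rate-limited-tq", worker.Options{
		TaskQueueActivitiesPerSecond: 10, // placeholder limit
	})
	w.RegisterActivity(CallThirdPartyAPI)
	return w.Run(worker.InterruptCh())
}

// SomeWorkflow routes only the rate-limited activity to that task queue;
// other activities keep using the workflow's default task queue.
func SomeWorkflow(ctx workflow.Context, input string) (string, error) {
	ao := workflow.ActivityOptions{
		TaskQueue:           "rate-limited-tq",
		StartToCloseTimeout: time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var result string
	err := workflow.ExecuteActivity(ctx, CallThirdPartyAPI, input).Get(ctx, &result)
	return result, err
}
```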