If the records are small, the first activity can simply return them to the workflow, and the workflow can pass them as input to the other activities.
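For example, with the Go SDK that option might look roughly like the sketch below. `FetchRecords` and `ProcessRecords` are hypothetical activity names standing in for your own activities:

```go
package pipeline

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// Record is a placeholder for whatever your record type actually is.
type Record struct {
	ID   string
	Data string
}

// PipelineWorkflow: the first activity returns the records directly, and
// the workflow passes the same slice as input to the next activity. The
// records travel through the Temporal server both times.
func PipelineWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var records []Record
	if err := workflow.ExecuteActivity(ctx, "FetchRecords").Get(ctx, &records); err != nil {
		return err
	}

	return workflow.ExecuteActivity(ctx, "ProcessRecords", records).Get(ctx, nil)
}
```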
If the records are too large to comfortably pass through the Temporal server, the first activity could save the records somewhere (e.g. to disk, to S3, etc.) and pass a reference to the saved data back to the workflow (the filename on disk, or the S3 bucket and key, etc.). The workflow could then call the other activities with the location of the data.
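A rough sketch of that reference-passing option, again in Go. `FetchAndStoreRecords` and `ProcessStoredRecords` are hypothetical activities that write the data to and read it back from S3 (or shared disk); only the small `ObjectRef` goes through the server:

```go
package pipeline

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// ObjectRef points at records an activity has already written to S3 or a
// shared disk. Only this small reference is passed through the workflow.
type ObjectRef struct {
	Bucket string
	Key    string
}

func LargeDataWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// The first activity saves the records and returns their location.
	var ref ObjectRef
	if err := workflow.ExecuteActivity(ctx, "FetchAndStoreRecords").Get(ctx, &ref); err != nil {
		return err
	}

	// The next activity loads the records itself using the reference.
	return workflow.ExecuteActivity(ctx, "ProcessStoredRecords", ref).Get(ctx, nil)
}
```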
If you want to pass the records in memory from the first activity to the other activities, so that you don’t need to save the data anywhere, you need to ensure that all the activities involved run on the same worker so that they share the same memory space. (See Best way to streaming data between activities in Temporal - #7 by antonio.perez.) The disadvantage is lower availability: if your single worker crashes, or you restart it for a code update, processing can’t continue until it’s back up.
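If you’re on the Go SDK, its session feature is one way to pin all the activities to a single worker (the worker has to be started with `worker.Options{EnableSessionWorker: true}`); in other SDKs you can get the same effect with a worker-specific task queue. A rough sketch with sessions, using hypothetical activity names:

```go
package pipeline

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// SameWorkerWorkflow pins both activities to the worker that owns the
// session, so the producing activity can hand records to the consuming
// activity through process memory (e.g. a package-level cache keyed by
// the handle returned below).
func SameWorkerWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: 10 * time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	so := &workflow.SessionOptions{
		CreationTimeout:  time.Minute,
		ExecutionTimeout: 30 * time.Minute,
	}
	sessionCtx, err := workflow.CreateSession(ctx, so)
	if err != nil {
		return err
	}
	defer workflow.CompleteSession(sessionCtx)

	// Both activities run on the same worker, so an in-memory handle
	// (a cache key, etc.) returned by the first is valid in the second.
	var handle string
	if err := workflow.ExecuteActivity(sessionCtx, "ProduceRecordsInMemory").Get(sessionCtx, &handle); err != nil {
		return err
	}
	return workflow.ExecuteActivity(sessionCtx, "ConsumeRecordsInMemory", handle).Get(sessionCtx, nil)
}
```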
And finally, if your activities are simply transforming data (as opposed to taking actions such as making API calls), and the processing fits naturally into MapReduce or another data-processing pipeline system, you might want to move the data processing into such a system and use workflows to control it through its API.
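In that last setup the workflow just orchestrates the external system: one activity submits the job, another polls its status. A very rough sketch, where `SubmitPipelineJob` and `GetPipelineJobStatus` are hypothetical activities wrapping whatever API your data processing system exposes:

```go
package pipeline

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// DriveExternalPipeline submits a job to an external data processing
// system (Spark, EMR, a MapReduce cluster, etc.) and waits for it to
// finish by polling through an activity.
func DriveExternalPipeline(ctx workflow.Context, jobSpec string) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var jobID string
	if err := workflow.ExecuteActivity(ctx, "SubmitPipelineJob", jobSpec).Get(ctx, &jobID); err != nil {
		return err
	}

	// Poll until the external system reports the job is done.
	for {
		var done bool
		if err := workflow.ExecuteActivity(ctx, "GetPipelineJobStatus", jobID).Get(ctx, &done); err != nil {
			return err
		}
		if done {
			return nil
		}
		if err := workflow.Sleep(ctx, 30*time.Second); err != nil {
			return err
		}
	}
}
```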