Why is the retention period restricted to 30 days?

I understand the reason behind the retention and archiving of completed workflows, but can't the retention period be configurable based on user needs? If Temporal runs on a scalable data store like Cassandra, do users still need to constrain themselves to the 30-day archival limit? Is this because there could be performance issues with Cassandra even with no restriction on cluster size?

As of now, if we choose to keep the data for as long as we need, is keeping the workflow open the only option? Can you please suggest the best way to do so?
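For context, retention is set per namespace at registration time. A minimal sketch with the Go SDK follows; the host, namespace name, and retention value are placeholders, and the exact request field names vary across SDK versions, so treat this as an illustration rather than a drop-in snippet:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.temporal.io/api/workflowservice/v1"
	"go.temporal.io/sdk/client"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// Connect to the frontend service (placeholder address).
	nc, err := client.NewNamespaceClient(client.Options{HostPort: "localhost:7233"})
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Request a 7-day retention period for a hypothetical namespace.
	// Whether values above the server's cap are accepted depends on
	// server configuration.
	err = nc.Register(context.Background(), &workflowservice.RegisterNamespaceRequest{
		Namespace:                        "my-namespace",
		WorkflowExecutionRetentionPeriod: durationpb.New(7 * 24 * time.Hour),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```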

Certain race conditions in archival might lead to data loss with retention periods over 30 days. We are planning work to fix the archival implementation, which would in turn allow unlimited retention (up to DB capacity).

Can you please elaborate on how the race condition kicks in only when the retention period is more than 30 days? Please point me to the issue tracker if one is already available.

As of now, this forces us to use an additional DB to store domain state rather than relying only on the Temporal database, since we would lose the data after 30 days. Should we consider keeping the workflows open to retain the data?

Keeping workflows open is a reasonable workaround.
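A minimal sketch of that workaround with the Go SDK: a workflow that never completes on its own, accumulating state from signals, and calling ContinueAsNew so the event history stays bounded. The workflow name, signal name, and event threshold are all hypothetical:

```go
package app

import (
	"go.temporal.io/sdk/workflow"
)

// DomainStateWorkflow keeps domain state alive indefinitely by staying
// open, instead of relying on post-completion retention. It receives
// key/value updates via a signal and periodically restarts itself with
// ContinueAsNew so the history never grows unbounded.
func DomainStateWorkflow(ctx workflow.Context, state map[string]string) error {
	if state == nil {
		state = make(map[string]string)
	}

	// Hypothetical threshold: restart the run after this many updates.
	const maxUpdatesPerRun = 1000

	updates := workflow.GetSignalChannel(ctx, "update-state")
	for i := 0; i < maxUpdatesPerRun; i++ {
		var kv [2]string
		updates.Receive(ctx, &kv)
		state[kv[0]] = kv[1]
	}

	// Carry the accumulated state into a fresh run with a clean history.
	return workflow.NewContinueAsNewError(ctx, DomainStateWorkflow, state)
}
```

Note that an open workflow's data is queryable (via queries or signals) but is not a general-purpose database; if you need ad hoc reads of domain state, an external store may still be the better fit.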