Theoretical discussion regarding the use of lightweight transactions (LWTs) at very high scale (Apple, Netflix, etc.).
My theory: every conditional write (LWT) generates several small SSTable entries that the database has to rewrite again and again during compaction. When you double the LWT traffic, the background disk work grows roughly fourfold (extra writes × repeated rewrites), but adding servers only increases aggregate disk bandwidth linearly. So past a certain load the disks fall behind, the compaction backlog skyrockets, and latency blows up, no matter how many CPU cores you throw at the cluster.
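To make the theory concrete, here is a back-of-envelope sketch (all constants invented for illustration, not measured): if each unit of LWT traffic both adds writes and raises the per-write rewrite count, background work grows roughly quadratically, while cluster disk bandwidth grows linearly with node count, so there is a crossover load where the disks can no longer keep up.

```python
# Hypothetical model of the claim above. `amp_per_rate` encodes the
# assumption that compaction amplification itself grows with write rate;
# nothing here is a measured Cassandra constant.

def compaction_work(lwt_rate, amp_per_rate=0.5):
    # extra writes (proportional to rate) x repeated rewrites
    # (amplification assumed proportional to rate) => ~quadratic
    return lwt_rate * (1 + amp_per_rate * lwt_rate)

def disk_bandwidth(nodes, per_node=100.0):
    # aggregate bandwidth scales linearly with cluster size
    return nodes * per_node

# Doubling the LWT rate roughly quadruples the work at high rates:
ratio = compaction_work(200) / compaction_work(100)
print(ratio)  # ~3.96, close to the claimed 4x

# First load at which a 10-node cluster's disks fall behind:
rate = 0
while compaction_work(rate) <= disk_bandwidth(10):
    rate += 1
print(rate)  # crossover point where the backlog starts growing
```

Under this model, scaling out only moves the crossover point outward linearly while the work curve keeps bending upward, which is exactly the "no amount of CPU helps" behavior described above.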
Is this correct? Given Temporal’s heavy use of LWTs, which part of the system fails to scale, or remains an irreducible bottleneck, under very high APS/RPS?