1. Start
2. Load `queue.json` from the object store
3. Receive request(s)
4. Edit the in-memory JSON with the batch data
5. Save the data with CAS
6. On failure not due to CAS, recover (or fail)
7. On success, succeed the requests and go to 3
8. On failure due to CAS, fail the active requests and terminate
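The loop above can be sketched roughly as follows. The `ObjectStore` here is a hypothetical in-memory stand-in for a real object store with conditional writes; the version counter plays the role of an ETag.

```python
import json

class ObjectStore:
    """Toy in-memory object store with compare-and-swap (a hypothetical
    stand-in for an object store with conditional writes)."""
    def __init__(self):
        self.blobs = {}  # key -> (version, bytes)

    def get(self, key):
        return self.blobs.get(key, (0, b"{}"))

    def cas(self, key, expected_version, data):
        version, _ = self.blobs.get(key, (0, b"{}"))
        if version != expected_version:
            return False  # someone else wrote since we read: CAS lost
        self.blobs[key] = (version + 1, data)
        return True

class Broker:
    """Single-writer broker: read queue.json once, then CAS on every flush."""
    def __init__(self, store, key="queue.json"):
        self.store = store
        self.key = key
        self.version, raw = store.get(key)  # step 2: load exactly once
        self.state = json.loads(raw)

    def handle_batch(self, batch):
        # steps 4-8: mutate in memory, then CAS the whole file back
        self.state.setdefault("items", []).extend(batch)
        ok = self.store.cas(self.key, self.version,
                            json.dumps(self.state).encode())
        if not ok:
            # step 8: another broker won a CAS; fail requests and terminate
            raise RuntimeError("lost leadership")
        self.version += 1  # step 7: success, loop back to receiving
        return True

store = ObjectStore()
a = Broker(store)
assert a.handle_batch(["x", "y"])

b = Broker(store)          # a second broker loads the current file
assert b.handle_batch(["z"])
try:
    a.handle_batch(["w"])  # stale broker: CAS fails, it must terminate
    assert False
except RuntimeError:
    pass
```

Note that the broker never re-reads `queue.json` after losing a CAS; it just dies and lets its clients retry elsewhere.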
The client should have a retry mechanism against the broker (which may include looking up the address again).
From the broker's PoV, it never fails a CAS until another broker wins one, at which point that other broker is the leader. If it does fail a CAS, the client will retry with another broker, which will probably be the leader. The key insight is that the broker reads the file once; it doesn't compete to become leader by re-reading the data, and this is OK because of the nature of the data. Put another way, brokers are set up to consider themselves "maybe the leader" until they find out they are not, and losing leadership doesn't lose data.
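A minimal sketch of that client retry loop, assuming a hypothetical `lookup_broker` function that re-resolves the broker address on each attempt (since a CAS-losing broker fails its requests and terminates):

```python
def send_with_retry(request, lookup_broker, max_attempts=3):
    """Hypothetical client-side retry: on failure, look up the broker
    address again and retry, on the assumption that the replacement
    broker is (probably) the new leader."""
    last_err = None
    for _ in range(max_attempts):
        broker = lookup_broker()       # may return a new leader each time
        try:
            return broker.send(request)
        except Exception as err:       # broker lost leadership / unreachable
            last_err = err
    raise last_err
```

The names and signature here are assumptions for illustration; the point is only that retries go through address lookup again rather than hammering a dead broker.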
The mechanism to start brokers is only vaguely discussed, but if host-unreachable also triggers starting a new broker, there is a neat scale-from-zero property.
Further, this system (as described) scales best when writes are colocated on one broker, since that maximizes throughput via buffering. Even just adding a second writer cuts your throughput roughly in half if one of them is basically dead.
If you split things up, you can just do "merge manifests on conflict", since different writers would be writing to different files and the manifest is just an index; or you can do multiple manifests plus compaction. Delta Lake does the latter: you end up with a bunch of `0000.json`, `0001.json`, and to reconstruct the full index you read all of them. You still have conflicts on allocating the next JSON file, but that's it, no wasted flushing. And then you can merge as you please. This all gets very complex at this stage, I think; compaction becomes the "one writer only" bit, but you can serve reads and writes without compaction.
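A toy sketch of the multiple-manifests idea, assuming the store is just a dict of filename to JSON bytes (hypothetical; the real Delta Lake log format is richer than a flat key/value map):

```python
import json

def reconstruct_index(store):
    """Rebuild the full index by reading every numbered manifest
    (0000.json, 0001.json, ...) in sequence order, later wins."""
    index = {}
    for name in sorted(store):
        index.update(json.loads(store[name]))
    return index

def compact(store):
    """Compaction is the single-writer step: fold all manifests into
    one and drop the rest (a real implementation would CAS this)."""
    merged = reconstruct_index(store)
    store.clear()
    store["0000.json"] = json.dumps(merged).encode()
    return store

manifests = {
    "0000.json": b'{"part-a": "file1"}',
    "0001.json": b'{"part-b": "file2", "part-a": "file3"}',
}
assert reconstruct_index(manifests) == {"part-a": "file3", "part-b": "file2"}
```

Reads work against `reconstruct_index` at any time; only `compact` needs the exclusive-writer treatment.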
https://doi.org/10.14778/3415478.3415560
Note that since this paper was published, S3 has gained CAS in the form of conditional writes (`If-None-Match` on create, and ETag-based `If-Match` on overwrite).
Alternatively, I guess just do what Kafka does or something like that?