What you describe is very similar to how Icechunk[1] works. It works beautifully for transactional writes to "repos" containing PBs of scientific array data in object storage.

[1]: https://icechunk.io/en/latest/

reply
A lot of good insights here. I'm also wondering if they could simply put jobs in different states (unclaimed, in-progress, deleted/done) under different directories/prefixes and rely on an atomic object-rename primitive [1][2][3] to solve the problem more gracefully (group commit can still be used if needed).

[1] https://docs.cloud.google.com/storage/docs/samples/storage-m... [2] https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameOb... [3] https://fractalbits.com/blog/why-we-built-another-object-sto...
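A minimal sketch of the prefix-as-state-machine idea. This is a simulation, not real client code: a dict stands in for the bucket, and `rename()` stands in for an atomic object-rename primitive like the ones linked above; all names here are illustrative.

```python
# Sketch (simulated): a dict stands in for the bucket, rename() for an
# atomic object-rename primitive. Job state = which prefix the object is in.

class Bucket:
    def __init__(self):
        self.objects = {}

    def put(self, key, body):
        self.objects[key] = body

    def rename(self, src, dst):
        # Atomic: either the whole move happens or it doesn't.
        # Fails if src is gone, i.e. another worker already claimed it.
        if src not in self.objects:
            return False
        self.objects[dst] = self.objects.pop(src)
        return True

def claim_job(bucket, job_id, worker_id):
    """Move a job from unclaimed/ to in-progress/; only one worker wins."""
    return bucket.rename(f"unclaimed/{job_id}",
                         f"in-progress/{worker_id}/{job_id}")

def finish_job(bucket, job_id, worker_id):
    return bucket.rename(f"in-progress/{worker_id}/{job_id}",
                         f"done/{job_id}")

bucket = Bucket()
bucket.put("unclaimed/job-1", b"payload")
assert claim_job(bucket, "job-1", "w1")       # w1 wins the claim
assert not claim_job(bucket, "job-1", "w2")   # w2 loses: object already moved
assert finish_job(bucket, "job-1", "w1")
```

The nice property is that "list unclaimed jobs" is just a prefix listing, and a claim can't be won twice because the source object disappears atomically.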

reply
I didn't know about atomic object rename... it's going to take me a long time to think through the options here.

> RenameObject is only supported for objects stored in the S3 Express One Zone storage class.

Ah interesting, I don't use this, but I bet in a year or so AWS will have this everywhere, lol. S3 is just too good.

reply
This is news to me. What motivates you to reach for an S3-backed queue versus SQS?
reply
I'm not building a queue, but a lot of things on S3 end up being queue-shaped (more like 'log-shaped') because it's very easy to compose many powerful systems out of CAS + "buffer, then push". Basically, you start with "build an immutable log" from those operations, and the rest of your system becomes a matter of what you do with that log. A queue needs to support a "pop", but I'm supporting other operations. Still, the architectural overlap all begins with CAS + buffer.

That said, I suspect you can probably beat SQS for a number of use cases, and definitely if you want to hold onto the data long term or search over it; S3 has huge advantages there.

Performance will be extremely solid unless you need your tail latency (say, p90) for "push -> pop" to be very tight.
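A minimal sketch of the "CAS + buffer, then push" log. This simulates the conditional put in memory (`put_if_absent` standing in for S3's `If-None-Match: *` conditional write, which returns HTTP 412 on conflict); the key scheme and names are my own illustration, not the commenter's actual design.

```python
# Sketch (simulated): InMemoryStore stands in for S3, put_if_absent for a
# conditional PUT (If-None-Match: "*"). Writers buffer records locally,
# then CAS the batch into the next free log slot.

class PreconditionFailed(Exception):
    pass

class InMemoryStore:
    def __init__(self):
        self.objects = {}

    def put_if_absent(self, key, body):
        if key in self.objects:
            raise PreconditionFailed(key)   # S3 would return HTTP 412
        self.objects[key] = body

def append(store, buffered_records, seq):
    """Flush a buffered batch as the next log segment; retry on contention."""
    while True:
        key = f"log/{seq:020d}"
        try:
            store.put_if_absent(key, b"\n".join(buffered_records))
            return seq
        except PreconditionFailed:
            seq += 1   # someone else won this slot; try the next one

store = InMemoryStore()
append(store, [b"event-a", b"event-b"], seq=0)
append(store, [b"event-c"], seq=0)   # contends on slot 0, lands in slot 1
```

Because losers simply retry at the next sequence number, the result is a totally ordered, immutable log with no coordination service in the loop.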

reply
This is fascinating. It sounds like you're building "cloud data structures" on top of S3 + CAS. What are the benefits, in your view, of using S3 instead of, say, Dynamo or Postgres? Or reaching for NATS/RabbitMQ/SQS/Kafka? I'd love to hear a bit more about what you're building.
reply
It's just trade-offs. If you have a lot of data, S3 is basically the only option for storing it: you don't want to pay for petabytes of storage in Dynamo or Postgres. I also don't want to manage Postgres, even on RDS. Dealing with write loads that S3 handles easily is very annoying, and so is dealing with availability, etc. S3 "just works", but you need to build some of the protocol yourself.

If you want consistently low latency / can't tolerate a 50ms spike, don't retain tons of data, have <10K writes/s, and need complex indexing that might change over time, Postgres (or something like it) is probably what you want. If you know how your data should be indexed ahead of time, need to store a massive amount, and care more about throughput than an occasional latency spike (or a bunch of other use cases, really), S3 is just an insanely powerful primitive.

Insane storage also unlocks new capabilities. Immutable logs unlock "time travel" where you can ask questions like "what did the system look like at this point?" since no information is lost (unless you want to lose it, up to you).
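The "time travel" property falls out of the log being append-only: state at any point is just a fold over a prefix of the log. A tiny illustrative sketch (the tuple shape and names are mine, not from the system described):

```python
# Sketch: "what did the system look like at this point?" is a pure fold
# over the immutable log up to that point. Events are (seq, key, value).

def state_as_of(log, max_seq):
    """Replay the log up to and including max_seq; no information is lost."""
    state = {}
    for seq, key, value in log:
        if seq > max_seq:
            break
        state[key] = value
    return state

log = [(0, "x", 1), (1, "y", 2), (2, "x", 3)]
assert state_as_of(log, 1) == {"x": 1, "y": 2}   # the system back then
assert state_as_of(log, 2) == {"x": 3, "y": 2}   # the system now
```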

Everything about a system like this comes down to reducing the cost of a GET. Bloom filters are your best friend, metadata is your best friend, prefetching is a reluctant friend, etc.
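To make the Bloom-filter point concrete, here's a toy version; the sizes are illustrative (a real one would derive the bit count and hash count from the expected key count and target false-positive rate). The idea: keep one small filter per log segment, and skip the GET whenever the filter says "definitely not here".

```python
# Sketch: a per-segment Bloom filter lets you skip GETs for keys a
# segment definitely doesn't contain. Parameters here are illustrative.

import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # False means "definitely absent" -> safe to skip the GET entirely.
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = BloomFilter()
bf.add("user:42")
assert bf.might_contain("user:42")   # no false negatives, ever
```

False positives only cost you a wasted GET; false negatives can't happen, so correctness is preserved while the common-case GET count drops.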

I'm not sure what I'm building. I had this idea years ago before S3 CAS was a thing and I was building a graph database on S3 with the fundamental primitive being an immutable event log (at the time using CRDTs for merge semantics, but I've abandoned that for now) and then maintaining an external index in Scylla with S3 Select for projections. Years later, I have fun poking at it sometimes and redesigning it. S3 CAS unlocked a lot of ways to completely move the system to S3.

reply
> I solved this at one point with token fencing

Could you expand on that? Even if it wasn't the approach you stuck with, I'm curious.

reply
Oof, I probably misspoke there slightly. I attempted to solve this with token fencing; I honestly don't know if it worked under failure conditions, and this was a while ago. But the idea was basically that there were two tiers. One was a ring-based approach where a single file determined which writer was allocated a 'space' in the ring. Then every write was prepended with that token. Even if a node dropped or joined and others didn't know about it (because they hadn't re-read the ring file), every write carried this token.

Writes were not visible until compaction in this system. At compaction time, tokens would be checked and writes carrying older tokens would be rejected, so even if two nodes thought they owned a 'place' in the ring, only writes with the higher-value token would be accepted. Soooomething like that. I ended up disliking this because it had undesirable failure modes like lots of stale/wasted writes, and the code sucked.
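For the fencing check itself, something like this sketch captures the compaction-time rule (heavily simplified and simulated: a plain integer token per ring slot instead of the actual ring file; names are mine):

```python
# Sketch (simulated): at compaction, keep only writes stamped with the
# token currently recorded for that ring slot; stale writers get fenced.

def compact(writes, current_tokens):
    """writes: list of (writer_slot, token, payload);
    current_tokens: slot -> the token recorded in the ring file."""
    kept, rejected = [], []
    for slot, token, payload in writes:
        if current_tokens.get(slot) == token:
            kept.append(payload)
        else:
            rejected.append(payload)   # stale token: writer fenced out
    return kept, rejected

# Two nodes both think they own slot 0; the ring file says token 8 wins.
writes = [(0, 7, b"old-writer"), (0, 8, b"new-writer")]
kept, rejected = compact(writes, current_tokens={0: 8})
assert kept == [b"new-writer"]
assert rejected == [b"old-writer"]
```

The "lots of stale/wasted writes" failure mode is visible right in the sketch: the old writer's work is uploaded and paid for, then thrown away at compaction.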

reply