upvote
I'm 100% aware of how S3 works. I was questioning why the S3 API is needed when the service is using local storage.
reply
Sometimes API compatibility is an important detail.

I've worked at a few places where single-node K8s "clusters" were frequently used just because they wanted the same API everywhere.

reply
The API has sort of become a standard. Many providers offer S3 API-compatible storage.
reply
Apart from all these other products that implement S3? MinIO, Ceph (RGW), Garage, SeaweedFS, Zenko CloudServer, OpenIO, LakeFS, Versity, Storj, Riak CS, JuiceFS, Rustfs, s3proxy.
reply
Add Tigris to the list as well please.

We maintain a page that shows our compatibility with S3 API. It's at https://www.tigrisdata.com/docs/api/s3/. The test runner is open source at https://github.com/tigrisdata-community/s3-api-compat-tests

reply
Riak CS has been dead for over a decade, which makes me question the rest. Some of these also don't have the same behaviors when it comes to paths (MinIO is one of those, IIRC).

Also, none of them implements the full S3 API and feature set.

reply
There's a difference between the S3 API spec and what Amazon does with S3 - for instance, the new CAS (conditional write) capabilities in Amazon S3 are not part of the spec.

Ceph certainly implements the full API spec, though it may lag behind some changes. It's mostly a question of engineering time available to the projects to keep up with changes.
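For the curious, the CAS behavior mentioned above is exposed through standard HTTP precondition headers on PutObject. A minimal sketch of how the headers are chosen (this just builds the header dict rather than calling S3, and the ETag value is made up):

```python
# Sketch: S3's conditional writes ("compare-and-swap") use HTTP
# precondition headers on PutObject. Header names are from the S3
# REST API; the ETag below is a made-up example value.

def conditional_put_headers(etag=None):
    """Build precondition headers for an S3 PutObject request.

    etag=None  -> create-only: the request fails with 412 if the key exists.
    etag="..." -> replace only if the stored object still has this ETag.
    """
    if etag is None:
        return {"If-None-Match": "*"}   # succeed only if the key doesn't exist
    return {"If-Match": etag}           # succeed only if the ETag still matches

print(conditional_put_headers())            # create-only write
print(conditional_put_headers('"abc123"'))  # CAS-style replace
```

Whether a compatible implementation honors these preconditions is exactly the kind of thing that lags behind Amazon.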

reply
What does RadosGW miss?
reply
What kind of vendor lock-in are you even talking about? The API is public knowledge, AWS publishes the spec, there are multiple open-source reference client implementations on GitHub, there are multiple alternatives supporting the protocol, and you can find writings about the internals from AWS people as high up the hierarchy as Werner Vogels. Maybe you could say that some S3 features with no alternative implementation in competing products are lock-in. I would consider that a "competitive advantage". YMMV.
reply
> part of it is just to lock people into AWS once they start working with it.

This is some next-level conspiracy theory stuff. What exactly would the alternative have been in 2006? S3 is one of the most commonly implemented object storage APIs around, so if the goal is lock-in, they're really bad at it.

reply
deleted
reply
> What exactly would the alternative have been in 2006?

Well, WebDAV (Web Distributed Authoring and Versioning) had been around for 8 years when AWS decided they needed a custom API. And what service provider wasn't trying to lock you into a service by providing a custom API (especially pre-GPT) when one already existed? Assuming they made the choice for a business benefit doesn't require anything close to a conspiracy theory.

And it worked as a moat until other companies and open source projects started cloning the API. See also: Microsoft.

reply
WebDAV is kinda bad, and back then it was a big deal that corporate proxies wouldn't forward custom HTTP methods. You could barely trust PUT to work, let alone PROPFIND.
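To make the "custom HTTP methods" point concrete, here's a minimal sketch showing what a WebDAV verb like PROPFIND looks like on the wire. It spins up a throwaway local server (everything here is illustrative, not a real WebDAV implementation); the point is that PROPFIND is just a nonstandard method name, which is exactly what old proxies choked on:

```python
# Sketch: WebDAV's PROPFIND is an ordinary HTTP request with a
# nonstandard verb. Modern tooling handles it fine; old corporate
# proxies often dropped anything beyond GET/POST/HEAD.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_PROPFIND(self):           # custom verbs dispatch via do_<METHOD>
        self.send_response(207)      # 207 Multi-Status, WebDAV's reply code
        self.end_headers()
    def log_message(self, *args):    # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PROPFIND", "/", headers={"Depth": "1"})
resp = conn.getresponse()
print(resp.status)  # -> 207
server.shutdown()
```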
reply
WebDAV is ass tho. I don't remember a single positive experience with anything using it.

And you'd still need a redundant backend serving it as an API.

reply
When I was in school, we had a SkunkDAV setup that department secretaries were supposed to use to update websites... supporting that was no fun at all. I'm not sure why it was so painful (it was 25 years ago), but it left a bad taste in my mouth.
reply