The RFCs for WebDAV are better than those for FTP, but there is still a lot of under-specified behaviour that servers and clients handle differently, which leads to lots of workarounds.
The protocol doesn't let you set modification times by default, which matters for a sync tool, but popular implementations like ownCloud and Nextcloud do. Likewise with hashes.
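As a hedged illustration of that kind of implementation-specific extension: ownCloud and Nextcloud accept an X-OC-Mtime header (Unix epoch seconds) on a PUT so the server keeps the client's modification time. The Go sketch below assumes that header and a placeholder Nextcloud-style URL; treat both as assumptions rather than standard WebDAV.

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "strconv"
    )

    // Upload a local file over WebDAV and ask the server to keep the
    // file's modification time. The X-OC-Mtime header is an
    // ownCloud/Nextcloud extension, not standard WebDAV; the URL and
    // credentials are placeholders.
    func main() {
        f, err := os.Open("report.pdf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            panic(err)
        }

        req, err := http.NewRequest("PUT",
            "https://dav.example.com/remote.php/webdav/report.pdf", f)
        if err != nil {
            panic(err)
        }
        req.ContentLength = info.Size() // avoid chunked encoding for a plain file
        req.SetBasicAuth("user", "app-password")
        // Unix epoch seconds; the server applies it as the resource's mtime.
        req.Header.Set("X-OC-Mtime", strconv.FormatInt(info.ModTime().Unix(), 10))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // 201/204 means the upload succeeded
    }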
However the protocol is very fast, much faster than SFTP with its homebrew packetisation, as it's based on well-optimised web tech: HTTP, TLS, etc.
In your opinion, is WebDAV good enough to be the protocol for exposing file systems over HTTP, or is there room for something better? I was bullish on Solid but they don't seem to be making much progress.
Not that it is a good comparison. NFS isn't super popular; macOS can do it, and I don't think Windows can. But both Windows and macOS can do WebDAV.
Also, it’s borderline impossible to tune NFS to go above 30 Gbps or so consistently, whereas with WebDAV it’s a matter of adding a bunch more streams and you’re past 200 Gbps pretty easily.
Tailscale's drive share feature is implemented as a WebDAV share (connect to http://100.100.100.100:8080). You can also connect to Fastmail's file storage over WebDAV.
WebDAV is neat.
Use it where it makes sense. And S3 does not necessarily equate to using Amazon. I like the Garage S3 project, which is interesting for smaller-scale and self-hosted systems. The project is funded with EU Horizon grants via NLnet.
Both Windows and macOS have 9p support built in, and both have it locked away from the end user. Windows has it exclusively for communication with WSL; macOS has 9p but it's exclusively for its virtualization system. It would be amazing if I could just mount 9p from the UI.
Exhibit A: https://help.ovhcloud.com/csm/en-ie-web-hosting-ftp-storage-...
From that link:
2. SSH connection
You will need advanced knowledge and an OVHcloud web hosting plan Pro or Performance to use this access type.
Well, maybe we are. I'd cross that provider off my list right there. The premium "SSH connection" you mentioned seems to refer to shell access via SSH, which is a separate thing.
Especially for the use case of transferring files to and from the backend of a web host. Not using it in that scenario is freely handing control of your backend to everything between you and the host, putting everyone at risk in the process.
That is nonsense. The reality is that most data simply is not sensitive, and there is no valid reason to encrypt it. I wouldn't use insecure FTP because credentials, but there's no good reason to encrypt your blog or something.
The bad news with FTP in particular is that only one request has to be intercepted and recorded to have persistent compromise, because the credentials are just a username and password transmitted in clear.
Jokes aside, HTTPS is as much about privacy as it is about reducing the chance you receive data that has been tampered with. You shouldn't avoid FTP only because of the credentials, but also because of embedded malware you didn't put there yourself.
It's not so much about the data, but protecting your credentials for the server.
Also, how do you know that there isn't someone performing a MITM (man in the middle) attack? FTP has no mechanism that I know of to verify that you're connecting to the server that you think you are.
It may well be that you're not a sizeable target and that no-one is interested in hacking your site, but that's just luck and not an endorsement of unencrypted FTP.
We have to put a limit to paranoia. If things work correctly for decades and there are no signs of foul play after endless real world usage, it's safe to say nobody is hacking our FTP.
It's different if you're a bank or the KGB or the CIA.
> It may well be that you're not a sizeable target and that no-one is interested in hacking your site, but that's just luck and not an endorsement of unencrypted FTP.
Do you drive an armored car?
A frame-less one?
It costs approximately zero to use encryption and protect against the FTP exploits, so why continue to use FTP? There's literally no advantage and several possible disadvantages. Just relying on not being hacked before seems a foolish stance to me.
I challenge you to select any FTP website of your choosing and make a tiny change to prove that you've hacked it and let me know here.
Whether or not the connection you're using is encrypted doesn't really matter because the ISP and hosting provider are legally obligated to prevent unauthorized access.
(It's different if you're the NSA or some other state-level actor, but you're not.)
And what happens if your ISP is compromised without their knowledge? What happens when it's a consumer device such as a router? Don't forget that nearly every TP-Link router has an active malware infection.
It's not just one ISP that you have to trust, it's every single intermediate piece of equipment.
Intercepting traffic is a trivial & common form of compromise, and the problem multiplies by how many different parties you are handing your data to. It is wildly irresponsible to not attempt to protect against this.
Like you, I will miss the glory days of FTP :'(
The remaining hosting companies certainly still make a lot of money; a shared hosting business is basically on autopilot once set up (I used to own one, hence why I still track the market), and they can be overcommitted like crazy.
Yeah, there’s definitely been some wild consolidation. I’ve actually been involved in quite a few acquisitions myself over the last decade in one form or another.
> (I used to own one, hence why I still track the market)
I’m still in the industry, though in a very different segment now. I do still keep a small handful of legacy customers, folks I’ve known for years, on shared setups, but it’s more of a “you scratch my back, I’ll scratch yours” kind of thing now. It’s not really a profit play, more a mix of nostalgia and habit.
That’s been happening, at least from my own memory, since at least the mid-2000s.
> plus consumer interest in "a website" has declined sharply now that small businesses just feel that they need an instagram to get started.
Ah yes, the 2020s version of “just start a Facebook page.” The more things change, the more they stay the same I suppose.
> Combine that with site builders eating at shared hosting's market share
I remember hearing that for the first time in I wanna say...2006? It sure did cause a panic for at least a little while.
> and it's not looking good for the future of the "old school" shared hosting industry that you are thinking of.
Yes, I've heard this one more times than I can count too.
The funny thing is, I’ve been hearing this same “shared hosting is dying” narrative for nearly two decades now. Yet, in that time, I’ve seen multiple companies launch, thrive, and sell for multi-million dollar exits.
But sure, this time it’s definitely the death knell. Meanwhile, I assure you, the bigger players in the space are still making money hand over fist.
https://www.mordorintelligence.com/industry-reports/web-host...
> By hosting type, shared hosting led with 37.5% of the web hosting market share in 2024
Just like how there are use cases FTP supports that S3 doesn't.
The main downside is that people will sometimes assume you mean SFTP (not having heard of FTPS or realising they are different), and then get upset when it doesn't work as they expect. However, good tooling, e.g. FileZilla, supports both.
Have a look there: https://codeberg.org/lunae/dav-next
/!\ it's a WIP, thus not packaged anywhere yet, no binary release, etc… but all feedback welcome
I debated between this scanner and the Brother ADS-1800W, but the Brother has a slow UI and no tray where the paper lands when it's done scanning (not sure what it's called in English).
There really is nothing wrong with the S3 API and the complaints about Minio and S3 are basically irrelevant. It’s an API that dozens of solutions implement.
(old ref, but the architecture hasn't changed AFAIK)
Ref: https://learn.microsoft.com/en-us/previous-versions/windows/...
I just tried https://live.sysinternals.com/Tools in Windows Explorer, and it also lists the files, identical to how it would show the contents of any directory.
Even running "dir \\live.sysinternals.com\Tools", or starting a program from the command prompt like "\\live.sysinternals.com\Tools\tcpview64" works.
The Go standard library's extended packages (golang.org/x/net/webdav) have quite a good one that just works with only a small bit of wrapping in a main() etc.
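For reference, roughly what that wrapping looks like, as a minimal sketch assuming the golang.org/x/net/webdav package (the directory and port are placeholders):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/webdav"
    )

    func main() {
        // Serve /srv/files read-write over WebDAV on :8080.
        // Directory and address are placeholders for this sketch.
        handler := &webdav.Handler{
            FileSystem: webdav.Dir("/srv/files"),
            LockSystem: webdav.NewMemLS(),
            Logger: func(r *http.Request, err error) {
                if err != nil {
                    log.Printf("%s %s: %v", r.Method, r.URL.Path, err)
                }
            },
        }
        log.Fatal(http.ListenAndServe(":8080", handler))
    }

TLS and auth can then be layered on with the usual net/http middleware.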
Although I've since written one in Elixir that seems to handle my traffic better.
(you can also mount them on macOS and browse with Finder / shell etc., which is pretty nice)
SFTPGo also supports WebDAV, but for the use cases in the article SFTP is just better.
We need to keep using open protocols such as WebDAV instead of depending on proprietary APIs like the S3 API.
Unlike NFS or SMB, WebDAV mounts do not get stuck for a minute when the connection becomes unstable.
Can WebDAV be made fast?
There is also NzbDav for this: https://github.com/nzbdav-dev/nzbdav
I like WebDAV because it 'just works' with the mTLS infra I had already setup on my homelab for access from the outside world.
I use sftpgo (https://sftpgo.com/) on the server side.
You can easily run a WebDAV server using Caddy.
What else?
A standard way of doing progressive chunked uploads would be a solid improvement. However, older protocols like this tend to lack active stewardship, which is a shame.
Sabre-DAV's implementation seems relatively well designed. It's supported in webdavfs, for example. Here are some example headers one might attach to a PATCH request:
X-Update-Range: append
X-Update-Range: bytes=3-6
X-Update-Range: bytes=4-
X-Update-Range: bytes=-2
https://sabre.io/dav/http-patch/
https://github.com/miquels/webdavfs

Another example is this expired draft. I don't love it, but it uses PATCH + Content-Range. There are some other neat ideas in there, and it shows the versatility and open possibilities (even if I don't love re-using this header this way). https://www.ietf.org/archive/id/draft-wright-http-patch-byte...
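To make the convention above concrete, here is a minimal Go sketch of an append-style partial write against a Sabre-DAV-style server; the URL, credentials, and the application/x-sabredav-partialupdate content type are assumptions, not standard WebDAV:

    package main

    import (
        "fmt"
        "net/http"
        "strings"
    )

    // Append a chunk to an existing resource on a server that implements
    // the Sabre-DAV partial-update convention (PATCH + X-Update-Range).
    // The URL and credentials are placeholders.
    func main() {
        body := strings.NewReader("appended chunk\n")
        req, err := http.NewRequest("PATCH",
            "https://dav.example.com/files/log.txt", body)
        if err != nil {
            panic(err)
        }
        req.SetBasicAuth("user", "password")
        // Content type assumed for Sabre-DAV-style partial updates.
        req.Header.Set("Content-Type", "application/x-sabredav-partialupdate")
        // "append" writes to the end; "bytes=4-" etc. would overwrite a range.
        req.Header.Set("X-Update-Range", "append")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // any 2xx means the partial update was accepted
    }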
Apache has a PUT with Content-Range: https://github.com/miquels/webdav-handler-rs/blob/master/doc...
Great write-up on the rclone forum about trying to support partial updates: https://forum.rclone.org/t/support-putstream-for-webdav-serv...
It would be great to see a proper extension formalized here! But there are options.
It's a shame the protocol never found much use in commercial services. There would be little need for official clients running in compatibility layers like you see with tools like Google Drive and OneDrive on Linux. Frankly, except for the lack of standardised random writes, the protocol is still one of the better solutions in this space.
I have no idea how S3 managed to win as the "standard" API for so many file storage solutions. WebDAV has always been right there.
Hahahaha, haha, ha, no. And probably (still) more used than WebDAV.
pls send help
FTP is such a clunky protocol; it is peculiar that it has had such staying power.
Not sure he ever tried supporting that. We once did and it was a nightmare. People couldn't handle it at all even with screenshotted manuals.
My personal experience says that even the dumbest user is able to use FileZilla successfully, and therefore SFTP, while people just don't get the built-in WebDAV support of the OSes.
I also vaguely recall that WebDAV in Windows had quite a few randomly appearing problems and performance issues. But this was all a while ago; it might have improved since then.
And yet, I can never seem to find a decent java lib for webdav/caldav/carddav. Every time I look for one, I end up wanting to write my own instead. Then it just seems like the juice isn't worth the squeeze.
WebDAV (Web Distributed Authoring and Versioning) is a set of extensions to the Hypertext Transfer Protocol (HTTP), which allows user agents to collaboratively author contents directly in an HTTP web server by providing facilities for concurrency control and namespace operations, thus allowing the Web to be viewed as a writeable, collaborative medium and not just a read-only medium.[1] WebDAV is defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).
Says who?
https://github.com/lookfirst/sardine
Still going.
The last time I had to deal with WebDAV was for a crusty old CMS nobody liked using, many years ago. Support on dev machines running Windows and Mac was a bit sketchy and would randomly skip files during bulk uploads. Linux support was a little better with davfs2, but then VSCode would sometimes refuse to recognize the mount without restarting.
None of that workflow made sense. It was hard to know what version of a file was uploaded and doing any manual file management just seemed silly. The project later moved to GitLab. A CI job now simply SFTPs files upon merge into the main branch. This is a much more familiar workflow to most web devs today and there's no weird jank.