> they would likely obey robots.txt

If only... Despite providing a useful service, they are not as nice towards site owners as one would hope.

Internet Archive says:

> We see the future of web archiving relying less on robots.txt file declarations geared toward search engines

https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea...

They are not alone in that. The "Archiveteam", a different organization, not to be confused with archive.org, also doesn't respect robots.txt according to their wiki: https://wiki.archiveteam.org/index.php?title=Robots.txt

I think it is safe to say that there is little consideration for site owners from the largest archiving organizations today. Whether there should be is a different debate.

reply
It seems like the general problem is that the original common use of robots.txt was to mark the parts of a site that would lead a recursive crawler into an infinite forest of dynamically generated links, which nobody wants. Increasingly, though, it's being used to disallow the fixed content of the site, which is the very thing archivers are trying to preserve, and which shouldn't burden the site when the bot caches the result and only ever downloads it once. The more sites do the latter, the harder it becomes for anyone to distinguish it from the former, which is bad for everyone.
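To make the distinction concrete, a hypothetical robots.txt illustrating both uses (the paths are made up; ia_archiver is the token historically used by the Internet Archive's crawler):

```
User-agent: *
# Original usage: keep recursive crawlers out of dynamically
# generated link forests
Disallow: /calendar/
Disallow: /search

# Increasingly common usage: blanket-block archiving of the
# site's fixed content
User-agent: ia_archiver
Disallow: /
```

From the crawler's side both stanzas look the same, which is exactly the ambiguity described above.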

> The "Archiveteam", a different organization, not to be confused with archive.org, also doesn't respect robots.txt according to their wiki

"Archiveteam" exists in a different context. Their usual purpose is to get a copy of something quickly because it's expected to go offline soon. That context both a) makes them irrelevant for ordinary sites in ordinary times, and b) gives a site about to shut down an obvious option: just give them a better, more efficient way to make a full archive before it goes offline.

reply
What an absolutely insufferable explanation from ArchiveTeam. What else do you expect from an organization that aggressively crawls websites and brings them to their knees because it couldn't care less?
reply
I'm curious to hear about examples of where this has happened. Because ArchiveTeam also has an important role in rescuing cultural artefacts that have been taken into private hands and then negligently destroyed.
reply
Having a laudable goal doesn't absolve them from bad behavior.
reply
That page was written by Jason Scott in 2011 and has barely been changed since then.
reply
Evasion techniques like JA3 randomization or impersonation can bypass detection.
reply
I am aware; fortunately I haven't seen much of this... yet. JA4 is supposed to be somewhat less vulnerable to it, and this is also why I really want TCP and HTTP fingerprinting. The best I've found so far is https://github.com/biandratti/huginn-net, but it's only available as a Rust library, and I really need it as an nginx module. I've been tempted to try to vibe-code an nginx module that wraps this library.
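For context on why JA3 is so easy to randomize: it hashes fields of the TLS ClientHello in the order the client sent them, so merely shuffling extension order yields a new fingerprint. A minimal Python sketch of the JA3 scheme (the numeric field values below are made-up illustrative codepoints, not a real ClientHello):

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3-style hash: list fields are dash-joined, the five fields are
    comma-joined, and the result is MD5-hashed."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# The same ClientHello with its extensions reordered (what a
# randomizing client does) produces a different fingerprint:
a = ja3_fingerprint(771, [4865, 4866], [0, 10, 11], [29, 23], [0])
b = ja3_fingerprint(771, [4865, 4866], [11, 0, 10], [29, 23], [0])
assert a != b  # an allow/deny list keyed on JA3 is trivially evaded
```

JA4, as I understand it, sorts the cipher and extension lists before hashing, so pure order randomization alone no longer changes the fingerprint.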
reply
I wonder if it would be practical to have bot-blocking measures that can be bypassed with a signature from a set of whitelisted keys... In this case the server would be happy to allow Internet Archive crawlers.
reply
That's an interesting idea. mTLS could probably be used for this pretty easily. It would require IA to support it of course, but could be a nice solution. I wonder, do they already support it? I might throw up a test...
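A minimal sketch of what that could look like in nginx (file paths and the backend name are hypothetical): the server asks for an optional client certificate, so ordinary visitors connect as usual, while a crawler presenting a certificate signed by a trusted CA can be exempted from bot blocking downstream.

```nginx
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/site.crt;
    ssl_certificate_key     /etc/nginx/site.key;

    # Hypothetical CA bundle covering crawlers you choose to trust
    ssl_client_certificate  /etc/nginx/trusted-crawlers-ca.pem;
    ssl_verify_client       optional;  # browsers without a cert still connect

    location / {
        # "SUCCESS" only when a whitelisted cert was presented;
        # the app or bot-blocker can key off this header
        proxy_set_header X-Client-Verified $ssl_client_verify;
        proxy_pass http://backend;
    }
}
```

The open question, as noted, is whether IA's crawlers would ever present a client certificate; without buy-in from the crawler side, the whitelist has nothing to match.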
reply