I had the same issue when I first put up my gitea instance. The bots found the domain through cert registration in minutes, before there were any backlinks. GPTbot, ClaudeBot, PerplexityBot, and others.

I added a robots.txt with explicit UAs for known scrapers (they seem to ignore wildcards), and after a few days the traffic died down completely and I've had no problem since.

Git frontends are basically a tarpit (every commit, diff, and blame view is its own URL), so they're uniquely vulnerable to this, but I wonder whether these folks actually tried a good robots.txt. I know it's wrong that the bots ignore wildcards, but explicit UA entries do seem to solve the issue.
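For reference, a minimal sketch of what I mean by "explicit UAs" — one stanza per bot rather than a `*` rule (these UA strings are the commonly published ones; check each vendor's docs for the current names):

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```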

reply
Where does one find a good robots.txt? Are there any well-maintained ones out there?
reply
Cloudflare actually offers this as a free-tier feature. Even if you don't want to put your site behind it, you can set up a throwaway domain on Cloudflare and periodically copy the robots.txt it generates from your scraper allow/block preferences, since they keep it up to date with the latest bots.
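The "periodically copy" step can be a one-line cron job — a sketch, where the throwaway domain and destination path are placeholders:

```
# crontab entry: daily at 03:00, mirror the Cloudflare-generated
# robots.txt from the throwaway domain to the real site's webroot
0 3 * * * curl -fsSL https://throwaway.example.com/robots.txt -o /var/www/mysite/robots.txt
```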
reply
I'll second a good robots.txt. I just checked my metrics: fewer than 100 requests total to my git instance in the last 48 hours. The instance is completely public; most repos are behind a login, but a couple are public and linked.
reply
> I wonder if these folks actually tried a good robots.txt?

I suspect that some of these folks are not interested in a proper solution. Being able to vaguely claim that the AI boogeyman is oppressing us has turned into quite the pastime.

reply
> Being able to vaguely claim that the AI boogeyman is oppressing us has turned into quite the pastime.

FWIW, you're literally in a comment thread where GP (me!) says "don't understand what the big issue is"...

reply
Since you had the logs for this, can you confirm the IP ranges they were operating from? You mention "ClaudeBot and GPTBot", but I'm guessing this is based on the user-agent presented by the scrapers, which could easily be faked to shift blame. I genuinely doubt Anthropic and such would be running scrapers this badly written/implemented; it doesn't make economic sense. I'd love to see some of the web logs from this if you'd be willing to share! I feel like this is just some of the old scraper bots now advertising themselves as AI bots to shift blame onto the AI companies.
reply
There are a bit too many IPs to list, but in my logs they're always of the form 74.7.2XX.* for GPTBot, matching OpenAI's published IP ranges[0].

So yes, they are definitely running scrapers that are this badly written.

Also, old scraper bots trying to disguise themselves as GPTBot seems wholly unproductive: they try to imitate users, not bots.

[0] https://openai.com/gptbot.json
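If anyone wants to do the same check against their own logs, a sketch using Python's `ipaddress` module — the CIDR below is a hypothetical stand-in for illustration; the real prefixes are in the published gptbot.json:

```python
import ipaddress

# Hypothetical prefix list for illustration only -- substitute the
# ipv4Prefix entries from OpenAI's published gptbot.json.
gptbot_prefixes = ["74.7.200.0/22"]

networks = [ipaddress.ip_network(p) for p in gptbot_prefixes]

def is_gptbot_ip(addr: str) -> bool:
    """Return True if addr falls inside any listed GPTBot prefix."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in networks)

print(is_gptbot_ip("74.7.201.17"))  # inside the illustrative range -> True
print(is_gptbot_ip("203.0.113.5"))  # outside -> False
```

A UA header is trivial to forge, but an IP inside the published ranges is strong evidence it really is their crawler.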

reply
> but I'm guessing this is based off of the user-agent presented by the scrapers and could easily be faked to shift blame

Yes, hence the "which was the only two I saw, but could have been forged".

> I'd love to see some of the web logs from this if you'd be willing to share!

Unfortunately not; I delete server logs after one hour, and I don't even log the full IP. I took a look just now, and none of the logs that still exist are from a user agent that looks like one of those bots.
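In case it's useful to anyone: truncated-IP logging can be done in nginx with a `map` block along these lines (a sketch, not necessarily my exact config; paths are placeholders, and you'd pair it with an hourly cron job that truncates the log):

```nginx
# keep only the first three octets of the client IP in access logs
map $remote_addr $remote_addr_anon {
    ~(?P<ip>\d+\.\d+\.\d+)\.    $ip.0;
    default                     0.0.0.0;
}

log_format anon '$remote_addr_anon - [$time_local] "$request" $status';
access_log /var/log/nginx/access.log anon;
```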

reply
Huh, I had a gitea instance on the public web on one of my netcup VPSes. I didn't set up any logging and was using Cloudflare tunnels (with a custom bash script that makes cf tunnels expose PORT on SUBDOMAIN).

Maybe it's time for me to start it again with logging enabled and see what shows up.

I might test all three setups: 1) CF tunnels + AI block, 2) CF tunnels only, 3) directly on a static IP. Maybe you can try the experiment too and we can compare findings. (Saying this partly because I'm lazy: I had misconfigured that cf tunnel, and when it quit I was too lazy to restart the VPS, since I only use it as a playground for self-hosting. But maybe I'll do it again now.)

reply