400 Bad Request, the generic client error code, which is correct but boring;
402 Payment Required, and honestly if you want to pay me to make a particular URL with query string work, I’m open to it;
404 Not Found, but it’s too likely to have side effects, and it doesn’t convey the idea that the request was malformed, which is what I’m going for; and
303 See Other with no Location header, which is extremely uncommon these days but legitimate. Or at least it was in RFC 2616 (“The different URI SHOULD be given by the Location field in the response”), but it was reworded in 7231 and 9110 in a way that assumes the presence of a Location header (“… as indicated by a URI in the Location header field”), while 301, 302, 307 and 308 say “the server SHOULD generate a Location header field”. Well, I reckon See Other with no Location header is fair enough. But URI Too Long was funnier.
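For illustration, here is a minimal sketch of the idea using Python's standard `http.server` (the handler and helper names are mine, not from the post): any request whose URL carries a query string gets 414 URI Too Long, the tongue-in-cheek choice discussed above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit

def has_query_string(path: str) -> bool:
    """True if the request target carries a query string."""
    return bool(urlsplit(path).query)

class NoQueryStringHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if has_query_string(self.path):
            # 414 URI Too Long: not strictly accurate, but funnier.
            self.send_response(414)
            self.end_headers()
            self.wfile.write(b"No query strings here.\n")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Hello.\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NoQueryStringHandler).serve_forever()
```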
https://chrismorgan.info/no-query-strings?foo

Obviously it's against the spirit of the thing, but I don't think it's wrong per se.
>Complain to whoever gave you the bad link, and ask them to stop modifying URLs, because it’s bad manners.
It's ironic that an error response so blatantly violating the robustness principle is throwing shade about bad manners.
In our modern world, the robustness principle has become an invitation to security bugs and vendor lock-in. Edge cases sneak through one system thanks to its liberal acceptance, then trigger unfortunate behavior when they hit a different system. Two systems each try to do something reasonable with an ambiguous case, but do it differently, and software that works on one fails on the other.
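A classic instance of that disagreement is HTTP parameter pollution: given a duplicated query parameter, one lenient parser keeps the first value while another keeps the last, so a check made on one side no longer matches the behavior on the other. A toy sketch (the two parser functions are hypothetical stand-ins for two real systems):

```python
from urllib.parse import parse_qsl

def first_wins(query: str) -> dict:
    """Lenient parser A: the first occurrence of a key wins."""
    out = {}
    for key, value in parse_qsl(query):
        out.setdefault(key, value)
    return out

def last_wins(query: str) -> dict:
    """Lenient parser B: the last occurrence of a key wins."""
    return dict(parse_qsl(query))

ambiguous = "role=user&role=admin"
# Both parsers "do something reasonable" with the ambiguous input,
# but they disagree: a frontend validating with A and a backend
# acting on B now see two different roles for the same request.
```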
That said, we are paying a huge complexity cost for our efforts to accommodate nonconforming pages, and that complexity is widely abused by malicious actors. See, for instance, https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Ev... for the ways attackers try to bypass security filters. Much of it is possible only because of this unnecessary complexity.
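One of the simplest tricks in that family: a filter that strips the literal string `<script>` can be defeated by nesting it, so the deletion itself splices a working tag back together. A toy sketch (the filter is a deliberately naive stand-in, not any real sanitizer):

```python
def naive_filter(html: str) -> str:
    """A broken sanitizer: deletes '<script>' without rescanning the result."""
    return html.replace("<script>", "")

payload = "<scr<script>ipt>alert(1)</script>"
# Removing the inner '<script>' joins '<scr' and 'ipt>' back into
# a live '<script>' tag, so the "sanitized" output still executes.
```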