And economics. Many people here are blaming incompetent security teams and app developers, but a lot of seemingly dumb security policies are due to insurers. If an insurer says "we're going to jack up premiums by 20% unless you force employees to change their password once every 90 days", you can argue till you're blue in the face that it's bad practice, NIST changed its policy to recommend not regularly rotating passwords over a decade ago, etc., and be totally correct... but they're still going to jack up premiums if you don't do it. So you dejectedly sigh, implement a password expiration policy, and listen to grumbling employees who call you incompetent.
It's been a while since I've been through a process like this, but given how infamous log4shell became, it wouldn't surprise me if insurers are now also mandating that servers reject common "hacking strings" like /etc/hosts, /etc/passwd, jndi:, and friends.
We're SOC2 + HIPAA compliant, which either means convincing the auditor that our in-house security rules cover 100% of the cases they care about... or we buy an off-the-shelf WAF that has already completed the compliance process, and call it a day. The CTO is going to pick the second option every time.
If your startup is on the verge of getting a 6 figure MRR deal with a company, but the company's security team mandates you put in a WAF to "protect their data"... guess you're putting in a WAF, like it or not.
Install the WAF crap, and then feed every request through rot13(). Everyone is happy!
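Tongue in cheek, but the underlying point is real: byte-level pattern matching loses to any trivial reversible encoding the client and backend agree on. A toy sketch in Python (illustrative only):

    import codecs

    payload = "cat /etc/hosts"
    encoded = codecs.encode(payload, "rot13")  # "png /rgp/ubfgf"
    # A WAF matching the literal string "/etc/hosts" sees nothing suspicious,
    # and rot13 is its own inverse, so the backend just applies it again.
    assert codecs.encode(encoded, "rot13") == payload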
I understand the need for security tooling, but I don’t think companies often consider the huge performance impact these tools add.
This is such a bizarre hybrid policy, especially since forced password rotations at fixed intervals are already not recommended for end-user passwords as a security practice.
One argument I hear is that "people will just use the same password everywhere". To which I'll answer, "but we have MFA". "Yeah, but the insurance guys".
I keep hearing this often on HN, but I've personally never seen such demands from insurers. I would greatly appreciate it if someone could share such an insurance policy. Insurance policies are not trade secrets and are fine to be public; I can google plenty of commercial car insurance policies, for example.
https://retail.direct.zurich.ch/resources/definition/product...
Questionnaire Zurich Cyber Insurance
Question 4.2: "Do you have a technically enforced password policy that ensures use of strong passwords and that passwords are changed at least quarterly?"
Since this is an insurance questionnaire, presumably your answers to that question affect the rates you get charged?
(Found that with the help of o4-mini https://chatgpt.com/share/680bc054-77d8-8006-88a1-a6928ab99a...)
I just can't imagine any outcome other than it being translated to a flat "no" and increasing your premium over what it would otherwise have been.
Totally bonkers stuff.
Eliminating everything but a business's industry specific apps, MS Office, and some well-known productivity tools slashes support calls (no customization!) and frustrates cyberattacks to some degree when you can't deploy custom executables.
In around 2011, the Defence Signals Directorate (now the Australian Signals Directorate) went through and did an analysis of all the intrusions they had assisted with over the previous few years. It turned out that app whitelisting, patching OS vulns, patching client applications (Office, Adobe Reader, browsers), and some basic permission management would have prevented something like 90% of them.
The "Top 4" was later expanded to the Essential Eight which includes additional elements such as backups, MFA, disabling Office macros and using hardened application configs.
https://www.cyber.gov.au/resources-business-and-government/e...
At first glance that might seem a poor move for corporate information security. But crucially, the security of cloud webapps is not the Windows sysadmins' problem - buck successfully passed.
It's about the transition from artisanal hand-configuration to mass-produced fleet standards, and diverting exceptional behavior and customizations somewhere else.
Alice is on Discord because half of the products the company uses now give more or less direct access to their devs through Discord.
You install software via ticket requests to IT, and devs might have admin rights, but not root, and only temporary.
This is nothing new, though. Back in the timesharing days, when we connected to the development server, we only got as many rights as the ongoing development workflows required.
Hence why PCs felt so liberating.
Just wait until more countries adopt cybersecurity laws making companies liable when software doesn't behave, like in any other engineering industry.
A breach can result in enough losses, in credibility, canceled orders, or lawsuits, to close up shop, or to get the people who thought security rules were dumb fired.
Also, in many countries anyone with a security officer title has legal responsibilities when something goes wrong, so when they sign off on software deliverables that later fail, it's their signature on the approval.
The worst part about cyber insurance, though, is that as soon as you declare an incident, your computers and cloud accounts now belong to the insurance company until they have their chosen people rummage through everything. Your restoration process is now going to run on their schedule. In other words, the reason the recovery from a crypto-locker attack takes three weeks is because of cyber insurance. And to be fair, they should only have to pay out once for a single incident, so their designated experts get to be careful and meticulous.
Fear of a prospective expectation, compliance demand, requirement, etc., even when that requirement does not actually exist, is remarkably prevalent among software developers.
My mental model at this point says that if there's a cost to some important improvement, the politics and incentives today are such that a typical executive will only do the bare minimum required by law or some equivalent force, and not a dollar more.
If anything, I think this attitude is part of the problem. Management, IT security, insurers, governing bodies, they all just impose rules with (sometimes, too often) zero regard for consequences to anyone else. If no pushback mechanism exists against insurer requirements, something is broken.
If the insurer requested something unreasonable, you'd go to a different insurer. It's a competitive market after all. But most of the complaints about incompetent security practices boil down to minor nuisances in the grand scheme of things. Forced password changes every 90 days are dumb and slightly annoying but don't significantly impact business operations. Having to run some "enterprise security tool" and go through every false positive result (of which there will be many) and provide an explanation as to why it's a false positive is incredibly annoying and doesn't help your security, but it's also something you could have a $50k/year security intern do. Turning on a WAF that happens to reject the 0.0001% of Substack articles which talk about /etc/hosts isn't going to materially change Substack's revenue this year.
It negatively impacts security, because users then pick simpler passwords that are easier to rotate through some simple transformation. Which is why it's considered not just useless, but an anti-pattern.
I still have a nervous tic from having a screen lock timeout "smaller than or equal to 30 seconds".
I would argue that password policies are very context dependent. As much as I detest changing my password every 90 days, I've worked in places where the culture encouraged password sharing. That sharing creates a whole slew of problems. On top of that, removing the requirement to change passwords every 90 days would encourage very few people to select secure passwords, mostly because they prefer convenience and do not understand the risks.
If you are dealing with an externally facing service where people are willing to choose secure passwords and unwilling to share them, I would agree that regularly changing passwords creates more problems than it solves.
When you don’t require them to change it, you can just assign them a random 16 character string and tell them it’s their job to memorize it.
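For what it's worth, generating such a string is a few lines with Python's standard secrets module (the alphabet here is just one reasonable choice, not a prescription):

    import secrets
    import string

    # 16 characters over letters+digits is roughly 95 bits of entropy,
    # plenty for a memorize-once credential.
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    print(password)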
It always made me judge my company's security teams for enabling this stupidity. Thankfully they got rid of it gradually, starting nearly 2 years ago (90 days to 365 days to never). New passwords were just the old one shifted one key left/right/up/down on the keyboard.
Now I'm thinking maybe this is why the app for a govt savings scheme in my country won't allow password autofill at all. Imagine expecting a new password every 90 days and not allowing auto fill - that just makes passwords worse.
The long passphrase is more for the key that unlocks your password manager rather than the random passwords you use day to day.
I believe that this is overall a reasonable approach for companies that are bigger than "the CEO knows everyone and trusted executives are also senior IT/Devs/tech experts" and smaller than "we can spin an internal security audit using in-house resources"
Information loss is an inherent property of large organizations.
That's such an interesting axiom, I'm curious if you would want to say more about it? It feels right intuitively - complexity doesn't travel easily across contexts and reaching a common understanding is harder the more people you're talking to.
I imagine this gets amplified in a large org. The docs are lacking, people might not read them anyway, and you get an explosion of people who don't understand very much but still have a job to do.
The underlying purpose of the rules, and the agency to apply their spirit rather than their letter, gets lost early in the chain, and trying to unwind that can be tedious.
I wouldn't be mean about it. I'm imagining adding a line to the email such as:
> (Yes, I know this is annoying, but it's required by our insurance company.)
What is the insurance company going to do, jack up our rates because we accurately stated what their policy was?
I'm not saying you're wrong, I've never worked in a company this large (except for a brief internship), or in IT specifically. But also, like, come on people, grow up.
It would've gone from the insurer to the legal team, to the GRC team, to the enterprise security team, to the IT engineering team, to the IT support team, and then to the user.
Steps #1 to #4 can (and do) introduce their own requirements, or interpret other requirements in novel ways, and you'd be #5 in the chain.
Ask the CIO what actual threat all this is preventing, and you'll get blank stares.
As an engineer what incentive is there to put effort into knowing where each form input goes and how to sanitize it in a way that makes sense? You are getting paid to check the box and move on, and every new hire quickly realizes that. Organizations like these aren't focused on improving security, they are focused on covering their ass after the breach happens.
the CIO is securing his job.
Every CIO I have worked for (where n=3) has gotten where they are because they're a good manager, even though they have near-zero current technical knowledge.
The fetishizing of "business," in part through MBAs, has been detrimental to actually getting things done.
A century ago, if someone asked you what you do and you replied, "I'm a businessman. I have a degree in business," you'd get a response somewhere between "Yeah, but what do you actually do?" and outright laughter.
Finance and business grads have really taken over the economy, not just through technocratic "here's how to do stuff" advice but by personally taking all the reins of power. They're even hard at work taking over medicine and pushing doctors out of the work-social upper-middle-class. They already did it with professors. Lawyers seem safe, so far.
Nope, lawyers are fucked too. It's just not as advanced yet: https://www.abajournal.com/web/article/arizona-approves-alte...
No.
> aggressively
No.
>, and in this case, to the wrong content altogether.
Yes - making it not a Scunthorpe problem.
WAFs are always a bad idea (possible exception: in allow-but-audit mode). If you knew the vulnerabilities you'd protect against them in your application. If you don't know the vulnerabilities all you get is a fuzzy feeling that Someone Else is Taking Care of it, meanwhile the vulnerabilities are still there.
Maybe that's what companies pay for? The feeling?
Correction: it is not your application but someone else's Certified Stuff (TM) that you can't change, but which is still vulnerable.
Who would be that stupid?
I do wish it were possible to write the rules in a more context-sensitive way, maybe possible with some standards around payloads (if the WAF knows that an endpoint is accepting a specific structured format, and how escapes work in that format, it could relax accordingly). But that's probably a pipe dream. Since the backend could be doing anything, paranoid rulesets have to treat even escaped data as a potential issue and it's up to users to poke holes.
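Something like this sketch is what I have in mind; every name and endpoint here is hypothetical, and a real WAF would need this schema knowledge fed to it somehow:

    import json
    import re

    # The "WAF" knows that POST /articles carries JSON whose "body" field is
    # free text, so that one field is exempt from pattern matching.
    SUSPICIOUS = re.compile(r"/etc/(passwd|hosts)|jndi:")
    FREE_TEXT_FIELDS = {("/articles", "body")}

    def should_block(path, raw_body):
        try:
            doc = json.loads(raw_body)
        except ValueError:
            return bool(SUSPICIOUS.search(raw_body))
        return any(
            isinstance(value, str)
            and (path, field) not in FREE_TEXT_FIELDS
            and SUSPICIOUS.search(value)
            for field, value in doc.items()
        )

    assert not should_block("/articles", '{"body": "/etc/hosts is a file"}')
    assert should_block("/articles", '{"title": "${jndi:ldap://x}"}')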
We changed auditors after that.
We disabled this check, the auditor swerved out of his lane, I spent several more hours explaining things he didn't understand, and things were resolved after our CEO had a call with him (you can imagine how that discussion went).
All in all, if the auditor had been more reasonable it wouldn't have been an issue, but this is exactly why I've always been wary of managed firewall rulesets.
I might be out of the loop here, but it seems to me that any WAF that's triggered when the string "/etc/hosts" is literally anywhere in the content of a requested resource, is pretty obviously broken.
A false positive from a conservative evaluation of a query parameter or header value is one thing, conceivably understandable. A false positive due to the content of a blog post is something else altogether.
Rules like this might very well have had an incredibly positive impact on tens of thousands of websites, at the cost of some weird debugging sessions for dozens of programmers (made-up numbers, obviously).
<!DOCTYPE html>
<html lang="en">
<body>
<p>/etc/hosts is a file on Unix hosts</p>
is pretty clearly broken. And you can't meaningfully measure product metrics like impact for fundamentally broken products.
> And you can't meaningfully measure product metrics like impact for fundamentally broken products
disagree
Oh: I fought tooth and nail against turning on a WAF at one of my gigs (there was no strict requirement for it, just cargo cult). Turns out I was right.
I agree. There is a business opportunity here. Right in the middle of your sentences.
Hint: Context-Aware WAF.
Many platforms have emerged in the last decade. Some called it smart WAF, some called it next-gen WAF... all vaporware garbage that consumes tons and tons of system resources and still manages to do a shit job of _actually_ WAF'ing web requests.
To be truly context-aware, you need a priori knowledge of the situation: the user, the page, the interactions, etc.
Definitely, though I have seen other solutions, like breaking up the problematic strings with invisible markup (e.g. "/etc/ho<b></b>sts" or whatever, you get the idea). And honestly that seems like a reasonable, if somewhat annoying, workaround to me that still retains the protections.
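As a sketch of that workaround (the helper and the string list are made up, mirroring the example above):

    # Break up any string the WAF is known to flag with an empty inline
    # element, so it renders normally but never appears verbatim in the body.
    FLAGGED = ["/etc/hosts", "/etc/passwd"]

    def defang(html):
        for needle in FLAGGED:
            mid = len(needle) // 2
            html = html.replace(needle, needle[:mid] + "<b></b>" + needle[mid:])
        return html

    print(defang("<p>/etc/hosts is a file on Unix hosts</p>"))
    # <p>/etc/<b></b>hosts is a file on Unix hosts</p>

(It would still need to skip script and attribute contexts, but you get the idea.)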
They shouldn't be doing that job at all. The content of user data is none of their business.
I favor the latter approach. That group of Cloudflare users will understand the complexity of their use case accepting SQL in payloads and will be well-positioned to modify the default rules. They will know exactly where they want to allow SQL usage.
From Cloudflare’s perspective, it is virtually impossible to reliably cover every conceivable valid use of SQL, and it is likely 99% of websites won’t host SQL content.
WAFs do throw false positives and do require adjustments OOTB for most sites, but you’re missing the forest by focusing on this single case.
And WAF rules can be tuned. There's no reason an apostrophe in a username or similar needs to be blocked; if one were blocked by a rule, you'd adjust the rule.
Let's see what's blocked:
"Division by zero" anywhere in the response body since that's a php error. Good luck talking about math ([0] and [1])
Common substrings in webshells, all matched as strings in response bodies, rather than parsing HTML, so whatever, don't comment about webshells either [2]
Unless the body is compressed, in which case don't apply the above. Security [3].
Also, read this regex and tell me you understand what it's doing. Tell me the author of it understands what it matches: https://github.com/coreruleset/coreruleset/blob/943a6216edea...
What the coreruleset is doing here is trying to parse HTML, SQL, HTTP, and various other languages with regular expressions. This doesn't work. It will never give you the right result.
It's trying to keep up to date with the string representation of java and php errors, without even knowing the version of Java the server is running, and without the Java maintainers, who constantly add new errors, having any say.
The only reason attackers aren't trivially evading the webshell rules here is that so few people use these rules in practice that they're not even worth defeating (and it is quite easy to have your PHP webshell generate unique HTML on each load, which cannot be matched by a regular expression short of /.*/; HTML is not a regular grammar).
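To make that concrete, a toy demonstration; the pattern and pages are invented for illustration, not actual CRS rules:

    import random
    import re
    import string

    rule = re.compile(r"<title>r57 shell</title>")  # fixed-string "signature"

    stock = "<html><title>r57 shell</title>...</html>"
    noise = "".join(random.choices(string.ascii_lowercase, k=8))
    mutated = f"<html><title>r57 shell {noise}</title>...</html>"

    print(bool(rule.search(stock)))    # True: the stock shell is caught
    print(bool(rule.search(mutated)))  # False: one trivial mutation evades it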
I was ready to see something that made WAFs feel like they did _anything_ based on your comment, but all I see is a pile of crap that I would not want anywhere near my site.
Filtering java error strings and php error strings out of my rust app's responses using regexes to parse html is just such a clown-world idea of security. Blocking the loading of web-shells until the attacker changes a single character in the 'title' block of the output html seems so dumb when my real problem is that someone could write an arbitrary executable to my server.
Every WAF ruleset I've read so far has made me sure it's a huge pile of snake-oil, and this one is no different.
[0]: https://github.com/coreruleset/coreruleset/blob/943a6216edea...
[1]: https://github.com/coreruleset/coreruleset/blob/943a6216edea...
[2]: https://github.com/coreruleset/coreruleset/blob/943a6216edea...
[3]: https://github.com/coreruleset/coreruleset/blob/943a6216edea...
These rules do in fact work. Like I've said previously, these rules require tuning for your particular website. If I'm "talking about math" then I would modify or disable that rule as needed.
I think this is the forest you're missing. WAF isn't "install it and walk away". WAF needs to be tested in conjunction with your release, like any other code would.
The WAF can and does protect against attacks your code would never think of. It also /logs requests/ in a way your web server will not, making it invaluable for auditing.
And when running 3rd party software that has a function you cannot control, but need to prevent, WAFs can do that, too. I have a particular query string that must work from an internal but not external network while external/internal users leverage the same URL -- WAF can do that with a custom rule examining the query string and denying access to the outside world.
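A rough Python equivalent of that kind of rule, with a made-up parameter name and network range, just to show the shape of the logic:

    from ipaddress import ip_address, ip_network

    INTERNAL = ip_network("10.0.0.0/8")  # stand-in for the internal network

    def allow(client_ip, query_string):
        # Same URL for everyone; only the sensitive query string is gated.
        if "export=full" in query_string:
            return ip_address(client_ip) in INTERNAL
        return True

    assert allow("10.1.2.3", "export=full")         # internal: permitted
    assert not allow("203.0.113.9", "export=full")  # external: denied
    assert allow("203.0.113.9", "page=2")           # other queries unaffected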
Or if I need to prevent [AI] bot scraping. WAF can do that with a couple of clicks.
WAF also unloads the web server from malicious traffic. Instead of having to size up or out a web server, I can have a WAF appliance prevent that traffic from ever reaching the server.
> Every WAF ruleset I've read so far
You don't appear to have any experience with implementation or operation of a WAF, but you're attempting to be authoritative and dismiss a WAF's utility.
(in the anti-WAF camp but playing a pedant here)
In your Django app, you indeed follow best practices and don't concatenate strings together, so you think this security theater doesn't apply to you. Yet this is precisely how the Django ORM works under the hood, and SQL injections are periodically found there.
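The usual app-level rule still applies, of course; a minimal sketch with a hypothetical Author model (the ORM-internal injections mentioned above were bugs beneath even the "safe" version of this):

    from myapp.models import Author  # hypothetical app and model

    def safe(name):
        # Parameterized: the database driver handles escaping.
        return Author.objects.raw(
            "SELECT * FROM myapp_author WHERE name = %s", [name]
        )

    def unsafe(name):
        # Injectable: user input is interpolated before reaching the driver.
        return Author.objects.raw(
            "SELECT * FROM myapp_author WHERE name = '%s'" % name
        )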
The real solution here is to subscribe to the django-announce list and update Django, or backport the fix manually.
If you know what you're doing, turn these protections off. If you don't, there's one less hole out there.