We're SOC2 + HIPAA compliant, which either means convincing the auditor that our in-house security rules cover 100% of the cases they care about... or we buy an off-the-shelf WAF that has already completed the compliance process, and call it a day. The CTO is going to pick the second option every time.
If your startup is on the verge of landing a six-figure MRR deal, but the customer's security team mandates you put in a WAF to "protect their data"... guess you're putting in a WAF, like it or not.
Install the WAF crap, and then feed every request through rot13(). Everyone is happy!
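Tongue in cheek, but the joke works because a naive pattern-matching WAF only sees the literal bytes on the wire. A minimal sketch of the idea (the blocklist and the rot13 trick here are illustrative, not any real WAF's rules):

```python
import codecs

# Hypothetical WAF rule: block any request body mentioning sensitive paths.
BLOCKED_PATTERNS = ["/etc/hosts", "/etc/passwd"]

def waf_allows(body: str) -> bool:
    return not any(pattern in body for pattern in BLOCKED_PATTERNS)

payload = "please read /etc/hosts"
print(waf_allows(payload))  # False: blocked in the clear

# rot13 the payload and the substring match never fires.
encoded = codecs.encode(payload, "rot_13")
print(waf_allows(encoded))  # True: sails right through

# The application decodes it back on the other side.
print(codecs.decode(encoded, "rot_13"))  # "please read /etc/hosts"
```

Which is exactly why signature-based filtering on request bodies is security theater against anyone who controls both ends of the connection.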
I understand the need for security tooling, but I don’t think companies often consider the huge performance impact these tools add.
This is such a bizarre hybrid policy, especially since forced password rotations at fixed intervals are already not recommended for end-user passwords as a security practice.
One argument I hear is that "people will just use the same password everywhere." To which I'll answer, "but we have MFA." "Yeah, but the insurance guys."
I keep hearing that often on HN, but I've personally never seen such demands from insurers. I would greatly appreciate it if someone could share such an insurance policy. Insurance policies are not trade secrets and it's fine for them to be public. I can google plenty of commercial car insurance policies, for example.
https://retail.direct.zurich.ch/resources/definition/product...
Questionnaire Zurich Cyber Insurance
Question 4.2: "Do you have a technically enforced password policy that ensures use of strong passwords and that passwords are changed at least quarterly?"
Since this is an insurance questionnaire, presumably your answers to that question affect the rates you get charged?
(Found that with the help of o4-mini https://chatgpt.com/share/680bc054-77d8-8006-88a1-a6928ab99a...)
I just can't imagine any outcome other than it being translated to a flat "no" and increasing your premium over what it would otherwise have been.
Totally bonkers stuff.
Eliminating everything but a business's industry specific apps, MS Office, and some well-known productivity tools slashes support calls (no customization!) and frustrates cyberattacks to some degree when you can't deploy custom executables.
In around 2011, the Defence Signals Directorate (now the Australian Signals Directorate) went through and did an analysis of all of the intrusions they had assisted with over the previous few years. It turned out that app whitelisting, patching OS vulns, patching client applications (Office, Adobe Reader, browsers), and some basic permission management would have prevented something like 90% of them.
The "Top 4" was later expanded to the Essential Eight which includes additional elements such as backups, MFA, disabling Office macros and using hardened application configs.
https://www.cyber.gov.au/resources-business-and-government/e...
At first glance that might seem a poor move for corporate information security. But crucially, the security of cloud webapps is not the windows sysadmins' problem - buck successfully passed.
It's about the transition from artisanal hand-configuration to mass-produced fleet standards, and diverting exceptional behavior and customizations somewhere else.
Alice is on Discord because half of the products the company uses now give more or less direct access to their devs through Discord.
You install software via ticket requests to IT, and devs might have admin rights, but not root, and only temporarily.
This is nothing new though; back in the timesharing days, when we would connect to the development server, we only got as many rights as the ongoing development workflows required.
Hence why PCs felt so liberating.
Just wait until more countries adopt cybersecurity laws making companies liable when software doesn't behave, like in any other engineering industry.
A breach can result in losses, in credibility, canceled orders, or lawsuits, big enough to close up shop, or to force firing those who thought security rules were dumb.
Also, in many countries anyone with a security officer title has legal responsibilities when something goes wrong, so when they sign off on software deliverables that later fail, it's their signature on the approval.
The worst part about cyber insurance, though, is that as soon as you declare an incident, your computers and cloud accounts now belong to the insurance company until they have their chosen people rummage through everything. Your restoration process is now going to run on their schedule. In other words, the reason the recovery from a crypto-locker attack takes three weeks is because of cyber insurance. And to be fair, they should only have to pay out once for a single incident, so their designated experts get to be careful and meticulous.
Fear of a prospective expectation, compliance demand, or requirement, even when that requirement does not actually exist, is remarkably prevalent among software developers.
My mental model at this point says that if there's a cost to some important improvement, the politics and incentives today are such that a typical executive will only do the bare minimum required by law or some equivalent force, and not a dollar more.
If anything, I think this attitude is part of the problem. Management, IT security, insurers, governing bodies, they all just impose rules with (sometimes, too often) zero regard for consequences to anyone else. If no pushback mechanism exists against insurer requirements, something is broken.
If the insurer requested something unreasonable, you'd go to a different insurer. It's a competitive market after all. But most of the complaints about incompetent security practices boil down to minor nuisances in the grand scheme of things. Forced password changes once every 90 days is dumb and slightly annoying but doesn't significantly impact business operations. Having to run some "enterprise security tool" and go through every false positive result (of which there will be many) and provide an explanation as to why it's a false positive is incredibly annoying and doesn't help your security, but it's also something you could have a $50k/year security intern do. Turning on a WAF that happens to reject the 0.0001% of Substack articles which talk about /etc/hosts isn't going to materially change Substack's revenue this year.
It negatively impacts security, because users then pick simpler passwords that are easier to rotate through some simple transformation. Which is why it's considered not just useless, but an anti-pattern.
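A hypothetical sketch of why forced rotation is so weak: if an attacker learns one expired password, the "rotations" users actually perform are trivially enumerable (the transformation rules below are illustrative assumptions, not data from any breach):

```python
import re

def likely_next_passwords(old: str) -> list[str]:
    """Guess the passwords a user is likely to rotate to next.

    Models the two most common "rotations": bump a trailing
    number by one, or tack an extra '!' onto the end.
    """
    guesses = []
    match = re.search(r"(\d+)$", old)
    if match:
        bumped = str(int(match.group(1)) + 1)
        guesses.append(old[: match.start(1)] + bumped)
    guesses.append(old + "!")
    return guesses

print(likely_next_passwords("Summer2023"))  # ['Summer2024', 'Summer2023!']
```

A 90-day policy doesn't buy you 90 days of exposure; it buys you however long it takes to try a handful of guesses like these.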
I still have a nervous tic from having a screen lock timeout "smaller than or equal to 30 seconds".
I would argue that password policies are very context dependent. As much as I detest changing my password every 90 days, I've worked in places where the culture encouraged password sharing. That sharing creates a whole slew of problems. On top of that, removing the requirement to change passwords every 90 days would encourage very few people to select secure passwords, mostly because they prefer convenience and do not understand the risks.
If you are dealing with an externally facing service where people are willing to choose secure passwords and unwilling to share them, I would agree that regularly changing passwords creates more problems than it solves.
When you don’t require them to change it, you can just assign them a random 16 character string and tell them it’s their job to memorize it.
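A minimal sketch of what "just assign them a random string" looks like (the helper name and alphabet are illustrative assumptions): 16 characters drawn from letters and digits is roughly 95 bits of entropy, far beyond anything a user invents fresh every 90 days.

```python
import secrets
import string

# 62-symbol alphabet: a-z, A-Z, 0-9.
ALPHABET = string.ascii_letters + string.digits

def issue_password(length: int = 16) -> str:
    """Generate a fixed, IT-issued random password (no rotation)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(issue_password())  # e.g. 'q3VZk9tLx0bRmW2a'
```

`secrets` (rather than `random`) matters here: it draws from the OS CSPRNG, which is the appropriate source for credentials.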
Always made me judge my company's security team for enabling this stupidity. Thankfully they got rid of it gradually, nearly 2 years ago now (90 days to 365 days to never). New passwords were just the old ones shifted one key left/right/up/down on the keyboard.
Now I'm thinking maybe this is why the app for a govt savings scheme in my country won't allow password autofill at all. Imagine expecting a new password every 90 days and not allowing auto fill - that just makes passwords worse.
The long passphrase is more for the key that unlocks your password manager rather than the random passwords you use day to day.
I believe that this is overall a reasonable approach for companies that are bigger than "the CEO knows everyone and trusted executives are also senior IT/dev/tech experts" and smaller than "we can spin up an internal security audit using in-house resources".
Information loss is an inherent property of large organizations.
That's such an interesting axiom, I'm curious if you would want to say more about it? It feels right intuitively - complexity doesn't travel easily across contexts and reaching a common understanding is harder the more people you're talking to.
I imagine this gets amplified in a large org. The docs are lacking, people might not read them anyway, and you get an explosion of people who don't understand very much but still have a job to do.
The underlying purpose of the rules, and the agency to apply the spirit rather than the letter, gets lost early in the chain, and trying to unwind it can be tedious.
I wouldn't be mean about it. I'm imagining adding a line to the email such as:
> (Yes, I know this is annoying, but it's required by our insurance company.)
What is the insurance company going to do, jack up our rates because we accurately stated what their policy was?
I'm not saying you're wrong, I've never worked in a company this large (except for a brief internship), or in IT specifically. But also, like, come on people, grow up.
It would've gone from the insurer to the legal team, to the GRC team, to the enterprise security team, to the IT engineering team, to the IT support team, and then to the user.
Steps #1 to #4 can (and do) introduce their own requirements, or interpret other requirements in novel ways, and you'd be #5 in the chain.