That said, this should be used sparingly, as it embeds a behavior deeply. If that behavior later stops making sense, it can be extremely costly to change.
Too often liabilities exceed assets, or the liabilities are externalised.
Liability doesn't work as an incentive for many risks. For uncommon but extreme risks, it can be better to roll the dice on company failure than to pay regular small amounts for mitigation.
Ignoring liabilities is especially attractive when a company has poor profitability anyway.
And then you see major companies sidestep the costs of their liabilities (plenty of examples after security failures, but also companies like Johnson & Johnson).
At some point I was asked to look over the compliance documentation, and it was frankly hilarious. I had to give my engineering perspective on which of the requirements we were and weren't meeting.
But the requirements were things like "you must have logs", "you must authenticate users", "you must log failed authentication attempts".
Did we fulfill these requirements? It's a meaningless question. Unless you were literally running an open-door telnet service or something, you could interpret the questions to support any answer you wanted to give.
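To illustrate how loosely a checklist item like "you must log failed authentication attempts" can be "met", here is a hypothetical bare-minimum sketch (the `check` credential callback is an assumption, not anything from the original system):

```python
import logging

# Assumed minimal setup: the requirement says "log failed authentication
# attempts" -- it says nothing about where the log goes, what it contains,
# or whether anyone ever reads it.
logging.basicConfig(filename="auth.log", level=logging.INFO)

def authenticate(user: str, password: str, check) -> bool:
    """check(user, password) is a hypothetical credential-verification callback."""
    if check(user, password):
        return True
    # This single line arguably satisfies the checklist item verbatim.
    logging.warning("failed authentication attempt for user=%s", user)
    return False
```

A reviewer can truthfully say "yes, we log failed authentication attempts", even though this tells you almost nothing about the system's actual security.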
So I just had to ask, "do you want me to say yes?" They did, so I said yes. Nothing productive was ever achieved during that engagement.
Companies do want to be secure. They try, and they often fail because it's hard.
They hire auditors to find problems and to shift blame. But since they only have 30 days to fix the problems that are found, it's going to look a lot like they only care about shifting the blame. Because at that point, they only care about passing that audit.
Right after that, though, they start caring about security again.
How do I know? 19 years of experience going through those audits on the company side. For 11 months of the year, it was clear the boss cared about security. For that one month during the 'free retest' period, they only cared about passing that audit.