https://www.cisa.gov/sites/default/files/2024-03/CSRB%20Revi...
What I found most incredible about the story is that it wasn't Microsoft who found the intrusion. It was some sysadmin at State who saw that some mail logs did not look right and investigated.
---
[1] https://techcrunch.com/2025/03/11/doge-axes-cisa-red-team-st...
"Azure's Security Vulnerabilities Are Out of Control" - https://www.lastweekinaws.com/blog/azures_vulnerabilities_ar...
"Microsoft comes under blistering criticism for “grossly irresponsible” security" - https://arstechnica.com/security/2023/08/microsoft-cloud-sec...
(See also: quite a few bits of COVID mitigation)
https://arstechnica.com/information-technology/2026/03/feder...
I have become very cautious of such stories for this very reason. Who gets how much blame has a lot to do with "culture" or momentum. Bashing Microsoft, for example, is always fair game, but on multiple occasions I found the facts to be much more nuanced.
[edit: 'pretty' instead of 'perfectly']
ProPublica has an agenda, and they slant their reporting to push it.
You can like their agenda and support this effort, but it’s not journalism.
I'm not sure if I understand this part. I'm trying to put it into my own words. Is the following correct? The attacker provided an input that was so long that it was rejected by the database. And the program that submitted the SQL query to the database did not have any logic for handling a query failure, which is why there is no trace of the login attempt in the log or elsewhere.
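If that reading is right, the failure mode can be sketched in a few lines. This is a hypothetical illustration, not the actual code from the incident: a logging routine whose exception handler swallows the database error, so an over-long input never reaches the audit table. The table name, length limit, and `record_login_attempt` helper are all made up for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical audit table with a length limit standing in for whatever
# constraint rejected the attacker's over-long input.
db.execute("CREATE TABLE audit_log (name TEXT CHECK (length(name) <= 64))")

def record_login_attempt(name: str) -> None:
    try:
        db.execute("INSERT INTO audit_log (name) VALUES (?)", (name,))
    except sqlite3.IntegrityError:
        # BUG: the query failure is silently swallowed, so the rejected
        # attempt leaves no trace anywhere.
        pass

record_login_attempt("alice")        # recorded normally
record_login_attempt("x" * 10_000)   # rejected by the CHECK, silently dropped

count = db.execute("SELECT count(*) FROM audit_log").fetchone()[0]
# count == 1: the suspicious attempt never made it into the log
```

The fix is equally small: the `except` branch should write a failure record through a separate, more permissive channel (or at minimum re-raise), so that a rejected input is itself an auditable event.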
Reading through the article I can't help but think that many of these authentication/authorization flows are entirely too complex. I understand that they need to be, for some use cases, but those are probably not the majority.
If I remember the issue right, we lost a client secret (it just vanished!) and I went to the audit logs to see who dun it. According to the logs, I had done it. And yet, I also knew that I had not done it.
I eventually reconstructed the bug to an old page load. I had the page loaded when there were just secrets "A" & "B". When I then clicked the delete icon for "B", Azure deleted secrets "B" and "C" … which had been added since the page load. Essentially, the UI said "delete this row" but the API was "set the set of secrets to {A}". The audit log then logged the API "correctly" in the sense of, yes, my credentials did execute that API call, I suppose, but utterly incorrectly in the sense of any reasonable real-world view as to what I had done.
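That "delete this row" vs. "set the set of secrets to {A}" mismatch is a classic lost-update race. Here is a minimal sketch of the mechanism as described above (the variable names and the in-memory "server" are hypothetical, just to show the shape of the bug):

```python
# Server state at the time the page is loaded.
server_secrets = {"A", "B"}

# Page load: the browser captures a snapshot of the current secret list.
client_snapshot = set(server_secrets)

# Meanwhile, someone else adds secret "C" on the server.
server_secrets.add("C")

# User clicks "delete B". The client computes the new list from its STALE
# snapshot, and the API call means "replace the secrets with exactly this set".
server_secrets = client_snapshot - {"B"}

# "C" has been deleted too, even though the user never saw or touched it.
# server_secrets == {"A"}
```

A delete-by-identifier API (or a replace call guarded by an ETag/version check that rejects writes based on a stale snapshot) would avoid this, and would also let the audit log record the operation the human actually intended.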
Thankfully we got it sorted, but it sort of shook my faith in Azure's logs in particular, and a little bit of audit logs in general. You have to make sure you've actually audited what the human did. Or, conversely, if you're trying to reason with audit logs, … you'd best understand how they were generated.
I don't think I would ever accept audit logs in court, if I were on a jury. Audit logs being hot lies is within reasonable doubt.
I would say that the audit log was accurate, though, even though the bad UI design caused unintended consequences.
The human in the loop doesn't really control what gets done; it only expresses intent to the frontend.
I’m at some legacy business that depends on some .NET Framework LOB application, some random SaaS web software, along with the usual office stuff. I need to manage Windows machines, manage identity for everyone including integration with that random SaaS web software, and enforce random policies that Security swears will fail our PCI audit if I don’t, and that's business-ending. Oh yeah, our funding and salaries for the whole department wouldn’t cover one scrum team at a FAANG. What is my solution, go!
For most, they default to the Microsoft solution because it works well enough to collect a meager paycheck and go home.
Try managing a directory service even on Red Hat and see how it goes.
I... get it.
The FAANGs needed to scale to a level where paying per-core licensing fees for an operating system was simply out of the question, not to mention the lack of customisability.
As a consequence, they all adopted Linux as their core server operating system.
Then, as their devs made millions in share options, they all scattered and made thousands of little startups... each one of which cloned the assumption that only Linux was a viable operating system for servers.
The mistake here is the same one that caused "Only MongoDB is Web Scale" and "Microservices are necessary for two devs and a PC as our server".
Just because a trillion dollar corporation decides on a thing, it does not mean it applies universally.
Outside of this bizarre little bubble, Windows is everywhere and Windows Server is still about 50% of the overall server market.
Operating systems and other applications that demand per-core licensing fees exist only because the people who buy them do not use their own money for this, so they do not care how much money they are wasting.
Most companies waste huge amounts of money not only for software, but for many other things, because those who have the power to make purchasing decisions have personal interests that are not aligned with what is really optimum for the company, while those who might have the best interests of the company in mind do not have the knowledge that would allow them to evaluate whether such purchasing decisions are correct.
The survival of Windows Server is not justified by any technical advantages. A few such advantages exist, but they do not compensate for the huge PITA caused by licensing. I worked at a few companies where Windows Server was used, and replacing it with either Linux or FreeBSD was always a great improvement, less by removing the payments for licensing fees than by providing complete freedom to make any changes in the environment without the friction caused by the consequences those changes could have on licensing costs.
Absolutely savage lol
[If you didn't read the thing, it's one curl command.]
It should really horrify everybody that Microsoft is not investing more into Azure, considering they host the world's best-known (and most used?) LLM.
A bug in the software is a bug in the process, and the process is the job of leadership. They've never cared about software quality. They'll put out lots of books about it, lots of talks, lots of claims. But they won't actually put out quality software. It's not in their DNA, never was.
It's not their size nor their age that makes this hard for them. Plenty of larger, older companies put out better products every day. It's just them. Someone in each size class is the best, and someone else is the worst. MS has been the worst the entire time.
Google Cloud is simplistic in comparison. AWS is full of legacy complexity (IAM policies, sigh) but it's fairly self-contained and can be worked around by splitting stuff into accounts.
I have not looked at Oracle cloud yet. Is it any better than MS?
At last glance it's far more like infrastructure leasing, with some Oracle twists, such as hosted Oracle databases, than it is full on cloud services. But this was a few years ago.
The Audit log showed the service identity of Application Insights, not the user that pressed the button! The cloud ops team changed the size back, and then the mysterious anonymous developer... changed it back. We had to have an "all hands" meeting to basically yell at the whole room to cut that out. Nobody fessed up, so we still don't know who it was.
The Azure Support tech argued with me vehemently that this was by design, that Azure purposefully obscures the identity of users in audit logs!!! He mumbled something about GDPR, which is nonsense, because we're on the opposite side of the planet from Europe.
At first I was absolutely flabbergasted that anyone even remotely associated with a security audit log design could be this stupid, but then something clicked for me and it all started making sense:
Entra Id logs are an evolution of Office 365 logs.
Microsoft developed Entra ID (originally Azure Active Directory) initially for Microsoft 365, with the Azure Public Cloud platform a mere afterthought. They have a legitimate need to protect customer PII, hence the logs don't contain their customers' private information when this isn't strictly necessary. I.e.: Microsoft's subcontractors and outsourced support staff don't need and shouldn't see some of this information!
The problem was that they re-used the same code, the same architecture decisions, the same security tradeoffs for what are essentially 100% private systems. We need to see who on our payroll is monkeying around with our servers! There is NO expectation of privacy for staff! GDPR does NOT apply to non-European government departments! Etc...
To this day I still see gaps in their logging where some Microsoft dev just "oops" forgot to log the identity of the account triggering the action. The most frustrating one for me is that Deployments don't log the identity of the user. It's one of only three administrative APIs that they have!
[1] As an aside: The plan had a 3-year Reservation on it, which meant that we were now paying for the original plan and something twice the size and non-Reserved! This was something like 5x the original cost, with no warning and no obvious way to see from the Portal UI that you're changing away from a Reserved size.
There is just... no GDPR case for this. This is literally a use allowed by the GDPR; the only thing the GDPR requires is making sure those logs can only be accessed by the people designated within the organisation to parse them.
It was also nonsense because the GDPR is crystal clear about where PII may be used. Audit logs are one of those exceptions where the legitimate goal of identifying users permits storing usernames and associated attributes (certainly in the case of upgrading a paid plan).
This wasn't about the GDPR; you were being told to sod off.
Vast misunderstanding of GDPR by the clowns implementing it is also possible; or just "can't be arsed so hide it all"
Most businesses using a public cloud need to log the activities of their staff accessing their own systems, which has an entirely different set of policies.
A similar example is Azure Application Insights. Microsoft uses it internally, so they keep removing features that log PII to be "GDPR compliant". Again, they're logging the activities of the general public across the entire world population, so GDPR legitimately applies. To them! Not us. Most of our scenarios are internal staff or partner organisations accessing private systems. Not only do we not do business with anyone from Europe, our systems are either privately networked or geo region locked. Europeans can't access anything in our local state government's internal staff portal even if they wanted to! Unless they hack us... but then we would very much like to log that.
But Microsoft can totally handle applying the GDPR correctly. They have a lot of countries as customers which use Azure in some capacity and where the need for comprehensive audit logging exists. What you were seeing is a bug, or rather a design flaw, marked as WONTFIX. Some customer rep was giving you the two-fingered salute by starting with 'but GDPR…'.