(www.strix.ai)
It's usually designers, people who can raise money, and generalists who can stitch together APIs. It's not generally platform-, DB-, or security-minded people. The proliferation of things like Vercel and Supabase has exacerbated this.
So you get people deploying API keys client-side and databases without RLS, or shipping service keys client-side when they should be anon keys. I mean, really basic stuff.
I genuinely think the problem is that frameworks don't do this for you. Why should you need a DBA and a platform architect to make a multi-tenant CRUD app? Pretty much every one does the same thing.
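The multi-tenant point above fits in a few lines. This is a minimal, hypothetical sketch (table, column, and tenant names invented for illustration) of the tenant scoping that RLS, or a sane framework default, would give you for free: every query is filtered by the caller's tenant, unconditionally.

```python
import sqlite3

# Hypothetical schema: two tenants sharing one table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER, tenant_id TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?, ?)",
    [(1, "acme", "acme secret"), (2, "globex", "globex secret")],
)

def list_notes(tenant_id: str):
    # The tenant filter is applied unconditionally -- the caller
    # cannot opt out, so one tenant can never read another's rows.
    rows = conn.execute(
        "SELECT id, body FROM notes WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(list_notes("acme"))  # [(1, 'acme secret')]
```

With RLS the database enforces this filter itself, so even a leaked anon key can't cross the tenant boundary; the sketch above is what an app has to reimplement by hand when it doesn't.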
Claude Code will do this, and actively encourage bypassing any verification before pushing to prod. I saw that first-hand with its attempted handling of a major CIAM provider, and then Vercel using whatever OAuth provider in the ol' transitive breach.
That is common knowledge now, right? Or am I just smoking yellow tops?
In my personal assessment some individuals within leadership at this startup were highly risk-tolerant. I speculate that had those individuals been in leadership at other companies not subject to HIPAA, security practices would have been as lax and irresponsible as what's being described as the norm in this thread.
However, because of HIPAA, security practices at this company were fair-to-middling. There were certainly weak areas and mindless box-checking a la SOC 2, but it wasn't a complete shitshow. Those of us in the engineering department who cared were able to raise concerns and not have them dismissed, and were generally allowed to do things the right way.
My takeaway: when there are actual severe penalties for privacy breaches, startups may not be so cavalier with your data.
I don't have any concrete recommendations other than that one really good senior+ engineer is more important than a legion of juniors early on. Basic security doesn't require an extra hire; it requires somebody experienced enough to build your product right.
I'll bet at some point someone contacted this company and said "hey, I'm being shown the wrong course" or "I can't access the material I just uploaded."
I've never seen anyone who got the basics right compromised because of some esoteric security issue. I'm sure it happens and probably will happen more now that it can be automated but it's usually a case of a system being left wide open.
As a solo entrepreneur you really have to prioritize your time, but spending an extra day or two thinking through everything with an LLM (something like Gemini Thinking or Pro), with an eye on security, before you start taking customer data is probably a really good use of your time, and you'll learn a thing or three. Just keep asking why and think critically.
Tried a bunch of open-source pentesters, including Strix (though we never managed to get Strix to actually complete). A project called Shannon was the only one we managed to get working reliably, and it definitely smoked the output of one of the $10K pentests we did (we had just discovered Shannon after we had gotten the pentest firm's report, so it gave us a good baseline comparison). Caveat: this was white box and our pentest firm did grey box, but nevertheless I was still very unimpressed by what I got from the pentest firm. $50 vs. $10K is not even a comparison, with far, far better results, and it sent our CTO into near heart-attack mode.
I think the days of pentesting firms are over, especially with mythos/5.5-cyber-like capability coming into play. Very exciting times ahead!
Let me guess, though: they are SOC 2 and ISO compliant, right?
I have come to believe that most security audits, even ones conducted through widely-reputed groups or under strict standards, are much worse than useless.
Audits are a thing that can theoretically be done well/in a value-adding way, but rarely are, for the same reasons that most private-sector security teams I’ve worked with are effective only at generating internal badwill, and ineffective at increasing security above a very low baseline.
For example, they won't create for me an MS Entra ID App Registration for our internal project Because Security Reasons (they literally won't tell me why). So instead, I use Integrated Windows Authentication, which is about as secure as a hotel bar patron charging to "his" room.
They are insisting everyone start RDPing into a VM in Azure to do development work. Won't be able to get to the new source control system without it. Old system is losing its license, etc, etc. Oh, but the new system is not approved for storing CUI. So... what the actual fuck are our AFSIM developers supposed to do?
These VMs are 1/4 the hardware specs of my laptop in almost every dimension, yet still somehow cost 50% more to rent per year than the entire purchase price of my laptop. Plus they are timesharing us on them, 4 developers per VM. It's not like we live in majorly different timezones. We're either all going to be on from 9am - 5pm EST or we're not.
Within these VMs, I have absolutely zero ability to install any software or modify any settings. Even the god damn clock is set to GMT+0 and I can't change it to local time. Sure would be nice if the most visible clock in my visual field accurately portrayed the current time when I have the RDP session running full screen, which is basically the only way to run it without wanting to hammer drill my brains out.
I have heard rumors that a lot of the other developers have started working from their personal devices, because otherwise they are at a complete work stoppage on their work computers due to the cockamamie IT setup. So congratulations, IT Security Team. Good job.
I still want to know why--when we're wanting to run services like Document Intelligence and Azure OpenAI in Azure GCC High, a FedRAMP-High approved environment with these services claiming DoD Impact Level 5 compliance--our IT Security department thinks that can't be used for CUI. They say we need to spend 2 years and $2 million doing some kind of review of Azure itself before it can be approved for CUI. Uhm, no? If it needs that, why would we spend that money and time? Why wouldn't Microsoft be the one to do that?
The vulnerability itself appears to be something anyone with mitmproxy would have spotted within minutes of looking at the platform; apparently, rotating object IDs worked everywhere in the app, and there was no meaningful authz.
It's interesting if AI systems can "spot" these, in the sense of autonomously exercising the application and "understanding" obvious failed authz check patterns. But it's a "hm, ok, sure" kind of interesting.
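For what "no meaningful authz" looks like concretely, here is a minimal, hypothetical sketch (store, IDs, and names invented for illustration) of the missing ownership check behind a rotating-object-ID bug: the vulnerable handler looks the object up by ID and stops there.

```python
# Hypothetical in-memory store standing in for the app's database.
DOCUMENTS = {
    101: {"owner": "alice", "body": "alice's transcript"},
    102: {"owner": "bob", "body": "bob's transcript"},
}

def get_document_vulnerable(doc_id: int, requester: str):
    # IDOR: any authenticated user can fetch any ID they can guess.
    # "Rotating object IDs worked everywhere" means the app did only this.
    return DOCUMENTS.get(doc_id)

def get_document(doc_id: int, requester: str):
    doc = DOCUMENTS.get(doc_id)
    # The authz check: does this object actually belong to the requester?
    if doc is None or doc["owner"] != requester:
        return None  # serve a 404, not a 403, to avoid confirming the ID exists
    return doc

print(get_document(102, "alice"))  # None -- bob's document is not served
```

Anyone replaying requests through mitmproxy and incrementing the ID would see the first behavior immediately, which is why this class of bug is so cheap to spot.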
https://www.wiz.io/blog/azure-active-directory-bing-misconfi...
1. I didn't see mention of a bug bounty program giving limited authorization. How do independent researchers do this with legal safety? Especially when DoD is involved?
2. If a researcher discovered a vulnerability at a DoD contractor, and the contractor didn't seem to be resolving the problem, is there a DoD contact point that would be effective and safe for the researcher to report it?
DoD does appear to offer a “Defense Industrial Base - Vulnerability Disclosure Program” for all public-facing DoD/DoW systems.[1] However, this might not include contractor-controlled assets or services. I cannot view the HackerOne page it redirects to for more details (login is required).
[1]: https://www.dc3.mil/Missions/Vulnerability-Disclosure/DIB-Vu...
In my experience it’s usually foreign nationals from third-world countries doing drive-by beg-bounty testing. Presumably they don’t much consider legality.
Or the operation is not even illegal where they come from?
Well that’s pretty damning.
If your name is associated with a startup in a visible leadership position, you will get mass-spammed by people claiming to have discovered critical vulnerabilities in your system. When you engage with them, the conversation will turn into requests to hire them for their services.
So the CEO handled it poorly, but it's also not a great choice to withhold the details of the vulnerability in initial contact. If the goal was to get something fixed it should have been included in an easy-to-forward e-mail that could have been sent to someone who could act upon it.
Anyone who works with security or bug bounties can tell you that the volume of bad reports was a problem before LLMs. Now that everyone thinks they're going to use LLMs to get gigs as pentesters the volume of reports is completely out of control.
Their response isn't damning to me. It sounds like they just assume they're one of these spammers.
I tried engaging and replying to them, and it inevitably turns into: "Yeah, we don't actually have the vulnerability, but you are totally vulnerable, just let us do a security audit for you".
I have a pre-written reply for these kinds of messages now.
I get tons of these messages too and the ones that do include details are the kind of junk you get from free "website vulnerability scanners" that are a bunch of garbage that means nothing -- "missing headers" for things I didn't set on purpose, "information disclosure vulnerabilities" for things that are intentionally there, etc... You can put google.com into these things and get dozens of results.
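As a rough illustration of why those findings are noise: a "scanner" of this sort is little more than a canned checklist diffed against the response headers, with no context about whether a given header even applies to the site. A toy sketch (the header list and function names here are assumptions, not any real tool):

```python
# Headers such scanners commonly flag as "missing" regardless of context.
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
]

def junk_report(headers: dict) -> list:
    # Every absent header becomes a "finding", whether or not it matters.
    return [f"MISSING HEADER: {h}" for h in EXPECTED if h not in headers]

# Even a deliberately minimal static site "fails" all four checks:
print(junk_report({"Content-Type": "text/html"}))
```

Which is why feeding google.com into these things produces dozens of "results": absence of a header is not, by itself, a vulnerability.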
No flying cars? Okay. Nobody traveled much beyond the orbit of the Moon? Dang. But email? We didn't even get reliable privacy separate from identity?
Oh, don't think that outer space will let you escape the misery of email:
> "I have two Microsoft Outlooks and neither one is working": Artemis II astronauts
When these "good Samaritans" do not go to the vendor, they go to the client (i.e., they don't contact the DIB company; they contact the Gov agency).
I have seen government contractors get pilloried and lose their livelihoods when this happened. And, yes, there is always a "quick fix offer" from the "good Samaritan" to the vendor, and promised reassurance to the Gov agency, but only if this misguided vendor goes with their solution.
It is also not unusual to find out later on that the identification, or even the resource reported on, was wrong - but by this time the Gov agency has already punished the contractor and the reporting "good Samaritan" is laughing (sometimes all the way to the bank).
They can get away with unethical vulnerability disclosure because think of the children, the threat to the nation, grandma off the cliff, and <insert your favorite cliche justification of malfeasance>.
Yes, sore subject.
It's the same thing with selling general offensive security tools. You have to proactively make it clear that it's for testing and not criminal use. Otherwise, cops are going to assume you're complicit and make things shitty.
The system is already pretty bad because vendors underinvest in security, and then to fix it, researchers have to volunteer their time to investigate with no guarantee of payment. If the vendor could force researchers to hand over findings for free, nobody would want to do security research except hobbyists having fun. They're basically signing up for hours of tedious forced labor to explain vulnerabilities to the vendor.
I wish there was legislation that allowed the government to fine vendors for security vulnerabilities like this where the amount scales based on how much user data they leaked. And it could function like other whistleblower systems where a researcher who spots a leak can report it to the government and collect 50%. That way, if the vendor says, "We're not paying you," the researcher can turn around and collect the money from fines.
Or any other dataset with a hyper targeted demographic.
How do people find these vulnerabilities within the immense scope of the whole internet? Are they going around with some kind of generic API scanner that discovers APIs?
The CEO seems more interested in insulting people than securing his company’s product.
Yes. I know Andreessen-Horowitz and I don't know a16z. Reading the title, I thought it would be about the cryptographic serialisation specification. Turns out I was mixing it up with ASN.1.
> Their website is literally a16z.com
I know now. Before this, if pressed, I would have guessed that they probably have a website indeed. If you had twisted my arm, my guess would have been andersenhorovitz.com (yup, with the typos; I learned the correct spelling today from your comment).
> exceedingly relevant for the HN audience
We contain multitudes.
So the world needs to adapt to your knowledge instead of you learning to adapt to an often-used, well-known moniker?