1) One can no longer trust things out on the web. 2) One no longer needs things out on the web.
For 1), I hope the defense mechanism kicks in in time to bake security into our computing culture and have it pervade the whole stack.
Being a responsible programmer/sysadmin has always been read-heavy, as long as I've been alive. Write-only code is antithetical to running a trustworthy system.
The Web is a rather different beast, but the question is not "can you trust the Internet", but "can you trust a random website", and now even "can you trust a previously trustworthy website".
You of course should not trust any pictures or videos as critical evidence; they should be corroborated by other means. But this has been true for several years now.
I assume you mean software, because we stopped trusting other things on the web decades ago.
As for software, everybody who was interested knew about the inherent insecurity of the modern software supply chain, but the proposed solutions were too expensive. Organizations will need to lose an order of magnitude more money before they start switching from today's security theater to a model with security built in.
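One of those "too expensive" built-in measures is simply refusing to install anything whose digest wasn't pinned at review time. A minimal sketch of that idea in Python (the filename and tarball contents here are hypothetical, just to make the example self-contained):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 digest against a pinned value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Simulate a fetched dependency, then check it against the digest
# that was pinned when the release was reviewed.
payload = b"pretend this is a release tarball"
with open("dep-1.2.3.tar.gz", "wb") as f:
    f.write(payload)

pinned = hashlib.sha256(payload).hexdigest()
print(verify_artifact("dep-1.2.3.tar.gz", pinned))    # genuine artifact
print(verify_artifact("dep-1.2.3.tar.gz", "0" * 64))  # tampered/unknown artifact
```

Package managers already support this model (e.g. pip's hash-checking mode); the cost is that every dependency upgrade requires a human to re-pin, which is exactly the friction most organizations decline to pay for.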
Even though we were aware of the insecurity of the supply chain: 1) In practice we tend to ignore it except for mission-critical cases. We still do. 2) Autonomous vulnerability discovery and exploitation at scale was difficult and reserved for high-value targets.
What you said will be accelerated by 2) now.
Not in this world. It would create friction in the money printing machines.
I personally would still recommend software engineering. Security in the vast majority of places is still checkbox- and cost-driven. Outrage happens around incidents, but rarely are people willing to invest meaningfully in their people. Security SaaS, on the other hand, is doing great, so anything driving revenue there is good.
If AI is capable of performing these attacks, what would stop AI from replacing the security engineers?
Because the threat model is one-sided: if an AI attack fails, the controller simply moves on to the next target. If an AI defense fails, the victim is fucked.
Therefore, there is still value in being the human in Cyber Security (however you are supposed to capitalise that!)
There are still protections and mitigations that targets can do, but those things require humans. The things that attackers can do require no humans in the loop.
Why? Your logic applies equally well to humans. If the AI attacker fails, they move on to the next target; if the human defence fails, the victim is fucked.
> There are still protections and mitigations that targets can do, but those things require humans.
Which things would you point to here?
I didn't claim that the human defence is the only layer. Your analogy is only valid if my claim is that it's AI attackers vs Human defenders. It's not. It's AI attackers vs AI + Human defenders.
> Which things would you point to here?
If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over; I am not in the mood to enumerate the value that a human can add to AI defence.
I honestly find that a bizarre response in the middle of a discussion but you do you.
Maybe someone else could humour me since you're not in the mood to expand on the point that you made? The topic of the thread was that the ability of the AI tooling is outpacing what individuals can handle. Why would a human then be in a position to defend better than an AI when an AI is in a better position to attack than a human?
> Why would a human then be in a position to defend better than an AI when an AI is in a better position to attack than a human?
I did not make the claim that humans are in a better position to defend.
This was always the case? Security is asymmetric, and the attacker only needs to succeed once.
Compare how fast real attackers could iterate vs the defenders.
It’s the time between then and now that we’re talking about.
Geopolitics is the cause of the recent uptick in activity. Many of these groups are state-sponsored or just fronts for nation-states themselves. genAI just makes it easier for people further down the chain to go after low-hanging fruit.
The most significant impact genAI is having on infosec is creating work for the people in infosec, through vibe coding and turning untested AI systems loose on internal networks. genAI just lets developers and admins shoot themselves in the foot faster. genAI is an artificial intern.
(I do realize the irony of writing this on HN, but I digress)
Yes, but you can't be a CISSP or SOC monkey - that has no future.
You need to be an actual Software Engineer who understands development fundamentals, OS internals, web dev fundamentals, algorithms, etc., as well as offensive and defensive concepts.
Too many "cybersecurity" graduates in North America aren't even qualified to do L1 IT helpdesk, which is a shame because the IT-to-security talent pipeline is critical (along with the SRE, SWE, and ML-to-security pipelines).
Cybersecurity is basically a holistic architectural review of software that takes business, engineering, and operational context into account to make a qualified judgment about risk.
> hate checking boxes, feels like it creates some pointless work sometimes
Everyone does. It doesn't actually reduce tangible risk, but it helps you understand the operational and liability aspects of cybersecurity, which are critical as well.
> compliance alone makes me never want to do cybersecurity
Compliance =/= Cybersecurity. If you work in an organization where security actually means compliance, then leave.
---
Honestly, it's region- and industry-dependent. If you are on the East Coast, transition into a JPMC or GS tier bank (yes, banks are bleeding-edge security shops).
If you are on the West Coast, it shouldn't be difficult for an SRE/DevOps/Cloud type to become a SWE or Solutions Engineer at a cybersecurity company.
If you are in Europe, get an H-1B and leave. I literally helped sponsor 2 O-1s today for European cybersecurity founders who wanted to leave because of subpar terms and bureaucracy.
This was exactly the reason why GPT-2 was initially withheld from general release in 2019.
Check out section 4 - https://cdn.openai.com/GPT_2_August_Report.pdf
I recall there being malvertising campaign problems ~12-15 years ago, and then they seemed to get on top of it.