I think that there is now a phase where the bugs that are findable by AI will be reported en masse, and there will be a period of patching them and working through the queue. After this we will end up with better software overall, which is what Linus predicted a couple months ago.
That said there still needs to be a penalty for crap reports, because those are still received, making us lose time on what is functionally just spam.
Infosec professional for almost 30 years here. I can confirm that the latest iterations of AI models are finding high quality bugs and vulnerabilities in the code we work with. If Daniel has access to Mythos, I suspect his experience would be even more shocking.
The problem I have is that the AI tools can find bugs faster than they can be patched, which is eventually going to prompt companies to use AI to patch bugs found by AI. Before long, no living being will be able to make heads or tails out of the code we run. Just my 2¢.
AI tools can find bugs faster than they can be patched
Not a security expert but wasn’t that the case already? It feels like before AI there were already a lot more bugs, security related or not, on backlogs. That’s precisely why there are metrics like severity.
Does that mean the bug bounty program will come back?
If they are getting valid findings with high quality reports from AI tools already, why would they do that?