The reaction people are having to AIs that can find bugs in code is fascinating. Finally, we have the capacity to fix the crisis in computer security we’ve had for decades, and everyone is treating it like it’s a tragedy. A central mistake here is that people regard this as “no one will ever be safe again” rather than “there will be a brief period when we get rid of most of the problems.”
People seem to be acting as though there will always be more security holes for these systems to find, forever, and so there can never be safety, but that’s not the way this works at all.
There are not an infinite number of computer security bugs in existence. It only feels that way because we have never had the ability to carefully audit absolutely everything. There are also techniques we could never afford to use before, like formal verification, that can vanquish whole classes of problems forever, but which only become practical with AI because they are simply too labor-intensive for human beings to apply at scale.
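To make that concrete, here is a minimal sketch (all names hypothetical, not from any real codebase) of the kind of finite, mechanical bug auditing eliminates for good: a bounds-checked buffer read, plus a crude stand-in for verification that exhaustively checks every offset/length pair over a small input space — the sort of tedious, total checking that is trivial to automate and miserable to do by hand.

```python
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    """Return `length` bytes of `buf` starting at `offset`, or raise.

    The classic vulnerability here is an incomplete bounds check
    (e.g. testing only `offset <= len(buf)`), which lets a crafted
    `length` read past the end. The full check below closes that hole.
    """
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise ValueError("out-of-bounds read")
    return buf[offset:offset + length]


def check_all(buf: bytes, bound: int) -> bool:
    """Exhaustively check every (offset, length) pair in [-bound, bound).

    A toy stand-in for formal verification: for a small enough input
    space we can simply prove, by enumeration, that no accepted read
    ever escapes the buffer's bounds.
    """
    for off in range(-bound, bound):
        for ln in range(-bound, bound):
            try:
                out = read_field(buf, off, ln)
            except ValueError:
                continue  # rejected inputs are safe by construction
            # Every accepted read must stay fully inside the buffer.
            assert 0 <= off and 0 <= ln and off + ln <= len(buf)
            assert len(out) == ln
    return True
```

Real formal verification proves the property for *all* inputs symbolically rather than by enumeration, but the point stands: the property is finite and checkable, and once it's established, that bug is gone, not merely hidden.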
This is not the beginning of some era of permanent insecurity where no one can ever feel safe again. It’s the end of a long period of insecurity where no one had any safety.
The problem is that certain companies are hyping this as "these tools are too dangerous to let anyone have!" Which of course means that people won't be able to audit their own code and fix their bugs before they release software. Hopefully that, too, is temporary. It would indeed be tragic if it weren't.