
A recent study found that widely available AI agents achieved an 87% success rate at exploiting zero-day vulnerabilities. Researchers from the University of Illinois Urbana-Champaign set OpenAI's GPT-4 loose on a database of zero-day vulnerabilities, flaws with no existing patches or bug fixes. While the majority of open-source vulnerability scanners could not even detect the flaws, the advanced model exploited them autonomously, armed only with a basic description of their characteristics: the Common Vulnerabilities and Exposures (CVE) description of each flaw, plus additional information reachable through embedded links.

When Collaboration Becomes a Weakness

The CVE is a publicly available catalog of known security threats maintained by the MITRE Corporation and supported by the Cybersecurity and Infrastructure Security Agency (CISA). Established to enable more proactive cybersecurity defense by making knowledge of unique threats available to all, the CVE is a product of the industry's recent emphasis on open communication. Increased reporting of attacks, vulnerabilities, and exposures has been championed as an effective way to share critical preventative measures with a wider audience.
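To make the point concrete, each CVE entry exposes exactly the structured fields the study's agents relied on: an identifier, a plain-English description, and reference links. The sketch below parses a minimal record shaped like an NVD JSON entry; the record itself (ID, description, URL) is invented for illustration.

```python
import json

# An invented CVE record illustrating the fields at issue: the identifier,
# a plain-English description, and links to further references.
SAMPLE_CVE = json.dumps({
    "id": "CVE-0000-00000",  # placeholder identifier, not a real CVE
    "descriptions": [
        {"lang": "en", "value": "Example flaw description used for illustration."}
    ],
    "references": [
        {"url": "https://example.com/advisory"}
    ],
})

def summarize_cve(raw: str) -> dict:
    """Extract the ID, English description, and reference URLs from a record."""
    record = json.loads(raw)
    description = next(
        (d["value"] for d in record.get("descriptions", []) if d.get("lang") == "en"),
        "",
    )
    return {
        "id": record["id"],
        "description": description,
        "references": [r["url"] for r in record.get("references", [])],
    }

summary = summarize_cve(SAMPLE_CVE)
print(summary["id"], summary["references"])
```

The description and the embedded reference links are precisely the "basic description of their characteristics" that the study found sufficient for autonomous exploitation.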

The study, however, demonstrates the pitfall: malicious use of large language models (LLMs). Precise, public information about vulnerabilities lets aspiring hackers make the technology do the work for them; all they need to supply is easily obtained characteristics to wreak major damage. Every vulnerability the model successfully exploited was classified as either "high" or "critical" in severity.
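The "high" and "critical" labels come from the CVSS v3 qualitative rating scale, which maps a numeric base score to a severity band. A minimal sketch of that mapping, following the score ranges in the CVSS v3 specification:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score < 4.0:       # 0.1 - 3.9
        return "Low"
    if score < 7.0:       # 4.0 - 6.9
        return "Medium"
    if score < 9.0:       # 7.0 - 8.9
        return "High"
    return "Critical"     # 9.0 - 10.0
```

Under this scale, every flaw the study's agents exploited scored at least 7.0.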

It is important to note that even GPT-3.5, OpenAI's second-most advanced offering, achieved a 0% success rate at detecting and exploiting vulnerabilities given the same information GPT-4 leveraged successfully. For now, GPT-4 sits behind a paywall, so it is not as easily accessible as free LLM variants. However, as these models grow more capable, it is important to prepare for the democratization of cybercrime.

How to Protect Your Organization from AI Attackers

Where do we go from here? Experts disagree. Some favor security through obscurity, in which published explanations of discovered vulnerabilities are purposefully vague. But this negates the benefit of information sharing, which has strengthened organizational cybersecurity through the power of the crowd. Leaving each security team to fend for itself once again weakens its ability to respond quickly to flaws that similar enterprises may also be suffering.

More proactive measures against these threats must become common practice. Updating packages regularly has been proposed as an effective mitigation, since patched software closes the window the study's agents exploited. Additionally, greater scrutiny during the software development process, to avoid creating zero-day vulnerabilities in the first place, is ideal.
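Regular updating can be partly automated. The sketch below flags installed packages that fall below a known-safe release; the package names and the safe-version table are invented for illustration, and in practice the table would come from an advisory feed.

```python
# Hypothetical minimum-safe-version table; real entries would come from a
# vulnerability advisory feed. Names and versions here are invented.
MIN_SAFE = {"examplelib": (2, 5, 1), "otherlib": (1, 0, 3)}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '2.4.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def outdated(installed: dict) -> list:
    """Return packages whose installed version is below the minimum safe one."""
    return sorted(
        name
        for name, version in installed.items()
        if name in MIN_SAFE and parse_version(version) < MIN_SAFE[name]
    )

print(outdated({"examplelib": "2.4.0", "otherlib": "1.0.3"}))
```

Running a check like this in CI turns "update regularly" from a resolution into an enforced policy.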

Unstoppable Force vs Immovable Object

Using automation to defend against automation, CodeHunter's threat detection and analysis engine catches never-before-identified malicious code because it does not rely solely on signature and pattern matching. In addition, CodeHunter reports which strains of code it has deemed malicious, making the mitigation process far more efficient. At a time when the cybersecurity skills gap continues to widen, empower your SOC with actionable intelligence at speed and at scale.

Find out more about how efficient CodeHunter makes threat hunting here.