ChatGPT-4 Bots now used in powerful teams to hack zero-day vulnerabilities.
AI agents built on GenAI models such as ChatGPT have proven adept at hacking into websites with known vulnerabilities. Typically the bots first operate in reconnaissance mode, passively discovering known vulnerabilities, and then launch an automated attack based on those known issues. Until now, however, they have largely failed against zero-day exploits, where no such helpful vulnerability labels exist.
Now researchers at Cornell have used teams of GPT-4-based LLM agents working together to plan and orchestrate attacks on zero-day vulnerabilities.
You knew it was just a question of time, didn’t you?
According to the Cornell researchers, they developed a multi-agent system of LLMs that pass critical context between them, jointly discovering, planning, and executing the attack based on their shared contextual knowledge of the new target.
The new technique, which they call HPTSA (Hierarchical Planning and Task-Specific Agents), allows the agents to divide the work. The first agent, the hierarchical planning agent, explores the website to determine which kinds of vulnerabilities to attempt and on which pages, then coordinates task-specific agents to carry out those attempts, making the team roughly 4-5x more efficient at discovering vulnerabilities. A rough sketch of this coordination pattern follows.
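The sketch below is a conceptual illustration only, assuming hypothetical class names (HierarchicalPlanner, TaskSpecificAgent) and deliberately stubbed-out agent logic. It shows the general planner-delegates-to-specialists pattern the researchers describe, not their implementation, and contains no exploitation code.

```python
# Conceptual sketch of the HPTSA coordination pattern described above.
# All names here are hypothetical and the agent bodies are stubs: this only
# illustrates how a planning agent might delegate to task-specific agents
# and share context between them.
from dataclasses import dataclass


@dataclass
class Finding:
    page: str
    vulnerability_class: str
    details: str


class TaskSpecificAgent:
    """An agent specialised in probing one class of vulnerability (e.g. XSS)."""

    def __init__(self, vulnerability_class: str):
        self.vulnerability_class = vulnerability_class

    def probe(self, page: str, shared_context: dict) -> Finding | None:
        # Stub: a real agent would drive an LLM plus browsing tools here,
        # reading context gathered by the planner. Intentionally omitted.
        return None


class HierarchicalPlanner:
    """Explores the target, decides which vulnerability classes to try on
    which pages, and dispatches the work to task-specific agents."""

    def __init__(self, agents: list[TaskSpecificAgent]):
        self.agents = {a.vulnerability_class: a for a in agents}
        self.shared_context: dict = {}  # context passed between agents

    def explore(self, target: str) -> dict[str, list[str]]:
        # Stub: this step would survey the site and return a mapping of
        # page -> candidate vulnerability classes to attempt.
        return {}

    def run(self, target: str) -> list[Finding]:
        plan = self.explore(target)
        findings: list[Finding] = []
        for page, candidate_classes in plan.items():
            for vuln_class in candidate_classes:
                agent = self.agents.get(vuln_class)
                if agent is None:
                    continue
                result = agent.probe(page, self.shared_context)
                if result is not None:
                    findings.append(result)
                    # Share what was learned so later agents benefit from it.
                    self.shared_context[page] = result.details
        return findings
```

The point mirrored here is that the planner, not the individual agents, owns the overall exploration, while anything one task-specific agent learns is shared with the rest of the team.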