The Growing Cybersecurity Threat: How Hackers Are Exploiting AI Agents for Malicious Purposes
- Date & Time:
- |
- Views: 12
- |
- From: India News Bull

Cybersecurity professionals are raising alarms about artificial intelligence agents, considered the next step in the generative AI evolution, potentially being exploited to execute hackers' malicious intentions.
AI agents are sophisticated programs leveraging AI chatbots to perform online tasks typically handled by humans, such as purchasing airline tickets or managing calendar events.
The ability to control these AI agents using natural language commands creates opportunities for mischief, even for individuals without technical expertise.
According to AI company Perplexity, "We're entering an era where cybersecurity is no longer about protecting users from bad actors with a highly technical skillset. For the first time in decades, we're seeing new and novel attack vectors that can come from anywhere."
These injection attacks aren't new to hackers, but previously required sophisticated and hidden code to cause damage.
As AI tools transition from simple text, image, or video generation to becoming "agents" that independently navigate the internet, the risk of manipulation through hacker-planted prompts increases significantly.
Marti Jorda Roca, software engineer at NeuralTrust specializing in large language model security, emphasized that "People need to understand there are specific dangers using AI in the security sense."
Meta describes this query injection threat as a "vulnerability," while OpenAI's chief information security officer Dane Stuckey refers to it as "an unresolved security issue."
Both technology giants are investing billions in AI development, with usage and capabilities expanding rapidly.
Query injection can occur in real-time when a legitimate user command like "book me a hotel reservation" gets manipulated by an attacker into something malicious like "wire $100 to this account."
These deceptive prompts can also lurk on websites, ready to ambush AI agents integrated into browsers as they encounter potentially compromised data containing hidden hacker commands.
Eli Smadja from Israeli cybersecurity firm Check Point identifies query injection as the "number one security problem" for large language models powering AI assistants emerging from the ChatGPT revolution.
Leading AI companies have implemented protective measures and published guidelines to counter such cyberattacks.
Microsoft has incorporated tools to detect malicious commands based on various factors including command origin.
OpenAI notifies users when their agents visit sensitive websites and requires human supervision before proceeding.
Security experts recommend requiring user confirmation before AI agents perform critical tasks like data exports or financial transactions.
"One huge mistake that I see happening a lot is to give the same AI agent all the power to do everything," Smadja told AFP.
Cybersecurity researcher Johann Rehberger, known professionally as "wunderwuzzi," highlights that the greatest challenge is the rapid improvement of attack methods.
"They only get better," Rehberger said regarding hacker tactics.
A significant challenge involves balancing security with user convenience, as people desire AI assistance without constant verification requirements.
Rehberger contends that AI agents aren't yet mature enough for handling sensitive information or important tasks.
"I don't think we are in a position where you can have an agentic AI go off for a long time and safely do a certain task," the researcher noted. "It just goes off track."
Source: https://www.ndtv.com/world-news/ai-agents-open-door-to-new-hacking-threats-9612282