Google Offers Up to $30,000 Bounty for Finding Security Vulnerabilities in Its AI Systems

Google has launched a new AI Vulnerability Reward Program offering researchers up to $30,000 for identifying security flaws in its AI systems, including Gemini, Search, Gmail, and Drive. The program categorizes vulnerabilities by severity and specifically targets security issues such as unauthorized actions, data leakage, and model theft, rather than content-generation problems.

Google has launched a new AI Vulnerability Reward Program (AI VRP), offering security researchers rewards of up to $30,000 (approximately Rs 26.6 lakh) for discovering and reporting security vulnerabilities in its AI systems. The company has established a base reward of up to $20,000 (Rs 17.75 lakh), with an additional bonus of $10,000 (Rs 8.9 lakh) for reports demonstrating exceptional quality or novelty.

The program encompasses Google's major AI-integrated products, including Gemini, Google Search, Gmail, and Drive. Researchers interested in participating can submit their findings through Google's official Bug Hunters website.

Google security engineering managers Jason Parsons and Zak Bennett clarified in a blog post that simply causing an AI model to hallucinate or generate unwanted content does not qualify for a reward. "We don't believe a Vulnerability Reward Program is the right format for addressing content-related issues," they stated.

Google has categorized valid vulnerabilities by severity level. The highest-priority issues are Rogue Actions: attacks that compromise a user's account or data security, such as an indirect prompt injection that causes Google Home to unlock a smart door without authorization.

Other critical vulnerabilities include Sensitive Data Exfiltration (leaks of personal information like emails or financial details), Phishing Enablement (security flaws facilitating attempts to trick users into revealing sensitive information), and Model Theft (extraction of proprietary AI model parameters or architectural details).

Additional reportable vulnerabilities include context manipulation, access control bypass, unauthorized product usage, and cross-user denial of service. However, issues related to AI hallucinations, copyright infringement, or inappropriate content such as hate speech should be reported through in-product feedback mechanisms instead.

The reward structure varies based on product classification. Flagship products like Search, Gemini, Gmail, and Drive offer up to $20,000 for top-tier bugs. Standard products including AI Studio, Jules, and NotebookLM can earn researchers up to $15,000, while other products offer up to $10,000. Lower-severity issues like denial-of-service attacks may receive rewards starting at $500.

Google revealed it has already paid $430,000 to AI researchers over the past two years through earlier experimental programs. Over the past year, the company distributed nearly $12 million across all security bug reports under its broader Vulnerability Reward Program.

Additionally, Google announced the introduction of CodeMender, an AI tool designed to automatically patch vulnerable code.

Source: https://www.ndtv.com/world-news/google-is-offering-up-to-rs-26-lakh-for-finding-bugs-in-its-ai-systems-9415677