Google announced a new AI vulnerability reward program on Monday, expanding the company's broader AI bounty efforts that began in 2023. Bug hunters can now earn up to $30,000 for a single qualifying report under the updated rules.

According to Google’s Bug Hunter announcement, the company has revised the rules of its Vulnerability Reward Program (VRP) as they apply to AI systems. Since its launch, the initiative has awarded researchers more than $430,000 for uncovering AI vulnerabilities.

The new AI Vulnerability Reward Program aims to help keep users safe by incorporating lessons learned from previous programs and giving researchers clearer guidance through updated criteria.

“The updated rules provide for base rewards of up to $20,000,” states the document. “We’ve also adopted the same report quality and novelty bonus multipliers as the Google VRP, which could raise the reward for an individual report to as much as $30,000.”

Google also shared a table outlining award categories, including those that combine security and abuse issues. The highest-paying category, Rogue Actions, offers $20,000 per individual report for findings affecting Google’s flagship AI systems, such as Google Search, Gemini Apps, and Google Workspace. The lowest reward, $100, applies to vulnerabilities found in standard Google products under the Cross-user Denial of Service category.

“Our goal for the AI VRP is to focus researchers on the most impactful AI issues,” explained Google’s Bug Hunter team. “We’re excited to be launching this new program, and we hope our valued researchers are too!”

Google has also clarified what constitutes an AI bug — for example, attacks that modify the state of a victim’s account or exfiltrate complete, detailed, and confidential model parameters — and which issues fall outside the program’s scope, such as prompt injection, alignment issues, and jailbreaks.

Cyberattacks and hacking strategies targeting AI systems have been on the rise. Researchers recently discovered that malicious actors have been using images to exploit AI systems, including Google’s Gemini.