Thousands of Public Google Cloud API Keys Exposed with Gemini Access After API Enablement

New research has found that Google Cloud API keys, typically designated as project identifiers for billing purposes, could be abused to authenticate to sensitive Gemini endpoints and access private data.

The findings come from Truffle Security, which discovered nearly 3,000 Google API keys (identified by the prefix “AIza”) embedded in client-side code to provide Google-related services like embedded maps on websites.

“With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account,” security researcher Joe Leon said, adding the keys “now also authenticate to Gemini even though they were never intended for it.”
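Keys of this type follow a recognizable format, which is how researchers find them in scraped client-side code. A minimal sketch of such a scan (the "AIza" prefix is from the report above; the 39-character total length is the commonly observed format, treated here as an assumption rather than an official specification):

```python
import re

# Google API keys start with "AIza" followed by 35 URL-safe characters
# (39 characters total in the commonly observed format -- an assumption,
# not an official spec).
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return all candidate Google API keys embedded in a blob of text,
    such as scraped client-side JavaScript."""
    return GOOGLE_API_KEY_RE.findall(text)

# Hypothetical snippet of client-side code for illustration:
snippet = 'var mapsKey = "AIza' + "A" * 35 + '";'
print(find_google_api_keys(snippet))
```

In practice, matches would then be validated against live endpoints, which is how Truffle Security confirmed which of the scraped keys were still active.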

The problem occurs when users enable the Gemini API (i.e., the Generative Language API) on a Google Cloud project: every existing API key in that project, including those exposed in a website's client-side JavaScript, silently gains access to Gemini endpoints without any warning or notice.

This effectively allows any attacker who scrapes websites for such API keys to abuse them for quota theft and data access, retrieving sensitive files via the /files and /cachedContents endpoints and making Gemini API calls that rack up huge bills for the victims.
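Whether a given key authenticates to Gemini can be checked with a read-only request to the /files listing endpoint; a key without Gemini access is rejected with an authorization error. A minimal sketch, assuming the public Generative Language API base URL and endpoint paths (the helper name is ours), intended only for auditing keys you own:

```python
from urllib.parse import urlencode

# Public base URL of the Generative Language (Gemini) API.
GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_probe_url(api_key: str, endpoint: str = "files") -> str:
    """Build a read-only probe URL for the Generative Language API.

    A GET against /files or /cachedContents with a key that has Gemini
    access returns HTTP 200 (possibly an empty list); a key without it
    is rejected with a 4xx error. Use this only against keys you own.
    """
    return f"{GEMINI_BASE}/{endpoint}?{urlencode({'key': api_key})}"

print(build_probe_url("AIza-example-key"))
```

A defender can run this probe across their own inventory of deployed keys to find which ones silently gained Gemini access.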

In addition, Truffle Security found that creating a new API key in Google Cloud defaults to “Unrestricted,” meaning it’s applicable for every enabled API in the project, including Gemini.

“The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet,” Leon said. In all, the company said it found 2,863 live keys accessible on the public internet, including a website associated with Google.

The disclosure comes as Quokka published a similar report, finding over 35,000 unique Google API keys embedded in its scan of 250,000 Android apps.

“Beyond potential cost abuse through automated LLM requests, organizations must also consider how AI-enabled endpoints might interact with prompts, generated content, or connected cloud services in ways that expand the blast radius of a compromised key,” the mobile security company said.

“Even if no direct customer data is accessible, the combination of inference access, quota consumption, and possible integration with broader Google Cloud resources creates a risk profile that is materially different from the original billing-identifier model developers relied upon.”

Although the behavior was initially deemed intended, Google has since stepped in to address the problem.

“We are aware of this report and have worked with the researchers to address the issue,” a Google spokesperson told The Hacker News via email. “Protecting our users’ data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API.”

It’s currently not known if this issue was ever exploited in the wild. However, in a Reddit post published two days ago, a user claimed a “stolen” Google Cloud API Key resulted in $82,314.44 in charges between February 11 and 12, 2026, up from a regular spend of $180 per month.

We have reached out to Google for further comment, and we will update the story if we hear back.

Users who have set up Google Cloud projects are advised to review their enabled APIs and services and check whether any artificial intelligence (AI)-related APIs are enabled. If they are, and any keys are publicly accessible (e.g., in client-side JavaScript or checked into a public repository), those keys should be rotated.

“Start with your oldest keys first,” Truffle Security said. “Those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share, and then retroactively gained Gemini privileges when someone on your team enabled the API.”
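A project's key inventory can be pulled with `gcloud services api-keys list --format=json` and sorted oldest-first to prioritize rotation as advised above. A minimal sketch over that JSON, assuming the output exposes `createTime` and `displayName` fields as in current gcloud releases (the sample records are fabricated for illustration):

```python
import json
from datetime import datetime

def oldest_first(keys_json: str) -> list[str]:
    """Sort API-key records (as emitted by
    `gcloud services api-keys list --format=json`) by creation time,
    oldest first, and return their display names -- the keys to
    rotate before any others."""
    keys = json.loads(keys_json)
    keys.sort(key=lambda k: datetime.fromisoformat(
        k["createTime"].replace("Z", "+00:00")))
    return [k["displayName"] for k in keys]

# Fabricated sample records for illustration only.
sample = json.dumps([
    {"displayName": "maps-embed-key", "createTime": "2019-03-04T12:00:00Z"},
    {"displayName": "new-backend-key", "createTime": "2025-01-15T08:30:00Z"},
])
print(oldest_first(sample))
```

The oldest keys surface first because, as the researchers note, those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share.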

“This is a great example of how risk is dynamic, and how APIs can be over-permissioned after the fact,” Tim Erlin, security strategist at Wallarm, said in a statement. “Security testing, vulnerability scanning, and other assessments must be continuous.”

“APIs are tricky in particular because changes in their operations or the data they can access aren’t necessarily vulnerabilities, but they can directly increase risk. The adoption of AI running on these APIs, and using them, only accelerates the problem. Finding vulnerabilities isn’t really enough for APIs. Organizations have to profile behavior and data access, identifying anomalies and actively blocking malicious activity.”
