Speaking session
Leaking Secrets in the Age of AI: How AI Adoption is Creating New Attack Vectors
In the rush to adopt and experiment with AI, organizations are cutting corners. This is evident from incidents of resource abuse (attackers running adult bots for profit), unsafe third-party model execution, and a variety of model-escape vulnerabilities.
A major, underexplored side effect, however, is the leakage of AI secrets in public repositories. Despite general awareness of exposed secrets in code, finding a valid secret remains shockingly easy: you just need to know where to look. High-privilege secrets with enterprise-wide impact are out in the wild, waiting to be found.
This talk presents a novel methodology for identifying “juicy” (high-value and high-probability) targets for secret hunting using BigQuery / githubarchive, the GitHub API, and automated secret scanning tools. Based on this methodology, I will present findings from an intensive month-long secret scanning campaign across thousands of repositories, revealing hundreds of validated secrets from over 40 organizations, including multiple Fortune 100 companies. We’ll analyze the results, showing how AI-related secrets constitute a disproportionate majority, and what new leakage patterns are emerging in this Age of AI.
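As an illustrative sketch of the kind of automated secret scanning mentioned above, the snippet below matches a few common AI-credential prefixes with regular expressions. The token formats are approximations for demonstration only (real scanners maintain vetted pattern sets and validate candidates against the provider before reporting them):

```python
import re

# Illustrative patterns for common AI-service credential formats.
# The prefixes and lengths are assumptions for demonstration,
# not authoritative provider specifications.
SECRET_PATTERNS = {
    "anthropic_key": re.compile(r"\bsk-ant-[A-Za-z0-9-]{20,}\b"),
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a text blob."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

In practice a campaign like the one described would run such checks over files surfaced by BigQuery / githubarchive queries and GitHub API calls, then verify each candidate is live before counting it as a validated secret.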
The presentation concludes with actionable mitigation strategies, serving as a wake-up call for AI and data science communities to urgently improve their security practices.
Speakers
Shay Berkovich
Threat Researcher