In 2024, our Vulnerability Reward Program confirmed the ongoing value of engaging with the security research community to make Google and its products safer. This was evident as we awarded just shy of $12 million to over 600 researchers based in countries around the globe across all of our programs.



Vulnerability Reward Program 2024 in Numbers


You can learn about who’s reporting to the Vulnerability Reward Program via our Leaderboard – and find out more about our youngest security researchers who’ve recently joined the ranks of Google bug hunters.



VRP Highlights in 2024

In 2024 we made a series of changes and improvements coming to our vulnerability reward programs and related initiatives:


  • The Google VRP revamped its reward structure, bumping rewards up to a maximum of $151,515, the Mobile VRP is now offering up to $300,000 for critical vulnerabilities in top-tier apps, Cloud VRP has a top-tier award of up $151,515, and Chrome awards now peak at $250,000 (see the below section on Chrome for details).

  • We rolled out InternetCTF – to get rewarded, discover novel code execution vulnerabilities in open source and provide Tsunami plugin patches for them.

  • The Abuse VRP saw a 40% YoY increase in payouts – we received over 250 valid bugs targeting abuse and misuse issues in Google products, resulting in over $290,000 in rewards.

  • To improve the payment process for rewards going to bug hunters, we introduced Bugcrowd as an additional payment option on bughunters.google.com alongside the existing standard Google payment option. 

  • We hosted two editions of bugSWAT for training, skill sharing, and, of course, some live hacking – in August, we had 16 bug hunters in attendance in Las Vegas, and in October, as part of our annual security conference ESCAL8 in Malaga, Spain, we welcomed 40 of our top researchers. Between these two events, our bug hunters were rewarded $370,000 (and plenty of swag).

  • We doubled down on our commitment to support the next generation of security engineers by hosting four init.g workshops (Las Vegas, São Paulo, Paris, and Malaga). Follow the Google VRP channel on X to stay tuned on future events.



More detailed updates on selected programs are shared in the following sections.



Android and Google Devices

In 2024, the Android and Google Devices Security Reward Program and the Google Mobile Vulnerability Reward Program, both part of the broader Google Bug Hunters program, continued their mission to fortify the Android ecosystem, achieving new heights in both impact and severity. We awarded over $3.3 million in rewards to researchers who demonstrated exceptional skill in uncovering critical vulnerabilities within Android and Google mobile applications. 



The above numbers mark a significant change compared to previous years. Although we saw an 8% decrease in the total number of submissions, there was a 2% increase in the number of critical and high vulnerabilities. In other words, fewer researchers are submitting fewer, but more impactful bugs, and are citing the improved security posture of the Android operating system as the central challenge. This showcases the program’s sustained success in hardening Android.



This year, we had a heightened focus on Android Automotive OS and WearOS, bringing actual automotive devices to multiple live hacking events and conferences. At ESCAL8, we hosted a live-hacking challenge focused on Pixel devices, resulting in over $75,000 in rewards in one weekend, and the discovery of several memory safety vulnerabilities. To facilitate learning, we launched a new Android hacking course in collaboration with external security researchers, focused on mobile app security, designed for newcomers and veterans alike. Stay tuned for more.



We extend our deepest gratitude to the dedicated researchers who make the Android ecosystem safer. We’re proud to work with you! Special thanks to Zinuo Han (@ele7enxxh) for their expertise in Bluetooth security, blunt (@blunt_qian) for holding the record for the most valid reports submitted to the Google Play Security Reward Program, and WANG,YONG (@ThomasKing2014) for groundbreaking research on rooting Android devices with kernel MTE enabled. We also appreciate all researchers who participated in last year’s bugSWAT event in Málaga. Your contributions are invaluable! 



Chrome

Chrome did some remodeling in 2024 as we updated our reward amounts and structure to incentivize deeper research. For example, we increased our maximum reward for a single issue to $250,000 for demonstrating RCE in the browser or other non-sandboxed process, and more if done directly without requiring a renderer compromise. 



In 2024, UAF mitigation MiraclePtr was fully launched across all platforms, and a year after the initial launch, MiraclePtr-protected bugs are no longer being considered exploitable security bugs. In tandem, we increased the MiraclePtr Bypass Reward to $250,128. Between April and November, we also launched the first and second iterations of the V8 Sandbox Bypass Rewards as part of the progression towards the V8 sandbox, eventually becoming a security boundary in Chrome. 



We received 337 reports of unique, valid security bugs in Chrome during 2024, and awarded 137 Chrome VRP researchers $3.4 million in total. The highest single reward of 2024 was $100,115 and was awarded to Mickey for their report of a MiraclePtr Bypass after MiraclePtr was initially enabled across most platforms in Chrome M115 in 2023. We rounded out the year by announcing the top 20 Chrome VRP researchers for 2024, all of whom were gifted new Chrome VRP swag, featuring our new Chrome VRP mascot, Bug.



Cloud VRP

The Cloud VRP launched in October as a Cloud-focused vulnerability reward program dedicated to Google Cloud products and services. As part of the launch, we also updated our product tiering and improved our reward structure to better align our reports with their impact on Google Cloud. This resulted in over 150 Google Cloud products coming under the top two reward tiers, enabling better rewards for our Cloud researchers and a more secure cloud.



Since its launch, Google Cloud VRP triaged over 400 reports and filed over 200 unique security vulnerabilities for Google Cloud products and services leading to over $500,000 in researcher rewards. 



Our highlight last year was launching at the bugSWAT event in Málaga where we got to meet many of our amazing researchers who make our program so successful! The overwhelming positive feedback from the researcher community continues to propel us to mature Google Cloud VRP further this year. Stay tuned for some exciting announcements!



Generative AI

We’re celebrating an exciting first year of AI bug bounties.  We received over 150 bug reports – over $55,000 in rewards so far – with one-in-six leading to key improvements. 



We also ran a bugSWAT live-hacking event targeting LLM products and received 35 reports, totaling more than $87,000 – including issues like “Hacking Google Bard – From Prompt Injection to Data Exfiltration” and “We Hacked Google A.I. for $50,000”.



Keep an eye on Gen AI in 2025 as we focus on expanding scope and sharing additional ways for our researcher community to contribute. 

Looking Forward to 2025

In 2025, we will be celebrating 15 years of VRP at Google, during which we have remained fully committed to fostering collaboration, innovation, and transparency with the security community, and will continue to do so in the future. Our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services. 



We want to send a huge thank you to our bug hunter community for helping us make Google products and platforms more safe and secure for our users around the world – and invite researchers not yet engaged with the Vulnerability Reward Program to join us in our mission to keep Google safe! 



Thank you to Dirk Göhmann, Amy Ressler, Eduardo Vela, Jan Keller, Krzysztof Kotowicz, Martin Straka, Michael Cote, Mike Antares, Sri Tulasiram, and Tony Mendez.

Tip: Want to be informed of new developments and events around our Vulnerability Reward Program? Follow the Google VRP channel on X to stay in the loop and be sure to check out the Security Engineering blog, which covers topics ranging from VRP updates to security practices and vulnerability descriptions (30 posts in 2024)!

Google has been at the forefront of protecting users from the ever-growing threat of scams and fraud with cutting-edge technologies and security expertise for years. In 2024, scammers used increasingly sophisticated tactics and generative AI-powered tools to steal more than $1 trillion from mobile consumers globally, according to the Global Anti-Scam Alliance. And with the majority of scams now delivered through phone calls and text messages, we’ve been focused on making Android’s safeguards even more intelligent with powerful Google AI to help keep your financial information and data safe.

Today, we’re launching two new industry-leading AI-powered scam detection features for calls and text messages, designed to protect users from increasingly complex and damaging scams. These features specifically target conversational scams, which can often appear initially harmless before evolving into harmful situations.

To enhance our detection capabilities, we partnered with financial institutions around the world to better understand the latest advanced and most common scams their customers are facing. For example, users are experiencing more conversational text scams that begin innocently, but gradually manipulate victims into sharing sensitive data, handing over funds, or switching to other messaging apps. And more phone calling scammers are using spoofing techniques to hide their real numbers and pretend to be trusted companies.

Traditional spam protections are focused on protecting users before the conversation starts, and are less effective against these latest tactics from scammers that turn dangerous mid-conversation and use social engineering techniques. To better protect users, we invested in new, intelligent AI models capable of detecting suspicious patterns and delivering real-time warnings over the course of a conversation, all while prioritizing user privacy.

Scam Detection for messages

We’re building on our enhancements to existing Spam Protection in Google Messages that strengthen defenses against job and delivery scams, which are continuing to roll out to users. We’re now introducing Scam Detection to detect a wider range of fraudulent activities.

Scam Detection in Google Messages uses powerful Google AI to proactively address conversational scams by providing real-time detection even after initial messages are received. When the on-device AI detects a suspicious pattern in SMS, MMS, and RCS messages, users will now get a message warning of a likely scam with an option to dismiss or report and block the sender.

As part of the Spam Protection setting, Scam Detection on Google Messages is on by default and only applies to conversations with non-contacts. Your privacy is protected with Scam Detection in Google Messages, with all message processing remaining on-device. Your conversations remain private to you; if you choose to report a conversation to help reduce widespread spam, only sender details and recent messages with that sender are shared with Google and carriers. You can turn off Spam Protection, which includes Scam Detection, in your Google Messages at any time.

Scam Detection in Google Messages is launching in English first in the U.S., U.K. and Canada and will expand to more countries soon.

Scam Detection for calls

More than half of Americans reported receiving at least one scam call per day in 2024. To combat the rise of sophisticated conversational scams that deceive victims over the course of a phone call, we introduced Scam Detection late last year to U.S.-based English-speaking Phone by Google public beta users on Pixel phones.

We use AI models processed on-device to analyze conversations in real-time and warn users of potential scams. If a caller, for example, tries to get you to provide payment via gift cards to complete a delivery, Scam Detection will alert you through audio and haptic notifications and display a warning on your phone that the call may be a scam.

During our limited beta, we analyzed calls with Gemini Nano, Google’s built-in, on-device foundation model, on Pixel 9 devices and used smaller, robust on-device machine-learning models for Pixel 6+ users. Our testing showed that Gemini Nano outperformed other models, so as a result, we’re currently expanding the availability of the beta to bring the most capable Scam Detection to all English-speaking Pixel 9+ users in the U.S.

Similar to Scam Detection in messaging, we built this feature to protect your privacy by processing everything on-device. Call audio is processed ephemerally and no conversation audio or transcription is recorded, stored on the device, or sent to Google or third parties. Scam Detection in Phone by Google is off by default to give users control over this feature, as phone call audio is more ephemeral compared to messages, which are stored on devices. Scam Detection only applies to calls that could potentially be scams, and is never used during calls with your contacts. If enabled, Scam Detection will beep at the start and during the call to notify participants the feature is on. You can turn off Scam Detection at any time, during an individual call or for all future calls.

According to our research and a Scam Detection beta user survey, these types of alerts have already helped people be more cautious on the phone, detect suspicious activity, and avoid falling victim to conversational scams.

Keeping Android users safe with the power of Google AI

We’re committed to keeping Android users safe, and that means constantly evolving our defenses against increasingly sophisticated scams and fraud. Our investment in intelligent protection is having real-world impact for billions of users. Leviathan Security Group, a cybersecurity firm, conducted a funded evaluation of fraud protection features on a number of smartphones and found that Android smartphones, led by the Pixel 9 Pro, scored highest for built-in security features and anti-fraud efficacy1.

With AI-powered innovations like Scam Detection in Messages and Phone by Google, we’re giving you more tools to stay one step ahead of bad actors. We’re constantly working with our partners across the Android ecosystem to help bring new security features to even more users. Together, we’re always working to keep you safe on Android.

Notes


  1. Based on third-party research funded by Google LLC in Feb 2025 comparing the Pixel 9 Pro, iPhone 16 Pro, Samsung S24+ and Xiaomi 14 Ultra. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries. 

For decades, memory safety vulnerabilities have been at the center of various security incidents across the industry, eroding trust in technology and costing billions. Traditional approaches, like code auditing, fuzzing, and exploit mitigations while helpful haven’t been enough to stem the tide, while incurring an increasingly high cost.


In this blog post, we are calling for a fundamental shift: a collective commitment to finally eliminate this class of vulnerabilities, anchored on secure-by-design practices not just for ourselves but for the generations that follow.


The shift we are calling for is reinforced by a recent ACM article calling to standardize memory safety we took part in releasing with academic and industry partners. It’s a recognition that the lack of memory safety is no longer a niche technical problem but a societal one, impacting everything from national security to personal privacy.



The standardization opportunity

Over the past decade, a confluence of secure-by-design advancements has matured to the point of practical, widespread deployment. This includes memory-safe languages, now including high-performance ones such as Rust, as well as safer language subsets like Safe Buffers for C++. 


These tools are already proving effective. In Android for example, the increasing adoption of memory-safe languages like Kotlin and Rust in new code has driven a significant reduction in vulnerabilities.


Looking forward, we’re also seeing exciting and promising developments in hardware. Technologies like ARM’s Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code.

While these advancements are encouraging, achieving comprehensive memory safety across the entire software industry requires more than just individual technological progress:  we need to create the right environment and accountability for their widespread adoption. Standardization is key to this. 


To facilitate standardization, we suggest establishing a common framework for specifying and objectively assessing memory safety assurances; doing so will lay the foundation for creating a market in which vendors are incentivized to invest in memory safety. Customers will be empowered to recognize, demand, and reward safety. This framework will provide governments and businesses with the clarity to specify memory safety requirements, driving the procurement of more secure systems. 


The framework we are proposing would complement existing efforts by defining specific, measurable criteria for achieving different levels of memory safety assurance across the industry. In this way, policymakers will gain the technical foundation to craft effective policy initiatives and incentives promoting memory safety.



 

A blueprint for a memory-safe future

We know there’s more than one way of solving this problem, and we are ourselves investing in several. Importantly, our vision for achieving memory safety through standardization focuses on defining the desired outcomes rather than locking ourselves into specific technologies.



To translate this vision into an effective standard, we need a framework that will:


Foster innovation and support diverse approaches: The standard should focus on the security properties we want to achieve (e.g., freedom from spatial and temporal safety violations) rather than mandating specific implementation details. The framework should therefore be technology-neutral, allowing vendors to choose the best approach for their products and requirements. This encourages innovation and allows software and hardware manufacturers to adopt the best solutions as they emerge.



Tailor memory safety requirements based on need: The framework should establish different levels of safety assurance, akin to SLSA levels, recognizing that different applications have different security needs and cost constraints. Similarly, we likely need distinct guidance for developing new systems and improving existing codebases. For instance, we probably do not need every single piece of code to be formally proven. This allows for tailored security, ensuring appropriate levels of memory safety for various contexts. 



Enable objective assessment: The framework should define clear criteria and potentially metrics for assessing memory safety and compliance with a given level of assurance. The goal would be to objectively compare the memory safety assurance of different software components or systems, much like we assess energy efficiency today. This will move us beyond subjective claims and towards objective and comparable security properties across products.



Be practical and actionable: Alongside the technology-neutral framework, we need best practices for existing technologies. The framework should provide guidance on how to effectively leverage specific technologies to meet the standards. This includes answering questions such as when and to what extent unsafe code is acceptable within larger software systems, and guidelines on structuring such unsafe dependencies to support compositional reasoning about safety.



Google’s commitment

At Google, we’re not just advocating for standardization and a memory-safe future, we’re actively working to build it.


We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process. In addition, as outlined in our Secure by Design whitepaper and in our memory safety strategy, we are deeply committed to building security into the foundation of our products and services.


This commitment is also reflected in our internal efforts. We are prioritizing memory-safe languages, and have already seen significant reductions in vulnerabilities by adopting languages like Rust in combination with existing, wide-spread usage of Java, Kotlin, and Go where performance constraints permit. We recognize that a complete transition to those languages will take time. That’s why we’re also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.



Let’s build a memory-safe future together

This effort isn’t about picking winners or dictating solutions. It’s about creating a level playing field, empowering informed decision-making, and driving a virtuous cycle of security improvement. It’s about enabling a future where:


  • Developers and vendors can confidently build more secure systems, knowing their efforts can be objectively assessed.

  • Businesses can procure memory-safe products with assurance, reducing their risk and protecting their customers.

  • Governments can effectively protect critical infrastructure and incentivize the adoption of secure-by-design practices.

  • Consumers are empowered to make decisions about the services they rely on and the devices they use with confidence – knowing the security of each option was assessed against a common framework. 


The journey towards memory safety requires a collective commitment to standardization. We need to build a future where memory safety is not an afterthought but a foundational principle, a future where the next generation inherits a digital world that is secure by design.



Acknowledgments

We’d like to thank our CACM article co-authors for their invaluable contributions: Robert N. M. Watson, John Baldwin, Tony Chen, David Chisnall, Jessica Clarke, Brooks Davis, Nathaniel Wesley Filardo, Brett Gutstein, Graeme Jenkinson, Christoph Kern, Alfredo Mazzinghi, Simon W. Moore, Peter G. Neumann, Hamed Okhravi, Peter Sewell, Laurence Tratt, Hugo Vincent, and Konrad Witaszczyk, as well as many others.

A North Korea-aligned activity cluster tracked by ESET as DeceptiveDevelopment drains victims’ crypto wallets and steals their login details from web browsers and password managers

ESET researchers analyzed a campaign delivering malware bundled with job interview challenges

The atmospheric scientist makes a compelling case for a head-to-heart-to-hands connection as a catalyst for climate action

Android and Google Play comprise a vibrant ecosystem with billions of users around the globe and millions of helpful apps. Keeping this ecosystem safe for users and developers remains our top priority. However, like any flourishing ecosystem, it also attracts its share of bad actors. That’s why every year, we continue to invest in more ways to protect our community and fight bad actors, so users can trust the apps they download from Google Play and developers can build thriving businesses.

Last year, those investments included AI-powered threat detection, stronger privacy policies, supercharged developer tools, new industry-wide alliances, and more. As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps.

But that was just the start. For more, take a look at our recent highlights from 2024:

Google’s advanced AI: helping make Google Play a safer place

To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google’s advanced AI to improve our systems’ ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play.

That’s enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.

Working with developers to enhance security and privacy on Google Play

To protect user privacy, we’re working with developers to reduce unnecessary access to sensitive data. In 2024, we prevented 1.3 million apps from getting excessive or unnecessary access to sensitive user data. We also required apps to be more transparent about how they handle user information by launching new developer requirements and a new “Data deletion” option for apps that support user accounts and data collection. This helps users manage their app data and understand the app’s deletion practices, making it easier for Play users to delete data collected from third-party apps.

We also worked to ensure that apps use the strongest and most up-to-date privacy and security capabilities Android has to offer. Every new version of Android introduces new security and privacy features, and we encourage developers to embrace these advancements as soon as possible. As a result of partnering closely with developers, over 91% of app installs on the Google Play Store now use the latest protections of Android 13 or newer.

Safeguarding apps from scams and fraud is an ongoing battle for developers. The Play Integrity API allows developers to check if their apps have been tampered with or are running in potentially compromised environments, helping them to prevent abuse like fraud, bots, cheating, and data theft. Play Integrity API and Play’s automatic protection helps developers ensure that users are using the official Play version of their app with the latest security updates. Apps using Play integrity features are seeing 80% lower usage from unverified and untrusted sources on average.

We’re also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK. Last year, in addition to adding 80 SDKs to the index, we also worked closely with SDK and app developers to address potential SDK security and privacy issues, helping to build safer and more secure apps for Google Play.

Google Play’s multi-layered protections against bad apps

To create a trusted experience for everyone on Google Play, we use our SAFE principles as a guide, incorporating multi-layered protections that are always evolving to help keep Google Play safe. These protections start with the developers themselves, who play a crucial role in building secure apps. We provide developers with best-in-class tools, best practices, and on-demand training resources for building safe, high-quality apps. Every app undergoes rigorous review and testing, with only approved apps allowed to appear in the Play Store. Before a user downloads an app from Play, users can explore its user reviews, ratings, and Data safety section on Google Play to help them make an informed decision. And once installed, Google Play Protect, Android’s built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior.

Enhancing Google Play Protect to help keep users safe on Android

While the Play Store offers best-in-class security, we know it’s not the only place users download Android apps – so it’s important that we also defend Android users from more generalized mobile threats. To do this in an open ecosystem, we’ve invested in sophisticated, real-time defenses that protect against scams, malware, and abusive apps. These intelligent security measures help to keep users, user data, and devices safe, even if apps are installed from various sources with varying levels of security.


Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect’s real-time scanning identified more than 13 million new malicious apps from outside Google Play1.

Google Play Protect is always evolving to combat new threats and protect users from harmful apps that can lead to scams and fraud. Here are some of the new improvements that are now available globally on Android devices with Google Play Services:

  • Reminder notifications in Chrome on Android to re-enable Google Play Protect: According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off.
  • Additional protection against social engineering attacks: Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls. This safeguard is enabled by default during traditional phone calls as well as during voice and video calls in popular third-party apps.
  • Automatically revoking app permissions for potentially dangerous apps: Since Android 11, we’ve taken a proactive approach to data privacy by automatically resetting permissions for apps that users haven’t used in a while. This ensures apps can only access the data they truly need, and users can always grant permissions back if necessary. To further enhance security, Play Protect now automatically revokes permissions for potentially harmful apps, limiting their access to sensitive data like storage, photos, and camera. Users can restore app permissions at any time, with a confirmation step for added security.

Google Play Protect’s enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers).

Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions – Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.

In 2024, Google Play Protect’s enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

By piloting these new protections, we can proactively combat emerging threats and refine our solutions to thwart scammers and their increasingly sophisticated fraud attempts. We look forward to continuing to partner with governments, ecosystem partners, and other stakeholders to improve user protections.

App badging to help users find apps they can trust at a glance on Google Play

In 2024, we introduced a new badge for government developers to help users around the world identify official government apps. Government apps are often targets of impersonation due to the highly sensitive nature of the data users provide, giving bad actors the ability to steal identities and commit financial fraud. Badging verified government apps is an important step in helping connect people with safe, high-quality, useful, and relevant experiences. We partner closely with global governments and are already exploring ways to build on this work.

We also recently introduced a new badge to help Google Play users discover VPN apps that take extra steps to demonstrate their strong commitment to security. We allow developers who adhere to Play safety and security guidelines and have passed an additional independent Mobile Application Security Assessment (MASA) to display a dedicated badge in the Play Store to highlight their increased commitment to safety.

Collaborating to advance app security standards

In addition to our partnerships with governments, developers, and other stakeholders, we also worked with our industry peers to protect the entire app ecosystem for everyone. The App Defense Alliance, in partnership with fellow steering committee members Microsoft and Meta, recently launched the ADA Application Security Assessment (ASA) v1.0, a new standard to help developers build more secure mobile, web, and cloud applications. This standard provides clear guidance on protecting sensitive data, defending against cyberattacks, and ultimately, strengthening user trust. This marks a significant step forward in establishing industry-wide security best practices for application development.

All developers are encouraged to review and comply with the new mobile security standard. You’ll see this standard in action for all carrier apps pre-installed on future Pixel phone models.

Looking ahead


This year, we’ll continue to protect the Android and Google Play ecosystem, building on these tools and resources in response to user and developer feedback and the changing landscape. As always, we’ll keep empowering developers to build safer apps more easily, streamline their policy experience, and protect their businesses and users from bad actors.


1 Based on Google Play Protect 2024 internal data.

Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an “indirect prompt injection,” a term first coined by Kai Greshake and the NVIDIA team.



To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation framework we have developed to automatically red-team an AI system’s vulnerability to indirect prompt injection attacks. We will take you through our threat model, before describing three attack techniques we have implemented in our evaluation framework.



Threat model and evaluation framework



Our threat model concentrates on an attacker using indirect prompt injection to exfiltrate sensitive information, as illustrated above. The evaluation framework tests this by creating a hypothetical scenario, in which an AI agent can send and retrieve emails on behalf of the user. The agent is presented with a fictitious conversation history in which the user references private information such as their passport or social security number. Each conversation ends with a request by the user to summarize their last email, and the retrieved email in context.



The contents of this email are controlled by the attacker, who tries to manipulate the agent into sending the sensitive information in the conversation history to an attacker-controlled email address. The attack is successful if the agent executes the malicious prompt contained in the email, resulting in the unauthorized disclosure of sensitive information. The attack fails if the agent only follows user instructions and provides a simple summary of the email. 



Automated red-teaming

Crafting successful indirect prompt injections requires an iterative process of refinement based on observed responses. To automate this process, we have developed a red-team framework consisting of several optimization-based attacks that generate prompt injections (in the example above this would be different versions of the malicious email). These optimization-based attacks are designed to be as strong as possible; weak attacks do little to inform us of the susceptibility of an AI system to indirect prompt injections.



Once these prompt injections have been constructed, we measure the resulting attack success rate on a diverse set of conversation histories. Because the attacker has no prior knowledge of the conversation history, to achieve a high attack success rate the prompt injection must be capable of extracting sensitive user information contained in any potential conversation contained in the prompt, making this a harder task than eliciting generic unaligned responses from the AI system. The attacks in our framework include:



Actor Critic: This attack uses an attacker-controlled model to generate suggestions for prompt injections. These are passed to the AI system under attack, which returns a probability score of a successful attack. Based on this probability, the attack model refines the prompt injection. This process repeats until the attack model converges to a successful prompt injection. 



Beam Search: This attack starts with a naive prompt injection directly requesting that the AI system send an email to the attacker containing the sensitive user information. If the AI system recognizes the request as suspicious and does not comply, the attack adds random tokens to the end of the prompt injection and measures the new probability of the attack succeeding. If the probability increases, these random tokens are kept, otherwise they are removed, and this process repeats until the combination of the prompt injection and random appended tokens result in a successful attack.

Tree of Attacks w/ Pruning (TAP): Mehrotra et al. (2024) [3] designed an attack to generate prompts that cause an AI system to violate safety policies (such as generating hate speech). We adapt this attack, making several adjustments to target security violations. Like Actor Critic, this attack searches in the natural language space; however, we assume the attacker cannot access probability scores from the AI system under attack, only the text samples that are generated.



We are actively leveraging insights gleaned from these attacks within our automated red-team framework to protect current and future versions of AI systems we develop against indirect prompt injection, providing a measurable way to track security improvements. A single silver bullet defense is not expected to solve this problem entirely. We believe the most promising path to defend against these attacks involves a combination of robust evaluation frameworks leveraging automated red-teaming methods, alongside monitoring, heuristic defenses, and standard security engineering solutions. 




We would like to thank Vijay Bolina, Sravanti Addepalli, Lihao Liang, and Alex Kaskasoli for their prior contributions to this work.




Posted on behalf of the entire Google DeepMind Agentic AI Security team (listed in alphabetical order):

Aneesh Pappu, Andreas Terzis, Chongyang Shi, Gena Gibson, Ilia Shumailov, Itay Yona, Jamie Hayes, John “Four” Flynn, Juliette Pluto, Sharon Lin, Shuang Song

Today, people around the world rely on their mobile devices to help them stay connected with friends and family, manage finances, keep track of healthcare information and more – all from their fingertips. But a stolen device in the wrong hands can expose sensitive data, leaving you vulnerable to identity theft, financial fraud and privacy breaches.

This is why we recently launched Android theft protection, a comprehensive suite of features designed to protect you and your data at every stage – before, during, and after device theft. As part of our commitment to help you stay safe on Android, we’re expanding and enhancing these features to deliver even more robust protection to more users around the world.

Identity Check rolling out to Pixel and Samsung One UI 7 devices

We’re officially launching Identity Check, first on Pixel and Samsung Galaxy devices eligible for One UI 71, to provide better protection for your critical account and device settings. When you turn on Identity Check, your device will require explicit biometric authentication to access certain sensitive resources when you’re outside of trusted locations. Identity Check also enables enhanced protection for Google Accounts on all supported devices and additional security for Samsung Accounts on One UI 7 eligible Galaxy devices, making it much more difficult for an unauthorized attacker to take over accounts signed in on the device.

As part of enabling Identity Check, you can designate one or more trusted locations. When you’re outside of these trusted places, biometric authentication will be required to access critical account and device settings, like changing your device PIN or biometrics, disabling theft protection, or accessing Passkeys.

Identity Check gives you more peace of mind that your most sensitive device assets are protected against unauthorized access, even if a thief or bad actor manages to learn your device PIN.

Identity Check is rolling out now to Pixel devices with Android 15 and will be available on One UI 7 eligible Galaxy devices in the coming weeks. It will roll out to supported Android devices from other manufacturers later this year.

Theft Detection Lock: expanding AI-powered protection to more users

One of the top theft protection features introduced last year was Theft Detection Lock, which uses an on-device AI-powered algorithm to help detect when your phone may be forcibly taken from you. If the machine learning algorithm detects a potential theft attempt on your unlocked device, it locks your screen to keep thieves out.

Theft Detection Lock is now fully rolled out to Android 10+ phones2 around the world.

Protecting your Android device from theft

We’re collaborating with the GSMA and industry experts to combat mobile device theft by sharing information, tools and prevention techniques. Stay tuned for an upcoming GSMA white paper, developed in partnership with the mobile industry, with more information on protecting yourself and your organization from device theft.

With the addition of Identity Check and the ongoing enhancements to our existing features, Android offers a robust and comprehensive set of tools to protect your devices and your data from theft. We’re dedicated to providing you with peace of mind, knowing your personal information is safe and secure.

You can turn on the new Android theft features by clicking here on a supported Android device. Learn more about our theft protection features by visiting our help center.

Notes


  1. Timing, availability and feature names may vary in One UI 7. 

  2. With the exclusion for Android Go smartphones 



In December 2022, we announced OSV-Scanner, a tool to enable developers to easily scan for vulnerabilities in their open source dependencies. Together with the open source community, we’ve continued to build this tool, adding remediation features, as well as expanding ecosystem support to 11 programming languages and 20 package manager formats. 



Today, we’re excited to release OSV-SCALIBR (Software Composition Analysis LIBRary), an extensible library for SCA and file system scanning. OSV-SCALIBR combines Google’s internal vulnerability management expertise into one scanning library with significant new capabilities such as:


  • SCA for installed packages, standalone binaries, as well as source code

  • OSes package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac

  • Artifact and lockfile scanning in major language ecosystems (Go, Java, Javascript, Python, Ruby, and much more)

  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac

  • SBOM generation in SPDX and CycloneDX, the two most popular document formats

  • Optimization for on-host scanning of resource constrained environments where performance and low resource consumption is critical



OSV-SCALIBR is now the primary SCA engine used within Google for live hosts, code repos, and containers. It’s been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users’ data at Google scale.


We offer OSV-SCALIBR primarily as an open source Go library today, and we’re working on adding its new capabilities into OSV-Scanner as the primary CLI interface.


Using OSV-SCALIBR as a library

All of OSV-SCALIBR’s capabilities are modularized into plugins for software extraction and vulnerability detection which are very simple to expand.You can use OSV-SCALIBR as a library to:

1.Generate SBOMs from the build artifacts and code repos on your live host:

import (

 “context”

 “github.com/google/osv-scalibr”

 “github.com/google/osv-scalibr/converter”

 “github.com/google/osv-scalibr/extractor/filesystem/list”

 “github.com/google/osv-scalibr/fs”

 “github.com/google/osv-scalibr/plugin”

 spdx “github.com/spdx/tools-golang/spdx/v2/v2_3”

)

func GenSBOM(ctx context.Context) *spdx.Document {

 capab := &plugin.Capabilities{OS: plugin.OSLinux}

 cfg := &scalibr.ScanConfig{

   ScanRoots: fs.RealFSScanRoots(“/”),

   FilesystemExtractors: list.FromCapabilities(capab),

   Capabilities: capab,

 }

 result := scalibr.New().Scan(ctx, cfg)

 return converter.ToSPDX23(result, converter.SPDXConfig{})

}

2. Scan a git repo for SBOMs:

Simply replace “/” with the path to your git repo. Also take a look at the various language extractors to enable for code scanning.

3. Scan a remote container for SBOMs:

Replace the scan config from the above code snippet with

import (

 …

 “github.com/google/go-containerregistry/pkg/authn”

 “github.com/google/go-containerregistry/pkg/v1/remote”

 “github.com/google/osv-scalibr/artifact/image”

 …

)

filesys, _ := image.NewFromRemoteName(

 “alpine:latest”,

 remote.WithAuthFromKeychain(authn.DefaultKeychain),

)

cfg := &scalibr.ScanConfig{

 ScanRoots: []*fs.ScanRoot{{FS: filesys}},

 

}

4. Find vulnerabilities on your filesystem or a remote container:

Extract the PURLs from the SCALIBR inventory results from the previous steps:

import (

 …

 “github.com/google/osv-scalibr/converter”

 

)

result := scalibr.New().Scan(ctx, cfg)

for _, i := range result.Inventories {

 fmt.Println(converter.ToPURL(i))

}

And send them to osv.dev, e.g.

$ curl -d ‘{“package”: {“purl”: “pkg:npm/dojo@1.2.3”}}’ “https://api.osv.dev/v1/query”

See the usage docs for more details.

OSV-Scanner + OSV-SCALIBR

Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities using much of the same extraction as OSV-SCALIBR. 


Some of OSV-SCALIBR’s capabilities are not yet available in OSV-Scanner, but we’re currently working on integrating OSV-SCALIBR more deeply into OSV-Scanner. This will make more and more of OSV-SCALIBR’s capabilities available in OSV-Scanner in the next few months, including installed package extraction, weak credentials scanning, SBOM generation, and more.


Look out soon for an announcement of OSV-Scanner V2 with many of these new features available. OSV-Scanner will become the primary frontend to the OSV-SCALIBR library for users who require a CLI interface. Existing users of OSV-Scanner can continue to use the tool the same way, with backwards compatibility maintained for all existing use cases. 


For installation and usage instructions, have a look at OSV-Scanner’s documentation here.


What’s next

In addition to making all of OSV-SCALIBR’s features available in OSV-Scanner, we’re also working on additional new capabilities. Here’s some of the things you can expect:

  • Support for more OS and language ecosystems, both for regular extraction and for Guided Remediation

  • Layer attribution and base image identification for container scanning

  • Reachability analysis to reduce false positive vulnerability matches

  • More vulnerability and misconfiguration detectors for Windows

  • More weak credentials detectors

We hope that this library helps developers and organizations to secure their software and encourages the open source community to contribute back by sharing new plugins on top of OSV-SCALIBR.

If you have any questions or if you would like to contribute, don’t hesitate to reach out to us at osv-discuss@google.com or by posting an issue in our issue tracker.