Google Cloud expands vulnerability detection for Artifact Registry using OSV
Posted by Greg Mucci, Product Manager, Artifact Analysis, Oliver Chang, Senior Staff Engineer, OSV, and Charl de Nysschen, Product Manager, OSV
DevOps teams dedicated to securing their supply chain and predicting potential risks consistently face novel threats. Fortunately, they can now improve their image and container security by harnessing Google-grade vulnerability scanning, which offers expanded open-source coverage. A significant benefit of utilizing Google Cloud Platform is its integrated security tools, including Artifact Analysis. This scanning service leverages the same infrastructure that Google depends on to monitor vulnerabilities within its internal systems and software supply chains.
Artifact Analysis has recently expanded its scanning coverage to eight additional language packages, four operating systems, and two extensively utilized base images, making it a more robust and versatile tool than ever before.
This enhanced coverage was achieved by integrating Artifact Analysis with the Open Source Vulnerabilities (OSV) platform and database. This integration provides industry-leading insights into open source vulnerabilities—a crucial capability as software supply chain attacks continue to grow in frequency and complexity, impacting organizations reliant on open source software.
With these recent updates, customers can now successfully scan the vast majority of the images they push to Artifact Registry. These successful scans ensure that any known vulnerabilities are detected, reported, and can be integrated into a broader vulnerability management program, allowing teams to take prompt action.
Open source vulnerabilities, with more reach
Artifact Analysis pulls vulnerability information directly from OSV, which is the only open source, distributed vulnerability database that gets information directly from open source practitioners. OSV’s database provides a consistent, high quality, high fidelity database of vulnerabilities from authoritative sources who have adopted the OSV schema. This ensures the database has accurate information to reliably match software dependencies to known vulnerabilities—previously a difficult process reliant on inaccurate mechanisms such as CPEs (Common Platform Enumerations).
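To make that matching model concrete, below is a minimal sketch (ours, not Artifact Analysis’s implementation) of how an OSV-style record resolves: an advisory names an exact ecosystem, package, and set of affected version ranges, so a scanned dependency either falls inside a range or it does not, with no fuzzy CPE string comparison involved. The advisory ID, package name, and version numbers are placeholders.

// Conceptual sketch: match an installed dependency against an OSV-style
// advisory by exact ecosystem, package name, and [introduced, fixed) ranges.
#include <iostream>
#include <string>
#include <tuple>
#include <vector>

struct Version { int major = 0, minor = 0, patch = 0; };

bool operator<(const Version& a, const Version& b) {
  return std::tie(a.major, a.minor, a.patch) <
         std::tie(b.major, b.minor, b.patch);
}

// One "affected" range from a hypothetical OSV record: [introduced, fixed).
struct AffectedRange { Version introduced; Version fixed; };

struct Advisory {
  std::string id;         // placeholder advisory ID
  std::string ecosystem;  // e.g. "PyPI"
  std::string package;    // exact package name, not a CPE pattern
  std::vector<AffectedRange> ranges;
};

bool IsAffected(const Advisory& adv, const std::string& ecosystem,
                const std::string& package, const Version& installed) {
  if (adv.ecosystem != ecosystem || adv.package != package) return false;
  for (const auto& r : adv.ranges) {
    // Affected if introduced <= installed < fixed.
    if (!(installed < r.introduced) && installed < r.fixed) return true;
  }
  return false;
}

int main() {
  Advisory adv{"EXAMPLE-2024-0001", "PyPI", "examplepkg",
               {{{1, 2, 0}, {1, 4, 3}}}};
  std::cout << IsAffected(adv, "PyPI", "examplepkg", {1, 3, 0}) << "\n";  // 1
  std::cout << IsAffected(adv, "PyPI", "examplepkg", {1, 4, 3}) << "\n";  // 0
}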
Over the past three years, OSV has increased its total coverage to 28 language and OS ecosystems. For example, industry leaders such as GitHub, Chainguard, and Ubuntu, as well as open source ecosystems such as Rust and Python are now exporting their vulnerability discoveries in the OSV Schema. This increased coverage also includes Chainguard’s Wolfi images and Google’s Distroless images, which are popular choices for minimal container images used by many developers and organizations. Customers who rely on distroless images can count on Artifact Analysis scanning to support their minimal container image initiatives. Each expansion in OSV’s coverage is incorporated into scanning tools that integrate with the OSV database.
Broader vulnerability detection with Artifact Analysis
As a result of OSV’s expansion, scanners like Artifact Analysis that draw from OSV now alert users to higher quality vulnerability information across a broader set of ecosystems—meaning GCP project owners will be made aware of a more complete set of vulnerability findings and potential security risks.
Existing Artifact Registry scanning customers don’t need to take any action to benefit from this update. Projects with scanning enabled will immediately receive the expanded coverage, and vulnerability findings will continue to be available in the Artifact Registry UI, the Container Analysis API, and via Pub/Sub (for workflows).
Existing On-Demand Scanning customers will also benefit from this expanded vulnerability coverage: the same operating system and language package coverage that registry scanning customers enjoy is available in on-demand scans.
Beyond Artifact Registry
We know that detection is just one of the first steps necessary to manage risks. We’re continually expanding Artifact Analysis capabilities and in 2025 we’ll be integrating Artifact Registry vulnerability findings with Google Cloud’s Security Command Center. Through Security Command Center customers can maintain a more comprehensive vulnerability management program, and prioritize risk across a number of different dimensions.
Announcing the launch of Vanir: Open-source Security Patch Validation
Today, we are announcing the availability of Vanir, a new open-source security patch validation tool. Introduced at Android Bootcamp in April, Vanir gives Android platform developers the power to quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches. Vanir significantly accelerates patch validation by automating this process, allowing OEMs to ensure devices are protected with critical security updates much faster than traditional methods. This strengthens the security of the Android ecosystem, helping to keep Android users around the world safe.
By open-sourcing Vanir, we aim to empower the broader security community to contribute to and benefit from this tool, enabling wider adoption and ultimately improving security across various ecosystems. While initially designed for Android, Vanir can be easily adapted to other ecosystems with relatively small modifications, making it a versatile tool for enhancing software security across the board. In collaboration with the Google Open Source Security Team, we have incorporated feedback from our early adopters to improve Vanir and make it more useful for security professionals. This tool is now available for you to start developing on top of, and integrating into, your systems.
The need for Vanir
The Android ecosystem relies on a multi-stage process for vulnerability mitigation. When a new vulnerability is discovered, upstream AOSP developers create and release upstream patches. The downstream device and chip manufacturers then assess the impact on their specific devices and backport the necessary fixes. This process, while effective, can present scalability challenges, especially for manufacturers managing a diverse range of devices and old models with complex update histories. Managing patch coverage across diverse and customized devices often requires considerable effort due to the manual nature of backporting.
To streamline this vital security workflow, we developed Vanir. Vanir provides a scalable and sustainable solution for security patch adoption and validation, helping to ensure Android devices receive timely protection against potential threats.
The power of Vanir
Source-code-based static analysis
Vanir’s first-of-its-kind approach to Android security patch validation uses source-code-based static analysis to directly compare the target source code against known vulnerable code patterns. Vanir does not rely on traditional metadata-based validation mechanisms, such as version numbers, repository history and build configs, which can be prone to errors. This unique approach enables Vanir to analyze entire codebases with full history, individual files, or even partial code snippets.
A main focus of Vanir is to automate the time-consuming and costly process of identifying missing security patches in the open source software ecosystem. During the early development of Vanir, it became clear that manually identifying a high volume of missing patches is not only labor intensive but can also leave user devices inadvertently exposed to known vulnerabilities for a period of time. To address this, Vanir utilizes novel automatic signature refinement techniques and multiple pattern analysis algorithms, inspired by the vulnerable code clone detection algorithms proposed by Jang et al. [1] and Kim et al. [2]. These algorithms have low false-alarm rates and can effectively handle broad classes of code changes that might appear in patching processes. In fact, based on our 2-year operation of Vanir, only 2.72% of signatures triggered false alarms. This allows Vanir to efficiently find missing patches, even with code changes, while minimizing unnecessary alerts and manual review effort.
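As a rough illustration of the general idea behind vulnerable code clone detection (and emphatically not Vanir’s actual signature algorithm, which layers automatic signature refinement and several pattern analyses on top), one can hash normalized windows of source lines taken from known-vulnerable code and look for those hashes in a target file. The toy sketch below shows that ReDeBug-style flavor of matching:

// Toy sketch: hash every window of k normalized lines from known-vulnerable
// code, then count how many of those windows still appear in a target file.
#include <cstddef>
#include <functional>
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_set>
#include <vector>

// Normalize a line: drop '//' comments and leading/trailing whitespace.
std::string Normalize(std::string line) {
  if (auto pos = line.find("//"); pos != std::string::npos) line.erase(pos);
  auto first = line.find_first_not_of(" \t");
  if (first == std::string::npos) return "";
  auto last = line.find_last_not_of(" \t");
  return line.substr(first, last - first + 1);
}

std::vector<std::string> NormalizedLines(const std::string& code) {
  std::vector<std::string> out;
  std::istringstream in(code);
  for (std::string line; std::getline(in, line);) {
    std::string n = Normalize(line);
    if (!n.empty()) out.push_back(n);
  }
  return out;
}

std::unordered_set<size_t> WindowHashes(const std::string& code, size_t k) {
  std::unordered_set<size_t> hashes;
  auto lines = NormalizedLines(code);
  for (size_t i = 0; i + k <= lines.size(); ++i) {
    std::string window;
    for (size_t j = 0; j < k; ++j) window += lines[i + j] + "\n";
    hashes.insert(std::hash<std::string>{}(window));
  }
  return hashes;
}

int main() {
  // "Signature": windows from the known-vulnerable version of a function.
  std::string vulnerable = R"(int copy(char* dst, const char* src, int n) {
    for (int i = 0; i <= n; i++)  // off-by-one
      dst[i] = src[i];
    return n;
  })";
  // Target still containing the unpatched pattern, with cosmetic differences.
  std::string target = R"(int copy(char* dst, const char* src, int n) {
    for (int i = 0; i <= n; i++)
      dst[i] = src[i];  // copy bytes
    return n;
  })";
  auto signature = WindowHashes(vulnerable, 3);
  size_t matches = 0;
  for (size_t h : WindowHashes(target, 3)) matches += signature.count(h);
  std::cout << "matching windows: " << matches << "\n";  // >0: likely unpatched
}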
Vanir’s source-code-based approach also enables rapid scaling across any ecosystem. It can generate signatures for any source files written in supported languages. Vanir’s signature generator automatically generates, tests, and refines these signatures, allowing users to quickly create signatures for new vulnerabilities in any ecosystem simply by providing source files with security patches.
Android’s successful use of Vanir highlights its efficiency compared to traditional patch verification methods. A single engineer used Vanir to generate signatures for over 150 vulnerabilities and verify missing security patches across Android’s downstream branches – all within just five days.
Vanir for Android
Currently Vanir supports C/C++ and Java targets and covers 95% of Android kernel and userspace CVEs with public security patches. The Google Android Security team consistently incorporates the latest CVEs into Vanir’s coverage to provide a complete picture of the Android ecosystem’s patch adoption risk profile.
The Vanir signatures for Android vulnerabilities are published through the Open Source Vulnerabilities (OSV) database. This allows Vanir users to seamlessly protect their codebases against the latest Android vulnerabilities without any additional updates. Currently, there are over 2,000 Android vulnerabilities in OSV, and scanning an entire Android source tree takes 10-20 minutes on a modern PC.
Flexible integration, adoption and expansion
Vanir is developed not only as a standalone application but also as a Python library. Users who want to integrate automated patch verification into their continuous build or test pipelines can do so by wiring their build tooling to the Vanir scanner libraries. For instance, Vanir is integrated into a continuous testing pipeline at Google, ensuring that all security patches are adopted in the ever-evolving Android codebase and its first-party downstream branches.
Vanir is also fully open source under the BSD-3-Clause license. Because Vanir is not fundamentally limited to the Android ecosystem, you can adapt it to the ecosystem you want to protect with relatively small modifications. In addition, since Vanir’s underlying algorithm is not limited to security patch validation, you can modify the source and use it for other purposes such as licensed code detection or code clone detection. The Android Security team welcomes contributions in any direction that expands Vanir’s capability and scope. You can also contribute by providing vulnerability data with Vanir signatures to OSV.
Vanir Results
Since early last year, we have partnered with several Android OEMs to test the tool’s effectiveness. Internally we have been able to integrate the tool into our build system continuously testing against over 1,300 vulnerabilities. Currently Vanir covers 95% of all Android, Wear, and Pixel vulnerabilities with public fixes across Android Kernel and Userspace. It has a 97% accuracy rate, which has saved our internal teams over 500 hours to date in patch fix time.
Next steps
We are happy to announce that Vanir is now available for public use. Vanir is not technically limited to Android, and we are also actively exploring problems that Vanir may help address, such as general C/C++ dependency management via integration with OSV-scanner. If you are interested in using or contributing to Vanir, please visit github.com/google/vanir. Please join our public community to submit your feedback and questions on the tool.
We look forward to working with you on Vanir!
Notes
[1] J. Jang, A. Agrawal and D. Brumley, “ReDeBug: Finding Unpatched Code Clones in Entire OS Distributions,” 2012 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, 2012, pp. 48-62, doi: 10.1109/SP.2012.13.
[2] S. Kim, S. Woo, H. Lee and H. Oh, “VUDDY: A Scalable Approach for Vulnerable Code Clone Discovery,” 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 2017, pp. 595-614, doi: 10.1109/SP.2017.62.
Leveling Up Fuzzing: Finding more vulnerabilities with AI
Posted by Oliver Chang, Dongge Liu and Jonathan Metzman, Google Open Source Security Team
Recently, OSS-Fuzz reported 26 new vulnerabilities to open source project maintainers, including one vulnerability in the critical OpenSSL library (CVE-2024-9143) that underpins much of internet infrastructure. The reports themselves aren’t unusual—we’ve reported and helped maintainers fix over 11,000 vulnerabilities in the 8 years of the project.
But these particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets. The OpenSSL CVE is one of the first vulnerabilities in a critical piece of software that was discovered by LLMs, adding another real-world example to a recent Google discovery of an exploitable stack buffer underflow in the widely used database engine SQLite.
This blog post discusses the results and lessons over a year and a half of work to bring AI-powered fuzzing to this point, both in introducing AI into fuzz target generation and expanding this to simulate a developer’s workflow. These efforts continue our explorations of how AI can transform vulnerability discovery and strengthen the arsenal of defenders everywhere.
The story so far
In August 2023, the OSS-Fuzz team announced AI-Powered Fuzzing, describing our effort to leverage large language models (LLM) to improve fuzzing coverage to find more vulnerabilities automatically—before malicious attackers could exploit them. Our approach was to use the coding abilities of an LLM to generate more fuzz targets, which are similar to unit tests that exercise relevant functionality to search for vulnerabilities.
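For readers unfamiliar with the format, a libFuzzer-style fuzz target is a small harness that hands fuzzer-generated bytes to the code under test. The sketch below shows the general shape of such a target; ParseConfig is a hypothetical stand-in for whatever project API the harness exercises.

// Minimal libFuzzer-style fuzz target. LLVMFuzzerTestOneInput is the standard
// libFuzzer entry point; build with: clang++ -g -fsanitize=fuzzer,address ...
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical function-under-test standing in for a real project API.
static bool ParseConfig(const std::string& text) {
  return !text.empty() && text.front() == '[';
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  // Treat the raw fuzzer input as a candidate config and feed it to the API;
  // crashes or sanitizer reports during fuzzing indicate potential bugs.
  std::string input(reinterpret_cast<const char*>(data), size);
  ParseConfig(input);
  return 0;  // non-zero return values are reserved by libFuzzer
}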
The ideal solution would be to completely automate the manual process of developing a fuzz target end to end:
Drafting an initial fuzz target.
Fixing any compilation issues that arise.
Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues.
Running the corrected fuzz target for a longer period of time, and triaging any crashes to determine the root cause.
Fixing vulnerabilities.
In August 2023, we covered our efforts to use an LLM to handle the first two steps. We were able to use an iterative process to generate a fuzz target with a simple prompt including hardcoded examples and compilation errors.
In January 2024, we open sourced the framework that we were building to enable an LLM to generate fuzz targets. By that point, LLMs were reliably generating targets that exercised more interesting code coverage across 160 projects. But there was still a long tail of projects where we couldn’t get a single working AI-generated fuzz target.
To address this, we’ve been improving the first two steps, as well as implementing steps 3 and 4.
New results: More code coverage and discovered vulnerabilities
We’re now able to automatically gain more coverage in 272 C/C++ projects on OSS-Fuzz (up from 160), adding 370k+ lines of new code coverage. The top coverage improvement in a single project was an increase from 77 lines to 5434 lines (a 7000% increase).
This led to the discovery of 26 new vulnerabilities in projects on OSS-Fuzz that already had hundreds of thousands of hours of fuzzing. The highlight is CVE-2024-9143 in the critical and well-tested OpenSSL library. We reported this vulnerability on September 16 and a fix was published on October 16. As far as we can tell, this vulnerability has likely been present for two decades and wouldn’t have been discoverable with existing fuzz targets written by humans.
Another example was a bug in the project cJSON, where even though an existing human-written harness existed to fuzz a specific function, we still discovered a new vulnerability in that same function with an AI-generated target.
One reason that such bugs could remain undiscovered for so long is that line coverage is not a guarantee that a function is free of bugs. Code coverage as a metric isn’t able to measure all possible code paths and states—different flags and configurations may trigger different behaviors, unearthing different bugs. These examples underscore the need to continue to generate new varieties of fuzz targets even for code that is already fuzzed, as has also been shown by Project Zero in the past (1, 2).
New improvements
To achieve these results, we’ve been focusing on two major improvements:
Automatically generate more relevant context in our prompts. The more complete and relevant information we can provide the LLM about a project, the less likely it would be to hallucinate the missing details in its response. This meant providing more accurate, project-specific context in prompts, such as function, type definitions, cross references, and existing unit tests for each project. To generate this information automatically, we built new infrastructure to index projects across OSS-Fuzz.
LLMs turned out to be highly effective at emulating a typical developer’s entire workflow of writing, testing, and iterating on the fuzz target, as well as triaging the crashes found. Thanks to this, it was possible to further automate more parts of the fuzzing workflow. This additional iterative feedback in turn also resulted in a greater number of higher-quality, correct fuzz targets.
The workflow in action
Our LLM can now execute the first four steps of the developer’s process (with the fifth soon to come).
A developer might check the source code, existing documentation and unit tests, as well as usages of the target function, when drafting an initial fuzz target. An LLM can fulfill this role if we provide a prompt with this information and ask it to come up with a fuzz target.
Prompt:
Your goal is to write a fuzzing harness for the provided function-under-test signature using <code>LLVMFuzzerTestOneInput</code>. It is important that the provided solution compiles and actually calls the function-under-test specified by the function signature:
<function signature>
unsigned char * buffer_append_base64_decode(buffer *, const char *, size_t, base64_charset)
</function signature>
Here is the source code of the function being tested:
<code>
unsigned char* buffer_append_base64_decode(buffer *out, const char* in, size_t in_length, base64_charset charset) {
const size_t reserve = 3*(in_length/4) + 3;
unsigned char * const result = (unsigned char *)
buffer_string_prepare_append(out, reserve);
const size_t out_pos =
li_base64_dec(result, reserve, in, in_length, charset);
buffer_commit(out, out_pos);
return (out_pos || !in_length) ? result : NULL;
}
</code>
Here is the source code for functions which reference the function being tested:
<code>
…
Once a developer has a candidate target, they would try to compile it and look at any compilation issues that arise. Again, we can prompt an LLM with details of the compilation errors so it can provide fixes.
Prompt:
Below is the error to fix:
The code has the following build issues:
<error>
server_fuzzer.cc:12:46: error: only virtual member functions can be marked ‘override’
12 | bool OnData(const char* data, size_t size) override {
| ^~~~~~~~
server_fuzzer.cc:22:39: error: no type named ‘MultiPartFile’ in namespace ‘httplib’
22 | const httplib::MultiPartFile& file) override {
| ~~~~~~~~~^
server_fuzzer.cc:12:27: warning: unused parameter ‘data’ [-Wunused-parameter]
12 | bool OnData(const char* data, size_t size) override {
| ^
server_fuzzer.cc:37:5: error: use of undeclared ‘operator()’
37 | operator()(request_data.data(), receiver);
| ^
server_fuzzer.cc:27:23: warning: private field ‘provider_’ is not used [-Wunused-private-field]
27 | FuzzedDataProvider& provider_;
| ^
</error>
Example of compilation errors that an LLM was able to fix
Once all compilation errors are fixed, a developer would try running the fuzz target for a short period of time to see if there were any mistakes that led it to instantly crash, suggesting an error with the target rather than a bug discovered in the project.
The following is an example of an LLM fixing a semantic issue with the fuzzing setup:
LLM Response:
<explanation>
The issue is that `sqlite3_blob_open` requires the table to exist before you can open a blob on it. The fix ensures that the table exists by executing a `CREATE TABLE IF NOT EXISTS` statement before calling `sqlite3_blob_open`.
Here’s a breakdown of the changes:
1. **Create Table Query:** A string `createTableQuery` is constructed to hold the SQL statement for creating the table. It uses the fuzzed values for the table and column names.
2. **Prepare Statement:** `sqlite3_prepare_v2` is used to prepare the SQL statement from `createTableQuery`. This compiles the SQL into a byte-code program that SQLite can execute.
3. **Execute Statement:** `sqlite3_step` executes the prepared statement. Since it’s a `CREATE TABLE` statement, it doesn’t return any data, but it creates the table if it doesn’t exist.
4. **Finalize Statement:** `sqlite3_finalize` destroys the prepared statement object when it’s no longer needed.
By adding these steps before calling `sqlite3_blob_open`, we ensure that the table exists, preventing the null pointer access issue.
</explanation>
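Translated into code, the fix the LLM describes looks roughly like the sketch below, written against the standard SQLite C API; the fixed table and column names stand in for the fuzzed values used in the actual harness.

// Sketch of the described fix: make sure the table exists before calling
// sqlite3_blob_open, so the blob handle is never opened on a missing table.
#include <sqlite3.h>
#include <string>

void OpenBlobSafely() {
  sqlite3* db = nullptr;
  if (sqlite3_open(":memory:", &db) != SQLITE_OK) return;

  // 1-2. Build and prepare the CREATE TABLE IF NOT EXISTS statement.
  const std::string createTableQuery =
      "CREATE TABLE IF NOT EXISTS t(id INTEGER PRIMARY KEY, payload BLOB);";
  sqlite3_stmt* stmt = nullptr;
  if (sqlite3_prepare_v2(db, createTableQuery.c_str(), -1, &stmt, nullptr) ==
      SQLITE_OK) {
    sqlite3_step(stmt);  // 3. Execute: creates the table if it doesn't exist.
  }
  sqlite3_finalize(stmt);  // 4. Release the prepared statement.

  // The table now exists before the blob is opened, which removes the failure
  // mode the original harness hit.
  sqlite3_blob* blob = nullptr;
  if (sqlite3_blob_open(db, "main", "t", "payload", /*iRow=*/1, /*flags=*/0,
                        &blob) == SQLITE_OK) {
    sqlite3_blob_close(blob);
  }
  sqlite3_close(db);
}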
4. Running the corrected fuzz target for a longer period of time, and triaging any crashes.
At this point, the fuzz target is ready to run for an extended period of time on a suitable fuzzing infrastructure, such as ClusterFuzz.
Any discovered crashes would then need to be triaged, to determine the root causes and whether they represented legitimate vulnerabilities (or bugs in the fuzz target). An LLM can be prompted with the relevant context (stacktraces, fuzz target source code, relevant project source code) to perform this triage.
Prompt: Given the following crash report, fuzz driver code and relevant project function code, analyze the cause of the crash.
First, only answer “Crash is caused by bug in fuzz driver” or “Crash is caused by bug in project”. Second, offer a succinct and to-the-point analysis.
Below is crash report:
<log>
AddressSanitizer: heap-use-after-free on address 0x502000000630 at pc 0x560036939242 bp 0x7ffd65dca850 sp 0x7ffd65dca010
READ of size 1 at 0x502000000630 thread T0
SCARINESS: 40 (1-byte-read-heap-use-after-free)
#0 0x560036939241 in __asan_memcpy /src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:63:3
#1 0x56003697d30b in EmitLiteral<false> /src/snappy/snappy.cc:661:5
#2 0x56003697d30b in snappy::internal::CompressFragment(char const*, unsigned long, char*, unsigned short*, int) /src/snappy/snappy.cc:954:10
In this example, the LLM correctly determines this is a bug in the fuzz target, rather than a bug in the project being fuzzed.
LLM Response: Crash is caused by bug in fuzz driver.
The fuzz driver creates a vector of `iovec` structures, each pointing to a chunk of memory owned by the `FuzzedDataProvider`. However, the `data_chunk` vector, which holds the actual data pointed to by `iov_base`, is deallocated at the end of each fuzzer iteration when it goes out of scope. This leaves the `iov_base` pointers dangling. When `snappy::CompressFromIOVec` tries to access the memory pointed to by these dangling pointers, a heap-use-after-free occurs.
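The pattern the LLM describes boils down to a lifetime bug along the lines of the sketch below (the names mirror the response and are illustrative rather than the exact fuzz driver code):

// Sketch of the diagnosed bug: the iovec keeps a raw pointer into data_chunk,
// but data_chunk is destroyed before the pointer is used, so it dangles.
#include <sys/uio.h>
#include <cstddef>
#include <vector>

std::vector<iovec> BuildIOVecs(const std::vector<char>& input) {
  std::vector<iovec> iovs;
  {
    // BUG: data_chunk lives only inside this block.
    std::vector<char> data_chunk(input.begin(), input.end());
    iovec iov{};
    iov.iov_base = data_chunk.data();  // pointer into data_chunk's buffer
    iov.iov_len = data_chunk.size();
    iovs.push_back(iov);
  }  // data_chunk destroyed here; iov_base now dangles
  // Any later read through iov_base (e.g. a CompressFromIOVec-style call) is
  // a heap-use-after-free. Fix: keep the backing storage alive for as long as
  // the iovec entries are used.
  return iovs;
}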
5. Fixing vulnerabilities.
The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability. We don’t have anything we can share here today, but we’re collaborating with various researchers to make this a reality and look forward to sharing results soon.
Up next
Improving automated triaging: to get to a point where we’re confident about not requiring human review. This will help automatically report new vulnerabilities to project maintainers. There are likely more vulnerabilities hiding in our results beyond the 26 we’ve already reported upstream.
Agent-based architecture: which means letting the LLM autonomously plan out the steps to solve a particular problem by providing it with access to tools that enable it to get more information, as well as to check and validate results. By providing the LLM with interactive access to real tools such as debuggers, we’ve found that the LLM is more likely to arrive at a correct result.
Integrating our research into OSS-Fuzz as a feature: to achieve a more fully automated end-to-end solution for vulnerability discovery and patching. We hope OSS-Fuzz will be useful for other researchers to evaluate AI-powered vulnerability discovery ideas and ultimately become a tool that will enable defenders to find more vulnerabilities before they get exploited.
For more information, check out our open source framework at oss-fuzz-gen. We’re hoping to continue to collaborate on this area with other researchers. Also, be sure to check out the OSS-Fuzz blog for more technical updates.
Retrofitting spatial safety to hundreds of millions of lines of C++
Posted by Alex Rebert and Max Shavrick, Security Foundations, and Kinuko Yasuda, Core Developer
Attackers regularly exploit spatial memory safety vulnerabilities, which occur when code accesses a memory allocation outside of its intended bounds, to compromise systems and sensitive data. These vulnerabilities represent a major security risk to users.
Based on an analysis of in-the-wild exploits tracked by Google’s Project Zero, spatial safety vulnerabilities represent 40% of in-the-wild memory safety exploits over the past decade:
Breakdown of memory safety CVEs exploited in the wild by vulnerability class.1
Google is taking a comprehensive approach to memory safety. A key element of our strategy focuses on Safe Coding and using memory-safe languages in new code. This leads to an exponential decline in memory safety vulnerabilities and quickly improves the overall security posture of a codebase, as demonstrated by our post about Android’s journey to memory safety.
However, this transition will take multiple years as we adapt our development practices and infrastructure. Ensuring the safety of our billions of users therefore requires us to go further: we’re also retrofitting secure-by-design principles to our existing C++ codebase wherever possible.
To that end, we’re working towards bringing spatial memory safety into as many of our C++ codebases as possible, including Chrome and the monolithic codebase powering our services.
We’ve begun by enabling hardened libc++, which adds bounds checking to standard C++ data structures, eliminating a significant class of spatial safety bugs. While C++ will not become fully memory-safe, these improvements reduce risk as discussed in more detail in our perspective on memory safety, leading to more reliable and secure software.
This post explains how we’re retrofitting hardened libc++ across our codebases and showcases the positive impact it’s already having, including preventing exploits, reducing crashes, and improving code correctness.
Bounds-checked data structures: The foundation for spatial safety
One of our primary strategies for improving spatial safety in C++ is to implement bounds checking for common data structures, starting with hardening the C++ standard library (in our case, LLVM’s libc++). Hardened libc++, recently added by open source contributors, introduces a set of security checks designed to catch vulnerabilities such as out-of-bounds accesses in production.
For example, hardened libc++ ensures that every access to an element of a std::vector stays within its allocated bounds, preventing attempts to read or write beyond the valid memory region. Similarly, hardened libc++ checks that a std::optional isn’t empty before allowing access, preventing access to uninitialized memory.
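In practice this means code like the sketch below now traps immediately instead of silently reading out of bounds; the exact opt-in mechanism (for example, the _LIBCPP_HARDENING_MODE macro in recent LLVM releases) depends on your libc++ version, so treat that detail as an assumption about the toolchain.

// Both functions contain bugs that hardened libc++ turns into immediate traps.
#include <optional>
#include <vector>

int ReadPastEnd() {
  std::vector<int> v = {1, 2, 3};
  // Without hardening, v[3] is undefined behavior and may silently read
  // adjacent memory; with hardened libc++ the access aborts at once.
  return v[3];
}

int UnwrapEmpty() {
  std::optional<int> maybe;  // empty
  // Hardened libc++ checks that the optional holds a value before the
  // dereference, instead of reading uninitialized storage.
  return *maybe;
}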
This approach mirrors what’s already standard practice in many modern programming languages like Java, Python, Go, and Rust. They all incorporate bounds checking by default, recognizing its crucial role in preventing memory errors. C++ has been a notable exception, but efforts like hardened libc++ aim to close this gap in our infrastructure. It’s also worth noting that similar hardening is available in other C++ standard libraries, such as libstdc++.
Raising the security baseline across the board
Building on the successful deployment of hardened libc++ in Chrome in 2022, we’ve now made it default across our server-side production systems. This improves spatial memory safety across our services, including key performance-critical components of products like Search, Gmail, Drive, YouTube, and Maps. While a very small number of components remain opted out, we’re actively working to reduce this and raise the bar for security across the board, even in applications with lower exploitation risk.
The performance impact of these changes was surprisingly low, despite Google’s modern C++ codebase making heavy use of libc++. Hardening libc++ resulted in an average 0.30% performance impact across our services (yes, only a third of a percent).
This is due to both the compiler’s ability to eliminate redundant checks during optimization, and the efficient design of hardened libc++. While a handful of performance-critical code paths still require targeted use of explicitly unsafe accesses, these instances are carefully reviewed for safety. Techniques like profile-guided optimizations further improved performance, but even without those advanced techniques, the overhead of bounds checking remains minimal.
We actively monitor the performance impact of these checks and work to minimize any unnecessary overhead. For instance, we identified and fixed an unnecessary check, which led to a 15% reduction in overhead (reduced from 0.35% to 0.3%), and contributed the fix back to the LLVM project to share the benefits with the broader C++ community.
While hardened libc++’s overhead is minimal for individual applications in most cases, deploying it at Google’s scale required a substantial commitment of computing resources. This investment underscores our dedication to enhancing the safety and security of our products.
From tests to production
Enabling libc++ hardening wasn’t a simple flip of a switch. Rather, it required a multi-stage rollout to avoid accidentally disrupting users or creating an outage.
Quantifiable impact
In just a few months since enabling hardened libc++ by default, we’ve already seen benefits.
Preventing exploits: Hardened libc++ has already disrupted an internal red team exercise and would have prevented another one that happened before we enabled hardening, demonstrating its effectiveness in thwarting exploits. The safety checks have uncovered over 1,000 bugs, and would prevent 1,000 to 2,000 new bugs yearly at our current rate of C++ development.
Improved reliability and correctness: The process of identifying and fixing bugs uncovered by hardened libc++ led to a 30% reduction in our baseline segmentation fault rate across production, indicating improved code reliability and quality. Beyond crashes, the checks also caught errors that would have otherwise manifested as unpredictable behavior or data corruption.
Moving average of segfaults across our fleet over time, before and after enablement.
Easier debugging: Hardened libc++ enabled us to identify and fix multiple bugs that had been lurking in our code for more than a decade. The checks transform many difficult-to-diagnose memory corruptions into immediate and easily debuggable errors, saving developers valuable time and effort.
Bridging the gap with memory-safe languages
While libc++ hardening provides immediate benefits by adding bounds checking to standard data structures, it’s only one piece of the puzzle when it comes to spatial safety.
We’re expanding bounds checking to other libraries and working to migrate our code to Safe Buffers, requiring all accesses to be bounds checked. For spatial safety, both hardened data structures, including their iterators, and Safe Buffers are necessary.
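As a sketch of what that migration looks like in source, a pointer-plus-length interface becomes a span, so every element access goes through a type the hardened library can check; we are assuming here that your hardened libc++ build also instruments std::span indexing, as recent versions do.

// Before: nothing ties the length to the data, so an off-by-one goes unnoticed.
// After: the span carries its own length and indexing is bounds checked under
// hardened libc++.
#include <cstddef>
#include <cstdint>
#include <span>

uint32_t ChecksumUnsafe(const uint8_t* data, size_t len) {
  uint32_t sum = 0;
  for (size_t i = 0; i <= len; ++i) sum += data[i];  // off-by-one, silent
  return sum;
}

uint32_t ChecksumSafe(std::span<const uint8_t> data) {
  uint32_t sum = 0;
  for (size_t i = 0; i < data.size(); ++i) sum += data[i];  // checked access
  return sum;
}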
Beyond improving the safety of our C++, we’re also focused on making it easier to interoperate with memory-safe languages. Migrating our C++ to Safe Buffers shrinks the gap between the languages, which simplifies interoperability and potentially even an eventual automated translation.
Building a safer C++ ecosystem
Hardened libc++ is a practical and effective way to enhance the safety, reliability, and debuggability of C++ code with minimal overhead. Given this, we strongly encourage organizations using C++ to enable their standard library’s hardened mode universally by default.
At Google, enabling hardened libc++ is only the first step in our journey towards a spatially safe C++ codebase. By expanding bounds checking, migrating to Safe Buffers, and actively collaborating with the broader C++ community, we aim to create a future where spatial safety is the norm.
Acknowledgements
We’d like to thank Emilia Kasper, Chandler Carruth, Duygu Isler, Matthew Riley, and Jeff Vander Stoep for their helpful feedback. We also extend our thanks to the libc++ community for developing the hardening mode that made this work possible.
1. Based on manual analysis of CVEs from July 15, 2014 to Dec 14, 2023. Note that we could not classify 11% of CVEs.
Safer with Google: New intelligent, real-time protections on Android to keep you safe
Posted by Lyubov Farafonova, Product Manager and Steve Kafka, Group Product Manager, Android
User safety is at the heart of everything we do at Google. Our mission to make technology helpful for everyone means building features that protect you while keeping your privacy top of mind. From Gmail’s defenses that stop more than 99.9% of spam, phishing and malware, to Google Messages’ advanced security that protects users from 2 billion suspicious messages a month and beyond, we’re constantly developing and expanding protection features that help keep you safe.
We’re introducing two new real-time protection features that enhance your safety, all while safeguarding your privacy: Scam Detection in Phone by Google to protect you from scams and fraud, and Google Play Protect live threat detection with real-time alerts to protect you from malware and dangerous apps.
These new security features are available first on Pixel, and are coming soon to more Android devices.
More intelligent AI-powered protection against scams
Scammers steal over $1 trillion a year from people, and phone calls are their favorite way to do it. Even more alarming, scam calls are evolving, becoming increasingly sophisticated, damaging and harder to identify. That’s why we’re using the best of Google AI to identify and stop scams before they can do harm with Scam Detection.
Real-time protection, built with your privacy in mind.
We’re now rolling out Scam Detection to English-speaking Phone by Google public beta users in the U.S. with a Pixel 6 or newer device.
To provide feedback on your experience, please click on Phone by Google App -> Menu -> Help & Feedback -> Send Feedback. We look forward to learning from this beta and your feedback, and we’ll share more about Scam Detection in the months ahead.
More real-time alerts to protect you from bad apps
Google Play Protect works non-stop to protect you in real-time from malware and unsafe apps. Play Protect analyzes behavioral signals related to the use of sensitive permissions and interactions with other apps and services.
With live threat detection, if a harmful app is found, you’ll now receive a real-time alert, allowing you to take immediate action to protect your device. By looking at actual activity patterns of apps, live threat detection can now find malicious apps that try extra hard to hide their behavior or lie dormant for a time before engaging in suspicious activity.
At launch, live threat detection will focus on stalkerware, code that may collect personal or sensitive data for monitoring purposes without user consent, and we will explore expanding its detection to other types of harmful apps in the future. All of this protection happens on your device in a privacy preserving way through Private Compute Core, which allows us to protect users without collecting data.
Live threat detection with real-time alerts in Google Play Protect is now available on Pixel 6+ devices and will be coming to devices from additional phone makers in the coming months.
Life on a crooked RedLine: Analyzing the infamous infostealer’s backend
Following the takedown of RedLine Stealer by international authorities, ESET researchers are publicly releasing their research into the infostealer’s backend modules
ESET APT Activity Report Q2 2024–Q3 2024
An overview of the activities of selected APT groups investigated and analyzed by ESET Research in Q2 2024 and Q3 2024
Jane Goodall: Reasons for hope | Starmus highlights
The trailblazing scientist shares her reasons for hope in the fight against climate change and how we can tackle seemingly impossible problems and keep going in the face of adversity
Month in security with Tony Anscombe – October 2024 edition
Election interference, American Water and the Internet Archive breaches, new cybersecurity laws, and more – October saw no shortage of impactful cybersecurity news stories
How to remove your personal information from Google Search results
Have you ever googled yourself? Were you happy with what came up? If not, consider requesting the removal of your personal information from search results.