Brave comes out on top in browser privacy study (WeLiveSecurity)
By contrast, two web browsers share identifiers that are tied to the device hardware and so persist even across fresh installs.

5 reasons to consider a career in cybersecurity (WeLiveSecurity)
From competitive salaries to ever-evolving job descriptions, there are myriad reasons why a cybersecurity career could be right for you.


We are excited to launch FuzzBench, a fully automated, open source, free service for evaluating fuzzers. The goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and make fuzzing research easier for the community to adopt.
Fuzzing is an important bug-finding technique. At Google, we've found tens of thousands of bugs with fuzzers like libFuzzer and AFL. There are numerous research papers that either improve upon these tools (e.g., MOpt-AFL and AFLFast) or introduce new bug-finding techniques (e.g., Driller and QSYM). However, it is hard to know how well these new tools and techniques generalize across a large set of real-world programs. Though research normally includes evaluations, these often have shortcomings: they don't use a large and diverse set of real-world benchmarks, they use too few or too short trials, or they lack statistical tests to show whether findings are significant. This is understandable, since full-scale experiments can be prohibitively expensive for researchers. For example, a 24-hour, 10-trial, 10-fuzzer, 20-benchmark experiment amounts to 10 × 10 × 20 = 2,000 day-long trials, and so would require 2,000 CPUs to complete in a day.
To help solve these issues, the OSS-Fuzz team is launching FuzzBench, which provides a framework for painlessly evaluating fuzzers in a reproducible way. To use FuzzBench, researchers simply integrate a fuzzer, and FuzzBench runs an experiment for 24 hours with many trials and real-world benchmarks. Based on the data from this experiment, FuzzBench produces a report comparing the performance of the fuzzer to others and giving insights into the strengths and weaknesses of each fuzzer. This should allow researchers to spend more of their time perfecting techniques and less time setting up evaluations and dealing with existing fuzzers.
Integrating a fuzzer with FuzzBench is simple: most integrations are fewer than 50 lines of code, along the lines of the sketch below. Once a fuzzer is integrated, it can fuzz almost all 250+ OSS-Fuzz projects out of the box. We have already integrated ten fuzzers, including AFL, libFuzzer, Honggfuzz, and several academic projects such as QSYM and Eclipser.
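For a sense of what an integration involves, here is a minimal Python sketch in the shape of a FuzzBench fuzzer.py. It is a hypothetical example rather than a verbatim integration: the utils.build_benchmark() helper mirrors how the AFL-style integrations build benchmarks, and the fuzzer binary path and flags are invented.

```python
# fuzzers/my_fuzzer/fuzzer.py -- hypothetical FuzzBench integration sketch.
import os
import subprocess

from fuzzers import utils  # FuzzBench's shared build helpers (assumed import)


def build():
    """Build a benchmark with this fuzzer's compile-time instrumentation."""
    os.environ['CC'] = 'clang'
    os.environ['CXX'] = 'clang++'
    # Runtime library the fuzz target is linked against (invented path).
    os.environ['FUZZER_LIB'] = '/libMyFuzzer.a'
    utils.build_benchmark()


def fuzz(input_corpus, output_corpus, target_binary):
    """Run the fuzzer on the target binary for the duration of the trial."""
    # Invented binary and flags; a real integration passes whatever its
    # fuzzer expects for the seed corpus and output corpus directories.
    subprocess.check_call([
        '/my_fuzzer',
        '-i', input_corpus,
        '-o', output_corpus,
        target_binary,
    ])
```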
Reports include statistical tests to give an idea of how likely it is that performance differences between fuzzers are simply due to chance, as well as the raw data so that researchers can do their own analysis. Performance is measured by the number of program edges covered, though we plan on adding crashes as a performance metric. You can view a sample report here.
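To illustrate the kind of significance test such a report can include, the following sketch compares per-trial edge coverage from two fuzzers on one benchmark using a Mann-Whitney U test, a common choice for comparing fuzzer trial results; the coverage numbers are invented for the example.

```python
# Sketch: is the coverage difference between two fuzzers likely due to chance?
from scipy.stats import mannwhitneyu

# Final edge coverage from 10 trials of each fuzzer on one benchmark
# (invented numbers, for illustration only).
fuzzer_a = [2310, 2295, 2401, 2388, 2290, 2350, 2330, 2405, 2299, 2360]
fuzzer_b = [2250, 2260, 2301, 2240, 2288, 2255, 2270, 2295, 2248, 2265]

# Two-sided Mann-Whitney U test: a small p-value suggests the difference
# between the coverage distributions is unlikely to be due to chance alone.
stat, p_value = mannwhitneyu(fuzzer_a, fuzzer_b, alternative='two-sided')
print(f'U = {stat}, p = {p_value:.4f}')
```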
How to Participate
Our goal is to develop FuzzBench with community contributions and input so that it becomes the gold standard for fuzzer evaluation. We invite members of the fuzzing research community to contribute their fuzzers and techniques, even while they are in development. Better evaluations will lead to more adoption and greater impact for fuzzing research.
We also encourage contributions of better ideas and techniques for evaluating fuzzers. Though we have made some progress on this problem, we have not solved it and we need the community’s help in developing these best practices.
Please join us by contributing to the FuzzBench repo on GitHub.

RSA 2020 – Is your machine learning/quantum computer lying to you? (WeLiveSecurity)
And how would you know if the algorithm was tampered with?

Week in security with Tony Anscombe (WeLiveSecurity)
ESET research uncovers a vulnerability in Wi-Fi chips – How to protect yourself against tax refund fraud – Clearview AI suffers a data breach.

Firefox turns on DNS over HTTPS by default for US users (WeLiveSecurity)
Users in other parts of the world also have the option to flip on DNS encryption.

Cyberbullying: How is it different from face-to-face bullying? (WeLiveSecurity)
The digital age has added a whole new dimension to hurtful behavior, and we look at some of the key features that set in-person and online bullying apart.


User trust is critical to the success of developers of every size. On the Google Play Store, we aim to help developers boost their users' trust by surfacing signals in the Play Console about how to improve their privacy posture. Toward this aim, we surface a message to developers when we think their app is asking for a permission that is likely unnecessary.
This is important because numerous studies have shown that user trust can be affected when the purpose of a permission is not clear [1]. In addition, research has shown that when users are given a choice between similar apps, and one of them requests fewer permissions than the other, they choose the app with fewer permissions [2].
Determining whether or not a permission request is necessary can be challenging. Android developers request permissions in their apps for many reasons: some are related to core functionality, others to personalization, testing, advertising, and other factors. To judge whether a request is likely unnecessary, we identify a peer set of apps with similar functionality and compare a developer's permission requests to those of its peers. If a very large percentage of these similar apps are not asking for a permission, and the developer is, we let the developer know that their permission request is unusual compared to their peers. Our determination of the peer set is more involved than simply using Play Store categories: our algorithm combines multiple signals that feed Natural Language Processing (NLP) and deep learning technology to determine this set. A full explanation of our method is given in our recent publication, "Reducing Permission Requests in Mobile Apps", which appeared at the Internet Measurement Conference (IMC) in October 2019 [3]. (Note that the threshold for surfacing the warning signal, as stated in the paper, is subject to change.) A simplified sketch of the comparison step follows.
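As a rough illustration of the comparison step only (this is not Google's actual pipeline, and the 95% threshold below is invented; the real threshold is not public and is subject to change), the core check might look like this:

```python
from typing import Dict, List, Set

# Hypothetical threshold: flag a permission when at least 95% of peer
# apps do not request it. The real threshold is not public.
PEER_ABSTENTION_THRESHOLD = 0.95


def unusual_permissions(app_perms: Set[str],
                        peer_perms: List[Set[str]]) -> Dict[str, float]:
    """Map each of the app's permissions that is rare among its peers to
    the fraction of peers that do not request it."""
    warnings = {}
    for perm in app_perms:
        not_requesting = sum(1 for peer in peer_perms if perm not in peer)
        fraction = not_requesting / len(peer_perms)
        if fraction >= PEER_ABSTENTION_THRESHOLD:
            warnings[perm] = fraction
    return warnings


# Example: the app requests fine location, but 99 of its 100 peers do not.
peers = [{'android.permission.INTERNET'}] * 99 + [
    {'android.permission.INTERNET', 'android.permission.ACCESS_FINE_LOCATION'}]
app = {'android.permission.INTERNET', 'android.permission.ACCESS_FINE_LOCATION'}
print(unusual_permissions(app, peers))
# -> {'android.permission.ACCESS_FINE_LOCATION': 0.99}
```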
We surface this information to developers in the Play Console and let the developer make the final call as to whether or not the permission is truly necessary; it is possible that the developer has a feature unlike all of its peers. Once a developer removes a permission, they won't see the warning any longer. Note that the warning is based on our computation of the set of peer apps similar to the developer's app. This set evolves and is frequently recomputed, so the message may go away if the underlying set of peer apps or their behavior changes; similarly, even if a developer is not currently seeing a warning about a permission, they might in the future if the underlying peer set and its behavior change.

[Figure: an example of the permission warning as surfaced in the Play Console]

This warning also reminds developers that they are not obligated to carry over every permission requested by the libraries they include in their apps (see the manifest sketch below). We are pleased to say that in the first year after deployment of this advice signal, nearly 60% of warned apps removed permissions, and this occurred across all Play Store categories and all app popularity levels. The breadth of this developer response affected over 55 billion app installs [3]. This warning is one component of Google's larger strategy to help protect users and to help developers adopt good security and privacy practices, such as Project Strobe, our guidelines on permissions best practices, and our requirements around safe traffic handling.
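As a practical note for developers who see this warning, a permission merged in from a library's manifest can be stripped using the Android manifest merger's tools:node="remove" marker; the snippet below sketches this for a hypothetical app that wants to drop a location permission requested by one of its dependencies.

```xml
<!-- AndroidManifest.xml: remove a permission that a library's manifest
     would otherwise merge into the app. Requires declaring the tools
     namespace on the <manifest> element. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    package="com.example.app">

    <!-- Example: drop a location permission requested by a dependency. -->
    <uses-permission
        android:name="android.permission.ACCESS_FINE_LOCATION"
        tools:node="remove" />

</manifest>
```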
Acknowledgements
Giles Hogben, Android Play Dashboard and Pre-Launch Report teams

References

[1] Modeling Users' Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings, by J. Lin, B. Liu, N. Sadeh, and J. Hong. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS), 2014.
[2] Using Personal Examples to Improve Risk Communication for Security & Privacy Decisions, by M. Harbach, M. Hettig, S. Weber, and M. Smith. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), 2014.
[3] Reducing Permission Requests in Mobile Apps, by S. T. Peddinti, I. Bilogrevic, N. Taft, M. Pelikan, U. Erlingsson, P. Anthonysamy, and G. Hogben. In Proceedings of the ACM Internet Measurement Conference (IMC), 2019.

Facial recognition company Clearview AI hit by data theft (WeLiveSecurity)
The startup came under scrutiny after it emerged that it had amassed 3 billion photos from social media for facial recognition software.

RSA 2020 – Hacking humans (WeLiveSecurity)
What the human battle against biological viruses can teach us about fighting computer infections – and vice versa.