How do vishing scams work, how do they impact businesses and individuals, and how can you protect yourself, your family and your business?
The post Vishing: What is it and how do I avoid getting scammed? appeared first on WeLiveSecurity
ESET Research dissects campaigns by the Gelsemium and BackdoorDiplomacy APT groups – Hacking an orbiting satellite isn’t necessarily the stuff of Hollywood
The post Week in security with Tony Anscombe appeared first on WeLiveSecurity
Should we expect cybercriminals to ditch the pseudonymous cryptocurrency for other forms of payment that may be better at throwing law enforcement off the scent?
The post Tracking ransomware cryptocurrency payments: What now for Bitcoin? appeared first on WeLiveSecurity
The latest Chrome update patches a bumper crop of security flaws across the browser’s desktop versions
The post Google fixes actively exploited Chrome zero‑day appeared first on WeLiveSecurity
ESET researchers discover a new campaign that evolved from the Quarian backdoor
The post BackdoorDiplomacy: Upgrading from Quarian to Turian appeared first on WeLiveSecurity
ESET researchers shed light on new campaigns from the quiet Gelsemium group
The post Gelsemium: When threat actors go gardening appeared first on WeLiveSecurity
Law enforcement around the world used a messaging app called AN0M to monitor the communications of alleged criminals
The post Hundreds of suspected criminals arrested after being tricked into using FBI‑run chat app appeared first on WeLiveSecurity
One of the main challenges of evaluating Rust for use within the Android platform was ensuring we could provide sufficient interoperability with our existing codebase. If Rust is to meet its goals of improving security, stability, and quality Android-wide, we need to be able to use Rust anywhere in the codebase that native code is required. To accomplish this, we need to provide the majority of functionality platform developers use. As we discussed previously, we have too much C++ to consider ignoring it, rewriting all of it is infeasible, and rewriting older code would likely be counterproductive as the bugs in that code have largely been fixed. This means interoperability is the most practical way forward.
Before introducing Rust into the Android Open Source Project (AOSP), we needed to demonstrate that Rust interoperability with C and C++ is sufficient for practical, convenient, and safe use within Android. Adding a new language has costs; we needed to demonstrate that Rust would be able to scale across the codebase and meet its potential in order to justify those costs. This post will cover the analysis we did more than a year ago while we evaluated Rust for use in Android. We also present a follow-up analysis with some insights into how the original analysis has held up as Android projects have adopted Rust.
Existing language interoperability in Android focuses on well-defined foreign-function interface (FFI) boundaries, which is where code written in one programming language calls into code written in a different language. Rust support will likewise focus on the FFI boundary, as this is consistent with how AOSP projects are developed, how code is shared, and how dependencies are managed. For Rust interoperability with C, the C application binary interface (ABI) is already sufficient.
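As a minimal sketch of what this looks like in practice, a Rust binding over the C ABI needs nothing more than an extern declaration. Here strnlen stands in for any exported C symbol, assuming the target links against a libc that provides it:

use std::os::raw::c_char;

extern "C" {
    // Matches the C prototype: size_t strnlen(const char *s, size_t maxlen);
    fn strnlen(s: *const c_char, maxlen: usize) -> usize;
}

fn main() {
    let msg = b"hello\0";
    // Safety: `msg` is a valid, NUL-terminated buffer of 6 bytes.
    let len = unsafe { strnlen(msg.as_ptr().cast::<c_char>(), msg.len()) };
    assert_eq!(len, 5);
}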
Interoperability with C++ is more challenging and is the focus of this post. While both Rust and C++ support using the C ABI, it is not sufficient for idiomatic usage of either language. Simply enumerating the features of each language results in an unsurprising conclusion: many concepts are not easily translatable, nor do we necessarily want them to be. After all, we’re introducing Rust because many features and characteristics of C++ make it difficult to write safe and correct code. Therefore, our goal is not to consider all language features, but rather to analyze how Android uses C++ and ensure that interop is convenient for the vast majority of our use cases.
We analyzed code and interfaces in the Android platform specifically, not codebases in general. While this means our specific conclusions may not be accurate for other codebases, we hope the methodology can help others to make a more informed decision about introducing Rust into their large codebase. Our colleagues on the Chrome browser team have done a similar analysis, which you can find here.
This analysis was not originally intended to be published outside of Google: our goal was to make a data-driven decision on whether or not Rust was a good choice for systems development in Android. While the analysis is intended to be accurate and actionable, it was never intended to be comprehensive, and we’ve pointed out a couple of areas where it could be more complete. However, we also note that initial investigations into these areas showed that they would not significantly impact the results, which is why we decided to not invest the additional effort.
Exported functions from Rust and C++ libraries are where we consider interop to be essential. Our goals are simple:
While making Rust functions callable from C++ is a goal, this analysis focuses on making C++ functions available to Rust so that new Rust code can be added while taking advantage of existing implementations in C++. To that end, we look at exported C++ functions and consider existing and planned compatibility with Rust via the C ABI and compatibility libraries. Types are extracted by running objdump on shared libraries to find the external C++ functions they use [1], and by running c++filt to parse the C++ types. This gives us functions and their arguments. It does not consider return values, but a preliminary analysis [2] of those revealed that they would not significantly affect the results.
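For the curious, the extraction step amounts to a pipeline along these lines. This is a sketch; the library name is illustrative and these are not the precise commands we ran:

$ objdump -T libbinder.so | grep '\*UND\*' | c++filt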
We then classify each of these types into one of the following buckets:
These are generally simple types involving primitives (including pointers and references to them). For these types, Rust’s existing FFI will handle them correctly, and Android’s build system will auto-generate the bindings.
These are handled by the cxx crate. This currently includes std::string, std::vector, and C++ methods (including pointers/references to these types). Users simply have to define the types and functions they want to share across languages and cxx will generate the code to do that safely.
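As an illustration of such a bridge definition (the header path and function are hypothetical, not taken from an actual Android interface), a C++ function taking these types can be declared like so:

// Sketch of a cxx bridge; "example/include/tags.h" and count_tags are
// hypothetical stand-ins for a real C++ header and function.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("example/include/tags.h");

        // Matches a C++ declaration like:
        //   size_t count_tags(const std::string& name,
        //                     const std::vector<uint8_t>& blob);
        fn count_tags(name: &CxxString, blob: &CxxVector<u8>) -> usize;
    }
}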
These types are not directly supported, but the interfaces that use them have been manually reworked to add Rust support. Specifically, this includes types used by AIDL and protobufs.
We have also implemented a native interface for StatsD as the existing C++ interface relies on method overloading, which is not well supported by bindgen and cxx [3]. Usage of this system does not show up in the analysis because the C++ API does not use any unique types.
This is currently common data structures such as std::optional and std::chrono::duration, and custom string and vector implementations.
These can either be supported natively by a future contribution to cxx, or by using its ExternType facilities. We have only included types in this category that we believe are relatively straightforward to implement and have a reasonable chance of being accepted into the cxx project.
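A sketch of what the ExternType route looks like, with a hypothetical android::MyDuration standing in for types like std::chrono::duration:

use cxx::{type_id, ExternType};

// Rust-side mirror of a C++ type that cxx does not natively understand.
#[repr(C)]
pub struct MyDuration {
    nanos: i64,
}

// Safety: this asserts that MyDuration has the same layout as the C++
// type named in type_id!, and that it is trivially movable (Kind = Trivial).
unsafe impl ExternType for MyDuration {
    type Id = type_id!("android::MyDuration");
    type Kind = cxx::kind::Trivial;
}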
Some types are exposed in today’s C++ APIs that are either an implicit part of the API, not an API we expect to want to use from Rust, or are language specific. Examples of types we do not intend to support include:

- native_handle – this is a JNI interface type, so it is inappropriate for use in Rust/C++ communication.
- std::locale& – Android uses a separate locale system from C++ locales. This type primarily appears in output due to e.g. cout usage, which would be inappropriate to use in Rust.

Overall, this category represents types that we do not believe a Rust developer should be using.
Android is in the process of deprecating HIDL and migrating to AIDL for HALs for new services. We’re also migrating some existing implementations to stable AIDL. Our current plan is not to support HIDL, preferring to migrate to stable AIDL instead. These types thus currently fall into the “We don’t need/intend to support” bucket above, but we break them out to be more specific. If there is sufficient demand for HIDL support, we may revisit this decision later.
This contains all types that do not fit into any of the above buckets. It is currently mostly std::string being passed by value, which is not supported by cxx.
One of the primary reasons for supporting interop is to allow reuse of existing code. With this in mind, we determined the ten most commonly used C++ libraries in Android: liblog, libbase, libutils, libcutils, libhidlbase, libbinder, libhardware, libz, libcrypto, and libui. We then analyzed all of the external C++ functions used by these libraries and their arguments to determine how well they would interoperate with Rust.
Overall, 81% of types are in the first three categories (which we currently fully support) and 87% are in the first four categories (which includes those we believe we can easily support). Almost all of the remaining types are those we believe we do not need to support.
In addition to analyzing popular C++ libraries, we also examined Mainline modules. Supporting this context is critical as Android is migrating some of its core functionality to Mainline, including much of the native code we hope to augment with Rust. Additionally, their modularity presents an opportunity for interop support.
We analyzed 64 binaries and libraries in 21 modules. For each analyzed binary or library, we examined the external C++ functions it uses and the types of their arguments to determine how well they would interoperate with Rust, in the same way we did above for the top ten libraries.
Here 88% of types are in the first three categories and 90% in the first four, with almost all of the remaining being types we do not need to handle.
With almost a year of Rust development in AOSP behind us, and more than a hundred thousand lines of code written in Rust, we can now examine how our original analysis has held up based on how C/C++ code is currently called from Rust in AOSP [4].
The results largely match what we expected from our analysis with bindgen handling the majority of interop needs. Extensive use of AIDL by the new Keystore2 service results in the primary difference between our original analysis and actual Rust usage in the “Native Support” category.
A few current examples of interop are:
Bindgen and cxx provide the vast majority of Rust/C++ interoperability needed by Android. For some of the exceptions, such as AIDL, the native version provides convenient interop between Rust and other languages. Manually written wrappers can be used to handle the few remaining types and functions not supported by other options as well as to create ergonomic Rust APIs. Overall, we believe interoperability between Rust and C++ is already largely sufficient for convenient use of Rust within Android.
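As a sketch of that manual-wrapper pattern (the C function below is hypothetical, not from an Android library), the unsafety is confined to the binding while callers get an idiomatic Rust API:

use std::ffi::CStr;
use std::os::raw::c_char;

extern "C" {
    // Assumed contract: returns a pointer to a static, NUL-terminated string.
    fn example_get_version() -> *const c_char;
}

/// Safe, ergonomic wrapper over the raw binding.
pub fn version() -> String {
    // Safety: relies on the assumed contract above (valid, static,
    // NUL-terminated string).
    let raw = unsafe { CStr::from_ptr(example_get_version()) };
    raw.to_string_lossy().into_owned()
}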
If you are considering how Rust could integrate into your C++ project, we recommend doing a similar analysis of your codebase. When addressing interop gaps, we recommend that you consider upstreaming support to existing compat libraries like cxx.
Our first attempt at quantifying Rust/C++ interop involved analyzing the potential mismatches between the languages. This led to a lot of interesting information, but was difficult to draw actionable conclusions from. Rather than enumerating all the potential places where interop could occur, Stephen Hines suggested that we instead consider how code is currently shared between C/C++ projects as a reasonable proxy for where we’ll also likely want interop for Rust. This provided us with actionable information that was straightforward to prioritize and implement. Looking back, the data from our real-world Rust usage has reinforced that the initial methodology was sound. Thanks Stephen!
Also, thanks to:
[1] We used undefined symbols of function type as reported by objdump to perform this analysis. This means that any header-only functions will be absent from our analysis, and internal (non-API) functions which are called by header-only functions may appear in it.
[2] We extracted return values by parsing DWARF symbols, which give the return types of functions.
[3] Even without automated binding generation, manually implementing the bindings is straightforward.
[4] In the case of handwritten C/C++ wrappers, we analyzed the functions they call, not the wrappers themselves. For all uses of our native AIDL library, we analyzed the types used in the C++ version of the library.
If you’ve been paying attention to the news at all lately, you’ve probably noticed that software supply chain attacks are rapidly becoming a big problem. Whether you’re trying to prevent these attacks, responding to an ongoing one or recovering from one, you understand that knowing what is happening in your CI/CD pipeline is critical.
Fortunately, the Kubernetes-native Tekton project – an open-source framework for creating CI/CD systems – was designed with security in mind from Day One, and the new Tekton Chains project is here to help take it to the next level. Tekton Chains securely captures metadata for CI/CD pipeline executions. We made two really important design decisions early on in Tekton that make supply chain security easy: declarative pipeline definitions and explicit state transitions. This next section will explain what these mean in practice and how they make it easy to build a secure delivery pipeline.
Since the initial whiteboard sketches, the Pipeline and Task CRDs in Tekton were designed to allow users to define each step of their pipeline at a granular level. These types include support for mandatory declared inputs, outputs, and build environments. This means you can track exactly what sources went into a build, what tools were used during the build itself and what artifacts came out at the end. By breaking up a large monolithic pipeline into a series of smaller, reusable steps, you can increase visibility into the overall system. This makes it easier to understand your exposure to supply chain attacks, detect issues when they do happen and recover from them after.
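A minimal sketch of this declarative granularity, with hypothetical names rather than a real pipeline, shows how a Task surfaces its inputs (params) and outputs (results) explicitly:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-message
spec:
  params:
  - name: message
    type: string
  results:
  - name: length
    description: Number of characters echoed
  steps:
  - name: echo
    image: ubuntu
    script: |
      echo "$(params.message)"
      printf '%s' "$(params.message)" | wc -c > "$(results.length.path)"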
Event-based or edge-triggered systems are easy to reason about, but can be tricky to manage at scale. They also make it much harder to track an artifact as it flows through the entire system. Each step in the pipeline only knows about the one immediately before it; no step is responsible for tracking the entire execution. This can become problematic when you try to understand the security posture of your delivery pipeline.
Tekton was designed with the opposite, level-triggered approach in mind. Instead of a Rube Goldberg machine tied together with duct tape and clothespins, Tekton is more like an explicit assembly line. Level-triggered systems like Tekton are moved from state to state in a calculated manner by a central orchestrator. They require more explicit design up front, but they are easier to observe and reason about afterward. Supply chains that use systems like Tekton are more secure.
By observing the execution of a Task or a Pipeline and paying careful attention to the inputs, outputs, and steps along the way, we can make it easier to track down what happened and why later on. This “observer” can be run in a separate trust domain and cryptographically sign all of this captured metadata as it’s stored, leaving a tamper-proof activity ledger. This technique is called “verifiable builds.” This securely generated metadata can be used in a number of ways, from audit logging to recovering from security breaches to pre-deployment policy enforcement.
You can install Chains into any Tekton-enabled cluster and configure it to generate this cryptographically signed supply chain metadata for your builds. Chains supports pluggable signature systems like PGP, x509, and Cloud KMS. Payloads can be generated in a few different industry-standard formats, like the Red Hat simple signing and the in-toto provenance specifications. The full documentation is available here, but you can get started quickly with something like this:
For this tutorial, you’ll need access to a GKE Kubernetes cluster and a GCR registry with push credentials. The cluster should already have Tekton Pipelines installed.
First, install Tekton Chains into the cluster:

$ kubectl apply --filename https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml
Next, you’ll set up registry authentication for the Tekton Chains controller, so that it can push OCI image signatures to your registry. To set up authentication, you’ll create a Service Account and download credentials:
$ export PROJECT_ID=<GCP Project ID>
$ gcloud iam service-accounts create tekton-chains
$ gcloud iam service-accounts keys create credentials.json --iam-account=tekton-chains@${PROJECT_ID}.iam.gserviceaccount.com
Now, create a Kubernetes Secret from your credentials file so the Chains controller can access it:
$ kubectl create secret docker-registry registry-credentials \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-email=tekton@chains.com \
  --docker-password="$(cat credentials.json)" \
  -n tekton-chains
$ kubectl patch serviceaccount tekton-chains-controller \
  -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n tekton-chains
We can use cosign to generate a keypair as a Kubernetes secret, which the Chains controller will use for signing. Cosign will ask for a password, which will be stored in the secret:
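With recent cosign releases that looks something like the following; note that the exact syntax has varied across cosign versions, and signing-secrets is the secret name the Chains controller expects per its documentation:

$ cosign generate-key-pair k8s://tekton-chains/signing-secrets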
Next, you’ll need to set up authentication to your GCR registry for the kaniko task as another Kubernetes Secret.
$ export CREDENTIALS_SECRET=kaniko-credentials
$ kubectl create secret generic $CREDENTIALS_SECRET --from-file credentials.json
Now, we’ll create a kaniko-chains task which will build and push a container image to your registry. Tekton Chains will recognize that an image has been built, and sign it automatically.

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/kaniko/gcp/kaniko.yaml
$ cat <<EOF | kubectl apply -f -
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: kaniko-run
spec:
  taskRef:
    name: kaniko-gcp
  params:
  - name: IMAGE
    value: gcr.io/${PROJECT_ID}/kaniko-chains
  workspaces:
  - name: source
    emptyDir: {}
  - name: credentials
    secret:
      secretName: ${CREDENTIALS_SECRET}
EOF
Wait for the TaskRun to complete, and give the Tekton Chains controller a few seconds to sign the image and store the signature. You should be able to verify the signature with cosign and your public key:
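With recent cosign, verification looks something like this (the flag syntax has varied across cosign releases; cosign.pub is the public key written locally by generate-key-pair):

$ cosign verify --key cosign.pub gcr.io/${PROJECT_ID}/kaniko-chains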
Congratulations! You’ve successfully signed and verified an OCI image with Tekton Chains and cosign.
In Tekton Pipelines, we plan on finishing up TEP-0025 (Hermekton) to enable the support for hermetic build execution. If you want to play around with it now, hermekton can be run as an alpha feature in experimental mode. When hermekton is enabled, a build runs in a locked-down environment without network connectivity. Hermetic builds guarantee all inputs have been explicitly declared ahead-of-time, providing for a more auditable supply-chain. Hermetic builds and Chains align well, because the hermeticity build property is contained in the full build provenance captured by Chains. Chains can generate and attest to metadata specifying exactly which sections of a build had network access.
This means policy can be defined around exactly which build tools are allowed to access the network and which ones are not. This metadata can be used in policies at build time (banning compilers with security vulnerabilities) or stored and used by policy engines at deploy time (only code-reviewed and verifiably built containers are allowed to run).
We believe supply-chain security must be built-in and by default. No task orchestrator can promise perfect supply-chain security, but TektonCD was designed with unique features in mind that make it easier to do the right thing. We’re always looking for feedback on the design, goals and requirements. You can reach out on GitHub or the #chains Slack channel.
Hacking an orbiting satellite is not light years away – here’s how things can go wrong in outer space
The post Hacking space: How to pwn a satellite appeared first on WeLiveSecurity