Imagine Being on Trial. With Exonerating Evidence Trapped on Your Phone.
> Public defenders lack access to gadgets and software that could keep their clients out of jail.
> This tech gap has two basic forms. First, law enforcement agencies can use warrants and court orders to compel companies to turn over emails, photos and other communications, but defense lawyers have no such power. And second, the government has access to forensic technology that makes digital investigations easier. Over the last two decades, the machines and software designed to extract data from computers and smartphones were primarily made for and sold to law enforcement.
> To successfully defend its clients, the Legal Aid Society, New York City’s largest public defender office, realized in 2013 that it needed to buy the same tools the police had: forensic devices and software from companies including Cellebrite, Magnet Forensics and Guidance Software. Not only does the expensive technology unearth digital evidence that is otherwise hard or impossible to find, it captures it in a format that can hold up in court, as opposed to evidence that could have been tampered with or forged.
Motorola Brings Back The Razr: Flip-Phone In 2020
> Motorola has today announced a modern successor to one of the most iconic phones ever released: the Razr V3. The popular flip-phone was first released in 2004 and was a huge success for the company, going on to sell over 100 million units. The clamshell design was immensely popular as it was a lot thinner than its rivals and had a unique look. The new Razr takes the core aspects of this design and ports it over to the latest 2019 technologies. At the heart of the new smartphone lies Motorola’s take on foldable displays, giving the new Razr a proper modern “full body screen” experience.
A nice look at how they got the fold to work. We’ll see.
Analyzing Android's CVE-2019-2215 (/dev/binder UAF)
> Over the past few weeks, those of you who frequent the DAY streams over on our Twitch may have seen me working on trying to understand the recent Android Binder Use-After-Free (UAF) published by Google’s Project Zero (p0). This bug is actually not new: the issue was discovered and fixed in the mainline kernel in February 2018; however, p0 discovered that many popular devices did not receive the patch downstream. Some of these devices include the Pixel 2, the Huawei P20, and Samsung Galaxy S7, S8, and S9 phones. I believe many of these devices received security patches within the last couple of weeks that finally killed the bug.
> After a few streams of poking around with a kernel debugger on a virtual machine (running Android-x86), and testing with a vulnerable Pixel 2, I’ve come to understand the exploit written by Jann Horn and Maddie Stone pretty well. Without an understanding of Binder (the binder_thread object specifically), as well as how vectored I/O works, the exploit can be pretty confusing. It’s also quite clever how they exploited this issue, so I thought it would be cool to write up how the exploit works.
It’s super easy to bypass Android’s hidden API restrictions
> The API blacklist tracks who’s calling a function. If the source isn’t exempt, it crashes. In the first example, the source is the app. However, in the second example, the source is the system itself. Instead of using reflection to get what we want directly, we’re using it to tell the system to get what we want. Since the source of the call to the hidden function is the system, the blacklist doesn’t affect us anymore.
The call is coming from inside the system!
Inside the Phone Company Secretly Run By Drug Traffickers
> All over the world, in Dutch clubs like the one Kok frequented, or Australian biker hangouts and Mexican drug safe houses, there is an underground trade of custom-engineered phones. These phones typically run software for sending encrypted emails or messages, and use their own server infrastructure for routing communications.
> For MPC, the process of setting up the devices was relatively simple: MPC would take a Google Nexus 5 or Nexus 5X Android phone, and then add its own security features and operating system, according to social media posts from MPC and a source with knowledge of the process. MPC then created the customer’s messaging accounts, added a data-only SIM card (which MPC paid about £20 a month for), and then sold the phone to the customer at £1,200. Six-month renewals cost £700, the source added. MPC only sold around 5,000 phones, the source said, but that still indicates the business netted the company some £6 million. At one point, a version of MPC’s phones also used code from an open-source, security-focused Android fork called CopperheadOS, three sources said.
How a double-free bug in WhatsApp turns to RCE
> In this blog post, I’m going to share the details of a double-free vulnerability that I discovered in WhatsApp for Android, and how I turned it into an RCE.
> Double-free vulnerability in DDGifSlurp in decoding.c in libpl_droidsonroids_gif
50 ways to leak your data: an exploration of apps’ circumvention of the Android permissions system
> This paper is a study of Android apps in the wild that leak permission protected data (identifiers which can be used for tracking, and location information), where those apps should not have been able to see such data due to a lack of granted permissions. By detecting such leakage and analysing the responsible apps, the authors uncover a number of covert and side channels in real-world use.
Adopting the Arm Memory Tagging Extension in Android
> As part of our continuous commitment to improve the security of the Android ecosystem, we are partnering with Arm to design the memory tagging extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest sources of vulnerabilities in the Android platform, and although there have been previous hardening efforts, memory safety bugs comprised more than half of the high-priority security bugs in Android 9.
> We believe that memory tagging will detect the most common classes of memory safety bugs in the wild, helping vendors identify and fix them and discouraging malicious actors from exploiting them. During the past year, our team has been working to ensure the readiness of the Android platform and application software for MTE. We have deployed HWASAN, a software implementation of the memory tagging concept, to test our entire platform and a few select apps. This deployment has uncovered close to 100 memory safety bugs. The majority of these bugs were detected on HWASAN-enabled phones in everyday use. MTE will greatly improve upon this in terms of overhead, ease of deployment, and scale. In parallel, we have been working on supporting MTE in the LLVM compiler toolchain and in the Linux kernel. The Android platform support for MTE will be complete by the time of silicon availability.
The lifetime of an Android API vulnerability
> When we published our paper in 2015, we predicted that this vulnerability would not be patched on 95% of devices in the Android ecosystem until January 2018 (plus or minus a standard deviation of 1.23 years). Since this date has now passed, we decided to check whether our prediction was correct.
> The good news is that we found the operating system update requirements crossed the 95% threshold in May 2017, seven months earlier than our best estimate, and within one standard deviation of our prediction. The most recent data for May 2019 shows deployment has reached 98.2% of devices in use. Nevertheless, fixing this aspect of the vulnerability took well over 4 years to reach 95% of devices.
SensorID: Sensor Calibration Fingerprinting for Smartphones
> We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint.
Private Key Extraction from Qualcomm Hardware-backed Keystores
> A side-channel attack can extract private keys from certain versions of Qualcomm’s secure keystore. Recent Android devices include a hardware-backed keystore, which developers can use to protect their cryptographic keys with secure hardware. On some devices, Qualcomm’s TrustZone-based keystore leaks sensitive information through the branch predictor and memory caches, enabling recovery of 224-bit and 256-bit ECDSA keys. We demonstrate this by extracting an ECDSA P-256 private key from the hardware-backed keystore on the Nexus 5X.
Tracking Phones, Google Is a Dragnet for the Police
> The new orders, sometimes called “geofence” warrants, specify an area and a time period, and Google gathers information from Sensorvault about the devices that were there. It labels them with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices they think belong to suspects or witnesses, Google reveals the users’ names and other information.
How I discovered an easter egg in Android's security and didn't land a job at Google
> The same thing (except without the happy ending) happened to me. Hidden messages where there definitely couldn’t be any, reversing Java code and its native libraries, a secret VM, a Google interview — all of that is below.
> In the end I got an app that emulated the entire DroidGuard process: it makes a request to the anti-abuse service, downloads the .apk, unpacks it, parses the native library, extracts the required constants, picks out the mapping of VM commands, and interprets the byte code. I compiled it all and sent it off to Google.
> The answer didn’t take long. An email from a member of the DroidGuard team simply read: “Why are you even doing this?”
Websites that Keep Loading, and Loading and Loading….
> Websites (or 3rd-party content) that continually ping a server will not allow the cellular radio to turn off, and this can lead to battery drain. In this post, I have shown 4 different scenarios where my Android device continues to download content for 5 minutes after the page has been minimized on the phone.
Introducing Adiantum: Encryption for the Next Billion Users
> Where AES is used, the conventional solution for disk encryption is to use the XTS or CBC-ESSIV modes of operation, which are length-preserving. Currently Android supports AES-128-CBC-ESSIV for full-disk encryption and AES-256-XTS for file-based encryption. However, when AES performance is insufficient there is no widely accepted alternative that has sufficient performance on lower-end ARM processors.
> To solve this problem, we have designed a new encryption mode called Adiantum. Adiantum allows us to use the ChaCha stream cipher in a length-preserving mode, by adapting ideas from AES-based proposals for length-preserving encryption such as HCTR and HCH. On ARM Cortex-A7, Adiantum encryption and decryption on 4096-byte sectors is about 10.6 cycles per byte, around 5x faster than AES-256-XTS.
Paper link: https://tosc.iacr.org/index.php/ToSC/article/view/7360
Cross DSO CFI - LLVM and Android
> Control Flow Integrity is an exploit mitigation that helps raise the cost of writing reliable exploits for memory safety vulnerabilities. There are various CFI schemes available today, and most are quite well documented and understood. One of the important areas where CFI can be improved is protecting indirect calls across DSO (Dynamic Shared Object) boundaries. This is a difficult problem to solve, as only the library itself can validate call targets, and consumers of the library may be compiled and linked against the library long after it was built. This requires a low-level ABI compatibility between the caller and the callee. The LLVM project has documented their design for this here. The remainder of this post looks at that design, its drawbacks, and then briefly explores how the Android PIE Bionic linker implements it.
A big LITTLE Problem: A Tale of big.LITTLE Gone Wrong
> ARM decided that instead of a one-size-fits-all design, building 2 kinds of cores would be the best way to solve this problem. Power-hungry “big” cores handle applications like games which demand speed at all costs, and low-power “LITTLE” cores handle light loads without draining the battery. This system is called “big.LITTLE” (and the modern variant of this is also called Heterogeneous Multi-Processing). It is transparent to applications — they start on a LITTLE core, and are moved to a big core as soon as the operating system detects that they are using a lot of processing power.
> Well, at least that is the theory. Nobody and nothing is perfect.
Today it’s Go’s turn to find out Exynos isn’t perfect.
CVE-2018-9411: New critical vulnerability in multiple high-privileged Android services
> As part of our platform research in Zimperium zLabs, I have recently disclosed a critical vulnerability affecting multiple high-privileged Android services to Google. Google designated it as CVE-2018-9411 and patched it in the July security update (2018-07-01 patch level), including additional patches in the September security update (2018-09-01 patch level).
> In this blog post, I will cover the technical details of the vulnerability and the exploit. I will start by explaining some background information related to the vulnerability, followed by the details of the vulnerability itself. I will then describe why I chose a particular service as the target for the exploit over other services that are affected by the vulnerability. I will also analyze the service itself in relation to the vulnerability. Lastly, I will cover the details of the exploit I wrote.
Too many moving parts.
Building a Titan: Better security through a tiny chip
> Titan M is a second-generation, low-power security module designed and manufactured by Google, and is a part of the Titan family. As described in the Keyword Blog post, Titan M performs several security-sensitive functions…
OATmeal on the Universal Cereal Bus: Exploiting Android phones over USB
> Recently, there has been some attention around the topic of physical attacks on smartphones, where an attacker with the ability to connect USB devices to a locked phone attempts to gain access to the data stored on the device. This blogpost describes how such an attack could have been performed against Android devices (tested with a Pixel 2).
Interesting exploit chain and pivot. Lots of little bugs here, there, everywhere.
> As an attacker, it normally shouldn’t be possible to mount an ext4 filesystem this way because phones aren’t usually set up with any such keys; and even if there is such a key, you’d still have to know what the correct partition GUID is and what the key is. However, we can mount a vfat filesystem over /data/misc and put our own key there, for our own GUID. Then, while the first malicious USB mass storage device is still connected, we can connect a second one that is mounted as PrivateVolume using the keys vold will read from the first USB mass storage device.
> Notably, this attack crosses two weakly-enforced security boundaries: the boundary from blkid_untrusted to vold (when vold uses the UUID provided by blkid_untrusted in a pathname without checking that it resembles a valid UUID) and the boundary from the zygote to the TCB (by abusing the zygote’s CAP_SYS_ADMIN capability). Software vendors have, very rightly, been stressing for quite some time that it is important for security researchers to be aware of what is, and what isn’t, a security boundary; but it is also important for vendors to decide where they want to have security boundaries and then rigorously enforce those boundaries. Unenforced security boundaries can be of limited use (for example, as a development aid while stronger isolation is in development), but they can also have negative effects by obfuscating how important a component is for the security of the overall system.