2020 Chrome Extension Performance Report
I tested how the 1000 most popular Chrome extensions affect browser performance. The main metrics I’ll consider are CPU consumption, memory consumption, and whether the extension makes pages render more slowly.
Some results are terrible. Some are worse.
Porting Firefox to Apple Silicon
Even with all the pieces in place, there was quite a bit of work to do.
The release of Apple Silicon-based Macs at the end of last year generated a flurry of news coverage and some surprises at the machine’s performance. This post details some background information on the experience of porting Firefox to run natively on these CPUs.
We’ll start with some background on the Mac transition and give an overview of Firefox internals that needed to know about the new architecture, before moving on to the concept of Universal Binaries.
We’ll then explain how DRM/EME works on the new platform, talk about our experience with macOS Big Sur, and discuss various updater problems we had to deal with. We’ll conclude with the release and an overview of various other improvements that are in the pipeline.
Tales of Favicons and Caches: Persistent Tracking in Modern Browsers
The privacy threats of online tracking have garnered considerable attention in recent years from researchers and practitioners alike. This has resulted in users becoming more privacy-cautious and browser vendors gradually adopting countermeasures to mitigate certain forms of cookie-based and cookie-less tracking. Nonetheless, the complexity and feature-rich nature of modern browsers often lead to the deployment of seemingly innocuous functionality that can be readily abused by adversaries. In this paper we introduce a novel tracking mechanism that misuses a simple yet ubiquitous browser feature: favicons. In more detail, a website can track users across browsing sessions by storing a tracking identifier as a set of entries in the browser’s dedicated favicon cache, where each entry corresponds to a specific subdomain. In subsequent user visits the website can reconstruct the identifier by observing which favicons are requested by the browser while the user is automatically and rapidly redirected through a series of subdomains. More importantly, the caching of favicons in modern browsers exhibits several unique characteristics that render this tracking vector particularly powerful, as it is persistent (not affected by users clearing their browser data), non-destructive (reconstructing the identifier in subsequent visits does not alter the existing combination of cached entries), and even crosses the isolation of the incognito mode. We experimentally evaluate several aspects of our attack, and present a series of optimization techniques that render our attack practical. We find that combining our favicon-based tracking technique with immutable browser-fingerprinting attributes that do not change over time allows a website to reconstruct a 32-bit tracking identifier in 2 seconds. Furthermore, our attack works in all major browsers that use a favicon cache, including Chrome and Safari. Due to the severity of our attack we propose changes to browsers’ favicon caching behavior that can prevent this form of tracking, and have disclosed our findings to browser vendors who are currently exploring appropriate mitigation strategies.
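To make the mechanism concrete, here is a minimal sketch of the encoding idea, assuming we control one subdomain per identifier bit (the hostnames and the 32-bit size are illustrative; the paper's optimized encoding differs):

```ts
// Hypothetical scheme: bit i of the identifier is "set" iff the browser
// has ever loaded bit<i>.example.com and therefore cached its favicon.
const BITS = 32;

function subdomainsForId(id: number): string[] {
  const hosts: string[] = [];
  for (let bit = 0; bit < BITS; bit++) {
    if ((id >>> bit) & 1) hosts.push(`bit${bit}.example.com`);
  }
  return hosts;
}

// Writing: redirect a first-time visitor through subdomainsForId(id) so the
// favicon cache stores exactly those entries. Reading: redirect a returning
// visitor through all 32 subdomains and record which favicons the browser
// requests again -- a request means that entry was missing, i.e. bit not set.
```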
Introducing the In-the-Wild Series
Leaking silhouettes of cross-origin images
This is a writeup of a vulnerability I found in Chromium and Firefox that could allow a malicious page to read some parts of an image located on an origin it is not supposed to be able to access. Although technically interesting, it is quite limited in scope—I am not aware of any major websites it could’ve been used against. As of November 17th, 2020, the vulnerability has been fixed in the most recent versions of both browsers.
The time that it takes CanvasRenderingContext2D.drawImage to draw a pixel depends on whether it is fully transparent, opaque, or semi-transparent. By timing a bunch of calls to drawImage, we can reliably infer the transparency of each pixel in a cross-origin image, which is enough to, for example, read text on a transparent background.
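A rough sketch of that timing primitive (my reconstruction; the function and parameter names are mine, not the write-up's):

```ts
// Time many draws of a single source pixel, scaled up so the per-pixel
// compositing cost dominates. Per the write-up, semi-transparent pixels
// take measurably longer than fully transparent or fully opaque ones.
function timePixelDraw(ctx: CanvasRenderingContext2D,
                       img: HTMLImageElement,
                       x: number, y: number,
                       iterations = 1000): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    // 1x1 source rect at (x, y), stretched across the whole canvas
    ctx.drawImage(img, x, y, 1, 1, 0, 0, ctx.canvas.width, ctx.canvas.height);
  }
  return performance.now() - start; // slower => likely semi-transparent
}
```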
NAT Slipstreaming allows an attacker to remotely access any TCP/UDP service bound to a victim machine, bypassing the victim’s NAT/firewall (arbitrary firewall pinhole control), just by the victim visiting a website.
This is neat, although you have to dig in a bit to learn that it requires the NAT gateway to do some fancy SIP proxying.
Floating Point in the Browser, Part 3: When x+y=x
That is, if you add a small number to a large number and the small number is “too small”, then (in the default/sane round-to-nearest mode) the large number may stay at the same value.
Because of this the loop spins endlessly and the push command runs until the array hits the size limits. If there were no size limits then the push command would keep running until the entire machine ran out of memory, so, yay?
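The effect is easy to reproduce with doubles; a minimal example of the rounding (not the article's original code):

```ts
const base = 2 ** 53;            // smallest double where adding 1 is lost
console.log(base + 1 === base);  // true: the 1 rounds away entirely

// So a loop like this never advances: i + 1 rounds back to i every time.
// for (let i = base; i < base + 10; i += 1) { ... }  // spins forever
```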
Chromium’s impact on root DNS traffic
The root server system is, out of necessity, designed to handle very large amounts of traffic. As we have shown here, under normal operating conditions, half of the traffic originates with a single library function, on a single browser platform, whose sole purpose is to detect DNS interception. Such interception is certainly the exception rather than the norm. In almost any other scenario, this traffic would be indistinguishable from a distributed denial of service (DDoS) attack.
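The library function in question is Chromium's intranet-redirect detector, which resolves a few randomly generated single-label hostnames at startup. A sketch of the kind of probe name it generates (my approximation of the behavior the article describes):

```ts
// Generate a random 7-15 character single-label hostname, e.g. "qwhxkzt".
// If such names resolve, the local resolver is rewriting NXDOMAIN answers.
// Because the label has no dots, each cache miss flows up to the roots.
function randomProbeHostname(): string {
  const length = 7 + Math.floor(Math.random() * 9);
  let label = "";
  for (let i = 0; i < length; i++) {
    label += String.fromCharCode(97 + Math.floor(Math.random() * 26));
  }
  return label;
}
```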
Certificate Transparency: a bird's-eye view
Certificate Transparency (CT) is a still-evolving technology for detecting incorrectly issued certificates on the web. It’s cool and interesting, but complicated. I’ve given talks about CT, I’ve worked on Chrome’s CT implementation, and I’m actively involved in tackling ongoing deployment challenges – even so, I still sometimes lose track of how the pieces fit together. I find it easy to forget how the system defends against particular attacks, or what the purpose of some particular mechanism is.
Improving Chromium's browser compatibility in 2020
It is clear that it is still painful to develop a website or web app that works reliably across browsers.
Why is This Website Port Scanning me?
Recently, I was tipped off about certain sites performing localhost port scans against visitors, presumably as part of a user fingerprinting and tracking or bot detection. This didn’t sit well with me, so I went about investigating the practice, and it seems many sites are port scanning visitors for dubious reasons.
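For reference, the usual browser-side trick looks something like this (my sketch of the general technique, not code taken from any of the sites):

```ts
// Probe a localhost port from a web page. The page can't read the
// response, but how (and how quickly) the WebSocket handshake fails
// still differs between open and closed ports.
function probePort(port: number): Promise<number> {
  return new Promise(resolve => {
    const start = performance.now();
    const ws = new WebSocket(`ws://127.0.0.1:${port}`);
    const done = () => resolve(performance.now() - start); // elapsed ms
    ws.onerror = done;                 // failure timing leaks port state
    ws.onopen = () => { ws.close(); done(); };
  });
}
```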
The Original Cookie specification from 1997 was GDPR compliant
We were never supposed to be able to do what most publishers and tech companies do today. In fact, what if I were to tell you that the original specification for how cookies should be implemented in browsers pretty much defined what GDPR is today?
Think back to a time when people thought user agents would be agents for the user.
The story of how I gained unauthorized Camera access on iOS and macOS
We are beginning to form the attack plan - if we can somehow trick Safari into thinking our evil website is in the “secure context” of a trusted website, we can leverage Safari’s camera permission to access the webcam via the mediaDevices API.
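The API being leveraged here is real and simple; once the permission check is fooled, a single call suffices:

```ts
// With a camera permission inherited from the spoofed "secure context",
// this resolves to a live camera stream with no further prompt.
async function grabCamera(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({ video: true });
}
```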
firefox's low-latency webassembly compiler
The goals of high throughput and low latency conflict with each other. To get best throughput, a compiler needs to spend time on code motion, register allocation, and instruction selection; to get low latency, that’s exactly what a compiler should not do. Web browsers therefore take a two-pronged approach: they have a compiler optimized for throughput, and a compiler optimized for latency. As a WebAssembly file is being downloaded, it is first compiled by the quick-and-dirty low-latency compiler, with the goal of producing machine code as soon as possible. After that “baseline” compiler has run, the “optimizing” compiler works in the background to produce high-throughput code. The optimizing compiler can take more time because it runs on a separate thread. When the optimizing compiler is done, it replaces the baseline code. (The actual heuristics about whether to do baseline + optimizing (“tiering”) or just to go straight to the optimizing compiler are a bit hairy, but this is a summary.)
This article is about the WebAssembly baseline compiler in Firefox. It’s a surprising bit of code and I learned a few things from it.
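A toy sketch of the two-tier structure described above (structure only; the real tiering heuristics are, as the author notes, hairy, and the “compilers” here are stand-ins):

```ts
type Compiled = () => string;

function baselineCompile(): Compiled {
  return () => "baseline code";   // quick to produce, slower to run
}

async function optimizingCompile(): Promise<Compiled> {
  return () => "optimized code";  // slow to produce, faster to run
}

function tieredInstance(): Compiled {
  let current = baselineCompile();          // usable almost immediately
  optimizingCompile().then(optimized => {   // conceptually: background thread
    current = optimized;                    // hot-swap once it is ready
  });
  return () => current();
}
```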
Don't touch my clipboard
You can (but shouldn’t) change how people copy text from your website.
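The mechanism behind that claim is the copy event: a page can intercept it and substitute its own clipboard payload. A minimal illustration (not the post's exact code):

```ts
document.addEventListener("copy", (e: ClipboardEvent) => {
  // Replace whatever the user actually selected with our own text.
  e.clipboardData?.setData("text/plain", "not what you selected");
  e.preventDefault(); // stop the browser from writing the real selection
});
```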
Escaping the Chrome Sandbox with RIDL
Vulnerabilities that leak cross process memory can be exploited to escape the Chrome sandbox. An attacker is still required to compromise the renderer prior to mounting this attack. To protect against attacks on affected CPUs make sure your microcode is up to date and disable hyper-threading (HT).
This is a pretty clear write-up and comes with a nice footnote:
When I started working on this I was surprised that it’s still exploitable even though the vulnerabilities have been public for a while. If you read guidance on the topic, it will usually talk about how these vulnerabilities have been mitigated if your OS is up to date, with a note that you should disable hyper-threading to protect yourself fully. The focus on mitigations certainly gave me a false sense that the vulnerabilities had been addressed, and I think these articles could be clearer about the impact of leaving hyper-threading enabled.
Information Leaks via Safari's Intelligent Tracking Prevention
Intelligent Tracking Prevention (ITP) is a privacy mechanism implemented by Apple’s Safari browser, released in October 2017. ITP aims to reduce the cross-site tracking of web users by limiting the capabilities of cookies and other website data. As part of a routine security review, the Information Security Engineering team at Google has identified multiple security and privacy issues in Safari’s ITP design. These issues have a number of unexpected consequences, including the disclosure of the user’s web browsing habits, allowing persistent cross-site tracking, and enabling cross-site information leaks (including cross-site search). This report is a modestly expanded version of our original vulnerability submission to Apple (WebKit bug #201319), providing additional context and edited for clarity. A number of the issues discussed here have been addressed in Safari 13.0.4 and iOS 13.3, released in December 2019.
The Curious Case of WebCrypto Diffie-Hellman on Firefox - Small Subgroups Key Recovery Attack on DH
Mozilla Firefox prior to version 72 suffers from a small subgroups key recovery attack on DH in its WebCrypto API. The Firefox team fixed the issue by completely removing support for DH over finite fields (which is not part of the WebCrypto standard). If you find this interesting, read on.
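For background, the textbook version of the attack goes like this (my summary in standard notation, not the advisory's):

```latex
% Small-subgroup key recovery, sketched. Let p be the prime and x the
% victim's secret exponent. The attacker submits a public value h whose
% order r (dividing p-1) is small:
s \equiv h^{x} \pmod{p}
\quad\Rightarrow\quad
s \in \langle h \rangle,\ \lvert \langle h \rangle \rvert = r.
% Brute-forcing the r candidates for s reveals x mod r; repeating with
% elements of coprime small orders r_1, r_2, \dots and applying the CRT
% recovers x.
```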
Real-world measurements of structured-lattices and supersingular isogenies in TLS
This is the third in a series of posts about running experiments on post-quantum confidentiality in TLS. The first detailed experiments that measured the estimated network overhead of three families of post-quantum key exchanges. The second detailed the choices behind a specific structured-lattice scheme. This one gives details of a full, end-to-end measurement of that scheme and a supersingular isogeny scheme, SIKE/p434. This was done in collaboration with Cloudflare, who integrated Microsoft’s SIKE code into BoringSSL for the tests, and ran the server-side of the experiment.
Because optimised assembly implementations are labour-intensive to write, they were only available/written for AArch64 and x86-64. Because SIKE is computationally expensive, it wasn’t feasible to enable it without an assembly implementation, thus only AArch64 and x86-64 clients were included in the experiment and ARMv7 and x86 clients did not contribute to the results even if they were assigned to one of the experiment groups.
Dramatically reduced power usage in Firefox 70 on macOS with Core Animation
In Firefox 70 we changed how pixels get to the screen on macOS. This allows us to do less work per frame when only small parts of the screen change. As a result, Firefox 70 drastically reduces the power usage during browsing.
Every Firefox window contains one OpenGL context, which covers the entire window. Firefox 69 was using the API described above. So we were always redrawing the whole window on every change, and the window manager was always copying our entire window to the screen on every change. This turned out to be a problem despite the fact that these draws were fully hardware accelerated.
Core Animation is the name of an Apple framework which lets you create a tree of layers (CALayer). These layers usually contain textures with some pixel content. The layer tree defines the positions, sizes, and order of the layers within the window. Starting with macOS 10.14, all windows use Core Animation by default, as a way to share their rendering with the window manager.