> Modern processors are being pushed to perform faster than ever before - and with this come increases in heat and power consumption. To manage this, many chip manufacturers allow frequency and voltage to be adjusted as and when needed. But more than that, they offer the user the opportunity to modify the frequency and voltage through privileged software interfaces. With Plundervolt we showed that these software interfaces can be exploited to undermine the system’s security. We were able to corrupt the integrity of Intel SGX on Intel Core processors by controlling the voltage when executing enclave computations. This means that even Intel SGX’s memory encryption/authentication technology cannot protect against Plundervolt.
Not sure anyone should care about SGX anymore, all things considered, but for completeness, here’s another one.
TPM-Fail: TPM meets Timing and Lattice Attacks
> We discovered timing leakage on Intel firmware-based TPM (fTPM) as well as in STMicroelectronics’ TPM chip. Both exhibit secret-dependent execution times during cryptographic signature generation. While the key should remain safely inside the TPM hardware, we show how this information allows an attacker to recover 256-bit private keys from digital signature schemes based on elliptic curves.
> This research shows that even rigorous testing as required by Common Criteria certification is not flawless and may miss attacks that have explicitly been checked for. The STMicroelectronics TPM chip is Common Criteria certified at EAL4+ for the TPM protection profiles and FIPS 140-2 certified at level 2, while the Intel TPM is certified according to FIPS 140-2. However, the certification has failed to protect the product against an attack that is considered by the protection profile.
Snap: a microkernel approach to host networking
> This paper describes the networking stack, Snap, that has been running in production at Google for the last three years+. It’s been clear for a while that software designed explicitly for the data center environment will increasingly want/need to make different design trade-offs compared to, e.g., general-purpose systems software that you might install on your own machines. But wow, I didn’t think we’d be at the point yet where we’d be abandoning TCP/IP! You need a lot of software engineers and the willingness to rewrite a lot of software to entertain that idea.
Light Commands
> Light Commands is a vulnerability of MEMS microphones that allows attackers to remotely inject inaudible and invisible commands into voice assistants, such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri, using light.
> In our paper we demonstrate this effect, successfully using light to inject malicious commands into several voice controlled devices such as smart speakers, tablets, and phones across large distances and through glass windows.
An analysis of performance evolution of Linux’s core operations
> When you get into the details, I found it hard to come away with any strongly actionable takeaways, though. Perhaps the most interesting lesson/reminder is this: it takes a lot of effort to tune a Linux kernel. For example:
> “Red Hat and Suse normally required 6-18 months to optimise the performance of an upstream Linux kernel before it can be released as an enterprise distribution”, and
> “Google’s data center kernel is carefully performance tuned for their workloads. This task is carried out by a team of over 100 engineers, and for each new kernel, the effort can also take 6-18 months.”
Research based on the .NET Runtime
> Over the last few years, I’ve come across more and more research papers based, in some way, on the ‘Common Language Runtime’ (CLR). So armed with Google Scholar and ably assisted by Semantic Scholar, I put together the list below.
Minerva: Lattice attacks strike again
> This page describes our discovery of a group of side-channel vulnerabilities in implementations of ECDSA/EdDSA in programmable smart cards and cryptographic software libraries. Our attack allows for practical recovery of the long-term private key. We have found implementations which leak the bit-length of the scalar during scalar multiplication on an elliptic curve. This leakage might seem minuscule, as the bit-length represents only a very small amount of the information present in the scalar. However, in the case of ECDSA/EdDSA signature generation, the leaked bit-length of the random nonce is enough for full recovery of the private key after observing a few hundred to a few thousand signatures on known messages, due to the application of lattice techniques.
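For anyone who hasn’t seen the lattice trick before, here’s a rough sketch of the standard hidden-number-problem framing. This is my own summary of the well-known technique rather than anything lifted from the Minerva write-up.

```latex
% ECDSA over a group of l-bit order n, with private key d, per-signature nonce k_i,
% message hash h_i, and signature (r_i, s_i):
\[
  s_i = k_i^{-1}(h_i + r_i d) \bmod n
  \quad\Longrightarrow\quad
  k_i \equiv \underbrace{s_i^{-1} r_i}_{t_i}\, d \;+\; \underbrace{s_i^{-1} h_i}_{u_i} \pmod{n}.
\]
% If the side channel reveals that k_i is unusually short, say k_i < 2^{l-b}, then each
% such signature says that t_i d + u_i reduced mod n is small. Collecting a few hundred
% to a few thousand of these relations gives a hidden number problem instance, and
% lattice reduction (LLL/BKZ) on the corresponding lattice recovers d.
```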
USENIX Security '19 Technical Sessions
> The full Proceedings published by USENIX for the conference are available for download below. Individual papers can also be downloaded from the presentation page.
50 ways to leak your data: an exploration of apps’ circumvention of the Android permissions system
> This paper is a study of Android apps in the wild that leak permission protected data (identifiers which can be used for tracking, and location information), where those apps should not have been able to see such data due to a lack of granted permissions. By detecting such leakage and analysing the responsible apps, the authors uncover a number of covert and side channels in real-world use.
The secret-sharer: evaluating and testing unintended memorization in neural networks
> This is a really important paper for anyone working with language or generative models, and just in general for anyone interested in understanding some of the broader implications and possible unintended consequences of deep learning. There’s also a lovely sense of the human drama accompanying the discoveries that just creeps through around the edges.
> Disclosure of secrets is of particular concern in neural network models that classify or predict sequences of natural language text… even if sensitive or private training data text is very rare, one should assume that well-trained models have paid attention to its precise details…. The users of such models may discover—either by accident or on purpose—that entering certain text prefixes causes the models to output surprisingly revealing text completions.
Final Report on the August 14, 2003 Blackout
> We are pleased to submit the Final Report of the U.S.-Canada Power System Outage Task Force. As directed by you, the Task Force has completed a thorough investigation of the causes of the August 14, 2003 blackout and has recommended actions to minimize the likelihood and scope of similar events in the future.
> The report makes clear that this blackout could have been prevented and that immediate actions must be taken in both the United States and Canada to ensure that our electric system is more reliable. First and foremost, compliance with reliability rules must be made mandatory with substantial penalties for non-compliance.
The Legitimate Vulnerability Market
> Trading of 0-day computer exploits between hackers has been taking place for as long as computer exploits have existed. A black market for these exploits has developed around their illegal use. Recently, a trend has developed toward buying and selling these exploits as a source of legitimate income for security researchers. However, this emerging “0-day market” has some unique aspects that make this particularly difficult to accomplish in a fair manner. These problems, along with possible solutions, will be discussed. These issues will be illustrated by following two case studies of attempted sales of 0-day exploits.
> May 6, 2007
3D Ken Burns Effect from a Single Image
> In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud.
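To keep the stages straight, it helps to see the shape of the pipeline as code. The sketch below is purely my own stand-in (every function name is hypothetical, the depth “network” is a dummy constant, and the renderer is a crude point splat), but it mirrors the depth-to-point-cloud-to-re-render structure described above.

```python
import numpy as np

def estimate_depth(image: np.ndarray) -> np.ndarray:
    """Stand-in for the paper's semantic-aware depth network plus the
    segmentation-based adjustment and boundary refinement; here it just
    returns a flat dummy depth map."""
    h, w, _ = image.shape
    return np.ones((h, w), dtype=np.float32)

def to_point_cloud(image: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Back-project every pixel to a 3D point with a unit-focal pinhole model."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2) * depth
    y = (ys - h / 2) * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def render_view(points: np.ndarray, colors: np.ndarray,
                camera_t: np.ndarray, shape: tuple) -> np.ndarray:
    """Crude point-splat render from a translated camera. The real system uses a
    proper point-cloud renderer plus context-aware color/depth inpainting to fill
    the disocclusions that appear at the extreme views of the camera path."""
    h, w = shape
    frame = np.zeros((h, w, 3), dtype=np.float32)
    p = points - camera_t                        # move the virtual camera along the path
    z = np.clip(p[:, 2], 1e-3, None)
    us = np.clip((p[:, 0] / z + w / 2).astype(int), 0, w - 1)
    vs = np.clip((p[:, 1] / z + h / 2).astype(int), 0, h - 1)
    frame[vs, us] = colors
    return frame

image = np.random.rand(240, 320, 3).astype(np.float32)     # placeholder input photo
depth = estimate_depth(image)
points = to_point_cloud(image, depth)
colors = image.reshape(-1, 3)
# A simple zoom-and-pan camera path gives the Ken Burns motion.
frames = [render_view(points, colors, np.array([0.002 * t, 0.0, -0.001 * t]),
                      image.shape[:2]) for t in range(30)]
```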
The Synchronization of Periodic Routing Messages
> The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertently become synchronized. In particular, we study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization. Using simulations and analysis, we study the process of synchronization and show that the transition from unsynchronized to synchronized traffic is not one of gradual degradation but is instead a very abrupt ‘phase transition’: in general, the addition of a single router will convert a completely unsynchronized traffic stream into a completely synchronized one. We show that synchronization can be avoided by the addition of randomization to the traffic sources and quantify how much randomization is necessary. In addition, we argue that the inadvertent synchronization of periodic processes is likely to become an increasing problem in computer networks.
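The avoidance technique is easy to picture in code: give each timer an independent random component instead of a fixed period. Here is a toy sketch (my own illustration, not the paper’s model; the jitter fraction is a placeholder, and how much randomization is actually needed is exactly what the paper quantifies):

```python
import random

BASE_PERIOD = 30.0   # seconds between periodic routing messages; illustrative value
JITTER_FRAC = 0.5    # spread each period over +/-50% of the base; placeholder amount

def next_update_delay(rng: random.Random) -> float:
    """Delay until a router's next periodic message.

    Returning a fixed BASE_PERIOD is the behaviour that can drift into lockstep
    once routers' timers become weakly coupled through message processing; adding
    an independent random component per timer is the avoidance the paper recommends.
    """
    return BASE_PERIOD * (1.0 + JITTER_FRAC * (2.0 * rng.random() - 1.0))

def firing_times(seed: int, horizon: float = 3600.0) -> list:
    """Firing times of one router over a simulated hour."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while t < horizon:
        t += next_update_delay(rng)
        times.append(t)
    return times

# Five independent routers; with JITTER_FRAC > 0 their schedules stay spread out.
schedules = {router_id: firing_times(seed=router_id) for router_id in range(5)}
```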
Cache Attacks on CTR_DRBG
> This post presents results from our paper “Pseudorandom Black Swans: Cache Attacks on CTR_DRBG”. We illustrate how omissions in the threat model of a U.S. government standard lead to a practical, end-to-end attack on the most popular generator contained within.
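As background for why a leaked generator state matters so much, here’s a stripped-down sketch of the CTR_DRBG generate step, following the shape of NIST SP 800-90A (AES-128 instantiation; reseeding and additional input omitted). This is an illustration, not the code the paper attacks: the point is that once an attacker learns the working state (key, V), every output block until the next reseed is predictable.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def aes_block(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def inc(v: bytes) -> bytes:
    return ((int.from_bytes(v, "big") + 1) % 2**128).to_bytes(BLOCK, "big")

def ctr_drbg_generate(key: bytes, v: bytes, nbytes: int):
    """Produce nbytes of output plus the updated (key, V) working state."""
    out = b""
    while len(out) < nbytes:
        v = inc(v)
        out += aes_block(key, v)   # output blocks are AES_key(V+1), AES_key(V+2), ...
    # Simplified update step: the next key and V are two more keystream blocks.
    v = inc(v)
    new_key = aes_block(key, v)
    v = inc(v)
    new_v = aes_block(key, v)
    return out[:nbytes], new_key, new_v

# Whoever recovers (key, V), e.g. via a cache side channel on the underlying AES,
# can reproduce all output generated from that state onward.
output, key, v = ctr_drbg_generate(b"\x00" * 16, b"\x00" * 16, 48)
```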
Sinister right-handedness provides Canadian-born Major League Baseball players with an offensive advantage: A further test of the hockey influence on batting hypothesis
> Recent research has shown Major League Baseball (MLB) players that bat left-handed and throw right-handed, otherwise known as sinister right-handers, are more likely to have a career batting average (BA) of .299 or higher compared to players with other combinations of batting and throwing handedness.
> Since the inception of the MLB, the relative proportion of Canadian-born sinister right-handers is at least two times greater than players from other regions, although being Canadian-born does not provide a direct offensive advantage. Rather, results showed evidence of a significant indirect effect in that being Canadian-born increases the odds of being a sinister right-hander and in turn leads to greater performance across each offensive performance statistic. Collectively, findings provide further support for the hockey influence on batting hypothesis and suggest this effect extends to offensive performance.
Marty Weitzman’s Noah’s Ark Problem
> Marty Weitzman passed away suddenly yesterday. He was on many people’s shortlist for the Nobel. His work is marked by high-theory applied to practical problems. The theory is always worked out in great generality and is difficult even for most economists. Weitzman wanted to be understood by more than a handful of theorists, however, and so he also went to great lengths to look for special cases or revealing metaphors. Thus, the typical Weitzman paper has a dense middle section of math but an introduction and conclusion of sparkling prose that can be understood and appreciated by anyone for its insights.
> The Noah’s Ark Problem illustrates the model and is my favorite Weitzman paper.
In Memoriam: J. C. R. Licklider
Two papers. Man-Computer Symbiosis and The Computer as a Communication Device.
The first argues for interactive systems. The computer can’t be an extension of our mind if it’s not responsive.
The second is a vision for networked communications. It sounds a lot like today, but more optimistic. Where did we go wrong?
Anime4K - A High-Quality Real Time Anime Upscaler
> We present a state-of-the-art high-quality real-time SISR algorithm designed to work with Japanese animation and cartoons that is extremely fast (~3ms with a Vega 64 GPU), temporally coherent, simple to implement (~100 lines of code), yet very effective. We find it surprising that this method is not currently used ‘en masse’, since the intuition leading us to this algorithm is very straightforward. Remarkably, the proposed method does not use any machine-learning or statistical approach, and is tailored to content that puts importance on well-defined lines/edges while tolerating a sacrifice of the finer textures.
Decades-Old Computer Science Conjecture Solved in Two Pages
> “This conjecture has stood as one of the most frustrating and embarrassing open problems in all of combinatorics and theoretical computer science,” wrote Scott Aaronson of the University of Texas, Austin, in a blog post. “The list of people who tried to solve it and failed is like a who’s who of discrete math and theoretical computer science,” he added in an email.
> Over the years, computer scientists have developed many ways to measure the complexity of a given Boolean function. Each measure captures a different aspect of how the information in the input string determines the output bit. For instance, the “sensitivity” of a Boolean function tracks, roughly speaking, the likelihood that flipping a single input bit will alter the output bit. And “query complexity” calculates how many input bits you have to ask about before you can be sure of the output.
> Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture
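The measures quoted above are concrete enough to play with directly. A brute-force check of the sensitivity of a small Boolean function takes only a few lines; this is my own toy illustration, not anything from the articles:

```python
from itertools import product
from typing import Callable, Sequence

def sensitivity(f: Callable[[Sequence[int]], int], n: int) -> int:
    """Maximum, over all n-bit inputs x, of the number of single-bit flips of x
    that change f(x): the 'sensitivity' measure described above."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = 0
        for i in range(n):
            y = list(x)
            y[i] ^= 1            # flip bit i
            if f(y) != f(x):
                flips += 1
        best = max(best, flips)
    return best

# Example: OR on n bits. At the all-zero input, flipping any single bit changes
# the output, so the sensitivity of OR is n.
n = 6
print(sensitivity(lambda bits: int(any(bits)), n))  # prints 6
```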