Minerva: Lattice attacks strike again
> This page describes our discovery of a group of side-channel vulnerabilities in implementations of ECDSA/EdDSA in programmable smart cards and cryptographic software libraries. Our attack allows for practical recovery of the long-term private key. We have found implementations which leak the bit-length of the scalar during scalar multiplication on an elliptic curve. This leakage might seem minuscule, as the bit-length represents only a very small amount of information about the scalar. However, in the case of ECDSA/EdDSA signature generation, the leaked bit-length of the random nonce is enough for full recovery of the private key after observing a few hundred to a few thousand signatures on known messages, due to the application of lattice techniques.
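For intuition, here is a rough sketch (not the authors' attack code) of the hidden-number-problem lattice that such leakage feeds into. It assumes the attacker has already used the leaked bit-lengths to filter out signatures whose nonces are below some bound `K`; the function name and the single-bound simplification are illustrative.

```python
# Rough sketch: build the hidden-number-problem lattice from ECDSA signatures
# whose nonces are known (via the leaked bit-length) to be short.
# Assumes each signature (r_i, s_i) on hash h_i used a nonce k_i < K,
# over a curve whose group order is q.

def hnp_lattice(sigs, hashes, q, K):
    """sigs: list of (r, s); hashes: list of ints.
    Returns an (m+2) x (m+2) integer basis whose short vectors encode d."""
    m = len(sigs)
    # ECDSA gives k_i = t_i * d + u_i (mod q) with
    # t_i = r_i / s_i and u_i = h_i / s_i (mod q).
    t = [(r * pow(s, -1, q)) % q for r, s in sigs]
    u = [(h * pow(s, -1, q)) % q for (r, s), h in zip(sigs, hashes)]

    B = [[0] * (m + 2) for _ in range(m + 2)]
    for i in range(m):
        B[i][i] = q * q                           # q-ary reduction rows
    B[m] = [q * ti for ti in t] + [K, 0]          # "private key" row, weighted by K/q
    B[m + 1] = [q * ui for ui in u] + [0, q * K]  # constant row

    # The vector (q*k_1, ..., q*k_m, d*K, q*K) lies in this lattice and is
    # unusually short because every k_i < K.  Reducing B with LLL/BKZ
    # (e.g. fpylll or Sage) and scanning rows for a last coordinate of +-q*K
    # recovers d from the second-to-last coordinate (divide by K, reduce mod q).
    return B
```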
USENIX Security '19 Technical Sessions
> The full Proceedings published by USENIX for the conference are available for download below. Individual papers can also be downloaded from the presentation page.
50 ways to leak your data: an exploration of apps’ circumvention of the Android permissions system
> This paper is a study of Android apps in the wild that leak permission-protected data (identifiers which can be used for tracking, and location information) that they should not have been able to see, due to a lack of granted permissions. By detecting such leakage and analysing the responsible apps, the authors uncover a number of covert and side channels in real-world use.
The secret-sharer: evaluating and testing unintended memorization in neural networks
> This is a really important paper for anyone working with language or generative models, and just in general for anyone interested in understanding some of the broader implications and possible unintended consequences of deep learning. There’s also a lovely sense of the human drama accompanying the discoveries that just creeps through around the edges.
> Disclosure of secrets is of particular concern in neural network models that classify or predict sequences of natural language text… even if sensitive or private training data text is very rare, one should assume that well-trained models have paid attention to its precise details… The users of such models may discover—either by accident or on purpose—that entering certain text prefixes causes the models to output surprisingly revealing text completions.
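The paper quantifies this with an "exposure" metric: insert a random canary (a fake secret in a known format) into the training data, then rank how likely the trained model finds it compared with every other candidate filling of that format. A minimal sketch, where `log_perplexity` and `candidates` are stand-ins for the trained model's scoring function and the candidate space:

```python
# Sketch of the exposure metric: a canary that ranks first among all candidate
# secrets has exposure log2(|candidates|), i.e. it has been fully memorized.

import math

def exposure(log_perplexity, canary, candidates):
    """log_perplexity: fn(sequence) -> model log-perplexity (lower = more likely).
    Exposure = log2(number of candidates) - log2(rank of the canary)."""
    canary_score = log_perplexity(canary)
    rank = 1 + sum(1 for c in candidates if log_perplexity(c) < canary_score)
    return math.log2(len(candidates)) - math.log2(rank)
```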
Final Report on the August 14, 2003 Blackout
> We are pleased to submit the Final Report of the U.S.-Canada Power System Outage Task Force. As directed by you, the Task Force has completed a thorough investigation of the causes of the August 14, 2003 blackout and has recommended actions to minimize the likelihood and scope of similar events in the future.
> The report makes clear that this blackout could have been prevented and that immediate actions must be taken in both the United States and Canada to ensure that our electric system is more reliable. First and foremost, compliance with reliability rules must be made mandatory with substantial penalties for non-compliance.
The Legitimate Vulnerability Market
> Trading of 0-day computer exploits between hackers has been taking place for as long as computer exploits have existed. A black market for these exploits has developed around their illegal use. Recently, a trend has developed toward buying and selling these exploits as a source of legitimate income for security researchers. However, this emerging “0-day market” has some unique aspects that make this particularly difficult to accomplish in a fair manner. These problems, along with possible solutions, will be discussed. These issues will be illustrated by following two case studies of attempted sales of 0-day exploits.
> May 6, 2007
3D Ken Burns Effect from a Single Image
> In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud.
The Synchronization of Periodic Routing Messages
> The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertently become synchronized. In particular, we study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization. Using simulations and analysis, we study the process of synchronization and show that the transition from unsynchronized to synchronized traffic is not one of gradual degradation but is instead a very abrupt ‘phase transition’: in general, the addition of a single router will convert a completely unsynchronized traffic stream into a completely synchronized one. We show that synchronization can be avoided by the addition of randomization to the traffic sources and quantify how much randomization is necessary. In addition, we argue that the inadvertent synchronization of periodic processes is likely to become an increasing problem in computer networks.
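The recommended remedy translates directly into code: never schedule the next announcement exactly one period ahead, but add a fresh random component each time. A minimal sketch (the parameter values are illustrative, and `send_update` is a stand-in for whatever the router actually transmits):

```python
# Periodic announcements with randomized ("jittered") periods, so that
# independent routers cannot drift into lockstep with one another.

import random
import time

def periodic_announce(send_update, period=30.0, jitter=15.0):
    """Send routing updates roughly every `period` seconds, choosing each
    interval uniformly from [period - jitter/2, period + jitter/2]."""
    while True:
        send_update()
        time.sleep(period + random.uniform(-jitter / 2, jitter / 2))
```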
Cache Attacks on CTR_DRBG
> This post presents results from our paper “Pseudorandom Black Swans: Cache Attacks on CTR_DRBG”. We illustrate how omissions in the threat model of a U.S. government standard lead to a practical, end-to-end attack on the most popular generator contained within.
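As background for why state recovery is so damaging, here is a heavily simplified CTR_DRBG-style generator (a sketch, not the NIST SP 800-90A algorithm: no derivation function, additional input, or reseed counter). It illustrates the property the attack exploits: an attacker who recovers the working state (AES key and counter) at any moment can reproduce every output block until the next reseed.

```python
# Simplified CTR_DRBG-style generator: outputs AES-CTR keystream blocks and
# then derives a fresh (key, counter) pair from two more blocks.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _aes(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

class TinyCtrDrbg:
    def __init__(self, key, v):
        self.key, self.v = key, v   # 16-byte AES-128 key, 16-byte counter

    def _next_block(self):
        self.v = ((int.from_bytes(self.v, "big") + 1) % 2**128).to_bytes(16, "big")
        return _aes(self.key, self.v)

    def generate(self, n):
        out = b""
        while len(out) < n:
            out += self._next_block()
        # state update: new key and counter come from two further blocks
        self.key, self.v = self._next_block(), self._next_block()
        return out[:n]

# If a cache attack reveals (key, V) at some point, the attacker simply runs
# the same deterministic code forward and reproduces all later outputs:
victim = TinyCtrDrbg(b"\x00" * 16, b"\x00" * 16)
victim.generate(32)                           # output the attacker never saw
leaked_key, leaked_v = victim.key, victim.v   # state recovered via the side channel
attacker = TinyCtrDrbg(leaked_key, leaked_v)
assert victim.generate(32) == attacker.generate(32)
```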
Sinister right-handedness provides Canadian-born Major League Baseball players with an offensive advantage: A further test of the hockey influence on batting hypothesis
> Recent research has shown Major League Baseball (MLB) players that bat left-handed and throw right-handed, otherwise known as sinister right-handers, are more likely to have a career batting average (BA) of .299 or higher compared to players with other combinations of batting and throwing handedness.
> Since the inception of the MLB, the relative proportion of Canadian-born sinister right-handers is at least two times greater than players from other regions, although being Canadian-born does not provide a direct offensive advantage. Rather, results showed evidence of a significant indirect effect in that being Canadian-born increases the odds of being a sinister right-hander and in turn leads to greater performance across each offensive performance statistic. Collectively, findings provide further support for the hockey influence on batting hypothesis and suggest this effect extends to offensive performance.
Marty Weitzman’s Noah’s Ark Problem
> Marty Weitzman passed away suddenly yesterday. He was on many people’s shortlist for the Nobel. His work is marked by high-theory applied to practical problems. The theory is always worked out in great generality and is difficult even for most economists. Weitzman wanted to be understood by more than a handful of theorists, however, and so he also went to great lengths to look for special cases or revealing metaphors. Thus, the typical Weitzman paper has a dense middle section of math but an introduction and conclusion of sparkling prose that can be understood and appreciated by anyone for its insights.
> The Noah’s Ark Problem illustrates the model and is my favorite Weitzman paper.
In Memoriam: J. C. R. Licklider
Two papers: Man-Computer Symbiosis and The Computer as a Communication Device.
The first argues for interactive systems. The computer can’t be an extension of our mind if it’s not responsive.
The second is a vision for networked communications. It sounds a lot like today, but more optimistic. Where did we go wrong?
Anime4K - A High-Quality Real Time Anime Upscaler
> We present a state-of-the-art high-quality real-time SISR algorithm designed to work with Japanese animation and cartoons that is extremely fast (~3ms with a Vega 64 GPU), temporally coherent, simple to implement (~100 lines of code), yet very effective. We find it surprising that this method is not currently used ‘en masse’, since the intuition leading us to this algorithm is very straightforward. Remarkably, the proposed method does not use any machine-learning or statistical approach, and is tailored to content that places importance on well-defined lines/edges while tolerating a sacrifice of the finer textures.
Decades-Old Computer Science Conjecture Solved in Two Pages
> “This conjecture has stood as one of the most frustrating and embarrassing open problems in all of combinatorics and theoretical computer science,” wrote Scott Aaronson of the University of Texas, Austin, in a blog post. “The list of people who tried to solve it and failed is like a who’s who of discrete math and theoretical computer science,” he added in an email.
> Over the years, computer scientists have developed many ways to measure the complexity of a given Boolean function. Each measure captures a different aspect of how the information in the input string determines the output bit. For instance, the “sensitivity” of a Boolean function tracks, roughly speaking, the likelihood that flipping a single input bit will alter the output bit. And “query complexity” calculates how many input bits you have to ask about before you can be sure of the output.
> Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture
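The "sensitivity" measure quoted above is easy to compute by brute force for small functions, which makes the definition concrete. A small illustration (not from the article), treating a Boolean function as a Python predicate on n input bits:

```python
# Sensitivity of a Boolean function: the maximum, over all inputs, of the
# number of single-bit flips that change the output.

from itertools import product

def sensitivity(f, n):
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for i in range(n)
        )
        best = max(best, flips)
    return best

# OR on 3 bits: flipping any bit of the all-zero input changes the output,
# so the sensitivity is 3.
print(sensitivity(lambda x: int(any(x)), 3))
```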
Investigating sources of PII used in Facebook’s targeted advertising
> We develop a novel technique that uses Facebook’s advertiser interface to check whether a given piece of PII can be used to target some Facebook user, and use this technique to study how Facebook’s advertising service obtains users’ PII. We investigate a range of potential sources of PII, finding that phone numbers and email addresses added as profile attributes, those provided for security purposes such as two-factor authentication, those provided to the Facebook Messenger app for the purpose of messaging, and those included in friends’ uploaded contact databases are all used by Facebook to allow advertisers to target users. These findings hold despite all the relevant privacy controls on our test accounts being set to their most private settings.
Natural Adversarial Examples
> We introduce natural adversarial examples -- real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like l_p adversarial examples, ImageNet-A examples successfully transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%. Recovering this accuracy is not simple because ImageNet-A examples exploit deep flaws in current classifiers including their over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set.
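Reproducing the headline accuracy number is conceptually just ordinary top-1 evaluation over the released images. A hedged sketch follows: the directory name, the `wnid_to_imagenet_idx.json` mapping file, and the batch size are assumptions (ImageNet-A covers a 200-class subset of ImageNet, stored class-per-folder, so folder names must be mapped back to ImageNet-1k class indices).

```python
# Sketch: top-1 accuracy of a pretrained DenseNet-121 on an ImageNet-A style folder.

import json
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)   # assumed path
wnid_to_idx = json.load(open("wnid_to_imagenet_idx.json"))           # assumed mapping file
target_idx = torch.tensor([wnid_to_idx[c] for c in dataset.classes])

model = models.densenet121(weights="DEFAULT").eval()
correct = total = 0
with torch.no_grad():
    for images, labels in torch.utils.data.DataLoader(dataset, batch_size=32):
        preds = model(images).argmax(dim=1)
        correct += (preds == target_idx[labels]).sum().item()
        total += labels.numel()
print(f"top-1 accuracy on ImageNet-A: {correct / total:.1%}")
```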
The lifetime of an Android API vulnerability
> When we published our paper in 2015, we predicted that this vulnerability would not be patched on 95% of devices in the Android ecosystem until January 2018 (plus or minus a standard deviation of 1.23 years). Since this date has now passed, we decided to check whether our prediction was correct.
> The good news is that we found the operating system update requirements crossed the 95% threshold in May 2017, seven months earlier than our best estimate, and within one standard deviation of our prediction. The most recent data for May 2019 shows deployment has reached 98.2% of devices in use. Nevertheless, the fix for this aspect of the vulnerability took well over 4 years to reach 95% of devices.
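As a quick sanity check on the quoted claim, using only the numbers given above (seven months early, a standard deviation of 1.23 years):

```python
# Seven months early, against a standard deviation of 1.23 years.
early_months = 7
sigma_months = 1.23 * 12

print(early_months / sigma_months)   # ~0.47 standard deviations: well within one sigma
```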
The convoy phenomenon
> The duration of a lock is the average number of instructions executed while the lock is held. The execution interval of a lock is the average number of instructions executed between successive requests for that lock by a process. The collision cross section of the lock is the fraction of time it is granted, i.e., the lock grant probability.
> Most of us are stuck with a pre-emptive scheduler (i.e., general purpose operating system with virtual memory). Hence convoys will occur. The problem is to make them evaporate quickly when they do occur rather than have them persist forever.
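To make the quoted definitions concrete, here is a back-of-the-envelope illustration with made-up numbers: a lightly loaded lock becomes hopelessly oversubscribed the moment its holder is descheduled in the middle of a hold, which is why convoys form under a pre-emptive scheduler.

```python
# Made-up numbers for one process's view of a hot lock.
duration = 100          # instructions executed while the lock is held
interval = 10_000       # instructions between successive requests by a process
quantum = 1_000_000     # instructions' worth of time lost if the holder is descheduled mid-hold

# Collision cross section as defined above: the fraction of a process's
# execution during which it holds the lock.
print("collision cross section:", duration / interval)                  # 0.01

# One pre-emption inside the critical section inflates the effective hold time
# far beyond the request interval: requests now arrive much faster than the lock
# frees up, so waiters pile into a convoy that tends to persist once formed.
print("effective hold time vs. request interval:", (duration + quantum) / interval)
```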
Automatic Exploitation of Fully Randomized Executables
> We present Marten, a new end-to-end system for automatically discovering, exploiting, and combining information leakage and buffer overflow vulnerabilities to derandomize and exploit remote, fully randomized processes. Results from two case studies highlight Marten’s ability to generate short, robust ROP chain exploits that bypass address space layout randomization and other modern defenses to download and execute injected code selected by an attacker.
Weight Agnostic Neural Networks
> Not all neural network architectures are created equal: some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution, and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find architectures that can achieve much higher than chance accuracy on MNIST using random weights.
Some fun demos.
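Below is a toy version (not the paper's code) of the evaluation protocol described in the abstract: a fixed, hand-wired architecture in which every connection shares one weight value, scored by its expected performance over sampled weights. The task (2-input OR) and the tiny topology are deliberately trivial stand-ins; the point is that the wiring and activation choices (absolute value followed by a threshold) encode the solution, so the shared weight's sign and magnitude barely matter.

```python
# Toy weight-agnostic evaluation: one shared weight on every connection,
# performance averaged over several sampled weight values.

import numpy as np

def network(x, w):
    """All connections use the single shared weight w."""
    hidden = np.abs(w * x[:, 0] + w * x[:, 1])   # abs activation: sign of w is irrelevant
    return (hidden > 0).astype(float)            # step activation: magnitude is irrelevant

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)          # 2-input OR

# The paper's scoring signal: average performance over shared-weight samples.
samples = np.random.uniform(-2.0, 2.0, size=20)
scores = [(network(X, w) == y).mean() for w in samples]
print("expected accuracy over shared-weight samples:", np.mean(scores))
```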