Invited Speakers


Dominique Bolignano

Formal Proof of a Secure OS Full Trusted Computing Base


Gernot Heiser

Time protection: principled prevention of timing channels

Timing channels provide information flow through the timing of events, in violation of a system's security policy. Their existence has been known for decades, but the dangers they pose were largely ignored, except in military-type systems. The Spectre exploits disclosed last year use timing channels to exfiltrate secret information exposed by speculative execution, and demonstrate that timing channels are a first-order security concern.

In this talk I will report on our experience with developing principled, OS-enforced mandatory prevention of timing-channel leakage, through a set of mechanisms we collectively call *time protection*, in analogy to well-established memory protection. I will present the design and implementation of time protection in the formally verified seL4 microkernel, as well as an evaluation of its efficacy and performance cost. We find that time protection is generally effective on simpler or somewhat older processors, but that latest-generation high-performance processors of both the x86 and Arm instruction-set architectures (ISAs) hold state that is not accessible to software but exploitable as timing channels. We conclude that present processors are inherently insecure, as they cannot prevent timing channels, and that industry must agree to a new, security-oriented hardware-software contract that goes beyond the ISA. We define the properties such a contract must prescribe, and outline how time protection, and the absence of timing-channel leakage, can be verified with such a contract in place.

Short bio: Gernot Heiser is Scientia Professor and John Lions Chair of Operating Systems at UNSW Sydney and Chief Research Scientist at Data61, CSIRO. His research interests are in operating systems, real-time systems, security and safety. He is the founder and past leader of Data61’s Trustworthy Systems group, which pioneered large-scale formal verification of systems code, specifically the design, implementation and formal verification of the seL4 microkernel; seL4 is now being designed into real-world security- and safety-critical systems. Heiser's former company Open Kernel Labs, acquired by General Dynamics in 2012, marketed the OKL4 microkernel, which shipped on billions of mobile wireless chips and more recently ships on the secure enclave processor of all iOS devices. He presently serves as Chief Scientist, Software, of HENSOLDT Cyber, a Munich-based company providing a secure hardware-software stack for embedded and cyber-physical systems. Gernot is a Fellow of the ACM, the IEEE and the Australian Academy of Technology and Engineering (ATSE).


Frank Piessens

Proving the security of interrupt handling against interrupt-based side-channel attacks: a case study

Computer systems often provide hardware support for isolation mechanisms like privilege levels, virtual memory, or enclaved execution. But over the past few years, several successful software-based side-channel attacks have been developed that break, or at least significantly weaken, the isolation that these mechanisms offer. Extending a processor with new architectural or micro-architectural features brings a risk of introducing new such side-channel attacks.

In this talk, we will discuss an approach to formally prove the security of processor extensions against these side-channel attacks, and instantiate that approach in a simple setting. More specifically, we will consider the concrete case of extending a microprocessor that supports enclaved execution with secure interruptibility of these enclaves. The base system we start from is the Sancus system (https://distrinet.cs.kuleuven.be/software/sancus/). We show how making enclaves interruptible can lead to side-channel attacks that weaken the isolation properties of enclaves. We then discuss how to formalize security against such attacks, and discuss the design and implementation of a provably secure interrupt handling mechanism for Sancus.

Short bio: Frank Piessens is a full professor in the Department of Computer Science at the Katholieke Universiteit Leuven, Belgium. His research field is software security, where he focuses on the development of high-assurance techniques to deal with implementation-level software vulnerabilities and bugs, including techniques such as software verification, run-time monitoring, hardware security architectures, type systems and programming language design. He studies the theory behind these techniques as well as their application in many types of software systems, including web applications, embedded software, and mobile applications. His achievements in the field of software security include contributions to: the development of verification techniques for C-like languages, the development of the secure multi-execution technique for enforcing information flow security, the development of a variety of countermeasures for memory safety related vulnerabilities, and the development of the embedded security architecture Sancus.


Peter Sewell

Nailing Down the Architectural Abstraction

Architecture specifications are the central interface between hardware and software, expressing the envelopes of allowed processor behaviour that correct hardware implementations must lie within, and that software must assume. Any formal verification of low-level software needs a precise and trustworthy definition of that envelope. For a small idealised architecture, this is straightforward, but precisely defining an industrial or production-scale research architecture raises many interesting challenges.

This talk will explain where we are after 10+ years, with well-validated instruction-set architecture (ISA) models for ARMv8.5-A, RISC-V, and CHERI, each complete enough to boot an OS, the first of them derived from the ARM-internal definition; well-established semantics for user-mode concurrency; work in progress on system-mode concurrency (for self-modifying code, interrupts, etc.); and formal models used as key design tools in the CHERI-MIPS and CHERI-RISC-V design processes. A companion talk will describe work on formal proof of security properties for CHERI above these models.

Sail ISA semantics: https://www.cl.cam.ac.uk/~pes20/sail/
RMEM concurrency exploration tool: http://www.cl.cam.ac.uk/users/pes20/rmem

This is joint work with Alasdair Armstrong, Alastair Reid, Alexandre Joannou, Anthony Fox, Ben Simner, Brian Campbell, Christopher Pulte, Ian Stark, Jon French, Kathryn E. Gray, Kyndylan Nienhuis, Luc Maranget, Mark Wassell, Michael Roe, Neel Krishnaswami, Peter Neumann, Prashanth Mundkur, Robert M. Norton, Robert Watson, Shaked Flur, Simon Moore, Susmit Sarkar, Thomas Bauereiss, and Will Deacon.

Short bio: Peter Sewell is a Professor of Computer Science at the University of Cambridge. His research aims to build rigorous foundations for the engineering of real-world computer systems, to make them better-understood, more robust, and more secure. His PhD was with Robin Milner in Edinburgh.
