Authors:
(1) Mathias Brossard, Systems Group, Arm Research;
(2) Guilhem Bryant, Systems Group, Arm Research;
(3) Basma El Gaabouri, Systems Group, Arm Research;
(4) Xinxin Fan, IoTeX.io;
(5) Alexandre Ferreira, Systems Group, Arm Research;
(6) Edmund Grimley-Evans, Systems Group, Arm Research;
(7) Christopher Haster, Systems Group, Arm Research;
(8) Evan Johnson, University of California, San Diego;
(9) Derek Miller, Systems Group, Arm Research;
(10) Fan Mo, Imperial College London;
(11) Dominic P. Mulligan, Systems Group, Arm Research;
(12) Nick Spinale, Systems Group, Arm Research;
(13) Eric van Hensbergen, Systems Group, Arm Research;
(14) Hugo J. M. Vincent, Systems Group, Arm Research;
(15) Shale Xiong, Systems Group, Arm Research.
Editor's note: this is part 2 of 6 of a study detailing the development of a framework to help people collaborate securely. Read the rest below.
Table of Links
- Abstract and 1 Introduction
- 2 Hardware-backed Confidential Computing
- 3 IceCap
- 4 Veracruz
- 4.1 Attestation
- 4.2 Programming model
- 4.3 Ad hoc acceleration
- 4.4 Threat model
- 5 Evaluation and 5.1 Case-study: deep learning
- 5.2 Case-study: video object detection
- 5.3 Further comparisons
- 6 Closing remarks and References
2 Hardware-backed Confidential Computing
In addition to the already widely-deployed Arm TrustZone® [7] and Intel Software Guard Extensions (SGX) [29], an emerging group of novel Confidential Computing technologies is being added to microprocessor architectures and cloud infrastructures, including AMD Secure Encrypted Virtualization (SEV) [50], Arm Confidential Computing Architecture (CCA) [6], AWS Nitro Enclaves [9], and Intel Trust Domain Extensions (TDX) [46]. All introduce a hardware-backed protected execution environment, which we call an isolate, providing strong confidentiality (the content of the isolate remains opaque to external observers) and integrity (the content of the isolate remains protected from interference by external observers) guarantees to code and data hosted within. These guarantees apply even in the face of a strong adversary: any operating system, and in most cases even any hypervisor, outside of the isolate is assumed hostile. Memory encryption may also be provided as a standard feature, protecting against a class of physical attack. Isolates are often associated with an attestation protocol, e.g., EPID for Intel SGX [14, 15] and AWS Nitro Attestation for AWS Nitro Enclaves [9]. These permit a third party to garner strong, cryptographic evidence of the authenticity and configuration of a remote isolate.
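The shape of such an attestation exchange can be sketched as follows: a verifier supplies a fresh nonce, the isolate returns a report binding that nonce to a measurement of its loaded code, and the verifier checks the report's authenticity and the expected code identity. This is a minimal illustrative sketch only; the `DEVICE_KEY`, `make_report`, and `verify_report` names are hypothetical, and a symmetric HMAC stands in for the asymmetric signature schemes real attestation services use.

```python
import hashlib
import hmac
import json
import os

# Hypothetical symmetric key standing in for the isolate's attestation key.
# Real deployments use an asymmetric key whose certificate chains back to
# the hardware vendor; HMAC is used here only to keep the sketch self-contained.
DEVICE_KEY = os.urandom(32)


def make_report(nonce: bytes, measurement: bytes) -> tuple[bytes, bytes]:
    """Isolate side: bind the verifier's nonce to the code measurement."""
    payload = json.dumps(
        {"nonce": nonce.hex(), "measurement": measurement.hex()},
        sort_keys=True,
    ).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return payload, tag


def verify_report(nonce: bytes, expected: bytes, payload: bytes, tag: bytes) -> bool:
    """Verifier side: check integrity, freshness, and expected code identity."""
    good = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        return False  # report was forged or tampered with
    claims = json.loads(payload)
    return claims["nonce"] == nonce.hex() and claims["measurement"] == expected.hex()


# A fresh nonce defeats replay of an old report; the measurement pins the code.
nonce = os.urandom(16)
measurement = hashlib.sha256(b"enclave code image").digest()
payload, tag = make_report(nonce, measurement)
assert verify_report(nonce, measurement, payload, tag)
assert not verify_report(os.urandom(16), measurement, payload, tag)  # stale nonce
```

The nonce provides freshness (an attacker cannot replay a previously captured report), while the measurement lets the verifier confirm which code is running before provisioning secrets into the isolate.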
Some isolate implementations have unfortunately fallen short of their promised confidentiality and integrity guarantees. A substantial body of academic work now exists demonstrating that side-channel (see e.g. [13, 16, 22, 30, 44, 64, 87, 88, 98, 101]) and fault-injection attacks [24, 67, 84] can be used to exfiltrate secrets from isolates, and a perception appears to be forming, at least in the academic community and technical press, that isolates are fundamentally broken and that any research project building upon them must justify that decision. We argue that this emerging perception is an instance of the perfect being the enemy of the good.
First, we expect that many identified flaws will gradually be ironed out over time, either through point-fixes and iterated designs, or by the adoption of software models that avoid known vulnerabilities. For hardware, we have already seen some flaws fixed using microcode updates and other point-fixes by affected manufacturers (e.g., [27]). For software, research into methods designed to avoid known classes of side-channels is emerging, through implementation techniques such as constant-time algorithms, and dedicated type systems such as FaCT [19] and CT-Wasm [95]. These may prove useful in implementing systems with isolates, and we summarize our own ongoing experimentation with these approaches in §6.
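To illustrate the constant-time idea mentioned above with a standard example (not code from this paper): a naive byte-string comparison returns as soon as it finds a mismatch, so its running time leaks the length of the matching prefix, which an attacker can exploit to recover a secret byte by byte. A constant-time variant examines every byte regardless, so its timing depends only on the input length.

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings in time that depends only on their length.

    A naive `a == b` style comparison may short-circuit at the first
    differing byte, leaking via timing how much of a secret matched.
    Here every byte pair is XORed and OR-accumulated, so the loop always
    runs to completion whether or not the inputs match.
    """
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # stays 0 only if every byte pair is equal
    return diff == 0


assert ct_equal(b"attestation", b"attestation")
assert not ct_equal(b"attestation", b"attestatioN")
assert not ct_equal(b"short", b"longer input")
```

In practice, Python's standard library already offers this behavior as `hmac.compare_digest`; the hand-written loop above only makes the technique visible.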
Second, we expect that industrial adoption of isolates will be widespread; arguably this is already in evidence with the formation of consortia such as the Linux Foundation's Confidential Computing Consortium [25], and an emerging ecosystem of industrial users and startups. Researching systems that use isolates, and that ease their deployment, is therefore not only justifiable but very useful. Industrial users pragmatically evaluate isolate-based systems against the status quo, in which delegated computations are, by and large, left completely unprotected, and we argue that this is the standard that should be applied when evaluating systems built around isolates, not comparison with side-channel-free cryptography, which remains impractical in an industrial context. In this light, forcing malefactors to resort to side-channel and fault-injection attacks, many of which are impractical or can be defended against using other means, to exfiltrate data from an isolate is a welcome, albeit incremental, improvement in the privacy guarantees that real systems can offer users.
This paper is available on arxiv under CC BY 4.0 DEED license.