How Can I Trust a Device? Aurélien Francillon
Albena, Bulgaria, July 2013
Context
Description of an attack
Static root of trust
Remote attestation and software-based attestation
SMART and more
10/07/2013 - - p 2
Embedded Systems Diversity
RFID
Sensors
SmartCards
Smartphones
Industrial systems
Smart-{Cards,Home,Grid,Phones,Stuff?}
Embedded Devices : Security Challenges
Embedded devices are no longer immune to malicious code
Remote code injection, buggy software updates, malicious software present at creation
We assume that system code has exploitable bugs
Proven to be a right assumption so many times…
Difficult to know the runtime status of a device
The devices are not physically accessible…
Networking and wireless interfaces increase the attack surface
Sophisticated attacks demonstrated
Harvard architectures, SCADA controllers (Stuxnet), car electronic control systems…
HARVARD-BASED ARCHITECTURE DEVICES
An example attack on:
Code Injection in Harvard Architecture [ACM CCS 2008]
[Diagram: Harvard architecture — CPU core (PC, SP) connected via a data bus to data memory (data, stack) and via a separate instruction bus to program memory (instructions, bootloader)]
Data and instruction memories are independent
Commonly believed to be resistant to code injection attacks
Code Injection in Harvard Architecture [ACM CCS 2008]
Data and instruction memories are independent
Commonly believed to be resistant to code injection attacks
Controlling execution from the stack
Using Return Oriented Programming
[Diagram: same Harvard architecture — the attacker controls the stack in data memory and drives the PC through code already present in program memory]
Return Oriented Programming
Generalization of “return to libc” attacks
Introduced by H. Shacham at CCS 2007 on the x86
Executes code already present in the program memory
Chains sequences of instructions terminated by a return instruction
These sequences are called gadgets
Gadgets can be chained to perform any action
When a “Turing-complete” gadget set is available
Return Oriented Programming (quick overview)
[Diagram: stack layout during ROP — the overflow of tmp_buff in a vulnerable function overwrites the saved return address with @Gadget1; the following stack words hold a value to be popped (e.g. into r7 by a “pop r7; ret” gadget), then @Gadget2 and @Gadget3; each gadget is a short instruction sequence in program memory ending in ret]
• Executes chains of “gadgets”
• By controlling the stack
• To take over program execution
• Without code injection
• Without calls to existing functions
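The chaining mechanism above can be sketched with a toy simulator (hypothetical Python model, not real AVR code): each “gadget” is a short instruction sequence ending in a return, and the attacker-controlled stack is simply a list of gadget addresses that successive `ret` instructions pop.

```python
# Toy ROP model: gadget addresses and their effects on machine state.
# The addresses and register names are illustrative, not from a real binary.
gadgets = {
    0x100: lambda st: st.__setitem__("r7", st["r7"] + 1),  # inc r7; ret
    0x200: lambda st: st.__setitem__("r7", st["r7"] * 2),  # lsl r7; ret
}

def run_chain(stack, state):
    """Each 'ret' pops the next address off the attacker-controlled stack
    and transfers control to the corresponding gadget."""
    while stack:
        addr = stack.pop(0)
        gadgets[addr](state)
    return state

# Chain "increment, then double" without injecting a single instruction.
state = run_chain([0x100, 0x200], {"r7": 3})
print(state["r7"])  # (3 + 1) * 2 = 8
```

The attacker's only power is choosing the order of addresses on the stack, yet with a rich enough gadget set this suffices for arbitrary computation.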
Code Injection in Harvard Architecture [ACM CCS 2008]
Harvard architecture devices can be vulnerable to code injection
Need dedicated countermeasures
[Diagram: Harvard architecture with separate data and program memories, as before]
STATIC ROOT OF TRUST
Where we will see limitations of one of the most common security mechanisms:
Static Root of Trust
Static Root of Trust (a.k.a. SRTM)
Provides a measurement of the code at loading time
SRTM Example 1: TPM v1.1
Hashes code before loading
Stores the hash in TPM registers (“extend”)
– Platform Configuration Registers (PCRs)
The resulting hash can be used to prove the load-time status of the system
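The “extend” operation folds each measurement into a PCR as H(old PCR || measurement), so the register value encodes the entire load sequence. A minimal Python sketch, assuming SHA-1 (the hash used by TPM v1.1) and an all-zero reset state; the measured component names are made up:

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM-style extend: new PCR value = H(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20                                # PCR reset state
for component in [b"bootloader", b"kernel", b"initrd"]:
    code_hash = hashlib.sha1(component).digest()  # measure before loading
    pcr = pcr_extend(pcr, code_hash)

# Order matters: extending the same hashes in a different order yields a
# different final PCR, so a single 20-byte register attests the whole chain.
print(pcr.hex())
```

A verifier that knows the expected hashes of each boot component can recompute this chain and compare it with the quoted PCR value.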
SRTM Example 2: Secure Boot
Very common on embedded systems
A fixed bootloader (e.g. in ROM) contains a public key
Loads code
Verifies the signature of the code
– if valid, executes it; otherwise stops execution (Brick!)
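The secure-boot decision logic can be sketched as follows. A real ROM bootloader verifies an RSA/ECDSA signature against a burned-in public key; in this simplified model a SHA-256 digest stands in for the signature check (an assumption made purely for brevity), and the firmware strings are invented:

```python
import hashlib

# Stand-in for a public key / reference signature held in ROM.
ROM_TRUSTED_DIGEST = hashlib.sha256(b"firmware v1").hexdigest()

def secure_boot(image):
    """Load code, verify it, execute if valid -- otherwise stop (brick!)."""
    if hashlib.sha256(image).hexdigest() != ROM_TRUSTED_DIGEST:
        return "halt"       # refuse to run unverified code
    return "execute"

print(secure_boot(b"firmware v1"))    # execute
print(secure_boot(b"evil firmware"))  # halt
```

Note what this does and does not give: only the image present at load time is checked; nothing stops verified code from being exploited afterwards, which is exactly the limitation the next slides discuss.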
Static Root of Trust : limitations
Verifies only static information
Code at initial loading time
Difficult to know the runtime status of a device
Exploits at runtime can compromise the system state
Long-running applications
Should we reboot them before doing sensitive operations?
Even a reboot is not sufficient, example: permanent “jailbreaks” on the iPhone
The system code is loaded with secure boot
Only “secure code” is executed
A configuration file loaded at boot triggers a runtime exploit of a bug in a root daemon
Schematic view of SRTM limitations
[Diagram: the ROM bootloader verifies the signatures of the kernel and daemon stored in flash/disk before loading them into RAM; at runtime the daemon (running as root) reads a config file containing specially crafted data, which exploits a bug and plants malicious code in RAM — invisible to the load-time signature check]
REMOTE ATTESTATION OF THE STATE OF A DEVICE
So we want :
So we want …
… to be able to remotely verify the current state of low-cost, low-power devices
We will mostly focus on devices based on a Micro-Controller Unit (MCU)
For example the ATmega128:
8-bit CPU, 8 MHz
128 KB program memory (flash)
4 KB data memory (RAM)
512 KB external memory (flash)
We also target the MSP430, which has similar constraints
Concepts could be adapted to other platforms
Remote attestation : example
• Positive result
– Good device (i.e. correct memory configuration)
• Failure or no result
– Malfunctioning device, or
– Malicious device
Remote attestation by software ?
Software-based attestation
Motivation: using remote attestation on off-the-shelf devices
With no dedicated support (TPM…)
– A TPM would be too expensive anyway!
Physical connection to the device is impossible
TPM/BIOS implementations are known to be vulnerable
Two main approaches: randomness-based and time-based
Time Based Software based Attestation
Time-based attestation is optimized to be performed in a constant time
Code modifications are detected by the additional delay they cause
Random memory accesses
Possibly relies on cache behaviors
– Cache hit
– Cache miss
Strong time constraints, not suitable for:
Multihop networks
Unreliable networks
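The scheme above can be sketched as a challenge-response checksum over pseudo-randomly chosen memory addresses (a simplified Python model, not SWATT itself; the round count and threshold are arbitrary). Random addressing matters because an attacker who relocated code must add a redirection check on every access, which shows up as extra delay:

```python
import hashlib
import random
import time

def attest(memory, nonce, rounds=1000):
    """Prover side: checksum pseudo-randomly chosen memory bytes.
    The nonce seeds the address sequence, so responses cannot be precomputed."""
    rng = random.Random(nonce)
    h = hashlib.sha256()
    for _ in range(rounds):
        addr = rng.randrange(len(memory))   # unpredictable access pattern
        h.update(bytes([memory[addr]]))
    return h.hexdigest()

# Verifier side (sketch): recompute over a known-good copy of the image,
# and check the elapsed time against a threshold as well as the checksum.
memory = bytes(range(256)) * 16
t0 = time.perf_counter()
response = attest(memory, nonce=42)
elapsed = time.perf_counter() - t0
expected = attest(memory, nonce=42)
print(response == expected)   # checksum matches the reference image
```

In a real deployment the timing check, not the checksum alone, is what detects an emulating or redirecting attacker; hence the sensitivity to network delay noted above.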
SWATT Assembly Code
Seed from verifier

PRG (RC4) — generate ith member of random sequence using RC4:
  zh = 2              ldi  zh, 0x02
  r15 = *(x++)        ld   r15, x+
  yl = yl + r15       add  yl, r15
  zl = *y             ld   zl, y
  *y = r15            st   y, r15
  *x = r16            st   x, r16
  zl = zl + r15       add  zl, r15
  zh = *z             ld   zh, z

Address generation — generate 16-bit memory address:
  zl = r6             mov  zl, r6

Memory read and transform — load byte from memory and compute transformation:
  r0 = *z             lpm  r0, z
  r0 = r0 xor r13     xor  r0, r13
  r0 = r0 + r4        add  r0, r4

Compute checksum — incorporate output of hash into checksum:
  r7 = r7 + r0        add  r7, r0
  r7 = r7 << 1        lsl  r7
  r7 = r7 + carry_bit adc  r7, r5
  r4 = zh             mov  r4, zh
NB: This is just an example I’m not expecting you to understand the code
This slide is © A. Perrig
Attacks on timing based attestation
Difficult to have:
Optimal code
– Would the attacker be able to compute the attestation faster?
– Sometimes yes…
Knowledge of the best attack
– How much delay is too much?
This is very important to be able to distinguish between attacks and network delays
Return-oriented programming
Allows performing arbitrary computations by manipulating only data memory
This can be used to hide malicious code
Return Oriented Rootkit attack [ACM CCS 2009]
[Diagram: the verifier sends an attestation request and the device replies once attestation is done — but what exactly is the “state” being attested?]
But what is the “state” we want to attest?
We have seen that it is difficult to trust “changing” data
Do we need to include:
Program on disk (ELF/PE file?)
Program instructions?
Data / BSS / heap memories?
Stack?
While we can hash all of this
Some of those memories are impossible to predict
Even if we were given a copy, we would have trouble telling the good from the bad…
SECURE AND MINIMAL ARCHITECTURE FOR ESTABLISHING A DYNAMIC ROOT OF TRUST
SMART
Joint work with: Karim El Defrawy, Daniele Perito, Gene Tsudik NDSS 2012
SMART: Lightweight Hardware Dynamic Root of Trust
Static root of trust
E.g. secure boot, verifies software at loading time
Does not provide guarantees of software integrity at runtime
– Difficult (impossible?) to guarantee integrity at runtime
Dynamic root of trust
Enforces runtime isolation of limited “trusted” code
Provides a “proof” (hash): remotely verifiable
Exists in the PC realm, or in high-end embedded systems
– Quite complex and “costly” in terms of required resources
– Intel TXT, AMD SVM (trusted computing / TPM v1.2)
– ARM TrustZone, still too complex for low-end devices
Software-based attestation on low-end devices aims at providing such features, with weak guarantees
Why is a dynamic root of trust better?
With a dynamic root of trust we don’t care about the rest of the system
It can very well be compromised…
All we care about is our new environment
That is running in isolation from the rest
Without disrupting the rest of the system
We must have strong guarantees for this to work well.
E.g. many problems on PCs:
DMA attacks
SINIT code flaws
IOMMU configuration, etc…
So What Shall we Check ?
Instead of trying to guarantee complete integrity of the system:
Do small, important steps in isolation from the rest of the system
Guarantee isolation and integrity with HW
… despite a full software compromise
SMART in a Nutshell
Minimal modification to a CPU architecture
Simple modification to the HW
Most of the logic in software
Goal: providing external verification
Prover authentication:
– We are talking to the right device
Attestation:
– The verifier knows the content of memory or state on the device
Trusted execution:
– The verifier knows that the device executed a given piece of code
This is about reporting, not enforcement!
A compromised system may not call SMART at all…
SMART main Idea
“User Interface”: use SMART as a function call:
SMART(a, b, x, jmp?, nonce, &result, param);
HMAC memory from [a,b] using nonce
If jmp? == 1, jump to x on completion of SMART and pass parameter param
Write the HMAC result into memory location &result
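These call semantics can be modeled in a few lines of Python (a sketch of the interface, not the ROM code; the key value and memory contents are invented, and in the real design the attestation key is readable only by the ROM routine):

```python
import hashlib
import hmac

# Stand-in for the key locked away in secure key storage.
ATTESTATION_KEY = b"device-secret-key"

def smart(memory, a, b, nonce, jmp=False, target=None, param=None):
    """Model of SMART(a, b, x, jmp?, nonce, &result, param):
    HMAC memory[a:b] with the verifier's nonce, then optionally
    transfer control to the just-attested code with its parameter."""
    result = hmac.new(ATTESTATION_KEY, nonce + memory[a:b],
                      hashlib.sha256).hexdigest()
    if jmp and target is not None:
        target(param)        # jump to x on completion, passing param
    return result            # written to &result in the real API

memory = b"\x90" * 64 + b"attested code" + b"\x90" * 64
out = smart(memory, 64, 77, b"nonce-123")
print(len(out))  # 64 hex characters of HMAC-SHA256
```

The nonce prevents replay of old results, and the optional jump is what turns pure attestation into trusted execution of the measured region.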
SMART building blocks
Secure Key Storage
Only SMART code has access to a KEY used for attestation
Attestation Read-Only Memory
Region of memory in ROM inside the MCU
Only this code is granted access to the “attestation key”
Performs an HMAC
MCU Access Controls
Control access to the attestation key and prevent non-SMART code from accessing it
Prevents misuses of ROM memory
– like ret-to-libc, ROP
SMART: Implementation
All the modifications are confined to the memory controller
No new instruction added to the instruction set
Invoking SMART requires only a function call
No additional programming interface needed
Implementation done on AVR and MSP430, two very different architectures
Test ASIC currently in production (MPW)
Based on the OpenMSP430
In the TAMPRES project (manufactured by both NXP and IHP)
Hardware support helps !
There are things that are very hard or impossible to do in software, and easy in hardware
But the fact that parts of attest() are done in hardware does not make it flawless!
We recently found 3 possible design flaws…
Interrupts need to be disabled
Non-atomic IRQ deactivation on the MSP430
Wouldn’t an IRQ move the PC away anyway? On the MSP430, not necessarily
Jump(x) could be misused
And lead to the only real practical attack!
Countermeasures
The countermeasures are very simple:
Check the value of the pointer “x” before using it.
Prevent “x” from pointing to invalid memory locations
The two other attacks do not actually work on our implementation
By “chance”
But is this enough?
Future Work
• We would like to formally verify
• Security of the hardware implementation
• Correctness of the SMART implementation
• And whether SMART’s security assertions are sufficient, or really minimal
• High-level, well-defined requirements for attestation
• Verification of the implementation, several options:
• Using an approach close to “bounded model checking”
• SMART code + Verilog HDL => executable model
• Execute the model on a symbolic execution environment
• HOL-based solutions…