CMSC 414 Computer and Network Security Lecture 25 Jonathan Katz.



Secure programming techniques
(Based on: “Programming Secure Applications for Unix-Like Systems,” David Wheeler)

Overview

Validate all input

Avoid buffer overflows

Program internals…

Careful calls to other resources

Send info back intelligently

Validating input

Determine acceptable input and check for a match; don’t just check against a list of “non-matches”
– Limit maximum length
– Watch out for special characters, escape chars

Check bounds on integer values
– E.g., sendmail bug…
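The whitelist idea above can be sketched in C. These helper names and the exact character set are illustrative, not from the lecture; the point is that the code names what is *allowed* and rejects everything else, enforcing a maximum length and a numeric range up front:

```c
#include <ctype.h>
#include <string.h>

/* Accept only 1-16 chars drawn from [a-z0-9_]; everything else fails. */
int valid_username(const char *s)
{
    size_t len = strlen(s);
    if (len == 0 || len > 16)
        return 0;                       /* enforce a maximum length */
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)s[i];
        if (!(islower(c) || isdigit(c) || c == '_'))
            return 0;                   /* reject special/escape chars */
    }
    return 1;
}

/* Range-check an integer before using it, instead of trusting it. */
int valid_port(long v)
{
    return v >= 1 && v <= 65535;
}
```

Note the inversion relative to a blacklist: a new dangerous character (say, a locale-specific quote) is rejected by default rather than slipping through.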

Validating input

Filenames
– Disallow *, .., etc.

HTML, URLs, cookies
– Cf. cross-site scripting attacks

Command-line arguments
– Even argv[0]…

Config files

Avoiding buffer overflows

Use arrays instead of pointers (cf. Java)

Avoid strcpy(), strcat(), etc.
– Use strncpy(), strncat() instead
– Even these are not perfect… (e.g., no null termination)

Make buffers (slightly) longer than necessary to avoid “off-by-one” errors
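The strncpy() pitfall mentioned above is worth seeing concretely: when the source fills the destination exactly, strncpy() writes no terminating `'\0'`. A minimal wrapper (the name `safe_copy` is illustrative) that always terminates:

```c
#include <string.h>

/* strncpy() does NOT null-terminate when src fills the buffer,
   so reserve one byte and terminate explicitly. */
void safe_copy(char *dst, size_t dstsize, const char *src)
{
    if (dstsize == 0)
        return;
    strncpy(dst, src, dstsize - 1);   /* leave room for the '\0' */
    dst[dstsize - 1] = '\0';          /* force termination */
}
```

snprintf(dst, dstsize, "%s", src) achieves the same guarantee and is often preferred.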

Program internals…

Avoid race conditions
– E.g., authorizing file access, then opening the file

Watch out for temporary files in shared directories (e.g., /tmp)
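For the /tmp problem, the standard fix is to make creation and opening a single atomic step. A sketch (function name and path prefix are illustrative) using POSIX mkstemp(), which creates and opens a uniquely named file in one call:

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* mkstemp() atomically creates AND opens a unique file, closing the
   check-then-open (TOCTOU) window an attacker could exploit with a
   symlink planted in the shared /tmp directory. */
int make_temp(char *name_out, size_t size)
{
    const char tmpl[] = "/tmp/app-XXXXXX";  /* XXXXXX filled in by mkstemp */
    if (size < sizeof tmpl)
        return -1;
    memcpy(name_out, tmpl, sizeof tmpl);
    return mkstemp(name_out);   /* open fd on success, -1 on error */
}
```

Contrast with the racy pattern of generating a name, checking it doesn’t exist, then opening it: an attacker can win the race between the check and the open.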

Watch out for “spoofed” IP addresses/email addresses

Simple, open design; fail-safe defaults; complete mediation; etc.

Don’t write your own crypto algorithms
– Use crypto appropriately

Calling other resources

Use only “safe” library routines

Limit call parameters to valid values
– Avoid metacharacters

Avoid calling the shell
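The metacharacter advice can be sketched as a simple pre-check (the function name and character set here are illustrative, not exhaustive): reject anything a shell would interpret, and pass clean values directly to execvp(), which does no shell parsing, rather than interpolating them into a system() string.

```c
#include <string.h>

/* Returns nonzero if s contains characters a shell would interpret.
   Safer still is to avoid the shell entirely (execvp over system). */
int has_shell_meta(const char *s)
{
    return strpbrk(s, ";|&`$<>()!\\\"'*?~\n") != NULL;
}
```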

User output

Minimize feedback
– Don’t explain failures to untrusted users
– Don’t release version numbers…
– Don’t offer “too much” help (suggested filenames, etc.)

Don’t use printf(userInput)
– Use printf(“%s”, userInput) instead…
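The difference matters because printf(userInput) lets the attacker supply the format string: %x conversions walk the stack and %n writes to memory. A sketch (the wrapper name is illustrative) of the safe pattern, using snprintf so the result is testable:

```c
#include <stdio.h>

/* Render untrusted input by passing it as the ARGUMENT to a fixed
   "%s" format, never as the format string itself; any '%' the
   attacker embeds is then printed literally. */
void render_input(char *out, size_t outsize, const char *user_input)
{
    snprintf(out, outsize, "%s", user_input);
}
```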

Source code scanners

Used to check source code
– E.g., flawfinder, cqual

“Static” analysis vs. “dynamic” analysis
– Not perfect
– Dynamic analysis can slow down execution, lead to bloated code
– Will see examples of dynamic analysis later…

“Higher-level” techniques

Addressing buffer overflows

Basic stack exploit can be prevented by marking the stack segment as non-executable, or by randomizing the stack location
– Code patches exist for Linux and Solaris
– Some complications on x86

Problems:
– Does not defend against “return-to-libc” exploit
  • Overflow sets ret-addr to address of libc function
– Some apps need executable stack (e.g., LISP interpreters)
– Does not block more general overflow exploits
  • Overflow on heap: overflow buffer next to func pointer

Patch not shipped by default for Linux and Solaris

Run-time checking: StackGuard

Embed “canaries” in stack frames and verify their integrity prior to function return

[Stack diagram: Frames 1 and 2, each laid out (from the top of the stack) as string buffer str, local variables, canary, saved frame pointer sfp, return address ret; the canary sits between the locals and the saved sfp/ret]

Canary types

Random canary (used in Visual Studio 2003):
– Choose random string at program startup
– Insert canary string into every stack frame
– Verify canary before returning from function
– To corrupt random canary, attacker must learn current random string

Terminator canary: canary = 0, newline, linefeed, EOF
– String functions will not copy beyond terminator
– Attacker cannot use string functions to corrupt stack
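The canary mechanism can be modeled in a few lines of C. This is a toy sketch, not real StackGuard (which emits the check in compiler-generated function epilogues): a guard word sits just past a buffer, an unchecked copy can run into it, and comparing it afterwards detects the corruption.

```c
#include <string.h>

#define CANARY 0xDEADBEEFu    /* fixed here; StackGuard can randomize it */

/* Returns 1 if the copy stayed inside buf, 0 if the canary was hit. */
int guarded_copy(const char *src)
{
    struct {
        char buf[8];
        unsigned int canary;  /* sits directly after buf, like on the stack */
    } frame;
    size_t n = strlen(src) + 1;

    frame.canary = CANARY;
    if (n > sizeof frame)     /* cap so the demo never leaves the struct */
        n = sizeof frame;
    memcpy((char *)&frame, src, n);   /* unchecked copy, as strcpy would do */
    return frame.canary == CANARY;
}
```

A real exploit overwrites past the canary to reach the return address, so an intact canary means the return address was not reached by a contiguous overflow.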

Canaries, continued…

StackGuard implemented as a GCC patch
– Program must be recompiled

Minimal performance effects

Not foolproof…

Run-time checking: Libsafe

Intercepts calls to strcpy(dest, src)
– Validates sufficient space in current stack frame: |frame-pointer – dest| > strlen(src)
– If so, does strcpy; otherwise, terminates application

[Stack diagram: libsafe’s strcpy frame (dest buffer at the top of the stack, sfp, ret-addr) sits below main’s frame (src buf, sfp, ret-addr); the check ensures dest cannot be written past main’s frame pointer]

More methods …

Address obfuscation
– Encrypt return address on stack by XORing with random string; decrypt just before returning from function
– Attacker needs decryption key to set return address to desired value
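The XOR scheme is tiny to sketch. The key value below is a fixed stand-in (real implementations choose it randomly at process startup); an attacker who overwrites the slot with a raw target address gets garbage after decryption, because they don’t know the key:

```c
#include <stdint.h>

static uintptr_t g_key = 0x5A5A5A5AuL;  /* stand-in; chosen randomly in practice */

uintptr_t encrypt_addr(uintptr_t addr) { return addr ^ g_key; }
uintptr_t decrypt_addr(uintptr_t addr) { return addr ^ g_key; }
```

XOR is used because it is its own inverse: one operation on the way into the frame, the same operation on the way out.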

PaX ASLR: randomize location of libc
– Attacker cannot jump directly to exec function

Software fault isolation

Partition code into data and code segments

Code inserted before each load/store/jump
– Verify that target address is safe

Can be done at compile, link, or run time
– Increases program size, slows down execution
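The classic inserted check is address masking: before each store, force the target into the sandbox’s data segment by overwriting its high bits. A sketch with made-up segment constants (real SFI systems pick the segment layout and may use guard zones instead):

```c
#include <stdint.h>

#define SEG_BASE 0x20000000u   /* hypothetical data-segment base */
#define SEG_MASK 0x000FFFFFu   /* 1 MiB segment: keep the low 20 bits */

/* Clamp any address into [SEG_BASE, SEG_BASE + 1 MiB): a rogue store
   can only corrupt the sandbox's own data, never escape it. */
uint32_t sfi_clamp(uint32_t addr)
{
    return SEG_BASE | (addr & SEG_MASK);
}
```

Masking is cheaper than a compare-and-branch check: it never faults, it just redirects out-of-segment accesses back inside, which is enough for fault isolation.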

Security for mobile code

Mobile code is particularly dangerous!

Sandboxing
– Limit the ability of code to do harmful things

Code-signing
– Mechanism to decide whether code should be trusted or not

ActiveX uses code-signing, Java uses sandboxing (plus code-signing)

Code signing

Code producer signs code

Binary notion of trust

What if code producer compromised?

Lack of PKI => non-scalable approach

“Proof-carrying code”

Input: code, safety policy of client

Output: safety proof for code

Proof generation expensive
– Proof verification cheaper
– Prove once, use everywhere (with same policy)

Prover/compiler need not be trusted
– Only need to trust the verifier

Sandboxing in Java

Focus on preventing system modification and violations of user privacy
– Denial-of-service attacks much harder to prevent, and not handled all that well

We will discuss some of the basics, but not all the most up-to-date variants of the Java security model

Sandboxing

A default sandbox applied to untrusted code

Users can change the defaults…
– Can also define “larger” sandboxes for “partially trusted” code
– Trust in code determined using code-signing…

Some examples…

Default sandbox prevents:
– Reading/writing/deleting files on client system
– Listing directory contents
– Creating new network connections to other hosts (other than originating host)
– Etc.

Sandbox components

Verifier, Class loader, and Security Manager

If any of these fail, security may be compromised

Verifier

Java program is compiled to platform-independent Java byte code

This code is verified before it is run
– Prevents, for example, malicious “hand-written” byte code

Efficiency gains by checking code before it is run, rather than constantly checking it while running

Verifier…

Checks:
– Byte code is well-formatted
– No forged pointers
– No violation of access restrictions
– No incorrect typing

Of course, cannot be perfect…

Class loader

Helps prevent “spoofed” classes from being loaded
– E.g., external class claiming to be the security manager

Whenever a class needs to be loaded, this is done by a class loader
– The class loader decides where to obtain the code for the class

Security manager

Restricts the way an applet uses Java API calls
– All calls to the OS are mediated by the security manager

Security managers are browser-dependent!

System call monitoring

Monitor all system calls
– Enforce particular policy
– Policy may be loaded in kernel

Hand-tune policy for individual applications

Similar to Java security manager
– Difference is in where it is implemented

Viruses/malicious code

Virus – passes malicious code to other non-malicious programs
– Or documents with “executable” components

Trojan horse – software with unintended side effects

Worm – propagates via network
– Typically stand-alone software, in contrast to viruses, which are attached to other programs

Viruses

Can insert themselves before program, can surround program, or can be interspersed throughout program
– In the last case, virus writer needs to know about the specifics of the other program

Two ways to “insert” virus:
– Insert virus in memory at (old) location of original program
– Change pointer structure…

Viruses…

Boot-sector viruses
– If a virus is loaded early in the boot process, it can be very difficult (impossible?) to detect

Memory-resident viruses
– Note that virus might complicate its own detection
– E.g., removing virus name from list of active programs, or list of files on disk

Some examples

BRAIN virus
– Locates itself in upper memory; resets the upper memory bound below itself
– Traps “disk reads” so that it can handle any requests to read from the boot sector
– Not inherently malicious, although some variants were

Morris worm (1988)

Resource exhaustion (unintended)
– Was supposed to have only one copy running, but did not work correctly…

Spread in three ways:
– Exploited buffer overflow flaw in fingerd
– Exploited flaw in sendmail debug mode
– Guessed user passwords(!) on current network

Bootstrap loader would be used to obtain the rest of the worm

Chernobyl virus (1998)

When infected program is run, virus becomes resident in memory of machine
– Rebooting does not help

Virus writes random garbage to hard drive

Attempts to trash flash BIOS
– Physically destroys the hardware…

Melissa virus/worm (1999)

Word macro…
– When file opened, would create and send infected document to names in user’s Outlook address book
– Recipient would be asked whether to disable macros(!)
  • If macros enabled, virus would launch

Code red (2001)

Propagated itself on web servers running Microsoft’s Internet Information Server
– Infection using buffer overflow…
– Propagation by checking IP addresses on port 80 of the PC to see if they are vulnerable

Detecting viruses

Can try to look for “signatures”
– Unreliable unless up-to-date
– Encrypted viruses
– Polymorphic viruses

Examine storage
– Sizes of files, “jump” instruction at beginning of code
– Can be hard to distinguish from normal software

Check for (unusual) execution patterns
– Hard to distinguish from normal software…