Theory of Compilation 236360 Erez Petrank


Theory of Compilation 236360
Erez Petrank

Lecture 1: Introduction, Lexical Analysis

2

Compilation Theory

• Lecturer: Assoc. Prof. Erez Petrank
  – erez@cs.technion.ac.il
  – Office: Taub 528; Phone: 829-4942
  – Reception hour: Thursday 14:30–15:30, Taub 528
• Teaching assistants:
  – Adi Sosnovich (responsible TA)
  – Maya Arbel

3

Administration

• Site: http://webcourse.cs.technion.ac.il/236360
• Grade:
  – 25% homework (important!)
    • 5% dry (MAGEN)
    • 20% wet: compulsory
  – 75% test
    • Failure in the test means failure in the course, independent of the homework grade.
• Prerequisite: Automata and Formal Languages 236353
• The slides cannot replace attending the lectures or reading the textbook.
• MOED GIMMEL (third exam sitting) for reserve duty (Miluim) only
• … copying (plagiarism)

4

Books

• Main book:
  – A. V. Aho, M. S. Lam, R. Sethi, and J. D. Ullman, “Compilers – Principles, Techniques, and Tools”, Addison-Wesley, 2007.
• Additional book:
  – Dick Grune, Kees van Reeuwijk, Henri E. Bal, Ceriel J. H. Jacobs, and Koen G. Langendoen, “Modern Compiler Design”, Second Edition, Wiley, 2010.

5

Goal

• What is a compiler?
• How does it work?
• Tools and techniques that can be used in other settings.

6

What is a compiler?

7

Taken from “Ktzarim”, with Keren Mor and Riki Blich; Reshet, Channel 2.

8

What is a Compiler?

• “A compiler is a computer program that transforms source code written in a programming language (source language) into another language (target language). The most common reason for wanting to transform source code is to create an executable program.”
  – Wikipedia

9

What is a Compiler?

Source text (txt) → Compiler → Executable code (exe)
(source language → target language)

Typical source languages: C, C++, Pascal, Java, Postscript, TeX, Perl, JavaScript, Python, Ruby, Prolog, Lisp, Scheme, ML, OCaml.
Typical target languages: IA32, IA64, SPARC, C, C++, Pascal, Java, Java bytecode, …

10

What is a Compiler?

Source text (txt) → Compiler → Executable code (exe)

    int a, b;            MOV R1,2
    a = 2;               SAL R1
    b = a*2 + 1;         INC R1
                         MOV R2,R1

• The source and target programs are semantically equivalent.
• Since the translation is complex, it is partitioned into standard modular steps.

11

Anatomy of a Compiler (Coarse Grained)

Source text (txt) → Frontend (analysis) → Intermediate Representation → Optimization → Backend (synthesis) → Executable code (exe)

    int a, b;            MOV R1,2
    a = 2;               SAL R1
    b = a*2 + 1;         INC R1
                         MOV R2,R1

12

Modularity

Each frontend translates a source language SL1, SL2, …, SLn into the shared Intermediate Representation; each backend translates the IR into a target language TL1, TL2, …, TLm:

  Source Language 1 (txt) → Frontend SL1 → Intermediate → Backend TL1 → Executable target 1 (exe)
  Source Language 2 (txt) → Frontend SL2 → Representation → Backend TL2 → Executable target 2 (exe)
  Source Language n (txt) → Frontend SLn → (shared) → Backend TLm → Executable target m (exe)

The same source may compile to different instruction sets:

    int a, b;          MOV R1,2          SET R1,2
    a = 2;             SAL R1            STORE #0,R1
    b = a*2 + 1;       INC R1            SHIFT R1,1
                       MOV R2,R1         STORE #1,R1
                                         ADD R1,1
                                         STORE #2,R1

Build m+n modules instead of m·n compilers…

13

Anatomy of a Compiler

Source text (txt) → Frontend (analysis) → Intermediate Representation → Backend (synthesis) → Executable code (exe)

The phases of the pipeline:
Lexical Analysis → Syntax Analysis (Parsing) → Semantic Analysis → Intermediate Representation (IR) → Code Generation

    int a, b;            MOV R1,2
    a = 2;               SAL R1
    b = a*2 + 1;         INC R1
                         MOV R2,R1

14

Interpreter

Source text (txt) → Frontend (analysis) → Intermediate Representation → Execution Engine
Input → Execution Engine → Output

    int a, b;
    a = 2;
    b = a*2 + 1;

15

Compiler vs. Interpreter

Compiler:    Source text (txt) → Frontend (analysis) → Intermediate Representation → Backend (synthesis) → Executable code (exe)
Interpreter: Source text (txt) → Frontend (analysis) → Intermediate Representation → Execution Engine (reads Input, produces Output)

16

Compiler vs. Interpreter

For the statement b = a*2 + 1; with input a = 3:

• The compiler's backend emits code that computes the result at runtime:

      MOV R1,8(ebp)
      SAL R1
      INC R1
      MOV R2,R1

• The interpreter's execution engine evaluates the statement directly, reading the input 3 and producing the output 7.

17

Just-in-time Compiler (e.g., Java)

Java source (txt) → Java-source-to-bytecode compiler → Java bytecode (txt) → Java Virtual Machine (reads Input, produces Output)

Just-in-time compilation: the bytecode interpreter (in the JVM) compiles program fragments during interpretation to avoid expensive re-interpretation. The generated code is machine dependent and optimized.

18

Importance

• Many in this class will build a parser some day
  – Or wish they knew how to build one…
• Useful techniques and algorithms
  – Lexical analysis / parsing
  – Intermediate representation
  – …
  – Register allocation
• Understand programming languages better
• Understand the internals of compilers
• Understand compilation versus runtime
• Understand how the compiler treats the program (how to improve efficiency, how to use error messages)
• Understand (some) details of target architectures

19

Complexity: Various Areas, Theory and Practice

Source → Compiler → Target

• Useful formalisms: regular expressions, context-free grammars, attribute grammars
• Data structures and algorithms
• Programming languages
• Software engineering
• Operating systems, runtime environment, garbage collection
• Architecture

20

Turing Award

2008: Barbara Liskov – programming languages and system design.
2009: Charles P. Thacker – architecture.
2010: Leslie Valiant – theory of computation.
2011: Judea Pearl – artificial intelligence.

21

Course Overview

Source text (txt) → Compiler → Executable code (exe)
Compiler phases: Lexical Analysis → Syntax Analysis (Parsing) → Semantic Analysis → Inter. Rep. (IR) → Code Gen.

Front End Ingredients:

Character Stream
  ↓ Lexical Analyzer              (easy: regular expressions)
Token Stream
  ↓ Syntax Analyzer               (more complex: recursive, context-free grammar)
Syntax Tree
  ↓ Semantic Analyzer             (more complex: recursive, requires walking up and down the derivation tree)
Decorated Syntax Tree
  ↓ Intermediate Code Generator   (get code from the tree; some optimizations are easier on a tree, and some easier on the code)

23

Lexical Analysis

Lexical Analysis → Syntax Analysis → Sem. Analysis → Inter. Rep. → Code Gen.

x = b*b – 4*a*c   (txt)

becomes the Token Stream:

<ID,”x”> <EQ> <ID,”b”> <MULT> <ID,”b”> <MINUS> <INT,4> <MULT> <ID,”a”> <MULT> <ID,”c”>

24

Syntax Analysis (Parsing)

Lexical Analysis → Syntax Analysis → Sem. Analysis → Inter. Rep. → Code Gen.

The token stream

<ID,”b”> <MULT> <ID,”b”> <MINUS> <INT,4> <MULT> <ID,”a”> <MULT> <ID,”c”>

is parsed into a Syntax Tree: a derivation tree over the grammar nonterminals expression, term, and factor, whose internal nodes carry the operators MINUS and MULT, and whose leaves are the identifiers ‘b’, ‘a’, ‘c’ and the constant ‘4’.

25

Simplified Tree

Lexical Analysis → Syntax Analysis → Sem. Analysis → Inter. Rep. → Code Gen.

The derivation tree is condensed into an Abstract Syntax Tree:

        MINUS
        /    \
     MULT    MULT
     /  \    /   \
   ‘b’ ‘b’ MULT  ‘c’
           /  \
         ‘4’  ‘a’

26

Semantic Analysis

Lexical Analysis → Syntax Analysis → Sem. Analysis → Inter. Rep. → Code Gen.

The AST becomes an Annotated Abstract Syntax Tree: each node is decorated with attributes such as type and location. For the tree above: ‘b’ (type: int, loc: sp+8), ‘4’ (type: int, loc: const), ‘a’ (type: int, loc: sp+16), ‘c’ (type: int, loc: sp+24), and the MULT and MINUS nodes (type: int, loc: R1 or R2).

27

Intermediate Representation

Lexical Analysis → Syntax Analysis → Sem. Analysis → Inter. Rep. → Code Gen.

From the annotated AST, the compiler emits the Intermediate Representation:

    R2 = 4*a
    R1 = b*b
    R2 = R2*c
    R1 = R1-R2

28

Generating Code

Lexical Analysis → Syntax Analysis → Sem. Analysis → Inter. Rep. → Code Gen.

The Intermediate Representation

    R2 = 4*a
    R1 = b*b
    R2 = R2*c
    R1 = R1-R2

is translated into Assembly Code:

    MOV R2,(sp+8)
    SAL R2,2
    MOV R1,(sp+16)
    MUL R1,(sp+16)
    MUL R2,(sp+24)
    SUB R1,R2

29

The Symbol Table

• A data structure that holds attributes for each identifier, and provides fast access to them.
• Example: location in memory, type, scope.
• The table is built during the compilation process.
  – E.g., the lexical analysis cannot tell what the type is; the location in memory is discovered only during code generation, etc.

30

Error Checking

• An important part of the process.
• Done at each stage.
• Lexical analysis: illegal tokens
• Syntax analysis: illegal syntax
• Semantic analysis: incompatible types, undefined variables, …
• Each phase tries to recover and proceed with compilation (why?)
  – Divergence is a challenge

31

Errors in Lexical Analysis

pi = 3.141.562   (txt)  →  illegal token
pi = 3oranges    (txt)  →  illegal token
pi = oranges3    (txt)  →  <ID,”pi”>, <EQ>, <ID,”oranges3”>

32

Error Detection: Type Checking

x = 4*a*”oranges”   (txt)

In the annotated AST, ‘4’ (type: int, loc: const) and ‘a’ (type: int, loc: sp+8) multiply to an int in R2, but the second MULT then combines an int with “oranges” (type: string, loc: const) – a type error.

33

The Real Anatomy of a Compiler

Source text (txt)
  ↓ Process text input               → characters
  ↓ Lexical Analysis                 → tokens
  ↓ Syntax Analysis                  → AST
  ↓ Sem. Analysis                    → Annotated AST
  ↓ Intermediate code generation     → IR
  ↓ Intermediate code optimization   → IR
  ↓ Code generation                  → Symbolic Instructions
  ↓ Target code optimization         → MI
  ↓ Machine code generation
  ↓ Write executable output
Executable code (exe)

34

Optimizations

• “Optimal code” is out of reach
  – Many problems are undecidable or too expensive (NP-complete)
  – Use approximation and/or heuristics
  – Must preserve correctness; should (mostly) improve the code; should run fast.
• Improvements in time, space, energy, etc.
• This part takes most of the compilation time.
• A major question: how much time should be invested in optimization to make the code run faster?
  – The answer changes in the JIT setting.

35

Optimization Examples

• Loop optimizations: invariants, unrolling, …
• Peephole optimizations
• Constant propagation
  – Leverage compile-time information to save work at runtime (pre-computation)
• Dead code elimination
  – space
• …

36

Modern Optimization Challenges

• Main challenge is to exploit modern platforms
  – Multicores
  – Vector instructions
  – Memory hierarchy
  – Is the code sent over the net (Java bytecode)?

37

Machine Code Generation

• A major goal: determine the location of variables.
• Register allocation
  – Optimal register assignment is NP-complete
  – In practice, known heuristics perform well
• Assign variables to memory locations
• Instruction selection
  – Convert IR to actual machine instructions
• Modern architectures
  – Multicores
  – Challenging memory hierarchies

38

Compiler Construction Toolset

• Lexical analysis generators
  – lex
• Parser generators
  – yacc
• Syntax-directed translators
• Dataflow analysis engines

39

Summary

• A compiler is a program that translates code from a source language to a target language
• Compilers play a critical role
  – Bridge from programming languages to the machine
  – Many useful techniques and algorithms
  – Many useful tools (e.g., lexer/parser generators)
• Compilers are constructed from modular phases
  – Reusable
  – Debuggable, understandable
  – Different front/back ends

40

Theory of Compilation

Lexical Analysis

41

You are here:

Source text (txt) → Compiler → Executable code (exe)
Compiler phases: Lexical Analysis → Syntax Analysis (Parsing) → Semantic Analysis → Inter. Rep. (IR) → Code Gen.

42

From Characters to Tokens

• What is a token?
  – Roughly – a “word” in the source language
  – Identifiers
  – Values
  – Language keywords
  – (Really – anything that should appear in the input to syntax analysis)
• Technically
  – A token is a pair (kind, value)

43

Example: Kinds of Tokens

Type         Examples
Identifier   x, y, z, foo, bar
NUM          42
FLOATNUM     3.141592654
STRING       “so long, and thanks for all the fish”
LPAREN       (
RPAREN       )
IF           if

44

Strings with Special Handling

Type                      Examples
Comments                  /* Ceci n'est pas un commentaire */
Preprocessor directives   #include <foo.h>
Macros                    #define THE_ANSWER 42
White spaces              \t \n

45

The Terminology

• Lexeme (a.k.a. symbol): a series of characters separated from the rest of the program according to a convention (space, semicolon, comma, etc.)
• Pattern: a rule specifying a set of strings. Example: “an identifier is a string that starts with a letter and continues with letters and digits”.
• Token: a pair of (pattern, attributes)

46

From Characters to Tokens

x = b*b – 4*a*c   (txt)

Token Stream:

<ID,”x”> <EQ> <ID,”b”> <MULT> <ID,”b”> <MINUS> <INT,4> <MULT> <ID,”a”> <MULT> <ID,”c”>

47

Errors in Lexical Analysis

pi = 3.141.562   (txt)  →  illegal token
pi = 3oranges    (txt)  →  illegal token
pi = oranges3    (txt)  →  <ID,”pi”>, <EQ>, <ID,”oranges3”>

48

Error Handling

• Many errors cannot be identified at this stage.
• For example: “fi (a==f(x))”. Should “fi” be “if”? Or is it a routine name?
  – We will discover this later in the analysis.
  – At this point, we just create an identifier token.
• But sometimes the lexeme does not satisfy any pattern. What can we do?
• Easiest: delete characters until the beginning of a legitimate lexeme.
• Other alternatives: delete one character, add one character, replace one character, swap two adjacent characters, etc.
• The goal: allow the compilation to continue.
• The problem: errors that spread all over.

49

How Can We Define Tokens?

• Keywords – easy!
  – if, then, else, for, while, …
• We need a precise, formal description (that a machine can understand) of things like:
  – Identifiers
  – Numerical values
  – Strings
• Solution: regular expressions.

50

Regular Expressions over Σ

Basic Patterns         Matching
Φ                      No string
ε                      The empty string
a                      A single letter ‘a’ in Σ

Repetition Operators
R*                     Zero or more occurrences of R
R+                     One or more occurrences of R

Composition Operators
R1|R2                  Either an R1 or an R2
R1R2                   An R1 followed by an R2

Grouping
(R)                    R itself

Examples

• ab*|cd? = an ‘a’ followed by zero or more b’s, or a ‘c’ optionally followed by ‘d’: { a, ab, abb, … } ∪ { c, cd }
• (a|b)* = all strings over {a, b}, including the empty string
• (0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)* = all strings of digits, including the empty string

Simplifications

• Precedence: * has the highest priority, then concatenation, and then union.
  a | (b (c)*) = a | bc*
• ‘R?’ stands for R or ε.
• ‘.’ stands for any character in Σ.
• [xyz], for letters x, y, z in Σ, means (x|y|z).

53

More Simplifications

• Assign names to expressions:
  – letter = a | b | … | z | A | B | … | Z
  – letter_ = letter | _
  – digit = 0 | 1 | 2 | … | 9
  – id = letter_ (letter_ | digit)*
• Use hyphens to denote a range
  – letter = a-z | A-Z
  – digit = 0-9

54

An Example

• A number is
  ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )+
  ( ε | . ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )+
  ( ε | E ( ε | + | - ) ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )+ ) )
• Using the simplifications it is:
  – digit = 0-9
  – digits = digit+
  – number = digits ( ε | .digits ( ε | E (ε|+|-) digits ) )

55

Additional (Practical) Examples

• if = if
• then = then
• relop = < | > | <= | >= | = | <>

Escape Characters

• What is the expression for one or more ‘+’ symbols coming after the letter ‘C’?
  – C++ won’t work
  – C(\+)+ will
• A backslash \ before an operator turns it into a standard character
• \*, \?, \+, …

57

Ambiguity

• Consider the two definitions:
  – if = if
  – id = letter_ (letter_ | digit)*
• The string “if” is valid for the pattern if and also for the pattern id… so what should it be?
• How about the string “iffy”?
  – Is it an identifier? Or an “if” followed by the identifier “fy”?
• Convention:
  – Always find the longest matching token
  – Break ties using the order of definitions… the first definition wins
    (=> list rules for keywords before identifiers)

58

Creating a Lexical Analyzer

• Input
  – List of token definitions (pattern name, regular expression)
  – String to be analyzed
• Output
  – List of tokens
• How do we build an analyzer?

59

Character classification:

    #define is_end_of_input(ch) ((ch) == '\0')
    #define is_uc_letter(ch)    ('A' <= (ch) && (ch) <= 'Z')
    #define is_lc_letter(ch)    ('a' <= (ch) && (ch) <= 'z')
    #define is_letter(ch)       (is_uc_letter(ch) || is_lc_letter(ch))
    #define is_digit(ch)        ('0' <= (ch) && (ch) <= '9')
    …

60

Main reading routine:

    void get_next_token() {
        int c;
        while ((c = getchar()) != EOF) {
            if (is_letter(c)) { recognize_identifier(c); return; }
            if (is_digit(c))  { recognize_number(c); return; }
            /* … */
        }
    }

61

But We Have a Much Better Way!

• Generate a lexical analyzer automatically from token definitions
• Main idea
  – Use finite-state automata to match regular expressions

62

Overview

• Produce a finite-state automaton from a regular expression (automatically).
• Simulate a finite-state automaton on a computer (automatically).
• Immediately obtain a lexical analyzer.
• Let’s recall material from the Automata course…

63

Reminder: Finite-State Automaton

• Deterministic automaton
• M = (Σ, Q, δ, q0, F)
  – Σ – alphabet
  – Q – finite set of states
  – q0 ∈ Q – initial state
  – F ⊆ Q – final states
  – δ : Q × Σ → Q – transition function

64

Reminder: Finite-State Automaton

• Non-deterministic automaton
• M = (Σ, Q, δ, q0, F)
  – Σ – alphabet
  – Q – finite set of states
  – q0 ∈ Q – initial state
  – F ⊆ Q – final states
  – δ : Q × (Σ ∪ {ε}) → 2^Q – transition function
• Possible ε-transitions
• For a word w, M can reach a number of states or get stuck. If some reached state is final, M accepts w.

65

Identifying Patterns = Lexical Analysis

• Step 1: remove shortcuts and obtain pure regular expressions R1…Rm for the m patterns.
• Step 2: construct an NFA Mi for each regular expression Ri.
• Step 3: combine all Mi into a single NFA.
• Step 4: convert the NFA into a DFA.
• The DFA is ready to identify the patterns.

• Ambiguity resolution: prefer the longest accepting word

66

A Comment

• In the Automata course you study automata as acceptors only: is the input in the language or not?
• But when you run an automaton on an input, there is no reason not to gather information along the way.
  – E.g., letters read from the input so far, line number in the code, etc.

67

Building NFA: Basic Constructs

• R = Φ: an automaton with no accepting path.
• R = ε: a start state connected to an accepting state by an ε-edge.
• R = a: a start state connected to an accepting state by an edge labeled a.

68

Building NFA: Composition

• R = R1 | R2: a new start state with ε-edges into M1 and M2, and ε-edges from their final states into a new final state.
• R = R1R2: M1 followed by M2, with M1’s final state connected by an ε-edge to M2’s start state.

The starting and final states of the original automata become regular states after the composition.

69

Building NFA: Repetition

• R = R1*: add an ε-edge that bypasses M1 (for zero occurrences) and an ε-edge from M1’s final state back to its start (for repetition).

70

Use of Automata

• Naïve approach: try each automaton separately.
• Given a word w:
  – Try M1(w)
  – Try M2(w)
  – …
  – Try Mn(w)
• Requires resetting after every attempt.
• A more efficient method: combine all automata into one.

Combine automata: an example. Combine a, abb, a*b+, abab.

71

The combined NFA has a new start state 0 with ε-edges into the four machines:

• a:    0 →ε 1 →a 2                        (2 accepts a)
• abb:  0 →ε 3 →a 4 →b 5 →b 6              (6 accepts abb)
• a*b+: 0 →ε 7, 7 →a 7, 7 →b 8, 8 →b 8     (8 accepts a*b+)
• abab: 0 →ε 9 →a 10 →b 11 →a 12 →b 13     (13 accepts abab)

72

Ambiguity Resolution

• Longest word.
• Tie-breaker based on the order of rules when words have the same length.
  – Namely, if an accepting state has two labels then we select one of them according to the rule priorities.
• Recipe
  – Turn the NFA into a DFA.
  – Run until stuck; remember the last accepting state – this is the token to be returned.

Now let’s return to the previous example…

76

Corresponding DFA

(Figure: the DFA obtained from the combined NFA by the subset construction. Its states are sets of NFA states: the start state is {0,1,3,7,9}; on a it moves to {2,4,7,10}, accepting pattern a; further b-transitions lead through states such as {5,8,11} and {6,8}, labeled with the accepting patterns abb and a*b+, and on to {12} and {13} for abab.)

Examples

77


• abaa: gets stuck after aba in state 12; backs up to state (5 8 11); the pattern is a*b+, the token is ab.
• abba: stops after the second b in (6 8); the token is abb because it comes first in the spec.

78

Summary of Construction

• The developer describes the tokens as regular expressions (and decides which attributes are saved for each token).
• The regular expressions are turned into a deterministic automaton (a transition table) that describes the expressions and specifies which attributes to keep.
• The lexical analyzer simulates the run of an automaton with the given transition table on any input string.

Good News

• All of this construction is done automatically for you by common tools.
• lex automatically generates a lexical analyzer from a declaration file.
• Advantages: a short declaration, easily verified, easily modified and maintained.

Declaration file → lex → Lexical Analysis (characters → tokens)

Intuitively:
• lex builds a DFA table,
• the analyzer simulates the DFA on a given input.
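For a feel of what such a declaration file looks like, here is a tiny sketch in the style accepted by lex/flex (the token names and printf actions are made up for the illustration; the actual format will be presented in the exercise). Note the keyword rule listed before the identifier rule, so that ties break in its favor, as discussed under ambiguity resolution:

```lex
%option noyywrap
%%
if                      { printf("IF\n"); }
[A-Za-z_][A-Za-z_0-9]*  { printf("ID(%s)\n", yytext); }
[0-9]+                  { printf("INT(%s)\n", yytext); }
[ \t\n]+                { /* skip white space */ }
.                       { printf("illegal token: %s\n", yytext); }
%%
int main(void) { yylex(); return 0; }
```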

80

Summary

• Lexical analyzer
  – Turns a character stream into a token stream
  – Tokens defined using regular expressions
  – Regular expressions → NFA → DFA construction for identifying tokens
  – Automated construction of a lexical analyzer using lex

lex will be presented in the exercise.

81

Coming Up Next Time

• Syntax analysis