Lecture 6: Memory management, Linking and Loading

Transcript of lecture slides, AE3B33OSD, 2012 (labe.felk.cvut.cz/~stepan/AE3B33OSD/OSD-Lecture-6.pdf)

Page 1:

Lecture 6: Memory management

Linking and Loading

Page 2:

Contents

Paging on demand

Page replacement

The LRU algorithm and its approximations

Process memory allocation, problem of thrashing

Linker vs. loader

Linking the executable

Libraries

Loading an executable

ELF – UNIX format

PE – Windows format

Dynamic libraries

Page 3:

Page fault

With each page table entry a valid–invalid bit is associated (1 = in memory, 0 = not in memory)

Initially the valid–invalid bit is set to 0 on all entries

Example of a page table snapshot:

During address translation, if the valid–invalid bit in the page table entry is 0 ⇒ page fault

[Figure: page table snapshot – each entry holds a frame number and a valid–invalid bit; in the example the bits read 1, 0, 1, 1, 0, 0, 1, 0.]
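A minimal C sketch (not the lecture's code) of how the valid–invalid bit drives a page fault during address translation; the entry layout, PAGE_SIZE_BITS and the page_fault_handler() hook are illustrative assumptions:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE_BITS 12                 /* 4 KB pages */

typedef struct {
    uint32_t frame;                       /* physical frame number            */
    bool     valid;                       /* 1 = in memory, 0 = not in memory */
} pte_t;

extern void page_fault_handler(uint32_t page);   /* OS brings the page in */

uint32_t translate(pte_t *page_table, uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SIZE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_SIZE_BITS) - 1);

    if (!page_table[page].valid)          /* valid-invalid bit is 0 */
        page_fault_handler(page);         /* loads the page and sets valid = 1 */

    return (page_table[page].frame << PAGE_SIZE_BITS) | offset;
}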

Page 4:

Paging techniques

Paging implementations

Demand Paging (Demand Segmentation)

Lazy method, do nothing in advance

Paging at process creation

Program is inserted into memory during process start-up

Pre-paging

Load page into memory that will be probably used

Swap pre-fetch

On a page fault, load the neighboring pages as well

Pre-cleaning

Dirty pages are written to disk in advance

Page 5:

Demand Paging

Bring a page into memory only when it is needed:

Less I/O needed

Less memory needed

Faster response

More users

Slow start of application

Page is needed ⇒ reference to it; invalid reference ⇒ abort

not in memory ⇒ page fault ⇒ bring the page into memory

Page fault handling: the process that caused the page fault is put into a waiting queue

The OS starts an I/O operation to bring the page into memory

Other processes can run in the meantime

After the I/O operation finishes, the process is marked as ready

Page 6:

Steps in Handling a Page Fault

Page 7:

Locality In A Memory-Reference Pattern

Page 8:

Locality principle

References to instructions and data create clusters

There is temporal locality and spatial locality

Program execution is (excluding jumps and calls) sequential

Usually a program uses only a small number of functions within a time interval

Iterative code uses a small number of repeating instructions

Common data structures are arrays or lists of records in neighboring memory locations.

It is possible to create only an approximation of the future usage of pages

Main memory can be full – first release memory to get free frames

Page 9:

Other paging techniques

Improvements of demand paging

Pre-paging

Neighboring pages in the virtual space usually depend on each other and can be loaded together – this speeds up loading

Locality principle – the process will probably use the neighboring page soon

Load more pages together

Very important for the start of a process

Advantage: decreases the number of page faults

Disadvantage: unused pages are loaded too

Pre-cleaning

If the computer has free capacity for I/O operations, it is possible to copy changed (dirty) pages to disk in advance

Advantage: a page can be freed very fast, only the validity bit has to be changed

Disadvantage: the page may be modified again in the future – wasted work

Page 10:

What happens if there is no free frame?

Page replacement – find some page (a victim) in memory that is not really in use and swap it out

algorithm

performance – we want an algorithm which results in the minimum number of page faults

The same page may be brought into memory several times

Page 11:

Page Replacement

Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement

Some pages cannot be replaced, they are locked (page table, interrupt handlers, …)

Use the modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written to disk

Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on a smaller physical memory

We want the lowest page-fault rate

Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string

Page 12:

Page Replacement with Swapping

Page 13:

Graph of Page Faults Versus the Number of Frames

Page 14:

Algorithm First-In-First-Out (FIFO)

3 frames (memory with only 3 frames)

4 frames of memory

Belady's anomaly (more frames – more page faults)

FIFO is simple but not effective – old pages can still be heavily used

Reference string: 1 2 3 4 1 2 5 1 2 3 4 5

3 frames – 9 page faults:

Reference:  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:    1  1  1  4  4  4  5  5  5  5  5  5
Frame 2:       2  2  2  1  1  1  1  1  3  3  3
Frame 3:          3  3  3  2  2  2  2  2  4  4
Fault:      *  *  *  *  *  *  *        *  *

4 frames – 10 page faults:

Reference:  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:    1  1  1  1  1  1  5  5  5  5  4  4
Frame 2:       2  2  2  2  2  2  1  1  1  1  5
Frame 3:          3  3  3  3  3  3  2  2  2  2
Frame 4:             4  4  4  4  4  4  3  3  3
Fault:      *  *  *  *        *  *  *  *  *  *
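The two tables can be reproduced with a short simulation. A minimal C sketch (not from the slides) that counts FIFO page faults on the reference string above and shows Belady's anomaly:

#include <stdio.h>

static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            if (used < nframes)
                frames[used++] = refs[i];     /* a free frame is available */
            else {
                frames[next] = refs[i];       /* evict the oldest page     */
                next = (next + 1) % nframes;
            }
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    const int n = (int)(sizeof refs / sizeof refs[0]);

    printf("3 frames: %d page faults\n", fifo_faults(refs, n, 3));   /* 9  */
    printf("4 frames: %d page faults\n", fifo_faults(refs, n, 4));   /* 10 */
    return 0;
}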

Page 15:

Optimal algorithm

Victim – replace the page that will not be used for the longest period of time

We would need to know the future – it can only be predicted

Used as a comparison for other algorithms

Example: memory with 4 frames; for the example we know the whole future

6 page faults (the best possible result):

Reference:  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:    1  1  1  1  1  1  1  1  1  1  4  4
Frame 2:       2  2  2  2  2  2  2  2  2  2  2
Frame 3:          3  3  3  3  3  3  3  3  3  3
Frame 4:             4  4  4  5  5  5  5  5  5
Fault:      *  *  *  *        *           *

Page 16:

Least Recently Used

Prediction is based on history

Assumption: a page that has not been used for a long time will probably not be used in the future

Victim – the page that has not been used for the longest period

LRU is considered the best approximation of the optimal algorithm

Example: memory with 4 frames

Best possible result 6 page faults, LRU 8 page faults, FIFO 10 page faults

8 page faults:

Reference:  1  2  3  4  1  2  5  1  2  3  4  5
Frame 1:    1  1  1  1  1  1  1  1  1  1  1  5
Frame 2:       2  2  2  2  2  2  2  2  2  2  2
Frame 3:          3  3  3  3  5  5  5  5  4  4
Frame 4:             4  4  4  4  4  4  3  3  3
Fault:      *  *  *  *        *        *  *  *

Page 17:

LRU – implementation

It is not easy to implement LRU

The implementation must be fast

There must be CPU support for the algorithm – the update step cannot be done in software because it happens on every instruction (every memory reference)

Counter implementation

Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter

When a page needs to be replaced, look at the counters to determine which one to replace

Stack implementation – keep a stack of page numbers in doubly linked form

When a page is referenced, move it to the top – requires 6 pointers to be changed

No search for a replacement is needed (a minimal sketch of this list-based approach follows below)
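A minimal C sketch (illustrative, not the lecture's code) of the stack implementation as a doubly linked list: every reference moves the page to the head, and the victim is always the tail, so no search is needed:

#include <stddef.h>

typedef struct page {
    int          number;
    struct page *prev, *next;
} page_t;

typedef struct {
    page_t *head;    /* most recently used           */
    page_t *tail;    /* least recently used = victim */
} lru_list_t;

static void unlink_page(lru_list_t *l, page_t *p)
{
    if (p->prev) p->prev->next = p->next; else l->head = p->next;
    if (p->next) p->next->prev = p->prev; else l->tail = p->prev;
}

/* Called on every reference to page p: move it to the top of the stack. */
void lru_touch(lru_list_t *l, page_t *p)
{
    if (l->head == p)
        return;                            /* already the most recently used   */
    if (p->prev || p->next || l->tail == p)
        unlink_page(l, p);                 /* at most 6 pointer updates overall */
    p->prev = NULL;
    p->next = l->head;
    if (l->head) l->head->prev = p;
    l->head = p;
    if (!l->tail) l->tail = p;
}

/* Victim selection: no search, just take the tail. */
page_t *lru_victim(lru_list_t *l) { return l->tail; }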

Page 18:

Approximation of LRU

Reference bit

With each page associate a bit, initially = 0

When the page is referenced, the bit is set to 1

Replace a page whose bit is 0 (if one exists). We do not know the order, however.

Second chance

Needs the reference bit

Clock replacement

If page to be replaced (in clock order) has reference bit = 1 then:

set reference bit 0

leave page in memory

replace next page (in clock order), subject to same rules

In fact it is FIFO with second chance

Page 19:

Algorithm Second Chance

On a page fault, test the frame pointed to by the clock arm.

Depending on the access bit (a-bit):

if a = 0: take this page as the victim

if a = 1: set a = 0 and keep the page in memory

Turn the clock arm forward; if no victim has been found yet, do the same for the next page (a minimal sketch follows below)

Numerical simulation of this algorithm shows that it is really close to LRU
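A minimal C sketch (illustrative) of the clock / second-chance victim search; only the access bits abit[] set by the MMU and the clock-arm position hand are needed:

/* Returns the index of the victim frame and advances the clock arm. */
int clock_victim(int nframes, unsigned char abit[], int *hand)
{
    for (;;) {
        int i = *hand;
        *hand = (*hand + 1) % nframes;    /* turn the clock arm forward      */
        if (abit[i] == 0)
            return i;                     /* a = 0: this frame is the victim */
        abit[i] = 0;                      /* a = 1: give it a second chance  */
    }
}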

Page 20:

Modifications of LRU

NRU – not recently used

Uses the a-bit and the dirty bit (d-bit)

A timer regularly clears the a-bit, therefore it is possible to have a page with d-bit = 1 and a-bit = 0. Select a page in the order (da): 00, 01, 10, 11

Giving priority to the d-bit makes it possible to spare disk operations and time

Ageing

The a-bit is regularly saved and the old values are shifted

The time window is limited by the HW architecture

If the history of accesses to a page is 0,0,1,0,1, then it corresponds to the number 5 (00101)

The page with the smallest number will be removed
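A minimal C sketch (illustrative) of ageing with 8-bit counters; here the newest a-bit is shifted in as the most significant bit, which is one common convention:

#include <stdint.h>

/* Called on every timer tick: shift the a-bit of each page into its counter. */
void age_pages(uint8_t age[], uint8_t abit[], int npages)
{
    for (int i = 0; i < npages; i++) {
        age[i] = (uint8_t)((age[i] >> 1) | (abit[i] << 7));
        abit[i] = 0;                      /* clear for the next interval */
    }
}

/* The page with the smallest counter is the best victim. */
int ageing_victim(const uint8_t age[], int npages)
{
    int victim = 0;
    for (int i = 1; i < npages; i++)
        if (age[i] < age[victim])
            victim = i;
    return victim;
}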

Page 21:

Counter algorithms

Reference counter

Each frame has a reference counter

On swap-in the counter is set to 0

Each reference increments the counter

Algorithm LFU (Least Frequently Used) replaces the page with the smallest count

Algorithm MFU (Most Frequently Used) is based on the argument that the page with the smallest count was probably just brought in and has yet to be used
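A minimal C sketch (illustrative) of reference counters with LFU victim selection; MFU would simply pick the largest counter instead:

void on_swap_in(unsigned count[], int frame)   { count[frame] = 0; }
void on_reference(unsigned count[], int frame) { count[frame]++; }

int lfu_victim(const unsigned count[], int nframes)
{
    int victim = 0;
    for (int i = 1; i < nframes; i++)
        if (count[i] < count[victim])
            victim = i;                   /* smallest reference count */
    return victim;
}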

Page 22:

Process and paging

Global replacement – a process selects a replacement frame from the set of all frames; one process can take a frame from another

Local replacement – each process selects only from its own set of allocated frames

Principles of frame allocation

Fixed allocation

The process receives a fixed number of frames (it can be the same for each process or it can depend on its virtual space size)

Priority allocation

A process with higher priority receives more frames to be able to run faster

If there is a page fault, a process with higher priority gets a frame from a process with lower priority

Page 23:

Fixed Allocation

Equal allocation – for example, if there are 100 frames and 5 processes, give each process 20 frames.

Proportional allocation – allocate according to the size of the process

Example:

s_i = size of process p_i

S = Σ s_i

m = total number of frames for allocation

a_i = allocation for p_i = (s_i / S) · m

With m = 64 frames, s_1 = 10, s_2 = 127:

a_1 = (10 / 137) · 64 ≈ 5

a_2 = (127 / 137) · 64 ≈ 59
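A quick check of the proportional split in C (a minimal sketch; rounding to whole frames is an illustrative choice):

#include <stdio.h>

int main(void)
{
    const int s[] = {10, 127};            /* process sizes from the example */
    const int m   = 64;                   /* total number of frames         */
    int S = 0;

    for (int i = 0; i < 2; i++)
        S += s[i];                        /* S = 137 */
    for (int i = 0; i < 2; i++) {
        int ai = (int)(s[i] * (double)m / S + 0.5);   /* a_i = (s_i / S) * m */
        printf("a_%d = %d frames\n", i + 1, ai);      /* prints 5 and 59     */
    }
    return 0;
}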

Page 24:

Dynamic Allocation

Priority allocation

Use a proportional allocation scheme based on priorities rather than size

If process Pi generates a page fault:

select for replacement one of its frames, or

select for replacement a frame from a process with a lower priority number

Working set

Dynamically detect how many pages are used by each process

Page 25:

Thrashing

If a process does not have “enough” pages, the page-fault rate is very high. This leads to: low CPU utilization

operating system thinks that it needs to increase the degree of multiprogramming

another process can be added to the system

Thrashing ≡ a process is busy swapping pages in and out

Page 26:

Working-Set Model

How many pages does a process need?

The working set defines the set of pages that were used by the last N instructions

Detection of spatial locality in the process

Δ ≡ working-set window ≡ a fixed number of page references. Example: 10,000 instructions

WSS_i (working set of process P_i) = total number of pages referenced in the most recent Δ (varies in time)

if Δ is too small it will not encompass the entire locality

if Δ is too large it will encompass several localities

if Δ = ∞ it will encompass the entire program

D = Σ WSS_i ≡ total demand for frames

if D > m (the number of available frames) ⇒ thrashing

Policy: if D > m, then suspend one of the processes

Page 27:

Working-set model

Page 28:

Keeping Track of the Working Set

Approximate with interval timer + a reference bit

Example: Δ = 10,000

Timer interrupts after every 5,000 time units

Keep 2 bits in memory for each page

Whenever the timer interrupts, copy the reference bits and reset them all to 0

If one of the bits in memory = 1 ⇒ the page is in the working set

Why is this not completely accurate?

Improvement: 10 bits and an interrupt every 1,000 time units
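A minimal C sketch (illustrative) of this approximation: two history bits are kept per page, on every timer interrupt the current reference bit is shifted in and cleared, and a page counts as part of the working set if any kept bit is 1:

#include <stdbool.h>
#include <stdint.h>

#define NPAGES 1024

static uint8_t history[NPAGES];   /* 2 (or 10) history bits per page */
static uint8_t refbit[NPAGES];    /* hardware reference bit          */

void timer_interrupt(void)
{
    for (int p = 0; p < NPAGES; p++) {
        history[p] = (uint8_t)(((history[p] << 1) | refbit[p]) & 0x3);
        refbit[p] = 0;            /* clear for the next interval */
    }
}

bool in_working_set(int p)
{
    return refbit[p] || history[p] != 0;
}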

Page 29:

Working set

If the sum of the working sets WS_i of all processes P_i exceeds the whole capacity of physical memory, it creates thrashing

Simple protection against thrashing:

One whole process is swapped out

Page 30:

Page size

Big pages

Small number of page faults

Big fragmentation; if the page size is bigger than the process size, virtual space is not necessary

Small pages

Big number of small pages

A page is more frequently in memory → low number of page faults

Smaller pages mean smaller fragmentation but decrease the effectiveness of disk operations, and mean a bigger page table and a more complicated selection of a victim for swap-out

Big page table

The PT must be in memory and cannot be swapped out – the PT occupies real memory

Placing part of the PT into virtual memory leads to more page faults (access to an invalid page can create 2 page faults: first a fault of the page table and then a fault of the page)

[Figure: page-fault frequency versus page size – many small pages mean a large page table; with fewer, larger pages some pages contain unused data; in the extreme the whole process fits in one page.]

Page 31:

Programming techniques and page faults

Programming techniques have an influence on page faults. Consider:

double data[512][512];

Suppose that a double occupies 8 bytes

Each row of the array occupies 4 KB and is stored in one 4 KB page

It is good to know how the data are stored in virtual space

Approach 1:

for (j = 0; j < 512; j++)
    for (i = 0; i < 512; i++)
        data[i][j] = i*j;

Can cause 512 × 512 = 262,144 page faults

Approach 2:

for (i = 0; i < 512; i++)
    for (j = 0; j < 512; j++)
        data[i][j] = i*j;

Only 512 page faults

Page 32:

Paging in Windows XP

Uses demand paging with pre-paging clusters. Clustering brings in pages surrounding the faulting page.

Processes are assigned working set minimum and working set maximum

Working set minimum is the minimum number of pages the process is guaranteed to have in memory

A process may be assigned pages up to its working-set maximum

When the amount of free memory in the system falls below a threshold, automatic working-set trimming is performed to restore the amount of free memory

Working-set trimming removes pages from processes that have pages in excess of their working-set minimum

Thrashing can still occur. Recommended minimal memory size – 128 MB; realistic minimal memory size – 384 MB

Page 33:

Linking and Loading

Page 34:

Background

Operating system is responsible for starting programs

Program must be brought into memory and placed within a process memory space for it to be executed

User programs go through several steps before being run

Linkers and loaders prepare a program for execution

Linkers and loaders bind the programmer's abstract names to concrete numeric values – addresses

Page 35:

Linker vs. Loader

Program loading – copy the program from secondary storage into main memory so it is ready to run

In some cases it is just copying data from disk to memory

More often it allocates storage, sets protection bits, and arranges virtual memory to map virtual addresses to disk space

Relocation

Each object-code program is addressed starting at 0

If the program contains multiple subprograms, all subprograms must be loaded at non-overlapping addresses

In many systems relocation is done more than once

Symbol resolution

A reference from one subprogram to another subprogram is made by using symbols

Linker and loader are similar

The loader does program loading and relocation

The linker does symbol resolution and relocation

Linking loaders also exist

Page 36:

Binding of Instructions and Data to Memory

Compile time: If memory location is known a priori, absolute code can be generated; must recompile code if starting location changes

Load time: Must generate relocatable code if memory location is not known at compile time

Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers).

Page 37:

Two pass linking

The linker's input is a set of object files, libraries, and command files.

The output of the linker is an executable file, a link/load map and/or a debug symbol file

The linker uses a two-pass approach

Linker first pass

Scan the input files for segment sizes, definitions and references

Create a symbol table of definitions and references

Determine the size of the joined segments

Linker second pass

Assign numeric locations to the symbols in the new segments

Read and relocate the object code, substituting numeric addresses for symbol references

Adjust memory addresses according to the new segments

Create the executable file with correct:

Header information

Relocated segments

New symbol table information

For dynamic linking the linker generates "stub" code or an array of pointers that need to be resolved at load or run time

Page 38:

Object code

Compilers and assemblers create object files from source files

Object files contain:

Header information – overall information about the file, like the size of the code, the size of the data, the name of the source file, the creation date

Object code – binary instructions and data

Relocation – a list of places in the object code that have to be fixed up when the linker or loader changes the address of the object code

Symbols – global symbols defined in this object file; these symbols can be used by other object files

Debugging information – optional; includes information for the debugger: source file line numbers and local symbols, descriptions of data structures

Page 39:

Library

A library is a sequence of object modules

UNIX uses an "archive" file format which can be used for collections of any types of files

Linking a library is an iterative process:

The linker reads the object files in the library and looks for the external symbols of the program

If the linker finds an external symbol, it adds the corresponding object file to the program and adds the external symbols of this library object to the external symbols of the program

The previous steps repeat until no new external symbols and objects are added to the program

There can be dependencies between libraries:

Object A from lib A needs symbol B from lib B

Object B from lib B needs symbol C from lib A

Object C from lib A needs symbol D from lib B

Object D from lib B needs symbol E from ………….

Page 40:

UNIX ELF

The structure for object and executable programs on most UNIX systems

Successor of the simpler a.out format

The ELF structure is common to the relocatable format (object files), the executable format (a program built from objects), shared libraries and core images (a core image is created if a program fails)

ELF can be interpreted as a set of sections for the linker or as a set of segments for the loader

ELF contains:

ELF header – magic string \177ELF; attributes – 32/64 bit, little-endian/big-endian; type – relocatable/executable/shared/core image; architecture – SPARC/x86/68K, …

Data – a list of sections and segments depending on the ELF type
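A minimal C sketch (illustrative) that reads the ELF header of a file with the definitions from <elf.h> and prints a few of the attributes listed above; error handling is kept to the bare minimum and a 32-bit file would need Elf32_Ehdr:

#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) == 1 &&
        memcmp(eh.e_ident, ELFMAG, SELFMAG) == 0) {            /* "\177ELF" */
        printf("class  : %s\n", eh.e_ident[EI_CLASS] == ELFCLASS64 ? "64-bit" : "32-bit");
        printf("data   : %s\n", eh.e_ident[EI_DATA] == ELFDATA2LSB ? "little-endian" : "big-endian");
        printf("type   : %u (1=REL, 2=EXEC, 3=DYN, 4=CORE)\n", (unsigned)eh.e_type);
        printf("machine: %u\n", (unsigned)eh.e_machine);
    } else {
        puts("not an ELF file");
    }
    fclose(f);
    return 0;
}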

Page 41:

ELF relocatable

Created by the compiler and prepared for the linker to create an executable program

Relocatable files – a collection of sections defined in the header. Each section is code, or read-only data, or read/write data, or relocation entries, or symbols.

The attribute alloc means that the loader must allocate space for this section

Sections:

.text – code, with attributes alloc+exec

.data – data with initial values, alloc+write

.rodata – constants, with only the alloc attribute

.bss – uninitialized data – nobits, alloc+write

.rel.text, .rel.data, .rel.rodata – relocation information

.init – initialization code for some languages (C++)

.symtab, .dynsym – linker symbol tables (regular or dynamic)

.strtab, .dynstr – string tables for .symtab resp. .dynsym (.dynsym has alloc because it is used at runtime)

Page 42:

ELF – executable

Similar to ELF relocatable, but the data are arranged so that they are ready to be mapped into memory and run

Sections are packed into segments; usually code and read-only data go into a read-only segment and r/w data into a r/w segment

Segments are prepared to be loaded at a defined address. Usually it is:

Stack from 0x8000000

Text with ro-data from 0x8048000 – 0x48000 is the stack size

Data behind the text

Bss behind the data

Relocation is necessary if a dynamic library collides with the program – the dynamic library is the one relocated

Segments are not aligned to the page size; instead an offset is used and some data are copied twice

Page 43:

Microsoft Portable Executable format

Portable Executable (PE) is the Microsoft format for Windows NT. It is a mix of the MS-DOS executable, Digital's VAX VMS, and Unix System V formats. It is adapted from COFF, a Unix format between a.out and ELF

PE is based on resources – cursors, icons, bitmaps, menus, fonts that are shared between the program and the GUI

PE is designed for a paged environment; pages from a PE can be mapped directly into memory

A PE can be an executable file (EXE) or a shared library (DLL)

A PE starts with a small DOS .EXE program that prints "This program needs Microsoft Windows"

Then it contains the PE header, a COFF header and "optional" headers

Each section is aligned to a memory page boundary

Page 44:

PE sections

Each section has an address and size in the file and an address and size in memory (not necessarily the same, because disk sections usually use 512-byte units while the page size is 4 KB)

Each section is marked with hardware permissions: read, write, execute

The linker creates the PE file for a specific target address – the imagebase

If that address space is free, the loader does no relocation

Otherwise (in a few cases) the loader has to map the file somewhere else

Relocation is done by fix-ups from the .reloc section. The PE is moved as a block and each pointer is shifted by a fixed offset (target address – image address). A fix-up contains the position of the pointer inside the page and the type of the pointer.

Other sections – Exports (mainly for DLLs, in EXEs only for debugging), Imports (DLLs that the PE needs), Resources (list of resources), Thread Local Storage (thread startup data)

Page 45:

Shared libraries - static

It is more efficient to share libraries than to link the same library into each program

For example, probably every program uses the function printf, and if you have thousands of programs on a computer there would be thousands of copies of the printf function.

The linker searches the library as usual to find the modules that resolve undefined external symbols. Rather than copying the contents of the modules into the output file, it creates a table of libraries and modules in the executable

When the program is started, the loader finds the libraries and maps them into the program's address space

Standard systems share pages that are marked as read-only.

Static shared libraries must each use a different address range.

Assigning address space to libraries is complicated.

Page 46:

Dynamic Libraries

Dynamic Libraries can be relocated to free address space

Dynamic libraries are easier to update. If a dynamic library is updated to a new version, the program does not have to change

It is easy to share dynamic libraries

Dynamic linking permits a program to load and unload routines at runtime, a facility that can otherwise be very difficult to provide

Routine can be loaded when it is called

Better memory-space utilization; unused routine is never loaded

Useful when large amounts of code are needed to handle infrequently occurring cases

Page 47:

ELF dynamic libraries

ELF dynamic libraries can be loaded at any address; they use position independent code (PIC)

The global offset table (GOT) contains pointers to all static data referenced in the program

Lazy procedure linkage with the Procedure Linkage Table (PLT)

For each dynamic function the PLT contains code that uses the GOT to find the address of that function

At program load all addresses point to a stub – the dynamic loader

After the dynamic library is loaded, the entry in the GOT is changed to the real routine address

The dynamic loader (library ld.so) finds the library by its name and its major and minor version numbers. The major version number guarantees compatibility; the minor version number should be the highest available.

Dynamic loading can also be invoked explicitly with the dlopen(), dlsym(), … functions (a usage sketch follows below)
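A minimal C sketch of explicit dynamic loading with dlopen() and dlsym(); the library name (libm.so.6) and the symbol looked up (cos) are only an example, and the program is linked with -ldl:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}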

Page 48:

Dynamic Link Libraries – DLL

Similar to ELF dynamic libraries

The dynamic linker is part of the Windows kernel

A DLL is relocated if its address space is not free (Windows calls it rebasing)

Lazy binding postpones binding until execution time

Each function exported by a DLL is identified by a numeric ordinal and by name

Addresses of functions are defined in the Export Address Table
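A minimal C sketch (illustrative) of explicit run-time linking against a DLL with LoadLibrary() and GetProcAddress(), the Windows counterpart of dlopen()/dlsym():

#include <windows.h>

int main(void)
{
    HMODULE lib = LoadLibraryA("user32.dll");
    if (!lib) return 1;

    /* MessageBoxA is looked up here by name; it could also be looked up by ordinal. */
    typedef int (WINAPI *msgbox_t)(HWND, LPCSTR, LPCSTR, UINT);
    msgbox_t msgbox = (msgbox_t)GetProcAddress(lib, "MessageBoxA");
    if (msgbox)
        msgbox(NULL, "Hello from a dynamically loaded DLL", "Demo", MB_OK);

    FreeLibrary(lib);
    return 0;
}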

Page 49:

Architectural Issues

Linkers and loaders are extremely sensitive to the architectural details of the CPU and OS

Mainly two aspects of the HW architecture affect linkers:

Program addressing

Instruction format

Position independent code – enables the implementation of dynamic libraries

Separate code from data and generate code that won't change regardless of the address at which it is loaded

ELF PIC – a group of code pages followed by a group of data pages

Regardless of where in the address space the program is loaded, the offset from the code to the data doesn't change

The linker creates a Global Offset Table containing pointers to all of the global data

Advantage – no load-time relocation; memory pages of code can be shared among processes even though they don't have the same address

Disadvantage – the code is bigger and slower than non-PIC code

Page 50:

End of Lecture 6

Questions?