Completed Manual OS Case Study


Transcript of Completed Manual OS Case Study

Page 1: Completed Manual OS Case Study

CP7212 CASE STUDY – OPERATING SYSTEMS DESIGN

OBJECTIVES:

1. To develop capabilities to work at systems level
2. To learn about issues in designing and implementing modern operating systems
3. To understand team formation, team issues, allocating roles and responsibilities
4. To make effective presentations on the work done
5. To develop effective written communication skills

LAB EXERCISES:

A team of three or four students will work on an assigned case study / mini-project. A case study / mini-project can be designed on the following lines:

1. Development of a reasonably sized dynamically loadable kernel module for the Linux kernel
2. Study educational operating systems such as Minix (http://www.minix3.org/) and Weenix (http://weenix.cs.brown.edu/mediawiki/index.php/Weenix) and develop reasonably sized interesting modules for them
3. Study the Android open source operating system for mobile devices (http://source.android.com/) and develop / modify some modules
4. Study any embedded and real-time operating system such as eCos (http://ecos.sourceware.org/) and develop / modify some modules

OUTCOMES:

Upon completion of the course, the students will be able to

1. Develop assigned modules of operating systems design, carrying out the coding, testing, and documentation work involved.
2. Describe team issues and apply suitable methods to resolve them.
3. Demonstrate individual competence in building medium-size operating system components.
4. Demonstrate ethical and professional attributes of a computer engineer.
5. Prepare a suitable plan with clear statements of deliverables, and track the same.
6. Make individual presentations of the work carried out.
7. Prepare well-organized written documents to communicate individual work accomplished.

REFERENCES:

1. Watts S. Humphrey, “Introduction to the Team Software Process”, Addison-Wesley, SEI Series in Software Engineering, 1999.
2. Mukesh Singhal and Niranjan G. Shivaratri, “Advanced Concepts in Operating Systems – Distributed, Database, and Multiprocessor Operating Systems”, Tata McGraw-Hill, 2001.
3. T. W. Doeppner, “Operating Systems in Depth: Design and Programming”, Wiley, 2010.


Page 2: Completed Manual OS Case Study

CASE STUDY 1: The Linux System

History

Linux is a modern, free operating system based on UNIX standards First developed as a small but self-contained kernel in 1991 by Linus Torvalds, with

the major design goal of UNIX compatibility, released as open source Its history has been one of collaboration by many users from all around the world,

corresponding almost exclusively over the Internet It has been designed to run efficiently and reliably on common PC hardware, but also

runs on a variety of other platforms The core Linux operating system kernel is entirely original, but it can run much

existing free UNIX software, resulting in an entire UNIX-compatible operating system free from proprietary code

Linux system has many, varying Linux distributions including the kernel, applications, and management tools

The Linux Kernel

Version 0.01 (May 1991) had no networking, ran only on 80386-compatible Intel processors and on PC hardware, had extremely limited device-driver support, and supported only the Minix file system.

Linux 1.0 (March 1994) included these new features:
o Support for UNIX's standard TCP/IP networking protocols
o BSD-compatible socket interface for networking programming
o Device-driver support for running IP over an Ethernet
o Enhanced file system
o Support for a range of SCSI controllers for high-performance disk access
o Extra hardware support

Version 1.2 (March 1995) was the final PC-only Linux kernel.

Kernels with odd version numbers are development kernels; those with even numbers are production kernels.

Linux 2.0

Released in June 1996, 2.0 added two major new capabilities:
o Support for multiple architectures, including a fully 64-bit native Alpha port
o Support for multiprocessor architectures

Other new features included:
o Improved memory-management code
o Improved TCP/IP performance
o Support for internal kernel threads, for handling dependencies between loadable modules, and for automatic loading of modules on demand
o Standardized configuration interface


Page 3: Completed Manual OS Case Study

Available for Motorola 68000-series processors, Sun Sparc systems, and for PC and PowerMac systems

2.4 and 2.6 increased SMP support, added journaling file system, preemptive kernel, 64-bit memory support

3.0 released in 2011, 20th anniversary of Linux, improved virtualization support, new page write-back facility, improved memory management, new Completely Fair Scheduler

The Linux System Linux uses many tools developed as part of Berkeley’s BSD operating system, MIT’s X

Window System, and the Free Software Foundation's GNU project The main system libraries were started by the GNU project, with improvements provided

by the Linux community Linux networking-administration tools were derived from 4.3BSD code; recent BSD

derivatives such as FreeBSD have borrowed code from Linux in return.

The Linux system is maintained by a loose network of developers collaborating over the Internet, with a small number of public ftp sites acting as de facto standard repositories.

The File System Hierarchy Standard document is maintained by the Linux community to ensure compatibility across the various system components.
o Specifies the overall layout of a standard Linux file system, determining under which directory names configuration files, libraries, system binaries, and run-time data files should be stored

Linux Distributions Standard, precompiled sets of packages, or distributions, include the basic Linux system,

system installation and management utilities, and ready-to-install packages of common UNIX tools

The first distributions managed these packages by simply providing a means of unpacking all the files into the appropriate places; modern distributions include advanced package management

Early distributions included SLS and Slackware o Red Hat and Debian are popular distributions from commercial and

noncommercial sources, respectively; others include Canonical and SuSE.

The RPM Package file format permits compatibility among the various Linux distributions.

Linux Licensing

The Linux kernel is distributed under the GNU General Public License (GPL), the terms of which are set out by the Free Software Foundation

o Not public domain, in that not all rights are waived Anyone using Linux, or creating their own derivative of Linux, may not make the derived

product proprietary; software released under the GPL may not be redistributed as a binary-only product

o Can sell distributions, but must offer the source code too


Page 4: Completed Manual OS Case Study

Design Principles Linux is a multiuser, multitasking system with a full set of UNIX-compatible tools Its file system adheres to traditional UNIX semantics, and it fully implements the

standard UNIX networking model.

Main design goals are speed, efficiency, and standardization.

Linux is designed to be compliant with the relevant POSIX documents; at least two Linux distributions have achieved official POSIX certification.
o Supports Pthreads and a subset of POSIX real-time process control

The Linux programming interface adheres to the SVR4 UNIX semantics, rather than to BSD behavior

Components of a Linux System

Like most UNIX implementations, Linux is composed of three main bodies of code; the most important distinction is between the kernel and all other components.

The kernel is responsible for maintaining the important abstractions of the operating system

o Kernel code executes in kernel mode with full access to all the physical resources of the computer

o All kernel code and data structures are kept in the same single address space The system libraries define a standard set of functions through which applications

interact with the kernel, and which implement much of the operating-system functionality that does not need the full privileges of kernel code

The system utilities perform individual specialized management tasks.

User-mode programs are rich and varied, including multiple shells like the bourne-again shell (bash).

Kernel Modules

Sections of kernel code that can be compiled, loaded, and unloaded independent of the rest of the kernel.

A kernel module may typically implement a device driver, a file system, or a networking protocol

The module interface allows third parties to write and distribute, on their own terms, device drivers or file systems that could not be distributed under the GPL.

Kernel modules allow a Linux system to be set up with a standard, minimal kernel, without any extra device drivers built in.


Page 5: Completed Manual OS Case Study

Four components to Linux module support:
o module-management system
o module loader and unloader
o driver-registration system
o conflict-resolution mechanism

Module Management Supports loading modules into memory and letting them talk to the rest of the kernel Module loading is split into two separate sections:

o Managing sections of module code in kernel memory
o Handling symbols that modules are allowed to reference

The module requestor manages loading requested, but currently unloaded, modules; it also regularly queries the kernel to see whether a dynamically loaded module is still in use, and will unload it when it is no longer actively needed

Driver Registration Allows modules to tell the rest of the kernel that a new driver has become available The kernel maintains dynamic tables of all known drivers, and provides a set of routines

to allow drivers to be added to or removed from these tables at any time Registration tables include the following items:

o Device drivers
o File systems
o Network protocols
o Binary formats

Conflict Resolution A mechanism that allows different device drivers to reserve hardware resources and to

protect those resources from accidental use by another driver. The conflict resolution module aims to:

o Prevent modules from clashing over access to hardware resources
o Prevent autoprobes from interfering with existing device drivers
o Resolve conflicts with multiple drivers trying to access the same hardware:
  1. The kernel maintains a list of allocated HW resources
  2. A driver reserves resources with the kernel database first
  3. The reservation request is rejected if the resource is not available
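As a concrete illustration of the reservation steps above, here is a minimal, hedged sketch of how a Linux driver typically asks the kernel's resource database for an I/O port range before touching the hardware. The port range and the name "mydrv" are invented for the example, and error handling is kept to the bare minimum.

/* Hedged sketch: reserve an I/O port range through the kernel's
 * resource database; the range and name below are hypothetical. */
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/module.h>

#define MY_PORT_BASE 0x330   /* hypothetical port range */
#define MY_PORT_LEN  8

static int __init mydrv_init(void)
{
        /* If another driver already owns these ports, the request fails
         * and we refuse to load rather than clash over the hardware. */
        if (!request_region(MY_PORT_BASE, MY_PORT_LEN, "mydrv"))
                return -EBUSY;
        return 0;
}

static void __exit mydrv_exit(void)
{
        release_region(MY_PORT_BASE, MY_PORT_LEN);
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");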

Process Management UNIX process management separates the creation of processes and the running of a new

program into two distinct operations.
o The fork() system call creates a new process
o A new program is run after a call to exec()

Under UNIX, a process encompasses all the information that the operating system must maintain to track the context of a single execution of a single program
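To make the fork()/exec() split concrete, here is a minimal, hedged user-space sketch: the parent creates a child with fork(), and only the child then replaces its program image with exec(); the choice of the ls program is arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* step 1: create a new process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* step 2: the child runs a new program; its PID stays the same */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec failed */
        exit(1);
    }
    waitpid(pid, NULL, 0);            /* the parent waits for the child */
    return 0;
}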


Page 6: Completed Manual OS Case Study

Under Linux, process properties fall into three groups: the process’s identity, environment, and context

Process Identity Process ID (PID) - The unique identifier for the process; used to specify processes to the

operating system when an application makes a system call to signal, modify, or wait for another process

Credentials - Each process must have an associated user ID and one or more group IDs that determine the process’s rights to access system resources and files

Personality - Not traditionally found on UNIX systems, but under Linux each process has an associated personality identifier that can slightly modify the semantics of certain system calls

o Used primarily by emulation libraries to request that system calls be compatible with certain specific flavors of UNIX

Namespace – A specific view of the file system hierarchy
o Most processes share a common namespace and operate on a shared file-system hierarchy
o But each can have a unique file-system hierarchy with its own root directory and set of mounted file systems

Process Environment

The process’s environment is inherited from its parent, and is composed of two null-terminated vectors:

o The argument vector lists the command-line arguments used to invoke the running program; conventionally starts with the name of the program itself.

o The environment vector is a list of “NAME=VALUE” pairs that associates named environment variables with arbitrary textual values.

Passing environment variables among processes and inheriting variables by a process’s children are flexible means of passing information to components of the user-mode system software.

The environment-variable mechanism provides a customization of the operating system that can be set on a per-process basis, rather than being configured for the system as a whole.
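As a small, hedged illustration of the two vectors described above, the sketch below prints the argument vector passed to main() and walks the NULL-terminated environment vector through the global environ variable.

#include <stdio.h>

extern char **environ;    /* the environment vector: "NAME=VALUE" strings */

int main(int argc, char *argv[])
{
    /* The argument vector; argv[0] is conventionally the program name. */
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);

    /* The environment vector is terminated by a NULL pointer. */
    for (char **p = environ; *p != NULL; p++)
        printf("env: %s\n", *p);

    return 0;
}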

Process Context The (constantly changing) state of a running program at any point in time The scheduling context is the most important part of the process context; it is the

information that the scheduler needs to suspend and restart the process The kernel maintains accounting information about the resources currently being

consumed by each process, and the total resources consumed by the process in its lifetime so far

The file table is an array of pointers to kernel file structures


Page 7: Completed Manual OS Case Study

o When making file I/O system calls, processes refer to files by their index into this table, the file descriptor (fd)

Whereas the file table lists the existing open files, the file-system context applies to requests to open new files

The current root and default directories to be used for new file searches are stored here The signal-handler table defines the routine in the process’s address space to be called

when specific signals arrive.

The virtual-memory context of a process describes the full contents of its private address space.

Processes and Threads

Linux uses the same internal representation for processes and threads; a thread is simply a new process that happens to share the same address space as its parent

o Both are called tasks by Linux A distinction is only made when a new thread is created by the clone() system call

o fork() creates a new task with its own entirely new task context
o clone() creates a new task with its own identity, but that is allowed to share the data structures of its parent

Using clone() gives an application fine-grained control over exactly what is shared

between two threads
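The following hedged sketch shows that fine-grained control from user space using the glibc clone() wrapper: the flags ask for a shared address space, open files, and filesystem information, roughly what a thread library would request. The stack size and the worker function are arbitrary choices for the example.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_value;              /* visible to the new task via CLONE_VM */

static int worker(void *arg)
{
    shared_value = 42;                /* writes directly into the parent's memory */
    return 0;
}

int main(void)
{
    const size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack)
        return 1;

    /* Share address space, file descriptors, and filesystem info with the
     * new task; SIGCHLD lets the parent wait for it like a child process. */
    int flags = CLONE_VM | CLONE_FILES | CLONE_FS | SIGCHLD;
    pid_t pid = clone(worker, stack + stack_size, flags, NULL);
    if (pid < 0)
        return 1;

    waitpid(pid, NULL, 0);
    printf("shared_value = %d\n", shared_value);   /* prints 42 */
    free(stack);
    return 0;
}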

Scheduling The job of allocating CPU time to different tasks within an operating system While scheduling is normally thought of as the running and interrupting of processes, in

Linux, scheduling also includes the running of the various kernel tasks Running kernel tasks encompasses both tasks that are requested by a running process and

tasks that execute internally on behalf of a device driver.

As of 2.5, a new scheduling algorithm – preemptive, priority-based, known as O(1)
o Real-time range
o nice value
o Had challenges with interactive performance

2.6 introduced Completely Fair Scheduler (CFS)


Page 8: Completed Manual OS Case Study

CFS

Eliminates the traditional, common idea of the time slice; instead all tasks are allocated a portion of the processor's time.

CFS calculates how long a process should run as a function of the total number of tasks: N runnable tasks means each gets 1/N of the processor's time.

CFS then weights each task with its nice value.
o Smaller nice value -> higher weight (higher priority)

Each task then runs for a time proportional to the task's weight divided by the total weight of all runnable tasks.

The configurable variable target latency is the desired interval during which each task should run at least once.
o Consider the simple case of 2 runnable tasks with equal weight and a target latency of 10 ms – each then runs for 5 ms
o If there are 10 runnable tasks, each runs for 1 ms
o Minimum granularity ensures each run has a reasonable amount of time (which actually violates the fairness idea)

Kernel Synchronization

A request for kernel-mode execution can occur in two ways:
o A running program may request an operating system service, either explicitly via a system call, or implicitly, for example, when a page fault occurs
o A device driver may deliver a hardware interrupt that causes the CPU to start executing a kernel-defined handler for that interrupt

Kernel synchronization requires a framework that will allow the kernel's critical sections to run without interruption by another critical section.

Linux uses two techniques to protect critical sections:
1. Normal kernel code is nonpreemptible (until 2.6) – when a timer interrupt is received while a process is executing a kernel system service routine, the kernel's need_resched flag is set so that the scheduler will run once the system call has completed and control is about to be returned to user mode.
2. The second technique applies to critical sections that occur in interrupt service routines – by using the processor's interrupt-control hardware to disable interrupts during a critical section, the kernel guarantees that it can proceed without the risk of concurrent access to shared data structures.

o The kernel provides spin locks, semaphores, and reader-writer versions of both
o Behavior is modified depending on whether the system has a single processor or multiple processors
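As a hedged sketch of the spin-lock technique mentioned above, the fragment below protects a short critical section in kernel code; the lock and counter names are invented, and spin_lock_irqsave() both takes the lock and disables local interrupts so an interrupt handler cannot enter the same section.

/* Hedged sketch of a short kernel critical section; 'my_lock' and
 * 'shared_count' are hypothetical names, not from the text above. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static int shared_count;

void bump_shared_count(void)
{
        unsigned long flags;

        /* Take the lock and disable local interrupts, so neither another
         * CPU nor an interrupt handler on this CPU can enter concurrently. */
        spin_lock_irqsave(&my_lock, flags);
        shared_count++;
        spin_unlock_irqrestore(&my_lock, flags);
}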


Page 9: Completed Manual OS Case Study

To avoid performance penalties, Linux’s kernel uses a synchronization architecture that allows long critical sections to run without having interrupts disabled for the critical section’s entire duration

Interrupt service routines are separated into a top half and a bottom half.
o The top half is a normal interrupt service routine, and runs with recursive interrupts disabled
o The bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures that bottom halves never interrupt themselves
o This architecture is completed by a mechanism for disabling selected bottom halves while executing normal, foreground kernel code

Interrupt Protection Levels

Each level may be interrupted by code running at a higher level, but will never be interrupted by code running at the same or a lower level

User processes can always be preempted by another process when a time-sharing scheduling interrupt occurs

Symmetric Multiprocessing Linux 2.0 was the first Linux kernel to support SMP hardware; separate processes or

threads can execute in parallel on separate processors Until version 2.2, to preserve the kernel’s nonpreemptible synchronization requirements,

SMP imposes the restriction, via a single kernel spinlock, that only one processor at a time may execute kernel-mode code

Later releases implement more scalability by splitting single spinlock into multiple locks, each protecting a small subset of kernel data structures

Version 3.0 adds even more fine-grained locking, processor affinity, and load-balancing.

Memory Management

Linux’s physical memory-management system deals with allocating and freeing pages, groups of pages, and small blocks of memory

It has additional mechanisms for handling virtual memory, memory mapped into the address space of running processes

Splits memory into four different zones due to hardware characteristics
o Architecture specific, for example on x86:


Page 10: Completed Manual OS Case Study

Managing Physical Memory

The page allocator allocates and frees all physical pages; it can allocate ranges of physically-contiguous pages on request.

The allocator uses a buddy-heap algorithm to keep track of available physical pages.
o Each allocatable memory region is paired with an adjacent partner
o Whenever two allocated partner regions are both freed up they are combined to form a larger region
o If a small memory request cannot be satisfied by allocating an existing small free region, then a larger free region will be subdivided into two partners to satisfy the request

Memory allocations in the Linux kernel occur either statically (drivers reserve a contiguous area of memory during system boot time) or dynamically (via the page allocator)

Also uses slab allocator for kernel memory Page cache and virtual memory system also manage physical memory

o Page cache is kernel’s main cache for files and main mechanism for I/O to block devices

o Page cache stores entire pages of file contents for local and network file I/O

Splitting of Memory in a Buddy Heap

Slab Allocator in Linux
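Referring back to the buddy-heap algorithm described above: because each region is paired with a partner of the same size, a block's buddy can be found by toggling the bit that corresponds to the block size. The user-space sketch below only illustrates that address arithmetic; it is not the kernel's actual allocator.

#include <stdio.h>

/* For a block starting at page 'block' whose size is 2^order pages,
 * the buddy starts at the page number with the size bit flipped. */
static unsigned long buddy_of(unsigned long block, unsigned int order)
{
    return block ^ (1UL << order);
}

int main(void)
{
    /* A 4-page block (order 2) starting at page 8 has its buddy at page 12;
     * if both are free they can be coalesced into an 8-page block at page 8. */
    printf("buddy of page 8  (order 2): %lu\n", buddy_of(8, 2));   /* 12 */
    printf("buddy of page 12 (order 2): %lu\n", buddy_of(12, 2));  /* 8  */
    return 0;
}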


Page 11: Completed Manual OS Case Study

Virtual Memory The VM system maintains the address space visible to each process: It creates pages of

virtual memory on demand, and manages the loading of those pages from disk or their swapping back out to disk as required.

The VM manager maintains two separate views of a process's address space:
o A logical view describing instructions concerning the layout of the address space

The address space consists of a set of non-overlapping regions, each representing a continuous, page-aligned subset of the address space

o A physical view of each address space which is stored in the hardware page tables for the process

Virtual memory regions are characterized by:
o The backing store, which describes from where the pages for a region come; regions are usually backed by a file or by nothing (demand-zero memory)
o The region's reaction to writes (page sharing or copy-on-write)

The kernel creates a new virtual address space:
o When a process runs a new program with the exec() system call
o Upon creation of a new process by the fork() system call

On executing a new program, the process is given a new, completely empty virtual-address space; the program-loading routines populate the address space with virtual-memory regions.

Creating a new process with fork() involves creating a complete copy of the existing process’s virtual address space

o The kernel copies the parent process’s VMA descriptors, then creates a new set of page tables for the child

o The parent’s page tables are copied directly into the child’s, with the reference count of each page covered being incremented

o After the fork, the parent and child share the same physical pages of memory in their address spaces

Swapping and Paging The VM paging system relocates pages of memory from physical memory out to disk when the memory is needed for something else

The VM paging system can be divided into two sections:
o The pageout-policy algorithm decides which pages to write out to disk, and when
o The paging mechanism actually carries out the transfer, and pages data back into physical memory as needed
o Can page out to either a swap device or normal files
o A bitmap used to track used blocks in swap space is kept in physical memory
o The allocator uses a next-fit algorithm to try to write contiguous runs

Kernel Virtual Memory The Linux kernel reserves a constant, architecture-dependent region of the virtual

address space of every process for its own internal use


Page 12: Completed Manual OS Case Study

This kernel virtual-memory area contains two regions:
o A static area that contains page table references to every available physical

page of memory in the system, so that there is a simple translation from physical to virtual addresses when running kernel code

o The remainder of the reserved section is not reserved for any specific purpose; its page-table entries can be modified to point to any other areas of memory

Executing and Loading User Programs Linux maintains a table of functions for loading programs; it gives each function the

opportunity to try loading the given file when an exec system call is made The registration of multiple loader routines allows Linux to support both the ELF and

a.out binary formats Initially, binary-file pages are mapped into virtual memory

o Only when a program tries to access a given page will a page fault result in that page being loaded into physical memory

An ELF-format binary file consists of a header followed by several page-aligned sections
o The ELF loader works by reading the header and mapping the sections of the file

into separate regions of virtual memory

Memory Layout for ELF Programs
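To make the ELF header discussion concrete, here is a small, hedged user-space sketch that reads the header of a binary and prints a few of the fields a loader inspects before mapping the sections; /bin/ls is just a convenient 64-bit example and error handling is minimal.

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    Elf64_Ehdr ehdr;
    int fd = open(argc > 1 ? argv[1] : "/bin/ls", O_RDONLY);

    if (fd < 0 || read(fd, &ehdr, sizeof(ehdr)) != sizeof(ehdr))
        return 1;

    printf("entry point:     %#lx\n", (unsigned long)ehdr.e_entry);
    printf("program headers: %u\n", ehdr.e_phnum);
    printf("section headers: %u\n", ehdr.e_shnum);
    close(fd);
    return 0;
}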

Static and Dynamic Linking A program whose necessary library functions are embedded directly in the program’s

executable binary file is statically linked to its libraries The main disadvantage of static linkage is that every program generated must contain

copies of exactly the same common system library functions Dynamic linking is more efficient in terms of both physical memory and disk-space usage

because it loads the system libraries into memory only once


Page 13: Completed Manual OS Case Study

Linux implements dynamic linking in user mode through a special linker library.
o Every dynamically linked program contains a small statically linked function called when the process starts
o It maps the link library into memory
o The link library determines the dynamic libraries required by the process and the names of variables and functions needed
o It maps the libraries into the middle of virtual memory and resolves references to symbols contained in the libraries
o Shared libraries are compiled to be position-independent code (PIC) so they can be loaded anywhere

File Systems

To the user, Linux’s file system appears as a hierarchical directory tree obeying UNIX semantics

Internally, the kernel hides implementation details and manages the multiple different file systems via an abstraction layer, that is, the virtual file system (VFS)

The Linux VFS is designed around object-oriented principles and is composed of four components:

o A set of definitions that define what a file object is allowed to look like
  The inode object structure represents an individual file
  The file object represents an open file
  The superblock object represents an entire file system
  A dentry object represents an individual directory entry


The Linux VFS is designed around object-oriented principles, with a layer of software to manipulate those objects via a set of operations on the objects.

o For example, for the file object, operations include (from struct file_operations in /usr/include/linux/fs.h):
  int open(. . .) — Open a file
  ssize_t read(. . .) — Read from a file
  ssize_t write(. . .) — Write to a file
  int mmap(. . .) — Memory-map a file
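As a hedged sketch of how a driver plugs its own handlers into the VFS through struct file_operations, the fragment below registers open and read operations for a hypothetical character device; the mydev_* names are made up and the handlers do nothing interesting.

#include <linux/fs.h>
#include <linux/module.h>

static int mydev_open(struct inode *inode, struct file *filp)
{
        return 0;                      /* nothing to set up in this sketch */
}

static ssize_t mydev_read(struct file *filp, char __user *buf,
                          size_t count, loff_t *ppos)
{
        return 0;                      /* always report end-of-file */
}

static const struct file_operations mydev_fops = {
        .owner = THIS_MODULE,
        .open  = mydev_open,
        .read  = mydev_read,
        /* .write, .mmap, etc. would be filled in the same way */
};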

The Linux ext3 File System

ext3 is the standard on-disk file system for Linux.
o Uses a mechanism similar to that of the BSD Fast File System (FFS) for locating data blocks belonging to a specific file
o Supersedes the older extfs and ext2 file systems
o Work is underway on ext4, adding features like extents


Page 14: Completed Manual OS Case Study

o Of course, many other file system choices with Linux distros The main differences between ext2fs and FFS concern their disk allocation policies

o In FFS, the disk is allocated to files in blocks of 8 KB, with blocks being subdivided into fragments of 1 KB to store small files or partially filled blocks at the end of a file

o ext3 does not use fragments; it performs its allocations in smaller units The default block size on ext3 varies as a function of total size of file

system with support for 1, 2, 4 and 8 KB blocks o ext3 uses cluster allocation policies designed to place logically adjacent blocks of

a file into physically adjacent blocks on disk, so that it can submit an I/O request for several disk blocks as a single operation on a block group

o Maintains bit map of free blocks in a block group, searches for free byte to allocate at least 8 blocks at a time

Ext2fs Block-Allocation Policies

Journaling ext3 implements journaling, with file system updates first written to a log file in the

form of transactions
o Once in the log file, they are considered committed
o Over time, log file transactions are replayed over the file system to put the changes in place

On a system crash, some transactions might be in the journal but not yet placed into the file system
o These must be completed once the system recovers


Page 15: Completed Manual OS Case Study

o No other consistency checking is needed after a crash (much faster than older methods)

Improves write performance on hard disks by turning random I/O into sequential I/O The Linux Proc File System

The proc file system does not store data, rather, its contents are computed on demand according to user file I/O requests

proc must implement a directory structure, and the file contents within; it must then define a unique and persistent inode number for each directory and files it contains

o It uses this inode number to identify just what operation is required when a user tries to read from a particular file inode or perform a lookup in a particular directory inode

o When data is read from one of these files, proc collects the appropriate information, formats it into text form and places it into the requesting process’s read buffer
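A tiny, hedged user-space sketch of the on-demand behaviour described above: the program reads /proc/version, whose text does not exist on disk but is generated by the kernel when the read request arrives.

#include <stdio.h>

int main(void)
{
    char line[256];
    /* No stored data here: the kernel formats this text when we read it. */
    FILE *fp = fopen("/proc/version", "r");

    if (!fp)
        return 1;
    while (fgets(line, sizeof(line), fp))
        fputs(line, stdout);
    fclose(fp);
    return 0;
}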

Input and Output The Linux device-oriented file system accesses disk storage through two caches:

o Data is cached in the page cache, which is unified with the virtual memory system
o Metadata is cached in the buffer cache, a separate cache indexed by the physical

disk block Linux splits all devices into three classes:

o block devices allow random access to completely independent, fixed size blocks of data

o character devices include most other devices; they don’t need to support the functionality of regular files

o network devices are interfaced via the kernel’s networking subsystem Block Devices

Provide the main interface to all disk devices in a system The block buffer cache serves two main purposes:

o it acts as a pool of buffers for active I/O
o it serves as a cache for completed I/O

The request manager manages the reading and writing of buffer contents to and from a block device driver

Kernel 2.6 introduced Completely Fair Queueing (CFQ)
o Now the default scheduler
o Fundamentally different from elevator algorithms
o Maintains a set of lists, one for each process by default
o Uses the C-SCAN algorithm, with round robin between all outstanding I/O from all processes
o Four blocks from each process are put on at once


Page 16: Completed Manual OS Case Study

Device-Driver Block Structure

Character Devices A device driver which does not offer random access to fixed blocks of data A character device driver must register a set of functions which implement the driver’s

various file I/O operations The kernel performs almost no preprocessing of a file read or write request to a character

device, but simply passes on the request to the device The main exception to this rule is the special subset of character device drivers which

implement terminal devices, for which the kernel maintains a standard interface Line discipline is an interpreter for the information from the terminal device

o The most common line discipline is tty discipline, which glues the terminal’s data stream onto standard input and output streams of user’s running processes, allowing processes to communicate directly with the user’s terminal

o Several processes may be running simultaneously, tty line discipline responsible for attaching and detaching terminal’s input and output from various processes connected to it as processes are suspended or awakened by user

o Other line disciplines are also implemented that have nothing to do with I/O to a user process – e.g., the PPP and SLIP networking protocols

Interprocess Communication Like UNIX, Linux informs processes that an event has occurred via signals There is a limited number of signals, and they cannot carry information: Only the fact

that a signal occurred is available to a process.

The Linux kernel does not use signals to communicate with processes that are running in kernel mode; rather, communication within the kernel is accomplished via scheduling states and wait_queue structures.

Linux also implements System V UNIX semaphores
o A process can wait for a signal or a semaphore


Page 17: Completed Manual OS Case Study

o Semaphores scale better
o Operations on multiple semaphores can be atomic

Passing Data Between Processes The pipe mechanism allows a child process to inherit a communication channel to its

parent; data written to one end of the pipe can be read at the other.

Shared memory offers an extremely fast way of communicating; any data written by

one process to a shared memory region can be read immediately by any other process that has mapped that region into its address space

To obtain synchronization, however, shared memory must be used in conjunction with another Interprocess-communication mechanism
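To ground the pipe mechanism described above, here is a minimal, hedged sketch: the parent creates a pipe, fork() lets the child inherit both ends, and a short message travels from parent to child; the message text is arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[32];

    if (pipe(fds) < 0)                 /* fds[0] = read end, fds[1] = write end */
        return 1;

    if (fork() == 0) {                 /* child: inherits both pipe ends */
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        _exit(0);
    }

    close(fds[0]);                     /* parent: write a message and wait */
    write(fds[1], "hello", strlen("hello"));
    close(fds[1]);
    wait(NULL);
    return 0;
}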

Network Structure

Networking is a key area of functionality for Linux
o It supports the standard Internet protocols for UNIX to UNIX communications
o It also implements protocols native to non-UNIX operating systems, in particular, protocols used on PC networks, such as Appletalk and IPX

Internally, networking in the Linux kernel is implemented by three layers of software:
o The socket interface
o Protocol drivers
o Network device drivers

The most important set of protocols in the Linux networking system is the Internet protocol suite
o It implements routing between different hosts anywhere on the network
o On top of the routing protocol are built the UDP, TCP and ICMP protocols

Packets also pass to firewall management for filtering based on firewall chains of rules.

Security

The pluggable authentication modules (PAM) system is available under Linux PAM is based on a shared library that can be used by any system component that needs to

authenticate users Access control under UNIX systems, including Linux, is performed through the use of

unique numeric identifiers (uid and gid) Access control is performed by assigning objects a protections mask, which specifies

which access modes—read, write, or execute—are to be granted to processes with owner, group, or world access

Linux augments the standard UNIX setuid mechanism in two ways:
o It implements the POSIX specification's saved user-id mechanism, which allows a process to repeatedly drop and reacquire its effective uid
o It has added a process characteristic that grants just a subset of the rights of the effective uid


Page 18: Completed Manual OS Case Study

Linux provides another mechanism that allows a client to selectively pass access to a single file to some server process without granting it any other privileges

Loadable Kernel Module (LKM)

Simpler hello.c

Lkmpg gives an example of the world's simplest LKM, hello-1.c. But it is not as simple as it could be and depends on your having kernel messaging set up a certain way on your system to see it work. Finally, the program requires you to include -D options on your compile command to work, because it does not define some macros in the source code, where the definitions belong.

Here is an improved world's simplest LKM, hello.c.

/* hello.c
 * "Hello, world" - the loadable kernel module version.
 * Compile this with
 *   gcc -c hello.c -Wall
 */

/* Declare what kind of code we want from the header files */
#define __KERNEL__         /* We're part of the kernel */
#define MODULE             /* Not a permanent part, though. */

/* Standard headers for LKMs */
#include <linux/modversions.h>
#include <linux/module.h>
#include <linux/tty.h>      /* console_print() interface */

/* Initialize the LKM */
int init_module()
{
    console_print("Hello, world - this is the kernel speaking\n");
    /* More normal is printk(), but there's less that can go wrong with
       console_print(), so let's start simple. */

    /* If we return a non zero value, it means that
     * init_module failed and the LKM can't be loaded */
    return 0;
}

/* Cleanup - undo whatever init_module did */
void cleanup_module()


Page 19: Completed Manual OS Case Study

{
    console_print("Short is the life of an LKM\n");
}

Compile this with the simple command

$ gcc -c -Wall -nostdinc -I /usr/src/linux/include hello.c

The -I above assumes that you have the source code from which your base kernel (the base kernel of the kernel into which you hope to load hello.c) was built in the conventional spot, /usr/src/linux. If you're masochistic enough to be using symbol versioning in your base kernel, then you better have run 'make dep' on that kernel source too, because that's what builds the .ver files that change the names of all your symbols.

But note that it's reasonably common not to have the kernel headers installed there, and often, the wrong headers are installed there. When you use a kernel that you loaded from a distribution CD, you often have to separately load the headers for it. To be safe, if you're playing with compiling LKMs, you really should compile your own kernel, so you know exactly what you're working with and can be absolutely sure you're working with matching header files.

The -nostdinc option isn't strictly necessary, but is the right thing to do. It will keep you out of trouble and also remind you that the services of the standard C library, which you may have melded in your mind with C itself, are not available to kernel code. -nostdinc says not to include "standard" directories in the include file search path. This means, most notably, /usr/include.

The -c option says you just want to create an object (.o) file, as opposed to gcc's default which is to create the object file, then link it with a few other standard object files to create something suitable for exec'ing in a user process. As you will not be exec'ing this module but rather adding it to the kernel, that link phase would be entirely inappropriate.

-Wall (which makes the compiler warn you about lots of kinds of questionable code) is obviously not necessary, but this program should not generate any warnings. If it does, you need to fix something.

Using the Kernel Build System

Lkmpg contains fine instructions for building (compiling) an LKM (except that the __KERNEL__ macro and usually the MODULE macro should be defined in the source code instead of with -D compiler options as Lkmpg suggests). But it deserves mention that some Linux kernel programmers believe that the only right way to build an LKM is to add it to a copy of the complete Linux source tree and build it with the existing Linux make files just like the LKMs that are part of Linux.


Page 20: Completed Manual OS Case Study

There are advantages to this. The biggest one is that when Linux programmers change the way LKMs interface with the rest of the kernel in a way that affects how you build an LKM, you're covered.

The Essential Bits and Pieces

There's a shopping list of what you'll need on your system, and here it is: The version of your currently running kernel, available via uname -r, as in:

$ uname -r
2.6.29.4-167.fc11.x86_64
$

(Actually, you don't technically need that bit of information, so much as you just need to know what command to run to get it. You'll see why shortly.)

The standard development packages such as gcc, binutils, and so on.

The package of module utilities containing insmod, rmmod, and so on

A kernel source tree to build against, and this might require a bit more explanation. So let's do that.

This part's important so let's spend a couple minutes here. In order to compile a kernel module, you need at least part of a kernel source tree against which to compile. That's because when you write your module, all of the preprocessor #include statements you use do not refer to your normal user space header files. Rather, they refer to the kernel space header files found in the kernel source tree so, one way or another, you have to have the relevant portion of some kernel tree available to build against.

While you can get fancy and download your own kernel source tree for this, a simpler and easier solution (for now) is to install the official kernel development package that matches your running kernel. That kernel development package normally installs, under /usr/src/kernels, just enough of the source tree to contain the necessary header files and build infrastructure, and little else. (In Fedora, this would be the kernel-devel package. Under other distros, your mileage may vary, as they say.)

Finally, once that package is installed, you'll have to pass its directory location to your build step. You can either make a note of the actual directory name, or you can take advantage of the fact that it's normally available under the /lib/modules directory, by way of a symlink. On this Fedora 11 system:

$ ls -l /lib/modules/`uname -r` ... build ->../../../usr/src/kernels/2.6.29.4-167.fc11.x86_64

In other words, if that symlink exists, any time I need the location of the kernel source tree corresponding to the currently running kernel, I can always use the expression /lib/modules/`uname -r`/build, knowing it will keep up with any kernel upgrades. And that's exactly what we're going to do.


Page 21: Completed Manual OS Case Study

And now that we have our building blocks in place, on to writing our first module.

"Hello, Kernel!"

And without further ado, your first loadable module:

/* Module source file 'hi.c'. */

#include <linux/module.h>   // for all modules
#include <linux/init.h>     // for entry/exit macros
#include <linux/kernel.h>   // for printk priority macros
#include <asm/current.h>    // process information, just for fun
#include <linux/sched.h>    // for "struct task_struct"

static int hi(void)
{
    printk(KERN_INFO "hi module being loaded.\n");
    printk(KERN_INFO "User space process is '%s'\n", current->comm);
    printk(KERN_INFO "User space PID is %i\n", current->pid);
    return 0;       // to show a successful load
}

static void bye(void)
{
    printk(KERN_INFO "hi module being unloaded.\n");
}

module_init(hi);    // what's called upon loading
module_exit(bye);   // what's called upon unloading

MODULE_AUTHOR("Robert P. J. Day");
MODULE_LICENSE("Dual BSD/GPL");

MODULE_DESCRIPTION("You have to start somewhere.");

Some notes about the above:

Technically, I didn't need to print anything upon module insertion or removal but, without some feedback, loading and unloading that module would be stultifyingly boring.

Always return zero from the init routine to signify a successful load.

Pick a valid license for your module, or you'll end up "tainting" the kernel--something we'll get into in the next article.

No, there is no comma after the log level in a printk statement. That's a common mistake. Don't make it.


Page 22: Completed Manual OS Case Study

So there's our first module. Let's build it.

The Makefile

Once again, let's get right at the Makefile we'll use to compile our module:

ifeq ($(KERNELRELEASE),)

KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD       := $(shell pwd)

.PHONY: build clean

build:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules

clean:
	rm -rf *.o *~ core .depend .*.cmd *.ko *.mod.c

else

$(info Building with KERNELRELEASE = ${KERNELRELEASE})

obj-m := hi.o

endif

For now, don't ask--just run it and see the results:

$ make build
make -C /lib/modules/2.6.29.4-167.fc11.x86_64/build M=/home/rpjday/lf/1 modules
make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.x86_64'
Building with KERNELRELEASE = 2.6.29.4-167.fc11.x86_64
  CC [M]  /home/rpjday/lf/1/hi.o
  Building modules, stage 2.
Building with KERNELRELEASE = 2.6.29.4-167.fc11.x86_64
  MODPOST 1 modules
  CC      /home/rpjday/lf/1/hi.mod.o
  LD [M]  /home/rpjday/lf/1/hi.ko
make[1]: Leaving directory `/usr/src/kernels/2.6.29.4-167.fc11.x86_64'
$ ls -l hi.ko
-rw-rw-r--. 1 rpjday rpjday 126716 2009-06-24 19:18 hi.ko

$

And there it is--our loadable module hi.ko, ready to be loaded. But what was that Makefile all about?


Page 23: Completed Manual OS Case Study

To make a long story very short, that's a two-part Makefile. When you run make to compile your module, the Makefile immediately realizes that you're still in your source directory, at which point the make wanders off to the kernel source tree that you've identified, where it finds all of the necessary build infrastructure and kernel headers and so on, which it then uses to come back to compile your module.

We might come back some day and take a closer look at that. For now, take my word for it. If you got a hi.ko module out of it, you're good. And we're almost done here. But first ...

Poking at that Module

If you want to examine your newly-built module while it's just sitting there, no sweat--there's the modinfo command:

$ modinfo hi.ko
filename:       hi.ko
description:    You have to start somewhere.
license:        Dual BSD/GPL
author:         Robert P. J. Day
srcversion:     658D8123B9EE52CF16981C4
depends:
vermagic:       2.6.29.4-167.fc11.x86_64 SMP mod_unload

Which, mercifully, brings us to...

Loading and Unloading the Module

And assuming you managed to score root privilege, away we go:

# insmod hi.ko
# lsmod
Module                  Size  Used by
hi                      1792  0        <--- oh, look!
... snip ...
# rmmod hi
#

But where did all your printk output go? Kernel programming rule one: you don't normally interact with user space, so don't expect to see print output coming back to your terminal. Instead, our output would have been directed to the standard syslog messages log file (in our case, /var/log/messages), so if we'd been keeping an eye on that file we would have seen:

Jun 24 19:33:01 localhost kernel: hi module being loaded.
Jun 24 19:33:01 localhost kernel: User space process is 'insmod'
Jun 24 19:33:01 localhost kernel: User space PID is 15359
Jun 24 19:33:03 localhost kernel: hi module being unloaded.

Cleaning Up the Mess and Starting Over

If you take one last look at that Makefile, you'll notice that it defines the clean target for removing the results of your build, so toss it all if you want to start over:


Page 24: Completed Manual OS Case Study

$ make clean

Suppose we want to add some extra functionality to the Linux kernel. The first idea that strikes the mind is to enhance the kernel by adding more code to it, compiling the code, and getting the new kernel up. But this process has the following drawbacks, among several others:

The added code adds to the size of kernel permanently.

The whole kernel needs to be compiled again for the changes to get compiled.

This means that the machine needs to be rebooted for the changes to take effect.

The solution to above problems is the concept of LKMs.

LKM stands for Loadable Kernel Module. As the name suggests, LKMs are modules that can be loaded directly into the kernel at run time.

The loadable kernel module overcomes all the above mentioned shortcomings.

The module can be compiled separately

The module can be loaded into the kernel at run time without having to reboot the machine.

The module can be unloaded at any time and hence has no permanent effect on the kernel size.

How to Create LKMs

Let's create a basic loadable kernel module:

#include <linux/module.h>
#include <linux/kernel.h>

int init_module(void)
{
    printk(KERN_INFO "Welcome.....\n");
    return 0;
}

void cleanup_module(void)
{
    printk(KERN_INFO "Bye....\n");
}

So we see that the above code is a basic LKM. The names ‘init_module’ and ‘cleanup_module’ are standard names for an LKM.


Page 25: Completed Manual OS Case Study

If you look closely, you will find that we have used 'printk' instead of 'printf'. This is because this is not normal C programming; it is kernel-level programming, which is a bit different from normal user-level programming.

The headers module.h and kernel.h have to be included to get the code compiled.

How to Compile LKMs

To compile the above LKM, I used the following Makefile:

obj-m += lkm.o

all:
	sudo make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	sudo make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Note that the commands beginning with the keyword 'sudo' above must be indented with one tab from the left. When the above Makefile is run with make, the following output is observed:

make: Entering directory `/usr/src/linux-headers-2.6.32-21-generic'
  CC [M]  /home/himanshu/practice/lkm.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/himanshu/practice/lkm.mod.o
  LD [M]  /home/himanshu/practice/lkm.ko
make: Leaving directory `/usr/src/linux-headers-2.6.32-21-generic'

After the above successful compilation you will find a .ko file in the same directory

where the compilation took place. This .ko file is the module that will be loaded in the kernel.   modinfo utility can be used to fetch the information about this module :

$ modinfo lkm.ko
filename:       lkm.ko
srcversion:     19967CB3EAB7B31E643E006
depends:
vermagic:       2.6.32.11+drm33.2 SMP mod_unload modversions

So we see that the utility 'modinfo' provides some information about this module.

How LKM is Loaded

After a successful compilation and creation of the module, it is now time to insert it into the kernel so that it gets loaded at run time. The insertion of the module can be achieved using the following two utilities:

modprobe


Page 26: Completed Manual OS Case Study

insmod

The difference between the two lies in the fact that 'modprobe' takes care of dependencies: if the module is dependent on some other module, then that module is loaded first and then the main module is loaded. The 'insmod' utility just inserts the module (whose name is specified) into the kernel.

So 'modprobe' is the better utility, but since our module is not dependent on any other module we will use 'insmod' only. To insert the module, the following command is used:

$ sudo insmod ./lkm.ko

If this command does not give any error, then the LKM has been loaded successfully in the kernel. To unload the LKM, the following command is used:

$ sudo rmmod lkm.ko

Again, if this command does not give any error, then the LKM has been unloaded successfully from the kernel. To check that the module was loaded and unloaded correctly we can use the dmesg utility, which gives the last set of logs as logged by the kernel. You'll see the following two lines among all the other logs:

....
[ 4048.333756] Welcome.....
[ 4084.205143] Bye....

If you go back to the code, you will realize that these are the logs from the two functions in the code. So we see that one function was called when 'insmod' was run and the other function was called when 'rmmod' was run. This was just a dummy LKM; in this way, many working LKMs (that carry out meaningful tasks) work inside the Linux kernel.


Page 27: Completed Manual OS Case Study

RESULT:

Thus the case study has been analyzed and a loadable module has been built.

CASE STUDY 2: MINIX 3

The MINIX 3 operating system

Agenda

Introduction
Minix 3 Features
Design Goals
Minix 3 Architecture
Minix 3 Drivers and Servers
Reliability and security
Installation
Conclusion
References

Introduction 

MINIX 3 is a new open-source operating system designed to be highly reliable, flexible, and secure. It is loosely based somewhat on previous versions of MINIX, but is fundamentally different in many key ways. MINIX 1 and 2 were intended as teaching tools; MINIX 3 adds the new goal of being usable as a serious system on resource-limited and embedded computers and for applications requiring high reliability. This new OS is extremely small, with the part that runs in kernel mode under 6000 lines of executable code. The parts that run in user mode are divided into small modules, well insulated from one another.

For example, each device driver runs as a separate user-mode process so a bug in a driver (by far the biggest source of bugs in any operating system), cannot bring down the entire OS. In fact, most of the time when a driver crashes it is automatically replaced without requiring any user intervention, without requiring rebooting, and without affecting running programs. These features, the tiny amount of kernel code, and other aspects greatly enhance system reliability.

MINIX 3 Features

POSIX compliant, Networking with TCP/IP
X Window System, Languages: cc, gcc, g++, perl, python, etc.
Over 650 UNIX programs, Many improvements since V2
Full multiuser and multiprogramming, Device drivers run as user processes


Page 28: Completed Manual OS Case Study

High degree of fault tolerance, Full C source code supplied

Design Goals

The approach that MINIX 3 uses to achieve high reliability is fault isolation. In particular, unlike traditional OSes, where all the code is linked into a single huge binary running in kernel mode, in MINIX 3, only a tiny bit of code runs in kernel mode--about 4000 lines in all (Minix 2). This code handles interrupts, process scheduling, and interprocess communication. The rest of the operating system runs as a collection of user-mode processes, each one encapsulated by the MMU hardware and none of them running as superuser. One of these processes, dubbed the reincarnation server, keeps tabs on all the others and when one of them begins acting sick or crashes, it automatically replaces it by a fresh version. Since many bugs are transient, triggered by unusual timing, in most cases, restarting the faulty component solves the problem and allows the system to repair itself without a reboot and without the user even noticing it. This property is called self healing, and traditional systems do not have it.

Minix 3 Architecture

MINIX 3 is a microkernel based POSIX compliant operating system designed to be highly reliable, flexible, and secure. The approach is based on the ideas of modularity and fault isolation by breaking the system into many self-contained modules. In general the MINIX design is guided by the following principles:

Simplicity: Keep the system as simple as possible so that it is easy to understand and thus more likely to be correct.

Modularity: Split the system into a collection of small, independent modules and therefore prevent failures in one module from indirectly affecting another module.

Least authorization: Reduce the privileges of all modules as far as possible.

Fault tolerance: Design the system in a way that it withstands failures. Detect the

faulty component and replace it, while the system continues running the entire time. The operating system is structured as follows. A minimal kernel provides interrupt

handlers, a mechanism for starting and stopping processes, a scheduler, and interprocess communication. Standard operating system functionality that is usually present in a monolithic kernel is moved to user space, and no longer runs at the highest privilege level. Device drivers, the file system, the network server and high-level memory management run as separate user processes that are encapsulated in their private address space.


Page 29: Completed Manual OS Case Study

Although from the kernel's point of view the server and driver processes are also just user-mode processes, logically they can be structured into three layers. The lowest level of user-mode processes are the device drivers, each one controlling some device: drivers for IDE, floppy, and RAM disks, etc. Above the driver layer are the server processes. These include the VFS server, underlying file system implementations, the process server, the reincarnation server, and others. On top of the servers come the ordinary user processes, including shells, compilers, utilities, and application programs. Figure 1.1 shows the structure of the operating system.

Because the default mode of interprocess communication (IPC) is synchronous calls, deadlocks can occur when two or more processes simultaneously try to communicate and all processes are blocked waiting for one another. Therefore, a deadlock avoidance protocol has been carefully devised that prescribes a partial, top-down message ordering. The message ordering roughly follows the layering that is described above. Deadlock detection is also implemented in the kernel. If a process unexpectedly were to cause a deadlock, the offending request is denied and an error message is returned to the caller.

Recovering from failures is an important reliability feature in MINIX. Servers and drivers are started and guarded by a system process called the reincarnation server. If a guarded process unexpectedly exits or crashes, this is immediately detected -- because the process server notifies the reincarnation server whenever a server or driver terminates -- and the process is automatically restarted. Furthermore, the reincarnation server periodically polls all servers and drivers for their status. If one does not respond correctly within a specified time interval, the reincarnation server kills and restarts the misbehaving server or driver.
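In outline, and only as an illustrative sketch with hypothetical names (not the actual reincarnation server code), the guarding loop behaves roughly as follows:

/* Illustrative sketch of the reincarnation server's job; all names here
 * are hypothetical. */
for (;;) {
    wait_for_event(&ev);                  /* crash notification or poll timer */
    if (ev.type == PROC_EXITED || !replied_to_ping(ev.endpoint)) {
        kill_if_still_running(ev.endpoint);
        restart_from_table(ev.endpoint);  /* start a fresh copy of the binary */
    }
}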

This topic explains how to install MINIX 3. A complete MINIX 3 installation requires a Pentium (or compatible) with at least 16 MB of RAM, 1 GB of free disk space, an IDE CD-ROM and an IDE hard disk. A minimal installation (without the command sources) requires 8 MB of RAM and 50 MB of disk space. Serial ATA, USB, and SCSI disks are not supported at present. For USB CD-ROMs, see the Website: www.minix3.org.

A.1 PREPARATION

If you already have the CD-ROM (e.g., from the book), you can skip steps 1 and 2, but it is wise to check www.minix3.org to see if a newer version is available. If you want to run MINIX 3 on a simulator instead of natively, see Section A.5 first. If you do not have an IDE CD-ROM, either get the special USB CD-ROM boot image or use a simulator.

1. Download the MINIX 3 CD-ROM image
Download the MINIX 3 CD-ROM image from the MINIX 3 Website at www.minix3.org.

2. Create a bootable MINIX 3 CD-ROM
Decompress the downloaded file. You will get a CD-ROM image file with extension .iso and this manual. The .iso file is a bit-for-bit CD-ROM image. Burn it to a CD-ROM to make a bootable CD-ROM.

If you are using Easy CD Creator 5, select ''Record CD from CD image'' from the File menu and change the file type from .cif to .iso in the dialog box that appears. Select the image file and click ''Open.'' Then click ''Start Recording.''

If you are using Nero Express 5, choose ‘‘Disc Image or Saved Project’’ and change the type to ‘‘Image Files,’’ select the image file and click ‘‘Open.’’ Select your CD recorder and click on ‘‘Next.’’

If you are running Windows XP and do not have a CD-ROM burning program, take a look at alexfeinman.brinkster.net/isorecorder.htm for a free one and use it to burn the CD image.

3. Determine which Ethernet chip you have
MINIX 3 supports several Ethernet chips for networking over LAN, ADSL, and cable.

These include Intel Pro/100, RealTek 8029 and 8139, AMD LANCE, and several 3Com chips. During setup you will be asked which Ethernet chip you have, if any. Determine that now by looking at your documentation. Alternatively, if you are using Windows, go to the device manager as follows:

Windows 2000: Start > Settings > Control Panel > System > Hardware > Device Manager
Windows XP: Start > Control Panel > System > Hardware > Device Manager

System requires double clicking; the rest are single. Expand the + next to ''Network adapters'' to see what you have. Write it down. If you do not have a supported chip, you can still run MINIX 3, but without Ethernet.

4. Partition your hard disk


You can boot the computer from your CD-ROM if you like and MINIX 3 will start, but to do anything useful, you have to create a partition for it on your hard disk. Before partitioning, be sure to back up your data to an external medium like CD-ROM or DVD as a safety precaution, just in case something goes wrong. Your files are valuable; protect them.

Unless you are sure you are an expert on disk partitioning with much experience, it is strongly suggested that you read the online tutorial on disk partitioning at www.minix3.org/doc/partitions.html. If you already know how to manage partitions, create a contiguous chunk of free disk space of at least 50 MB, or, if you want all the command sources, 1 GB. If you do not know how to manage partitions but have a partitioning program like Partition Magic, use it to create a region of free disk space. Also make sure there is at least one primary partition (i.e., Master Boot Record slot) free. The MINIX 3 setup script will guide you through creating a MINIX partition in the free space, which can be on either the first or second IDE disk.

If you are running Windows 95, 98, ME, or 2000 and your disk consists of a single FAT partition, you can use the presz134.exe program on the CD-ROM (also available at zeleps.com) to reduce its size to leave room for MINIX. In all other cases, please read the online tutorial cited above.

If your disk is larger than 128 GB, the MINIX 3 partition must fall entirely in the first 128 GB (due to the way disk blocks are addressed).

WARNING: If you make a mistake during disk partitioning, you can lose all the data on the disk, so be sure to back it up to CD-ROM or DVD before starting. Disk partitioning requires great care, so proceed with caution.

A.2 BOOTING

By now you should have allocated some free space on your disk. If you have not done so yet, please do it now unless there is an existing partition you are willing to convert to MINIX 3.

1. Boot from the CD-ROM
Insert the CD-ROM into your CD-ROM drive and boot the computer from it. If you have 16 MB of RAM or more, choose ''Regular;'' if you have only 8 MB choose ''Small.'' If the computer boots from the hard disk instead of the CD-ROM, boot again and enter the BIOS setup program to change the order of boot devices, putting the CD-ROM before the hard disk.

2. Login as root
When the login prompt appears, login as root. After a successful login as root, you will see the shell prompt (#). At this point you are running fully operational MINIX 3. If you type

ls /usr/bin | more

you can see what software is available. Hit space to scroll the list. To see what program foo does, type

man foo

The manual pages are also available at www.minix3.org/manpages.

3. Start the setup script
To start the installation of MINIX 3 on the hard disk, type

setup

After this and all other commands, be sure to type ENTER (RETURN). When the installation script ends a screen with a colon, hit ENTER to continue. If the screen suddenly goes blank, press CTRL-F3 to select software scrolling (this should only be needed on very old computers). Note that CTRL-key means: depress the CTRL key and, while holding it down, press ''key''.

A.3 INSTALLING TO THE HARD DISK

These steps correspond to the steps on the screen.

1. Select keyboard type
When you are asked to select your national keyboard, do so. This and other steps have a default choice, shown in square brackets. If you agree with it, just hit ENTER. In most steps, the default is generally a good choice for beginners. The us-swap keyboard interchanges the CAPS LOCK and CTRL keys, as is conventional on UNIX systems.

2. Select your Ethernet chip
You will now be asked which of the available Ethernet drivers you want installed (or none). Please choose one of the options.

3. Basic minimal or full distribution?
If you are tight on disk space, select M for a minimal installation, which includes all the binaries but only the system sources. The minimal option does not install the sources of the commands. 50 MB is enough for a bare-bones system. If you have 1 GB or more, choose F for a full installation.

4. Create or select a partition for MINIX 3
You will first be asked if you are an expert in MINIX 3 disk partitioning. If so, you will be placed in the part program to give you full power to edit the Master Boot Record (and enough rope to hang yourself). If you are not an expert, press ENTER for the default action, which is an automated step-by-step guide to formatting a disk partition for MINIX 3.


Substep 4.1: Select a disk to install MINIX 3
An IDE controller may have up to four disks. The setup script will now look for each one. Just ignore any error messages. When the drives are listed, select one and confirm your choice. If you have two hard disks, decide to install MINIX 3 to the second one, and have trouble booting from it, please see www.minix3.org/doc/using2disks.html for the solution.

Substep 4.2: Select a disk region
Now choose a region to install MINIX 3 into. You have three choices:

(1) Select a free region
(2) Select a partition to overwrite
(3) Delete a partition to free up space and merge with adjacent free space

For choices (1) and (2), type the region number. For (3), type Delete and then give the region number when asked. This region will be overwritten and its previous contents lost forever.

Substep 4.3: Confirm your choices

You have now reached the point of no return. You will be asked if you want to continue. If you do, the data in the selected region will be lost forever. If you are sure, type:

yes

and then ENTER. To exit the setup script without changing the partition table, hit CTRL-C.

5. Reinstall choice
If you chose an existing MINIX 3 partition, in this step you will be offered a choice between a Full install, which erases everything in the partition, and a Reinstall, which does not affect your existing /home partition. This design means that you can put your personal files on /home and reinstall a newer version of MINIX 3 when it is available without losing your personal files.

6. Select the size of /home
The selected partition will be divided into three subpartitions: root, /usr, and /home. The latter is for your own personal files. Specify how much of the partition should be set aside for your files. You will be asked to confirm your choice.

7. Select a block size
Disk block sizes of 1 KB, 2 KB, 4 KB, and 8 KB are supported, but to use a size larger than 4 KB you have to change a constant and recompile the system. If your memory is 16 MB or more, use the default (4 KB); otherwise, use 1 KB.


8. Wait for bad block detection
The setup script will now scan each partition for bad disk blocks. This will take several minutes, possibly 10 minutes or more on a large partition. Please be patient. If you are absolutely certain there are no bad blocks, you can kill each scan by hitting CTRL-C.

9. Wait for files to be copied
When the scan finishes, files will be automatically copied from the CD-ROM to the hard disk. Every file will be announced as it is copied. When the copying is complete, MINIX 3 is installed. Shut the system down by typing

shutdown

Always stop MINIX 3 this way to avoid data loss, as MINIX 3 keeps some files on the RAM disk and only copies them back to the hard disk at shutdown time.

10. Install packages
To start, boot your new MINIX 3 system. For example, if you used controller 0, disk 0, partition 3, type

boot c0d0p3

and log in as root. Under very rare conditions the drive number seen by the BIOS (and used by the boot monitor) may not agree with the one used by MINIX 3. Try the one announced by the setup script first.

The MINIX 3 distribution comes with a large number of software packages. To install them, type

packman

and choose one of the options, depending on whether you want to install all the binaries, all the binaries and sources, or select the packages you want. When you have finished installing packages, exit packman by choosing option 5. If you have installed the X Windows package, you can start it now by typing

xdm

A.4 TESTING

This section tells you how to test your installation, rebuild the system after modifying it, and boot it later. To start, boot your new MINIX 3 system. For example, if you used controller 0, disk 0, partition 3, type

boot c0d0p3

and log in as root. Under very rare conditions the drive number seen by the BIOS (and used by the boot monitor) may not agree with the one used by MINIX 3. Try the one announced by the setup script first. This is a good time to create a root password. See man passwd for help.


1. Compile the test suite
To test MINIX 3, at the command prompt (#) type

cd /usr/src/test
make

and wait until it completes all 40 compilations. Log out by typing CTRL-D.

2. Run the test suite
To test the system, log in as bin (required) and type

cd /usr/src/test
./run

to run the test programs. They should all run correctly but they can take 20 min on a fast machine and over an hour on a slow one. Note: It is necessary to compile the test suite when running as root but execute it as bin in order to see if the setuid bit works correctly.

3. Rebuild the entire operating system
If all the tests work correctly, you can now rebuild the system. Doing so is not necessary since it comes prebuilt, but if you plan to modify the system, you will need to know how to rebuild it. Besides, rebuilding the system is a good test to see if it works. Type

cd /usr/src/tools
make

to see the various options available. Now make a new bootable image by typing

su
make clean
time make image

You just rebuilt the operating system, including all the kernel and user-mode parts. That did not take very long, did it? If you have a legacy floppy disk drive, you can make a bootable floppy for use later by inserting a formatted floppy and typing

make fdboot

When you are asked to complete the path, type:

fd0

This approach does not currently work with USB floppies since there is no MINIX 3 USB floppy disk driver yet. To update the boot image currently installed on the hard disk, type

make hdboot

4. Shut down and reboot the new system
To boot the new system, first shut down by typing

shutdown

This command saves certain files and returns you to the MINIX 3 boot monitor. To get a summary of what the boot monitor can do, while in it, type

help

For more details, see www.minix3.org/manpages/man8/boot.8.html. You can now remove any CD-ROM or floppy disk and turn off the computer.

5. Booting Tomorrow
If you have a legacy floppy disk drive, the simplest way to boot MINIX 3 is by inserting your new boot floppy and turning on the power. It takes only a few seconds. Alternatively, boot from the MINIX 3 CD-ROM, login as bin and type:

shutdown

to get back to the MINIX 3 boot monitor. Now type:

boot c0d0p0

to boot from the operating system image file on controller 0, drive 0, partition 0. Of course, if you put MINIX 3 on drive 0 partition 1, use:

boot c0d0p1

and so on.

A third possibility for booting is to make the MINIX 3 partition the active one, and use the MINIX 3 boot monitor to start MINIX 3 or any other operating system. For details see www.minix3.org/manpages/man8/boot.8.html.

Finally, a fourth option is for you to install a multiboot loader such as LILO or GRUB (www.gnu.org/software/grub). Then you can boot any of your operating systems easily. Discussion of multiboot loaders is beyond the scope of this guide, but there is some information on the subject at www.minix3.org/doc.

A.5 USING A SIMULATOR

A completely different approach to running MINIX 3 is to run it on top of another operating system instead of native on the bare metal. Various virtual machines, simulators, and emulators are available for this purpose. Some of the most popular ones are:

VMware (www.vmware.com)
Bochs (www.bochs.org)
QEMU (www.qemu.org)


See the documentation for each of them. Running a program on a simulator is similar to running it on the actual machine, so you should go back to Section A.1, acquire the latest CD-ROM, and continue from there.

Minix 3 Drivers and Servers

Drivers and servers run as user-mode processes. The powers granted to them are carefully controlled:

o None can execute privileged instructions
o Time slicing, so drivers/servers stuck in an infinite loop can't hang the system
o None can access kernel memory
o None can directly access other address spaces
o Bit map for determining allowed kernel calls
o Bit map for whom each one can send to
o No direct I/O; the list of allowed I/O ports is mediated via the kernel

User-mode servers:

o File server: file system interface to user-space programs
o Process manager: process management (creation/termination); handles signals
o Virtual memory server: mechanism is in the kernel, policy is in the VM server; keeps track of free and used pages; catches and handles page faults
o Data store: small local name server; used to map server names to endpoints; could be used for recoverable drivers
o Information server: used for debug dumps
o Network server: contains the full TCP/IP stack in user space
o X server
o Reincarnation server: parent of all drivers and servers; whenever a server/driver dies, the RS collects it; the RS checks a table for the action to take (e.g., restart it); the RS also pings drivers and servers frequently

Reliability and security

Kernel reliability and security:
o Fewer lines of code means fewer kernel bugs
o No foreign code (e.g., drivers) in the kernel
o Static data structures (no malloc in the kernel)
o Moving bugs to user space reduces their power

IPC reliability and security:
o Fixed-length messages (no buffer overruns)
o Initial rendezvous system was simple
  o No lost messages
  o No buffer management
o Interrupts and messages are unified

Problem: a client sends a message to a server; the server tries to respond, but the client has died; the server can't send the message and thus hangs.
Solution: asynchronous messages.

Driver reliability and security:
o Untrusted code: heavily isolated
o Bugs and viruses cannot spread to other modules
o Cannot touch kernel data structures
o Bad pointers crash only one driver; recoverable
o Infinite loops are detected and the driver is restarted
o Restricted power to do damage (not superuser)

Conclusion 

o Current OSes are bloated and unreliable
o Minix 3 is an attempt at a reliable, secure OS
o The kernel is very small (about 6000 LoC)
o The OS runs as a collection of user-space processes
o Each driver is a separate process
o Each OS component has restricted privileges
o Faulty drivers can be replaced automatically

Example Module: How to Add a New System Call for Minix 3

1. Introduction

Minix 3 has a micro-kernel architecture. The micro-kernel handles interrupts, provides basic mechanisms for process management, implements inter-process communication, and performs process scheduling. The filesystem, process management, networking, and other user services are available from separate servers outside the micro-kernel. The system calls handled by these services are thus processed outside the kernel. The kernel itself supports only a few calls of its own, called system tasks; system tasks are more like a hardware abstraction.


In Minix 3, the servers handle system calls. Adding a new system call consists of two steps: writing a system-call handler and writing a user library function. A system-call handler is a function that is called in response to a user requesting a system call; each system call has one handler. The user library packages the parameters for the system call and calls the handler on the appropriate server. A user always invokes a system call through the library.

The system-call handler should be placed in an appropriate server, which in turn would process a user request by invoking the matching handler. It is important to choose the correct server for the system-call. For instance, if the system call should update filesystem or fproc data-structures, then the system-call handler should be placed in the FS (filesystem) server.

This document illustrates the method for adding a new system call to Minix 3 using an example. We will implement a system-call handler do_printmessage() in the FS server that simply prints the message "I am a system call". However, the method described can be used for adding a handler to any server. We will also add a user library function to call the handler.

2. Creating a System-call Handler

The source code for all servers is located at /usr/src/servers. Each server has a separate directory; the filesystem server (FS) is located at /usr/src/servers/fs. Each server source directory contains, among others, the two files table.c and proto.h. table.c contains the definition of the call_vec table. The call_vec table is an array of function pointers that is indexed by the system-call number: each line assigns the address of a system-call handler function to one entry in the table, and the index of that entry is the system-call number.

PUBLIC _PROTOTYPE (int (*call_vec[]), (void) ) = {
        no_sys,         /* 0 = unused */
        do_exit,        /* 1 = exit   */
        do_fork,        /* 2 = fork   */
        do_read,        /* 3 = read   */
        do_write,       /* 4 = write  */
        do_open,        /* 5 = open   */
        do_close,       /* 6 = close  */
        no_sys,         /* 7 = wait   */
        do_creat,       /* 8 = creat  */

Figure 1: Some entries from /usr/src/servers/fs/table.c

Figure 1 contains a few entries from /usr/src/servers/fs/table.c. The second line of the table assigns the address of the function do_exit to the entry with index 1; that index, 1, is the system-call number for calling the handler do_exit.

        do_unpause,     /* 65 = NPAUSE      */
        no_sys,         /* 66 = unused      */
        do_revive,      /* 67 = REVIVE      */
        no_sys,         /* 68 = TASK_REPLY  */
        no_sys,         /* 69 = unused      */
        no_sys,         /* 70 = unused      */
        no_sys,         /* 71 = si          */
        no_sys,         /* 72 = sigsuspend  */
        no_sys,         /* 73 = sigpending  */
        no_sys,         /* 74 = sigprocmask */

Figure 2: Unused entries

There are a few unused entries. To add a new system call, we need to identify one unused entry. For instance, index 69 contains an unused entry, so we can use slot number 69 for our system-call handler do_printmessage(). To use entry 69, we replace no_sys in that slot with do_printmessage.

        do_revive,          /* 67 = REVIVE     */
        no_sys,             /* 68 = TASK_REPLY */
        do_printmessage,    /* 69 = unused     */
        no_sys,             /* 70 = unused     */
        no_sys,             /* 71 = si         */

Figure 3: Using entry 69

The next step is to declare a prototype of the system-call handler in file /usr/src/servers/fs/proto.h. This file contains the prototypes of all system-call handler functions. Figure 4 contains a few prototype declarations from /usr/src/servers/fs/proto.h. We should add the prototype for the system-call handler to the proto.h file.

_PROTOTYPE( int do_printmessage, (void) );

/* open.c */
_PROTOTYPE( int do_close, (void) );
_PROTOTYPE( int do_creat, (void) );
_PROTOTYPE( int do_lseek, (void) );
_PROTOTYPE( int do_mknod, (void) );
_PROTOTYPE( int do_mkdir, (void) );
_PROTOTYPE( int do_open, (void) );


Figure 4: /usr/src/servers/fs/proto.h

PUBLIC int do_printmessage(void)
{
  printf("I am a system call\n");
  return(OK);
}

Figure 5: Our system-call handler

A few files like misc.c, stadir.c, write.c, and read.c contain the definitions for the system-call handler functions. We could either add our system-call handler to one of these files or have it in a separate file. If we choose to add it in a separate file, we have to make changes in the /usr/src/servers/fs/Makefile accordingly. For our example, we will add the definition of function do_printmessage() to /usr/src/servers/fs/misc.c. After implementing the system-call handler, we can compile the FS server to ensure that our new system-call handler does not contain any errors.

2.1. Compiling the FS Server

Steps for compiling the servers:
1. Go to directory /usr/src/servers/
2. Issue "make image"
3. Issue "make install"

2.2. Calling the System-call Handler Function Directly

Our system-call handler function do_printmessage() has the system-call number 69. We could call the system-handler function directly using the system call _syscall. _syscall takes three parameters: the recipient process, system-call number, and pointer to a message structure.

PUBLIC int _syscall(int who, int syscallnr, register message *msgptr);

In our example, the recipient process is FS, the system-call number is 69, and we do not pass any parameters. Still, we must pass a pointer to an empty message as the third parameter when calling the handler function. We call the handler as shown below.

message m;
_syscall(FS, 69, &m);
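In practice the return value should be checked. Assuming _syscall follows the usual MINIX library convention of returning -1 and setting errno when the server replies with an error (an assumption, not something stated in this manual), a slightly fuller call looks like this:

message m;

if (_syscall(FS, 69, &m) < 0)     /* assumed convention: -1 and errno on error */
    perror("printmessage");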

When the system-call handler needs to receive some parameters, we pass the parameters using the message structure. The message structure is described in figure 6. To use the message structure, the header file lib.h should be included; it contains some #defines that make using the message structure simpler.

typedef struct {int m1i1, m1i2, m1i3; char *m1p1, *m1p2, *m1p3;} mess_1;
typedef struct {int m2i1, m2i2, m2i3; long m2l1, m2l2; char *m2p1;} mess_2;
typedef struct {int m3i1, m3i2; char *m3p1; char m3ca1[M3_STRING];} mess_3;
typedef struct {long m4l1, m4l2, m4l3, m4l4, m4l5;} mess_4;
typedef struct {short m5c1, m5c2; int m5i1, m5i2; long m5l1, m5l2, m5l3;} mess_5;
typedef struct {int m7i1, m7i2, m7i3, m7i4; char *m7p1, *m7p2;} mess_7;
typedef struct {int m8i1, m8i2; char *m8p1, *m8p2, *m8p3, *m8p4;} mess_8;

typedef struct {
  int m_source;                 /* who sent the message */
  int m_type;                   /* what kind of message is it */
  union {
    mess_1 m_m1;
    mess_2 m_m2;
    mess_3 m_m3;
    mess_4 m_m4;
    mess_5 m_m5;
    mess_7 m_m7;
    mess_8 m_m8;
  } m_u;
} message;

/* The following defines provide names for useful members. */
#define m1_i1  m_u.m_m1.m1i1
#define m1_i2  m_u.m_m1.m1i2
#define m1_i3  m_u.m_m1.m1i3
#define m1_p1  m_u.m_m1.m1p1
#define m1_p2  m_u.m_m1.m1p2
#define m1_p3  m_u.m_m1.m1p3
#define m2_i1  m_u.m_m2.m2i1
#define m2_i2  m_u.m_m2.m2i2
#define m2_i3  m_u.m_m2.m2i3
#define m2_l1  m_u.m_m2.m2l1
#define m2_l2  m_u.m_m2.m2l2
#define m2_p1  m_u.m_m2.m2p1
#define m3_i1  m_u.m_m3.m3i1
#define m3_i2  m_u.m_m3.m3i2
#define m3_p1  m_u.m_m3.m3p1
#define m3_ca1 m_u.m_m3.m3ca1
#define m4_l1  m_u.m_m4.m4l1
#define m4_l2  m_u.m_m4.m4l2
#define m4_l3  m_u.m_m4.m4l3
#define m4_l4  m_u.m_m4.m4l4
#define m4_l5  m_u.m_m4.m4l5
#define m5_c1  m_u.m_m5.m5c1
#define m5_c2  m_u.m_m5.m5c2
#define m5_i1  m_u.m_m5.m5i1
#define m5_i2  m_u.m_m5.m5i2

Figure 6: The message structure

Say a system-call handler do_manageusercap needs to receive three integer parameters and has system-call number 58. We need to initialize the three parameters in the message structure and call the system-call handler using that message structure, as shown in figure 7.

message m;

m.m1_i1=45; m.m1_i2=55; m.m1_i3=65;

_syscall(FS,58,&m);

Figure 7: Passing Parameter Using the Message Structure

The FS server has a global variable named “m_in”, which is a message structure. Whenever a system-call arrives at the FS server, m_in would contain the message structure pointed to by the third parameter in the call. We retrieve the three parameters from the m_in message structure in the system-call handler function.

PUBLIC int do_manageusercap(void)
{
  int user_id = m_in.m1_i1;
  int what = m_in.m1_i2;
  int cap_to_process = m_in.m1_i3;

  /* ... use the parameters here ... */
  return(OK);
}

Figure 8: Retrieving Parameters From the Message Structure

3. Creating a User Library Function

A user library function packages the parameters for the system-call handler in the message structure and calls the handler function. First, we use #define to map the system-call number of the handler function to an identifier in the files /usr/src/include/minix/callnr.h and /usr/include/minix/callnr.h.

#define PRINTMESSAGE 69

We implement the library function for the do_printmessage system call in a separate file named _printmessage.c. This file should be placed in the directory /usr/src/lib/posix/.

#include <lib.h>
#include <unistd.h>

PUBLIC int printmessage(void)
{
  message m;
  return(_syscall(FS, PRINTMESSAGE, &m));
}

Figure 9: Library Function Implementation

3.1. Compiling the Library

Steps to compile the new library:
1. Go to the directory /usr/src/lib/posix/
2. Add the name of the new file to /usr/src/lib/posix/Makefile.in
3. Issue the command "make Makefile" (this generates a new makefile with the rules for the new file included)
4. Go to directory /usr/src/
5. Issue the command "make libraries"

These steps will compile and install the updated POSIX library.

3.2. Creating a New Boot Image Using the Updated Servers and Library

We have already compiled and created the binaries for the servers, and now we have the fresh libraries compiled and installed. Next, we need to merge the updated binaries and create a new boot image.

Steps for creating the boot image:
1. Go to directory /usr/src/tools
2. Issue the command "make hdboot"
3. Issue the command "make install"

These steps create a new boot image in the directory /boot/image/. Note down the name of the new boot image. When we shut down and reboot, we should select the new boot image.


In the boot prompt, we can set up the new image for booting using the command image=/boot/image/<name of boot-image>. Then we can issue the command boot to start up using the new boot image.

Now, the new system call is ready to use.

4. Using the New System Call

#include <stdio.h>

int main()
{
    printmessage();
    return 0;
}
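Assuming the program above is saved as, for example, test_printmessage.c (the file name is only an example), it can be compiled and run on the rebuilt system with the native compiler:

cc -o test_printmessage test_printmessage.c
./test_printmessage

Note that the printf() runs inside the FS server, so the message should appear on the system console rather than being returned to the calling program.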

RESULT:


Thus the study of educational operating systems such as Minix and Weenix has been carried out, and reasonably sized, interesting modules have been developed for them.

CASE STUDY 3: ANDROID

Android:

Android is an operating system based on the Linux kernel and designed primarily for touch-screen mobile devices such as smartphones and tablet computers.

The user interface of Android is based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching and reverse pinching, to manipulate on-screen objects. Internal hardware, e.g. accelerometers, gyroscopes and proximity sensors, is used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented. Android allows users to customize their home screens with shortcuts to applications and widgets, which allow users to display live content, such as emails and weather information, directly on the home screen. Applications can further send notifications to the user to inform them of relevant information, such as new emails and text messages.

Version      Code name            Release date        API level   Distribution
4.4          KitKat               October 31, 2013    19          1.4%
4.3.x        Jelly Bean           July 24, 2013       18          7.8%
4.2.x        Jelly Bean           November 13, 2012   17          15.4%
4.1.x        Jelly Bean           July 9, 2012        16          35.9%
4.0.3–4.0.4  Ice Cream Sandwich   December 16, 2011   15          16.9%
3.2          Honeycomb            July 15, 2011       13          0.1%
2.3.3–2.3.7  Gingerbread          February 9, 2011    10          21.2%
2.2          Froyo                May 20, 2010        8           1.3%


History:

2014-02-14: The Android-x86 4.4-RC1 released (kitkat-x86).

2013-12-17: The kitkat-x86 branch is updated to Android 4.4.2 release (kitkat-mr1).

2013-12-10: The kitkat-x86 branch is updated to Android 4.4.1 release.

2013-11-04: The kitkat-x86 branch is ready in git.android-x86.org.

2013-11-01: Google announced Android 4.4, the KitKat release.

2013-07-25: Test build 20130725 of jb-x86 based on Android 4.3 is ready

2013-07-24: Google announced Android 4.3. The source code is released.

2013-06-23: Android-x86 4.0-r1 is released.

2013-06-08: Joe Spencer created a supported device list.

2013-02-28: A new test build of jb-x86 is available for downloading.

2013-01-16: The jb-x86 source tree is ready for downloading.

2013-01-02: Android-x86 at Google code is unbanned.

2013-01-01: Android-x86 was blocked by Google code.

2012-12-25: The first test release of jb-x86 is available for downloading.

2012-11-20: begin the jb-x86 porting based on Android 4.2.

2012-07-15: Android-x86 4.0-RC2 is released.

2012-07-11: Intel contributes x86 Dalvik JIT support to AOSP.

2012-06-26: Dalvik patch for arm translator from BuilDroid is merged.

2012-04-10: The ics-x86 branch is updated to Android 4.0.4.

2012-02-27: Android-x86 4.0-RC1 is released.

2012-01-01: Test build 20120101 is released.

2011-12-25: New testing ics-x86 isos supporting hybrid format are available.

2011-12-24: The git.android-x86.org is back and supports smart http transport.

2011-12-20: The git.android-x86.org is down for maintenance.

2011-12-10: Display issue of Intel i915/i965 for ics-x86 is solved.

2011-12-01: The ics-x86 branch is ready for developers.


2011-11-23: Android-x86 3.2-RC2 (honeycomb-x86) is released.

2011-10-31: New target amd_persimmon is added (contributed by AMD).

2011-10-30: The gingerbread-x86 branch is updated to Android 2.3.7.

2011-08-28: Android-x86 2.3 RC1 (Test build 20110828) is released.

2011-08-08: The gingerbread-x86 branch is updated to Android 2.3.5, hardware acceleration enabled for some targets.

2011-07-04: The ethernet support is added to gingerbread-x86 branch.

2011-06-28: Android-x86 2.2-r2 is released.

2011-05-10: The mouse patch is added to gingerbread-x86 branch.

2011-05-05: The gingerbread-x86 branch is updated to Android 2.3.4, API level 10.

2011-04-20: The gingerbread-x86 branch is updated to Android 2.3.3.

2011-04-08: The froyo-x86 branch is updated to Android 2.2.2 based.

2011-04-02: AMD donates T56N/E1 development boards to Android-x86.org

2011-03-30: The build break in gingerbread-x86 branch has been fixed.

2011-02-10: TegaTech donates three Tega v2 tablets to Android-x86.org.

2011-01-26: Gingerbread-x86 branch is ready to download.

2011-01-13: Android-x86 2.2 is released.

2011-01-01: Test build 20110101 is released.

Android delivers a complete set of software for mobile devices:

Operating system
Middleware
Key mobile applications
Open: breaking down application boundaries
Fast & easy application development

Android OS is built on top of the Linux 2.6 kernel. Linux core functionality:

• Memory management
• Process management
• Networking
• Security settings
• Hardware drivers

Android’s native libraries.

Libc: C standard library
Bionic: a fast, small, license-friendly libc optimized for embedded use
SSL: Secure Sockets Layer
SGL: 2D graphics engine
OpenGL|ES: 3D graphics engine, with hardware support or software simulation for 2D and 3D graphics
Media Framework: codecs offering support for the major audio/video formats
SQLite: database engine
WebKit: web browser engine for fast HTML rendering
FreeType: bitmap and vector font rendering
Surface Manager: composes the window manager with off-screen buffering

Core Libraries

Provides the functionality of the JAVA Programming Language

Dalvik VM

A type of Java Virtual Machine
Register based (not a stack machine like the JVM)
Optimized for low memory requirements
Executes .dex (Dalvik Executable) files instead of .class files
The dx tool converts classes to the .dex format (see the example below)
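For instance, the dx tool shipped with the SDK can be invoked roughly as follows (the paths are placeholders and exact options can vary between SDK versions):

dx --dex --output=classes.dex bin/classes/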


Each Android application:

• runs on its own Process • runs on its own Instance of Dalvik VM • is assigned its own Linux user ID

Important blocks:

o Activity Manager: manages the activity life cycle of applications
o Content Providers: manage data sharing between applications
o Telephony Manager: manages all voice calls; we use the telephony manager if we want to access voice calls in our application
o Location Manager: location management, using GPS or cell towers
o Resource Manager: manages the various types of resources

Intents
o Intent = asynchronous message with or without a designated target
o Like a polymorphic Unix signal, but without a required target
o An Intent's "payload" is held in the Intent object
o Intent filters are specified in the manifest file

Overall Android architecture:


Components

o 1 app = N components
o Apps can use components of other applications
o App processes are started automatically whenever any part is needed
o Ergo: N entry points, not 1, and no single main()
o Components: Activities, Services, Broadcast Receivers, Content Providers

Component lifecycle
o The system automatically starts/stops/kills processes: the entire system behaviour is predicated on low memory
o The system triggers lifecycle callbacks when relevant
o Ergo: you must manage the component lifecycle
o Some components are more complex to manage than others

Development tools
SDK:
o android – manage AVDs and SDK components
o apkbuilder – create .apk packages
o dx – convert .jar to .dex
o adb – debug bridge
o emulator – QEMU-based ARM emulator
o Eclipse with the ADT plugin


NDK: GNU toolchain for native binaries

System Server services include: Entropy Service, Device Policy, Audio Service, Power Manager, Status Bar, Headset Observer, Activity Manager, Clipboard Service, Dock Observer, Telephony Registry, Input Method Service, UI Mode Manager Service, Package Manager, Backup Service, Account Manager, Content Manager, Connectivity Service, Recognition Service, System Content Providers, Throttle Service, Status Bar Icons, Battery Service, Accessibility Manager, Lights Service, Mount Service, ADB Settings Observer, Vibrator Service, Notification Manager, Alarm Manager, Device Storage Monitor, Location Manager, Sensor Service, Search Service, Window Manager, Wallpaper Service, NetStat Service, NetworkManagement Service, AppWidget Service, DiskStats Service, Init Watchdog, DropBox Service, Bluetooth Service.

ActivityManager responsibilities: starting new Activities and Services, fetching Content Providers, Intent broadcasting, OOM adjustment maintenance, Application Not Responding handling, permissions, task management, lifecycle management.

Java Native Interface (JNI): JNI defines naming and coding conventions so that the Java VM can find and call native code. JNI is built into the JVM to provide access to OS I/O and other facilities.
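As a small illustration of the naming convention (the package, class, and method names here are hypothetical), a native method declared in Java as public native String stringFromJNI(); inside the class com.example.myfirstapp.MainActivity could be implemented in C like this:

#include <jni.h>

/* The function name encodes Java_<package with underscores>_<class>_<method>,
 * which is how the VM locates the native implementation. */
JNIEXPORT jstring JNICALL
Java_com_example_myfirstapp_MainActivity_stringFromJNI(JNIEnv *env, jobject thiz)
{
    return (*env)->NewStringUTF(env, "Hello from native code");
}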

Zygote

Android at its core has a process called the "Zygote", which starts up at init. It gets its name from the dictionary definition: "the initial cell formed when a new organism is produced".


This process is a “Warmed-up” process, which means it’s a process that’s been initialized and has all the core libraries linked in.  When you start an application, the Zygote is forked.
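The mechanism behind this is essentially the classic UNIX fork(). The following generic sketch (not Android source code) shows why forking a warmed-up parent is cheap: the child starts with a copy-on-write view of everything the parent has already loaded.

#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

/* Generic illustration only: the parent does its expensive initialization
 * once; every fork()ed child starts with that state already in place. */
int main(void)
{
    /* ...imagine expensive library loading and preinitialization here... */
    pid_t pid = fork();
    if (pid == 0)
        printf("child: starts with the parent's preloaded state\n");
    return 0;
}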

Android Startup & Runtime

Stock AOSP apps (under /packages/apps, /packages/providers and /packages/inputmethods) include: Launcher2, Music, Browser, Calculator, Calendar, Provision, Camera, Settings, Contacts, Email, Gallery, AccountsAndSettings, ApplicationProvider, LatinIME, AlarmClock, Mms, CalendarProvider, OpenWnn, Bluetooth, ContactsProvider, PinyinIME, PackageInstaller, DownloadProvider, Protips, DrmProvider, GoogleContactsProvider, QuickSearchBox, MediaProvider, CertInstaller, TelephonyProvider, SoundRecorder, UserDictionaryProvider, DeskClock, SpeechRecorder, Stk, VoiceDialer, HTMLViewer.

Creating an Android Project

An Android project contains all the files that comprise the source code for your Android app. The Android SDK tools make it easy to start a new Android project with a set of default project directories and files.

Create a Project with Eclipse

1. Click New in the toolbar.

2. In the window that appears, open the Android folder, select Android Application Project, and click Next.


Figure: The New Android App Project wizard in Eclipse.

3. Fill in the form that appears:
o Application Name is the app name that appears to users. For this project, use "My First App."
o Project Name is the name of your project directory and the name visible in Eclipse.
o Package Name is the package namespace for your app (following the same rules as packages in the Java programming language). Your package name must be unique across all packages installed on the Android system. For this reason, it's generally best if you use a name that begins with the reverse domain name of your organization or publisher entity. For this project, you can use something like "com.example.myfirstapp." However, you cannot publish your app on Google Play using the "com.example" namespace.
o Minimum Required SDK is the lowest version of Android that your app supports, indicated using the API level. To support as many devices as possible, you should set this to the lowest version available that allows your app to provide its core feature set. If any feature of your app is possible only on newer versions of Android and it's not critical to the app's core feature set, you can enable the feature only when running on the versions that support it (as discussed in Supporting Different Platform Versions). Leave this set to the default value for this project.
o Target SDK indicates the highest version of Android (also using the API level) with which you have tested your application. As new versions of Android become available, you should test your app on the new version and update this value to match the latest API level in order to take advantage of new platform features.


o Compile With is the platform version against which you will compile your app. By default, this is set to the latest version of Android available in your SDK. (It should be Android 4.1 or greater; if you don't have such a version available, you must install one using the SDK Manager.) You can still build your app to support older versions, but setting the build target to the latest version allows you to enable new features and optimize your app for a great user experience on the latest devices.
o Theme specifies the Android UI style to apply for your app. You can leave this alone.

Click Next.

4. On the next screen to configure the project, leave the default selections and click Next.

5. The next screen can help you create a launcher icon for your app. You can customize the icon in several ways and the tool generates an icon for all screen densities. Before you publish your app, you should be sure your icon meets the specifications defined in the Iconography design guide. Click Next.

6. Now you can select an activity template from which to begin building your app. For this project, select BlankActivity and click Next.

7. Leave all the details for the activity in their default state and click Finish.

Create a Project with Command Line Tools

If you're not using the Eclipse IDE with the ADT plugin, you can instead create your project using the SDK tools from a command line:

1. Change directories into the Android SDK’s tools/ path.

2. Execute:

android list targets

This prints a list of the available Android platforms that you've downloaded for your SDK. Find the platform against which you want to compile your app. Make a note of the target id. We recommend that you select the highest version possible. You can still build your app to support older versions, but setting the build target to the latest version allows you to optimize your app for the latest devices. If you don't see any targets listed, you need to install some using the Android SDK Manager tool. See Adding Platforms and Packages.

3. Execute:

android create project --target <target-id> --name MyFirstApp \
--path <path-to-workspace>/MyFirstApp --activity MainActivity \
--package com.example.myfirstapp


Replace <target-id> with an id from the list of targets (from the previous step) and replace <path-to-workspace> with the location in which you want to save your Android projects.

Tip: Add the platform-tools/ as well as the tools/ directory to your PATH environment variable.

Running Your App

How you run your app depends on two things: whether you have a real Android-powered device and whether you're using Eclipse. This lesson shows you how to install and run your app on a real device and on the Android emulator, and in both cases with either Eclipse or the command line tools.

Before you run your app, you should be aware of a few directories and files in the Android project:

AndroidManifest.xml

The manifest file describes the fundamental characteristics of the app and defines each of its components. You'll learn about various declarations in this file as you read more training classes.

One of the most important elements your manifest should include is the <uses-sdk> element. This declares your app's compatibility with different Android versions using the android:minSdkVersion and android:targetSdkVersion attributes. For your first app, it should look like this:

<manifest xmlns:android="http://schemas.android.com/apk/res/android" ... >
    <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="17" />
    ...
</manifest>

You should always set the android:targetSdkVersion as high as possible and test your app on the corresponding platform version. For more information, read Supporting Different Platform Versions.

src/

Directory for your app's main source files. By default, it includes an Activity class that runs when your app is launched using the app icon.

res/

Contains several sub-directories for app resources. Here are just a few:

drawable-hdpi/


Directory for drawable objects (such as bitmaps) that are designed for high-density (hdpi) screens. Other drawable directories contain assets designed for other screen densities.

layout/

Directory for files that define your app's user interface.

values/

Directory for other various XML files that contain a collection of resources, such as string and color definitions.

When you build and run the default Android app, the default Activity class starts and loads a layout file that says "Hello World." The result is nothing exciting, but it's important that you understand how to run your app before you start developing.

Run on a Real Device

If you have a real Android-powered device, here's how you can install and run your app:

1. Plug in your device to your development machine with a USB cable. If you're developing on Windows, you might need to install the appropriate USB driver for your device. For help installing drivers, see the OEM USB Drivers document.

2. Enable USB debugging on your device.
o On most devices running Android 3.2 or older, you can find the option under Settings > Applications > Development.
o On Android 4.0 and newer, it's in Settings > Developer options.

Note: On Android 4.2 and newer, Developer options is hidden by default. To make it available, go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options.

To run the app from Eclipse:
1. Open one of your project's files and click Run from the toolbar.
2. In the Run as window that appears, select Android Application and click OK.
Eclipse installs the app on your connected device and starts it.

Or to run your app from a command line:

1. Change directories to the root of your Android project and execute:

ant debug

2. Make sure the Android SDK platform-tools/ directory is included in your PATH environment variable, then execute:

adb install bin/MyFirstApp-debug.apk

3. On your device, locate MyFirstActivity and open it.


Run on the Emulator

Whether you're using Eclipse or the command line, to run your app on the emulator you need to first create an Android Virtual Device (AVD). An AVD is a device configuration for the Android emulator that allows you to model different devices.

Figure: The AVD Manager showing a few virtual devices.

To create an AVD:

1. Launch the Android Virtual Device Manager:
   a. In Eclipse, click Android Virtual Device Manager from the toolbar.
   b. From the command line, change directories to <sdk>/tools/ and execute:

android avd

2. In the Android Virtual Device Manager panel, click New.
3. Fill in the details for the AVD. Give it a name, a platform target, an SD card size, and a skin (HVGA is the default).
4. Click Create AVD.
5. Select the new AVD from the Android Virtual Device Manager and click Start.
6. After the emulator boots up, unlock the emulator screen.

To run the app from Eclipse:

1. Open one of your project's files and click Run from the toolbar.
2. In the Run as window that appears, select Android Application and click OK.
Eclipse installs the app on your AVD and starts it.

Or to run your app from the command line:

1. Change directories to the root of your Android project and execute:

ant debug


2. Make sure the Android SDK platform-tools/ directory is included in your PATH environment variable, then execute:

adb install bin/MyFirstApp-debug.apk

3. On the emulator, locate MyFirstActivity and open it.

Example Project: Android Google Maps Tutorial

The Android platform provides easy and tight integration between Android applications and Google Maps. The well-established Google Maps API is used under the hood in order to bring the power of Google Maps to your Android applications. In this tutorial we will see how to incorporate Google Maps into an Android app.

Installing the Google APIs

In order to be able to use Google Maps, the Google APIs have to be present in your SDK. In case the Google APIs are not already installed, you will have to manually install them. This is accomplished by using the Android SDK and AVD Manager.

Launch the manager and choose the “Installed Options” section to see what is already installed and the “Available Packages” to download the additional APIs.

You can find more information about this procedure in the following links: Adding SDK Components Installing the Google APIs Add-On

Setting up an Eclipse project

Now that the appropriate tools are installed, let’s proceed with creating a new Android project in Eclipse. The project I created is named “AndroidGoogleMapsProject” and has the following configuration:

It is important to use the "Google APIs" as the target since this option includes the Google extensions that allow you to use Google Maps. Return to the first step of this tutorial if no such option is available in your configuration. I chose the 1.5 version of the platform since we will not be using any of the latest fancy API stuff.

Google Maps API Key Generation

As you might know if you have used the Google Maps API in the past, a key is required in order to be able to use the API. The process is slightly different for use in Android applications, so let’s see what is required to do.

First, we have to calculate the MD5 fingerprint of the certificate that we will use to sign the final application. This fingerprint will have to be provided to the Google Maps API service so that it can associate the key with your application. Java’s Key and Certificate Management tool named keytool is used for the fingerprint generation.

The keytool executable resides in the %JAVA_HOME%/bin directory for Windows or $JAVA_HOME/bin for Linux/OS X. For example, in my setup, it is installed in the “C:\programs\Java\jdk1.6.0_18\bin” folder.

While developing an Android application, the application is being signed in debug mode. That is, the SDK build tools automatically sign the application using the debug certificate. This is the certificate whose fingerprint we need to calculate. To generate the MD5 fingerprint of the debug certificate we first need to locate the debug keystore. The location of the keystore varies by platform:

Windows Vista: C:\Users\\.android\debug.keystore
Windows XP: C:\Documents and Settings\\.android\debug.keystore
OS X and Linux: ~/.android/debug.keystore

Now that we have located the keystore, we use the keytool executable to get the MD5 fingerprint of the debug certificate by issuing the following command:

keytool -list -alias androiddebugkey \
-keystore .keystore \
-storepass android -keypass android

For example, in my Windows machine I changed directory to the .android folder and I used the following command:

%JAVA_HOME%/bin/keytool -list -alias androiddebugkey -keystore debug.keystore -storepass android -keypass android

Note that this was executed against the debug keystore; you will have to repeat this for the keystore that will be used with the application you are going to create. Additionally, if the application is run on another development environment, with a different Android SDK keystore, the API key will be invalid and Google Maps will not work.

The output would be something like the following:

androiddebugkey, Apr 2, 2010, PrivateKeyEntry,
Certificate fingerprint (MD5): 72:BF:25:C1:AF:4C:C1:2F:34:D9:B1:90:35:XX:XX:XX

This is the fingerprint we have to provide to the Google Maps service. Now we are ready to sign up for a key by visiting the Android Maps API Key Signup page. After we read and accept the terms and conditions, we provide the generated fingerprint as follows:

We generate the API key and we are presented with the following screen:

Creating the Google Maps application

Finally, it's time to write some code. Bookmark the Google APIs Add-On Javadocs for future reference. Integrating Google Maps is quite straightforward and can be achieved by extending the MapActivity class instead of the Activity class that we usually extend. The main work is performed by a MapView, which displays a map with data obtained from the Google Maps service. A MapActivity is actually a base class with code to manage the boring necessities of any activity that displays a MapView. Activity responsibilities include:


Activity lifecycle management, and setup and teardown of services behind a MapView.

To extend from MapActivity we have to implement the isRouteDisplayed method, which denotes whether or not we are displaying any kind of route information, such as a set of driving directions. We will not provide such information, so we just return false there.

In our map activity, we will just take a reference to a MapView. This view will be defined in the layout XML. We will also use the setBuiltInZoomControls method to enable the built-in zoom controls.

Let’s see how our activity looks like so far:

Let’s also see the referenced main.xml layout file:


Do not forget to provide your API key in the relevant field or else Google Maps will not work.

Launching the application

To test the application we will have to use a device that includes the Google APIs. We will use the AVD manager to create a new device with target set to one of the Google APIs and settings like the following:

If we now launch the Eclipse configuration, we will encounter the following exception:

java.lang.ClassNotFoundException: com.javacodegeeks.android.googlemaps.GMapsActivity in loader dalvik.system.PathClassLoader@435988d0

The problem is that we haven’t notified Android that we wish to use the add-on Google APIs which are external to the base API. To do so, we have to use the uses-library element in our Android manifest file, informing Android that we are going to use classes from the com.google.android.maps package.

Additionally, we have to grant internet access to our application by adding the android.permission.INTERNET directive. Here is how our AndroidManifest.xml file looks:


And here is what the application screen looks like:

If you click inside the map, the zoom controls will appear and you will be able to zoom in and out.

Adding map overlays

The next step is to add some custom map overlays. To do so, we can extend the Overlay class, which is a base class representing an overlay that may be displayed on top of a map. Alternatively, we may extend ItemizedOverlay, which is a base class for an Overlay consisting of a list of OverlayItems. Let's see how we can do this (note that the following example is very similar to the Hello Map View article from the Android documentation):


Our class requires an Android Drawable in its constructor, which will be used as a marker. Additionally, the current Context has to be provided. We use an ArrayList to store all the OverlayItems stored in the specific class, so the createItem and size methods are pretty much self-explanatory. The onTap method is called when an item is “tapped” and that could be from a touchscreen tap on an onscreen Item, or from a trackball click on a centered, selected Item. In that method, we just create an AlertDialog and show it to the user. Finally, in the exposed addOverlay method, we add the OverlayItem and invoke the populate method, which is a utility method to perform all processing on a new ItemizedOverlay.

Let’s see how this class can be utilized from our map activity:


We create a new instance of our CustomItemizedOverlay class by using the default Android icon as the Drawable. Then we create a GeoPoint pointing to a predefined location and use that to generate an OverlayItem object. We add the overlay item to our CustomItemizedOverlay class and it magically appears on our map at the predefined point.

Finally, we take a reference to the underlying MapController and use it to point the map at a specific geographical point using the animateTo method and to define the zoom level using the setZoom method.

If we launch the configuration again, we will be presented with a zoomed-in map that includes an overlay marker pointing to JavaCodeGeeks’ home town, Athens, Greece. Clicking on the marker will cause the alert dialog to pop up, displaying our custom message.

RESULT: Thus the Android open source operating system for mobile devices has been studied and a module has been developed.


CASE STUDY 4: eCos

eCos:

eCos is provided as an open source runtime system supported by the GNU open source development tools. Developers have full and unfettered access to all aspects of the runtime system. No parts of it are proprietary or hidden, and you are at liberty to examine, add to, and modify the code as you deem necessary. These rights are granted to you and protected by the eCos license. It also grants you the right to freely develop and distribute applications based on eCos. We welcome all contributions back to eCos such as board ports, device drivers and other components, as this helps the growth and development of eCos, and is of benefit to the entire eCos community.

One of the key technological innovations in eCos is the configuration system. The configuration system allows the application writer to impose their requirements on the run-time components, both in terms of their functionality and implementation, whereas traditionally the operating system has constrained the application's own implementation. Essentially, this enables eCos developers to create their own application-specific operating system and makes eCos suitable for a wide range of embedded uses. Configuration also ensures that the resource footprint of eCos is minimized as all unnecessary functionality and features are removed. The configuration system also presents eCos as a component architecture. This provides a standardized mechanism for component suppliers to extend the functionality of eCos and allows applications to be built from a wide set of optional configurable run-time components. Components can be provided from a variety of sources including the standard eCos release, commercial third party developers and open source contributors.

The royalty-free nature of eCos means that you can develop and deploy your application using the standard eCos release without incurring any royalty charges. In addition, there are no up-front license charges for the eCos runtime source code and associated tools. eCos delivers, without charge, everything necessary for basic embedded applications development.

eCos is designed to be portable to a wide range of target architectures and target platforms including 16, 32, and 64 bit architectures, MPUs, MCUs and DSPs. The eCos kernel, libraries and runtime components are layered on the Hardware Abstraction Layer (HAL), and thus will run on any target once the HAL and relevant device drivers have been ported to the target's processor architecture and board.

Currently eCos supports 13 different target architectures:

68K/ColdFire

ARM (including ARM7TDMI, ARM9TDMI, Cortex-M, StrongARM, XScale)

CalmRISC16 and CalmRISC32 (RedBoot only)


Fujitsu FR-V

Fujitsu FR30

Hitachi H8/300

Intel x86

Matsushita AM3x

MIPS

NEC V8xx

PowerPC

SPARC

SuperH

Support includes many of the popular variants of these architectures and evaluation boards. Many new ports are in development and will be released as they become available.

eCos has been designed to support applications with real-time requirements, providing features such as full preemptability, minimal interrupt latencies, and all the necessary synchronization primitives, scheduling policies, and interrupt handling mechanisms needed for these types of applications. eCos also provides all the functionality required for general embedded application support including device drivers, memory management, exception handling, C and math libraries, etc. In addition to runtime support, the eCos system includes all the tools necessary to develop embedded applications, including eCos software configuration and build tools, and GNU based compilers, assemblers, linkers, debuggers, and simulators.

The following core functionality is provided:

Hardware Abstraction Layer (HAL)

Real-time kernel

o Interrupt handling, Exception handling

o Choice of schedulers, Thread support

o Rich set of synchronization primitives

o Timers, counters and alarms

o Choice of memory allocators

o Debug and instrumentation support

µITRON 3.0 compatible API

POSIX compatible API


ISO C and math libraries

Serial, ethernet, SPI, I2C, framebuffer, CAN, ADC, wallclock and watchdog device drivers

USB slave support

TCP/IP networking stacks

C++ Standard Template Library (uSTL)

GDB debug support

System requirements

The eCos net distribution is available in both Linux and Windows versions. The Linux version is tested under recent versions of the Fedora, openSUSE and Ubuntu distributions for x86 and should work under most Linux variants. The Windows version has been tested under Microsoft Windows 2000 Professional, Windows XP and Windows Vista. It should also work under Windows NT4 with SP6a. The use of eCos under Windows 95/98/ME is no longer supported.

The eCos net distribution is supplied with full support for configuration of eCos on all host platforms via both a graphical configuration tool and a command-line tool. It is intended to be used in conjunction with GNU development tools which are available freely on the net. As a minimum, the gcc compiler, gdb debugger and binutils tools are required to build eCos, link with application code and undertake debugging.

Architecture Index List

ARM, CalmRISC, Cortex-M, FR-V, FR30, H8, IA32, 68K/ColdFire, Matsushita AM3x, MIPS, NEC V8xx, PowerPC, SPARC, SuperH

Devices Index List

Flash devices, Ethernet devices, Serial devices, USB devices, Timekeeping devices

Status Key:

Hardware supported by eCos
RedBoot: Hardware supported by RedBoot
A: Alpha quality
B: Beta quality
O: Obsolete - hardware no longer available
X: Port presently non-functional

Downloading and Installation

These instructions describe how to download and install recent versions of the eCos real-time operating system.

Host support

The eCos net distribution is available in both Linux and Windows versions. The Linux version is tested under recent versions of the Fedora, openSUSE and Ubuntu distributions for x86 and should work under most Linux variants. The Windows version has been tested under Windows 2000 Professional, Windows XP and Windows Vista. It should also work under Microsoft Windows NT 4.0 with SP6a. Please note that eCos is no longer supported under Windows 95/98/ME.

Downloading and installation instructions

Linux

Developers wishing to use the pre-built eCos 3.0 host tools on a 32-bit Linux host (i686) must first ensure that they have libstdc++ v3 (/usr/lib/libstdc++.so.5) installed. Users of Linux distributions which provide a more recent libstdc++ may need to install a libstdc++ v3 compatibility package. Installation of the compatibility package may be achieved as follows:

Fedora i686: yum install compat-libstdc++-33

openSUSE i686: zypper install libstdc++33

Ubuntu 9.10 i686 (and later): dpkg -i libstdc++5_3.3.6-17ubuntu1_i386.deb


Ubuntu 9.04 i686 (and earlier):   apt-get install libstdc++5

Developers working with a 64-bit Linux host (x86_64) should use the above snapshot build and will also need to install 32-bit libraries as follows:

Fedora x86_64: yum install libstdc++.i686

openSUSE 12.x x86_64: zypper install libstdc++46-32bit

openSUSE 11.x x86_64:   zypper install libstdc++45-32bit

Ubuntu x86_64: apt-get install ia32-libs

The Linux-hosted eCos Configuration Tool also requires the GTK+ toolkit version 2.0 or later.

Cygwin

Developers wishing to install eCos on a Windows host must first install a recent version of the Cygwin UNIX emulation system. Full instructions on installing Cygwin for use with eCos are available. The following instructions assume that Cygwin has already been installed (where necessary) and that the reader is familiar with invoking a bash shell.

eCos and Toolchain

The most recent eCos release (eCos 3.0) may be installed using an installation tool which simplifies the downloading and installation of the eCos sources, host tools and documentation. The installation tool can optionally download one or more pre-built GNU cross toolchains (contributed by eCosCentric Limited) for use in conjunction with eCos. At present, toolchains for the following target architectures are available for download in pre-built form:

Architecture                              Target
ARM (ARM7TDMI, ARM9, Cortex-M, XScale)    arm-eabi
ARM (ARM7DI, StrongARM)                   arm-elf
ColdFire                                  m68k-elf
Intel x86                                 i386-elf
MIPS32                                    mipsisa32-elf
PowerPC                                   powerpc-eabi
SuperH                                    sh-elf

Developers targeting one of the other architectures must build a toolchain themselves at present. Full instructions for downloading source code and building a toolchain are available.

We recommend that eCos is installed to /opt/ecos where it will be accessible by all users. This may require installation by a user with suitable privileges. First, download the eCos installation tool by using the following command at a bash prompt:

wget --passive-ftp ftp://ecos.sourceware.org/pub/ecos/ecos-install.tcl

The installation tool may then be invoked as follows:

sh ecos-install.tcl

The installation tool will present a list of mirror sites from which the software may be downloaded. For best results, please select a mirror site in your own geographical region. The tool will then prompt for an installation location. Finally, the tool will present a list of pre-built GNU toolchains available for download. Select each toolchain you wish to download by entering the corresponding number. When all required toolchains have been selected, enter q. Downloading and installation of the software will then commence.

Note: Following installation of eCos, most users will need to replace their eCos host tools with more recent snapshot builds. Download instructions for the most recent snapshot builds are available in the ecos-discuss mailing list archives:

eCos host tools for Cygwin - 120425 snapshot builds
eCos host tools for Linux - 110209 snapshot builds

Windows users should note that POSIX-style paths are relative to the root of their Cygwin installation (typically c:\cygwin) by default. For example, /opt/ecos might be located at c:\cygwin\opt\ecos in the Windows Explorer.


Users may wish to create a shortcut to the eCos Configuration Tool on their desktop. Typically, this may be achieved by dragging the configtool or configtool.exe executable file from the file manager provided by your operating system onto the desktop and dropping it while holding down the shift and ctrl keys. This file is located in the ecos-version/tools/bin directory under the location at which eCos was installed. On Windows hosts, it will be necessary to modify the "Start in" property of the shortcut to specify the Cygwin /bin directory (typically c:\cygwin\bin) as the working directory.

Users who have downloaded eCos previously and now wish to download additional toolchains should re-invoke the eCos installer, specifying the -t switch on the installer command line as follows:

sh ecos-install.tcl -t

Documentation

The eCos Configuration Tool is used to tailor eCos at source level, prior to compilation or assembly, and provides a configuration file and a set of files used to build user applications. The sources and other files used for building a configuration are provided in a component repository, which is loaded when the eCos Configuration Tool is invoked. The component repository includes a set of files defining the structure of relationships between the Configuration Tool and other components, written in a Component Definition Language (CDL). For a description of the concepts underlying component configuration, see the eCos documentation.

Invoking the eCos Configuration Tool

On Linux

Add the eCos Configuration Tool install directory to your PATH, for example:

export PATH=/opt/ecos/ecos<version>/bin:$PATH

You may run configtool with zero, one or two arguments. You can specify the eCos repository location, and/or an eCos save file (extension .ecc) on the command line. The ordering of these two arguments is not significant. For example:

configtool /opt/ecos/ecos<version>/packages myfile.ecc

On Windows

There are two ways in which to invoke the eCos Configuration Tool:

from the desktop explorer or program set up at installation time (by default Start ->  Programs -> eCos -> Configuration Tool ).


type (at a command prompt or in the Start menu’s Run item): <foldername>\ConfigTool.exe where <foldername> is the full path of the directory in which you installed the eCos Configuration Tool.

The Configuration Tool will be displayed

You may run configtool with zero, one or two arguments. You can specify the eCos repository location, and/or an eCos save file (extension .ecc) on the command line. The ordering of these two arguments is not significant. For example:

configtool "c:\Program Files\eCos\packages" myfile.ecc

If you invoke the configuration tool from the command line with --help, you will see this output:

Usage: eCos Configuration Tool [-h] [-e] [-v] [-c] [input file 1] [input file 2]
  -h --help          displays help on the command line parameters
  -e --edit-only     edit save file only
  -v --version       print version
  -c --compile-help  compile online help only

This summarizes valid parameters and switches. Switches are shown with both short form and long form.

--help shows valid options and parameters, as above.

--edit-only runs the Configuration Tool in a mode that suppresses creation of a build tree, in case you only want to create and edit save files.

--version shows version and build date information, and exits.

--compile-help compiles help contents files from the HTML documentation files that the tool finds in the eCos repository, and exits.

Figure. Configuration Tool


The Component Repository

When you invoke the eCos Configuration Tool, it accesses the Component Repository, a read-only location of configuration information. The eCos Configuration Tool will look for a component repository using (in descending order of preference):

A location specified on the command line

The component repository most recently used by the current user

An eCos distribution under /opt/ecos (under Linux) or a default location set by the installation procedure (under Windows)

User input

The final case above will normally only occur if the previous repository has been moved or (under Windows) installation information stored in the Windows registry has been modified; it will result in a dialog box being displayed that allows you to specify the repository location:

Figure . Repository relocation dialog box

Note that in order to use the eCos Configuration Tool you are obliged to provide a valid repository location.

In the rare event that you subsequently wish to change the component location, select Build->Repository and the above dialog box will then be displayed.

You can check the location of the current repository, the current save file path, and the current hardware template and default package, by selecting Help->Repository Information.... A summary will be displayed.

eCos Configuration Tool Documents

Configuration Save File

eCos configuration settings and other information (such as disabled conflicts) that are set using the eCos Configuration Tool are saved to a file between sessions. By default, when the eCos Configuration Tool is first invoked, it reads information from the Component Registry and displays it in an untitled blank document. You can perform the following operations on a document:

Save the currently active document


Use the “File->Save” menu item or click the Save Document icon on the toolbar; if the current document is unnamed, you will be prompted to supply a name for the configuration save file.

Figure . Save As dialog box

Open an existing document

Select File->Open, or click the Open Document icon on the toolbar. You will be prompted to supply a name for the configuration save file.

Figure Open dialog box

Open a document you have used recently

Click its name at the bottom of the File menu. Documents may also be opened by double-clicking a Configuration Save File in the desktop explorer (Windows only), by invoking the eCos Configuration Tool with the name of a Configuration Save File as a command-line argument, or by creating a shortcut to the eCos Configuration Tool with such an argument (under Windows or a suitable Linux desktop environment).

Create a new blank document based on the Component Registry

Select File->New, or click the New Document icon on the toolbar.

Save to a different file name

Select File->Save As. You will be prompted to supply a new name for the configuration save file.


Build and Install Trees

The locations of the build and install trees are derived from the eCos save file name, as illustrated in the following example:

Save file name:      c:\My eCos\config1.ecc
Install tree folder: c:\My eCos\config1_install
Build tree folder:   c:\My eCos\config1_build

These names are automatically generated from the name of the save file.

Building and Running Sample Applications

The first program you will run is a hello world-style application, then you will run a more complex application that demonstrates the creation of threads and the use of cyg_thread_delay(), and finally you will run one that uses clocks and alarm handlers.

The Makefile depends on an externally defined variable to find the eCos library and header files. This variable is INSTALL_DIR and must be set to the pathname of the install directory.

INSTALL_DIR may either be set in the shell environment or supplied on the command line. To set it in the shell, do the following in a bash shell:

$ export INSTALL_DIR=BASE_DIR/ecos-work/arm_install

You can then run make without any extra parameters to build the examples.

Alternatively, you can do the following:

$ make INSTALL_DIR=BASE_DIR/ecos-work/arm_install

eCos Hello World

The following code is found in the file hello.c in the examples directory:

eCos hello world program listing

/* this is a simple hello world program */
#include <stdio.h>

int main(void)
{
    printf("Hello, eCos world!\n");
    return 0;
}


To compile this or any other program that is not part of the eCos distribution, you can follow the procedures described below. Type this explicit compilation command (assuming your current working directory is also where you built the eCos kernel):

$ TARGET-gcc -g -IBASE_DIR/ecos-work/install/include hello.c -LBASE_DIR/ecos-work/install/lib -Ttarget.ld -nostdlib

The compilation command above contains some standard GCC options (for example, -g enables debugging), as well as some path options (-IBASE_DIR/ecos-work/install/include allows files like cyg/kernel/kapi.h to be found, and -LBASE_DIR/ecos-work/install/lib allows the linker to find the target.ld linker script named by -Ttarget.ld).

The executable program will be called a.out.

You can now run the resulting program using GDB in exactly the same way you ran the test case before. The procedure will be the same, but this time run TARGET-gdb specifying -nw a.out on the command line:

$ TARGET-gdb -nw a.out

For targets other than the synthetic linux target, you should now run the usual GDB commands described earlier. Once this is done, typing the command "continue" at the (gdb) prompt ("run" for simulators) will allow the program to execute and print the string "Hello, eCos world!" on your screen.
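For example, for a board connected over a serial line, a session might look like the following; the device name and baud rate here are assumptions and must be replaced with the values appropriate for your target:

$ TARGET-gdb -nw a.out
(gdb) set remotebaud 38400
(gdb) target remote /dev/ttyS0
(gdb) load
(gdb) continue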

On the synthetic linux target, you may use the "run" command immediately - you do not need to connect to the target, nor use the "load" command.

A Sample Program with Two Threads

Below is a program that uses some of eCos' system calls. It creates two threads, each of which goes into an infinite loop in which it sleeps for a while (using cyg_thread_delay()). This code is found in the file twothreads.c in the examples directory.

eCos two-threaded program listing

#include <cyg/kernel/kapi.h>
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

/* now declare (and allocate space for) some kernel objects,
   like the two threads we will use */
cyg_thread thread_s[2];      /* space for two thread objects */
char stack[2][4096];         /* space for two 4K stacks */

/* now the handles for the threads */
cyg_handle_t simple_threadA, simple_threadB;

/* and now variables for the procedure which is the thread */
cyg_thread_entry_t simple_program;

/* and now a mutex to protect calls to the C library */
cyg_mutex_t cliblock;

/* we install our own startup routine which sets up threads */
void cyg_user_start(void)
{
    printf("Entering twothreads' cyg_user_start() function\n");

    cyg_mutex_init(&cliblock);

    cyg_thread_create(4, simple_program, (cyg_addrword_t) 0,
                      "Thread A", (void *) stack[0], 4096,
                      &simple_threadA, &thread_s[0]);
    cyg_thread_create(4, simple_program, (cyg_addrword_t) 1,
                      "Thread B", (void *) stack[1], 4096,
                      &simple_threadB, &thread_s[1]);

    cyg_thread_resume(simple_threadA);
    cyg_thread_resume(simple_threadB);
}

/* this is a simple program which runs in a thread */
void simple_program(cyg_addrword_t data)
{
    int message = (int) data;
    int delay;

    printf("Beginning execution; thread data is %d\n", message);

    cyg_thread_delay(200);

    for (;;) {
        delay = 200 + (rand() % 50);

        /* note: printf() must be protected by a
           call to cyg_mutex_lock() */
        cyg_mutex_lock(&cliblock); {
            printf("Thread %d: and now a delay of %d clock ticks\n",
                   message, delay);
        }
        cyg_mutex_unlock(&cliblock);
        cyg_thread_delay(delay);
    }
}

When you run the program (by typing continue at the (gdb) prompt) the output should look like this:


Starting program: BASE_DIR/examples/twothreads.exe
Entering twothreads' cyg_user_start() function
Beginning execution; thread data is 0
Beginning execution; thread data is 1
Thread 0: and now a delay of 240 clock ticks
Thread 1: and now a delay of 225 clock ticks
Thread 1: and now a delay of 234 clock ticks
Thread 0: and now a delay of 231 clock ticks
Thread 1: and now a delay of 224 clock ticks
Thread 0: and now a delay of 249 clock ticks
Thread 1: and now a delay of 202 clock ticks
Thread 0: and now a delay of 235 clock ticks

Note: When running in a simulator the delays might be quite long. On a hardware board (where the clock speed is 100 ticks/second) the delays should average to about 2.25 seconds. In simulation, the delay will depend on the speed of the host processor and will almost always be much slower than the actual board. You might want to reduce the delay parameter when running in simulation.

The following figure shows how this multitasking program executes. Note that apart from the thread creation system calls, this program also creates and uses a mutex for synchronization between the printf() calls in the two threads. This is because the C library standard I/O is (by default) configured not to be thread-safe, which means that if more than one thread is using standard I/O they might corrupt each other's output. This is fixed by a mutual exclusion (or mutex) lockout mechanism: a thread does not call printf() until cyg_mutex_lock() has returned, which only happens once the other thread has released the lock with cyg_mutex_unlock().

Figure. Two threads with simple print statements after random delays


Ecosconfig on Windows and Linux Quick Start

As an alternative to using the graphical Configuration Tool, it is possible to configure and build a kernel by editing a configuration file manually and using the ecosconfig command. Users with a Unix background may find this tool more suitable than the GUI tool described in the previous section.

To use the ecosconfig command you need to start a shell. In Windows you need to start a Cygwin bash shell, not a DOS command line. The following instructions assume that the PATH and ECOS_REPOSITORY environment variables have been set up correctly. They also assume Linux usage but apply equally well to Windows running Cygwin.


Before invoking ecosconfig you need to choose a directory in which to work. For the purposes of this tutorial, the default path will be BASE_DIR/ecos-work. Create this directory and change to it by typing:

$ mkdir BASE_DIR/ecos-work
$ cd BASE_DIR/ecos-work

To see what options can be used with ecosconfig, type:

$ ecosconfig --help

The available packages, targets and templates may be listed as follows:

$ ecosconfig list

Here is sample output from ecosconfig showing the usage message.

Example Getting help from ecosconfig

$ ecosconfig --help
Usage: ecosconfig [ qualifier ... ] [ command ]
  commands are:
    list                                      : list repository contents
    new TARGET [ TEMPLATE [ VERSION ] ]       : create a configuration
    target TARGET                             : change the target hardware
    template TEMPLATE [ VERSION ]             : change the template
    add PACKAGE [ PACKAGE ... ]               : add package(s)
    remove PACKAGE [ PACKAGE ... ]            : remove package(s)
    version VERSION PACKAGE [ PACKAGE ... ]   : change version of package(s)
    export FILE                               : export minimal config info
    import FILE                               : import additional config info
    check                                     : check the configuration
    resolve                                   : resolve conflicts
    tree                                      : create a build tree
  qualifiers are:
    --config=FILE                             : the configuration file
    --prefix=DIRECTORY                        : the install prefix
    --srcdir=DIRECTORY                        : the source repository
    --no-resolve                              : disable conflict resolution
    --version                                 : show version and copyright
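As a typical sequence, a configuration can be created and the eCos library built with the following commands, substituting a real target name (as reported by ecosconfig list) for TARGET:

$ ecosconfig new TARGET
$ ecosconfig tree
$ make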

RESULT: Thus the eCos open source operating system has been studied and a module has been developed.
