
Master of Computer Application (MCA) Semester 3

MC0073 System Programming

Assignment Set 1

NAME - KRUSHITHA.V.P

ROLL NO. - 520791371

5. Describe the process of Bootstrapping in the context of Linkers.

ANS: In computing, bootstrapping refers to a process where a simple system activates another, more complicated system that serves the same purpose. It is a solution to the chicken-and-egg problem of starting a certain system without the system already functioning. The term is most often applied to the process of starting up a computer, in which a mechanism is needed to execute the software program that is responsible for executing software programs (the operating system).

Bootstrap loading

In modern computers, the first program the computer runs after a hardware reset is invariably stored in a ROM known as the bootstrap ROM, as in "pulling oneself up by the bootstraps". When the CPU is powered on or reset, it sets its registers to a known state. On x86 systems, for example, the reset sequence jumps to the address 16 bytes below the top of the system's address space. The bootstrap ROM occupies the top 64K of the address space, and the ROM code then starts up the computer. On IBM-compatible x86 systems, the boot ROM code reads the first block of the floppy disk into memory, or, if that fails, the first block of the first hard disk, into memory location zero and jumps to location zero. The program in block zero in turn loads a slightly larger operating system boot program from a known place on the disk into memory, and jumps to that program, which in turn loads in the operating system and starts it.

The operating system cannot be loaded directly, because we cannot fit an operating system loader into 512 bytes. The first-level loader typically is only able to load a single-segment program from a file with a fixed name in the top-level directory of the boot disk. The operating system loader contains more sophisticated code that can read and interpret a configuration file, uncompress a compressed operating system executable, and address large amounts of memory (on an x86 the loader usually runs in real mode, which means that it is tricky to address more than 1 MB of memory). The full operating system can turn on the virtual memory system, load the drivers it needs, and then proceed to run user-level programs.

Many Unix systems use a similar bootstrap process to get user-mode programs running. The kernel creates a process, then stuffs a tiny little program, only a few dozen bytes long, into that process. The tiny program executes a system call that runs /etc/init, the user mode initialization program that in turn runs configuration files and starts the daemons and login programs that a running system needs.
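A minimal sketch in C of the idea behind that tiny program, assuming the usual POSIX exec interface; the real stub is a few dozen bytes of machine code rather than a C program, so this is only an illustration of what it does.

#include <unistd.h>

/* Hypothetical user-mode bootstrap stub: its only job is to ask the kernel
 * to run /etc/init, which then starts the rest of the system. */
int main(void) {
    char *argv[] = { "/etc/init", (char *)0 };
    execv("/etc/init", argv);   /* replace this process with /etc/init */
    return 1;                   /* reached only if the exec call failed */
}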

For an application-level programmer, none of this matters much, but it becomes more interesting if we want to write programs that run on the bare hardware of the machine: then we need to arrange to intercept the bootstrap sequence somewhere and run our program rather than the usual operating system. Some systems make this quite easy (just stick the name of your program in AUTOEXEC.BAT and reboot Windows 95), others make it nearly impossible. It also presents opportunities for customized systems. For example, a single-application system could be built over a Unix kernel by naming the application /etc/init.

Software Bootstrapping

Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor and a simple compiler for a higher-level language, and so on, until one has a graphical IDE and an extremely high-level programming language.

Compiler Bootstrapping

In compiler design, a bootstrap or bootstrapping compiler is a compiler that is written in the target language, or a subset of the language, that it compiles. Examples are GCC, GHC, OCaml, BASIC, PL/I and, more recently, the Mono C# compiler.

6. Describe the procedure for design of a Linker.

ANS:

The relocation requirements of a program are influenced by the addressing structure of the computer system on which it is to execute. Use of a segmented addressing structure reduces the relocation requirements of a program.

Implementation of a Linker for MS-DOS:

We can consider the program written in the assembly language of the Intel 8088, shown below, as an example.

The ASSUME statement declares the segment registers CS and DS to be available for memory addressing. Hence all memory addressing is performed by using suitable displacements from their contents. The translation-time address of A is 0196. In statement 16, a reference to A is assembled as a displacement of 0196 from the contents of the CS register. This avoids the use of an absolute address, hence the instruction is not address sensitive. Now no relocation is needed if segment SAMPLE is to be loaded with address 2000 by a calling program (or by the OS). The effective operand address would be calculated as the contents of the CS register + 0196, which is the correct address 2196. A similar situation exists with the reference to B in statement 17. The reference to B is assembled as a displacement of 0002 from the contents of the DS register. Since the DS register would be loaded with the execution-time address of DATA_HERE, the reference to B would be automatically relocated to the correct address.

Sr. No.   Statement                                     Offset

0001      DATA_HERE   SEGMENT
0002      ABC         DW      25                        0000
0003      B           DW      ?                         0002
          ...
0012      SAMPLE      SEGMENT
0013                  ASSUME  CS:SAMPLE, DS:DATA_HERE
0014                  MOV     AX, DATA_HERE             0000
0015                  MOV     DS, AX                    0003
0016                  JMP     A                         0005
0017                  MOV     AL, B                     0008
          ...
0027      A           MOV     AX, BX                    0196
          ...
0043      SAMPLE      ENDS
0044                  END

Though the use of segment registers reduces the relocation requirements, it does not completely eliminate the need for relocation.

Consider statement 14.

MOV AX, DATA_HERE

which loads the segment base of DATA_HERE into the AX register preparatory to its transfer into the DS register. Since the assembler knows DATA_HERE to be a segment, it makes provision to load the higher-order 16 bits of the address of DATA_HERE into the AX register. However, it does not know the link-time address of DATA_HERE, hence it assembles the MOV instruction in the immediate operand format and puts zeroes in the operand field. It also makes an entry for this instruction in RELOCTAB so that the linker would put the appropriate address in the operand field. Inter-segment calls and jumps are handled in a similar way.

Relocation is somewhat more involved in the case of intra-segment jumps assembled in the FAR format. For example,

FAR_LAB EQU THIS FAR   ; FAR_LAB is a FAR label

JMP FAR_LAB            ; a FAR jump

Here the displacement and the segment base of FAR_LAB are to be put in the JMP instruction itself. The assembler puts the displacement of FAR_LAB in the first two operand bytes of the instruction, and makes a RELOCTAB entry for the third and fourth operand bytes which are to hold the segment base address. A statement like

ADDR_A  DW  OFFSET A

which is an address constant, does not need any relocation since the assembler can itself put the required offset in the operand field. For linking, however, both the segment base address and the offset of the external symbol must be computed by the linker. So there is no reduction in the linking requirements.

Relocation Algorithm:

1. program_linked_origin := linked origin specified in the linker command;

2. For each object module:

   A. t_origin := translated origin of the object module;
      OM_size := size of the object module;

   B. relocation_factor := program_linked_origin - t_origin;

   C. Read the machine language program into work_area.

   D. Read the RELOCTAB of the object module.

   E. For each entry in RELOCTAB:

      i)   translated_addr := address in the RELOCTAB entry;

      ii)  address_in_work_area := address of work_area + translated_addr - t_origin;

      iii) Add relocation_factor to the operand address in the word with the address address_in_work_area.

   F. program_linked_origin := program_linked_origin + OM_size;

The computations performed in the algorithm are along the lines described earlier; the only new action is the computation of the work-area address of the word requiring relocation (step 2(E)(ii)). Step 2(F) increments program_linked_origin so that the next object module would be granted the next available load address.
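A sketch of this relocation loop in C, under assumed record layouts (struct object_module, struct reloctab_entry) and an assumed 32-bit little-endian operand word; a real linker would read these records from its own object module format.

#include <stdint.h>
#include <string.h>

/* Hypothetical record layouts, for illustration only. */
struct reloctab_entry {
    uint32_t translated_addr;          /* address of the word needing relocation */
};

struct object_module {
    uint32_t t_origin;                 /* translated origin of the object module */
    uint32_t size;                     /* OM_size                                */
    const uint8_t *code;               /* machine language program               */
    const struct reloctab_entry *reloctab;
    size_t reloctab_count;
};

/* Performs steps 2(A)-2(F) for one object module and advances
 * program_linked_origin for the next one. */
void relocate_module(uint8_t *work_area, uint32_t *program_linked_origin,
                     const struct object_module *om)
{
    uint32_t relocation_factor = *program_linked_origin - om->t_origin;   /* 2(B) */

    memcpy(work_area, om->code, om->size);                                /* 2(C) */

    for (size_t i = 0; i < om->reloctab_count; i++) {                     /* 2(E) */
        uint32_t translated_addr = om->reloctab[i].translated_addr;       /* i    */
        uint8_t *addr_in_work_area =
            work_area + (translated_addr - om->t_origin);                 /* ii   */

        uint32_t operand;                                                 /* iii  */
        memcpy(&operand, addr_in_work_area, sizeof operand);
        operand += relocation_factor;
        memcpy(addr_in_work_area, &operand, sizeof operand);
    }

    *program_linked_origin += om->size;                                   /* 2(F) */
}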

2. Define the following:

A) Systems Software B) Application Software C) System Programming

D) Von Neumann Architecture

A) Systems software

System software refers to the files and programs that make up your computer's operating system. System files include libraries of functions, system services, drivers for printers and other hardware, system preferences, and other configuration files. The programs that are part of the system software include assemblers, compilers, file management tools, system utilities, and debuggers.

The system software is installed on your computer when you install your operating system. You can update the software by running programs such as "Windows Update" for Windows or "Software Update" for Mac OS X. Unlike application programs, however, system software is not meant to be run by the end user. For example, while you might use your Web browser every day, you probably don't have much use for an assembler program (unless, of course, you are a computer programmer).

Since system software runs at the most basic level of your computer, it is called "low-level" software. It generates the user interface and allows the operating system to interact with the hardware. Fortunately, you don't have to worry about what the system software is doing since it just runs in the background. It's nice to think you are working at a "high-level" anyway.

The most important types of system software are:

The computer BIOS and device firmware, which provide basic functionality to operate and control the hardware connected to or built into the computer.

The operating system (prominent examples being Microsoft Windows, Mac OS X and Linux), which allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It also provides a platform to run high-level system software and application software.

Utility software, which helps to analyze, configure, optimize and maintain the computer.

In some publications, the term system software is also used to designate software development tools (like a compiler, linker or debugger).

Types of system software programs

System software helps in using the operating system and the computer system. It includes diagnostic tools, compilers, servers, windowing systems, utilities, language translators, data communication programs, data management programs and more. The purpose of system software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and such accessory devices as communications, printers, readers, displays, keyboards, etc.

Specific kinds of system software include:

Loaders

Linkers

Utility software

Desktop environment / Graphical user interface

Shells

BIOS

Hypervisors

Boot loaders

If system software is stored on non-volatile memory such as integrated circuits, it is usually termed firmware.

B) Application Software

Application software, also known as applications, is computer software designed to help the user to perform singular or multiple related specific tasks. Examples include Enterprise software, Accounting software, Office suites, Graphics software and media players.

Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, which is not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user.

An application program (sometimes shortened to application) is any program designed to perform a specific function directly for the user or, in some cases, for another application program. Examples of application programs include word processors; database programs; Web browsers; development tools; drawing, paint, and image editing programs; and communication programs. Application programs use the services of the computer's operating system and other supporting programs. The formal requests for services and means of communicating with other programs that a programmer uses in writing an application program is called the application program interface.

There are many types of application software:

An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, OpenOffice.org, and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.

Enterprise software addresses the needs of organization processes and data flow, often in a large distributed environment. (Examples include Financial, Customer Relationship Management, and Supply Chain Management). Note that Departmental Software is a sub-type of Enterprise Software with a focus on smaller organizations or groups within a large organization. (Examples include Travel Expense Management, and IT Helpdesk)

Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include Databases, Email servers, and Network and Security Management)

Information worker software addresses the needs of individuals to create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, documentation tools, analytical, and collaborative. Word processors, spreadsheets, email and blog clients, personal information system, and individual media editors may aid in multiple information worker tasks.

Content access software is software used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include Media Players, Web Browsers, Help browsers, and Games)

Educational software is related to content access software, but has the content and/or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.

Simulation software is computer software for simulating physical or abstract systems for research, training or entertainment purposes.

Media development software addresses the needs of individuals who generate print and electronic media for others to consume, most often in a commercial or educational setting. This includes Graphic Art software, Desktop Publishing software, Multimedia Development software, HTML editors, Digital Animation editors, Digital Audio and Video composition, and many others.

Product engineering software is used in developing hardware and software products. This includes computer aided design (CAD), computer aided engineering (CAE), computer language editing and compiling tools, Integrated Development Environments, and Application Programmer Interfaces.

C) System Programming

System programming (or systems programming) is the activity of programming system software. The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user (e.g. word processor), whereas systems programming aims to produce software which provides services to the computer hardware (e.g. disk defragmenter). It requires a greater degree of hardware awareness.

In system programming, more specifically:

the programmer will make assumptions about the hardware and other properties of the system that the program runs on, and will often exploit those properties (for example by using an algorithm that is known to be efficient when used with specific hardware)

usually a low-level programming language or programming language dialect is used that:

can operate in resource-constrained environments

is very efficient and has little runtime overhead

has a small runtime library, or none at all

allows for direct and "raw" control over memory access and control flow

lets the programmer write parts of the program directly in assembly language

Debugging can be difficult if it is not possible to run the program in a debugger due to resource constraints. Running the program in a simulated environment can be used to reduce this problem.

Systems programming is sufficiently different from application programming that programmers tend to specialize in one or the other.

In system programming, often limited programming facilities are available. The use of automatic garbage collection is not common and debugging is sometimes hard to do. The runtime library, if available at all, is usually far less powerful, and does less error checking. Because of those limitations, monitoring and logging are often used; operating systems may have extremely elaborate logging subsystems.

Implementing certain parts of operating systems and networking requires systems programming (for example, implementing paging (virtual memory) or a device driver for an operating system).

D) Von Neumann Architecture

The von Neumann architecture is a design model for a stored-program digital computer that uses a central processing unit (CPU) and a single separate storage structure ("memory") to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann. Such computers implement a universal Turing machine and have a sequential architecture.

A stored-program digital computer is one that keeps its programmed instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers were an advancement over the program-controlled computers of the 1940s, such as the Colossus and the ENIAC, which were programmed by setting switches and inserting patch leads to route data and control signals between various functional units. In the vast majority of modern computers, the same memory is used for both data and program instructions. The mechanisms for transferring the data and instructions between the CPU and memory are, however, considerably more complex than in the original von Neumann architecture.

The terms "von Neumann architecture" and "stored-program computer" are generally used interchangeably, and that usage is followed in this article.

A von Neumann machine also has a central processing unit (CPU) with one or more registers that hold data that are being operated on. The CPU has a set of built-in operations (its instruction set) that is far richer than with the Turing machine, e.g. adding two binary integers, or branching to another part of a program if the binary integer in some register is equal to zero (conditional branch).

The CPU can interpret the contents of memory either as instructions or as data according to the fetch-execute cycle.
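A minimal sketch of this fetch-execute cycle in C for a made-up accumulator machine; the instruction set (LOAD, ADD, STORE, HALT) and the memory layout are invented for illustration and do not correspond to any real CPU.

#include <stdio.h>

enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void) {
    /* One memory holds both instructions and data (the stored-program idea).
     * Each instruction is a pair: opcode, operand address. */
    int memory[16] = {
        LOAD,  10,   /* 0: acc = memory[10]   */
        ADD,   11,   /* 2: acc += memory[11]  */
        STORE, 12,   /* 4: memory[12] = acc   */
        HALT,  0,    /* 6: stop               */
        0, 0,
        7, 35, 0     /* 10, 11: data; 12: result */
    };
    int pc = 0, acc = 0;

    for (;;) {
        int opcode  = memory[pc];          /* fetch the instruction */
        int operand = memory[pc + 1];
        pc += 2;
        switch (opcode) {                  /* decode and execute    */
        case LOAD:  acc = memory[operand];  break;
        case ADD:   acc += memory[operand]; break;
        case STORE: memory[operand] = acc;  break;
        case HALT:  printf("result = %d\n", memory[12]); return 0;
        }
    }
}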

Von Neumann considered parallel computers but recognized the problems of construction and hence settled for a sequential system. For this reason, parallel computers are sometimes referred to as non-von Neumann architectures.

A von Neumann machine can compute the same class of functions as a universal Turing machine.

3. Explain the following with respect to the design specifications of an Assembler:

A) Data Structures

B) pass1 & pass2 Assembler flow chart

A two-pass assembler performs two sequential scans over the source code:

Pass 1: symbols and literals are defined

Pass 2: object program is generated

Parsing: scanning program lines to pull out op-codes and operands

Pass 1

All symbols are identified and put in ST

All op-codes are translated

Missing symbol values are marked

[Figure: First pass of a two-pass assembler. Flow chart: set LC to the origin; read and parse the next statement; skip comments; on END, go to Pass 2; otherwise determine what kind of statement it is (pseudo-ops EQU, WORD/RESW/RESB, BYTE, or a machine instruction); enter any label into ST; call the translator or place the constant in the machine code; and advance LC by the number of bytes specified in the pseudo-op or instruction.]

Pass 2

- Fills in addresses and data that were unknown during Pass 1.

[Figure 5: Second pass of a two-pass assembler. Flow chart: while more lines remain in the Pass 1 configuration table, get the next line, retrieve the name of the symbol from SSB, get the value of the symbol from ST, compute the location in memory where this value will be placed (starting address + offset), and place the symbol value at that location; when no lines remain, the pass is done.]
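A compressed sketch of the two passes in C, for a made-up one-word-per-statement source format; the toy opcodes, the symbol table layout, and the omission of real opcode encoding are simplifications for illustration only.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct line   { const char *label; const char *opcode; const char *operand; };
struct symbol { const char *name; int value; };

static struct symbol st[32];                    /* ST: the symbol table */
static int st_count;

static int st_lookup(const char *name) {
    for (int i = 0; i < st_count; i++)
        if (strcmp(st[i].name, name) == 0) return st[i].value;
    return -1;                                  /* not found */
}

int main(void) {
    /* Toy source program; every statement occupies exactly one word. */
    struct line src[] = {
        { NULL,   "LOAD", "X"    },
        { NULL,   "JMP",  "DONE" },             /* forward reference, unknown in pass 1 */
        { "X",    "WORD", "5"    },
        { "DONE", "HALT", NULL   },
    };
    int n = (int)(sizeof src / sizeof src[0]);

    /* Pass 1: identify all symbols and enter them in ST with their
     * location-counter (LC) values; operand addresses may still be unknown. */
    int lc = 0;
    for (int i = 0; i < n; i++) {
        if (src[i].label) {
            st[st_count].name  = src[i].label;
            st[st_count].value = lc;
            st_count++;
        }
        lc += 1;                                /* advance LC by the statement size */
    }

    /* Pass 2: generate the object program, filling in the addresses that
     * were unknown during pass 1 from the symbol table. */
    for (int i = 0; i < n; i++) {
        int operand = 0;
        if (src[i].operand) {
            int v = st_lookup(src[i].operand);
            operand = (v >= 0) ? v : atoi(src[i].operand);   /* symbol or literal */
        }
        printf("%04d  %-4s %-4s -> operand %d\n",
               i, src[i].opcode, src[i].operand ? src[i].operand : "", operand);
    }
    return 0;
}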

A) Data Structures

In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.

HYPERLINK "http://en.wikipedia.org/wiki/Data_structure" \l "cite_note-1#cite_note-1" [2]Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for implementation of databases, while compiler implementations usually use hash tables to look up identifiers.

Data structures are used in almost every program or software system. Specific data structures are essential ingredients of many efficient algorithms, and make possible the management of huge amounts of data, such as large databases and internet indexing services. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.

Basic principles

Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address, a bit string that can itself be stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways (as in XOR linking).
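A small C sketch of the two principles just mentioned: an array element is located by address arithmetic on its index, while a linked list stores the address of the next element inside the structure itself (the names used here are purely illustrative).

#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };   /* linked structure */

int main(void) {
    /* Array: the address of a[i] is computed as base address + i * sizeof(int). */
    int a[4] = { 10, 20, 30, 40 };
    printf("a[2] = %d (located by address arithmetic)\n", a[2]);

    /* Linked list: each node stores the address of the next node. */
    struct node *head = NULL;
    for (int i = 3; i >= 0; i--) {
        struct node *n = malloc(sizeof *n);
        n->value = (i + 1) * 10;
        n->next  = head;                         /* address kept inside the structure */
        head = n;
    }
    for (struct node *p = head; p != NULL; ) {
        struct node *next = p->next;
        printf("%d ", p->value);                 /* traversal follows stored addresses */
        free(p);
        p = next;
    }
    printf("\n");
    return 0;
}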

Classification of data structures

Homogeneous / Heterogeneous: In homogeneous data structures all elements are of the same type, e.g., an array. In heterogeneous data structures elements are of different types, e.g., a structure.

Static / Dynamic: In static data structures the size cannot be changed after the initial allocation, as with matrices. In dynamic data structures, like lists, the size can change dynamically.

Linear / Non-linear: Linear data structures maintain a linear relationship between their elements, e.g., array. Non-linear data structures do not maintain any linear relationship between their elements, e.g., in a tree.

Language support

Assembly languages and some low-level languages, such as BCPL, generally lack support for data structures. Many high-level programming languages, on the other hand, have special syntax or other built-in support for certain data structures, such as vectors (one-dimensional arrays) in the C programming language, multi-dimensional arrays in Pascal, linked lists in Common Lisp, and hash tables in Perl and in Python. Many languages also provide basic facilities, such as references and the definition of record data types, that programmers can use to build arbitrarily complex structures.

Most programming languages feature some sort of library mechanism that allows data structure implementations to be reused by different programs. Modern programming languages usually come with standard libraries that implement the most common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and Microsoft's .NET Framework.

Modern languages also generally support modular programming, the separation between the interface of a library module and its implementation. Some provide opaque data types that allow clients to hide implementation details. Object-oriented programming languages, such as C++, .NET Framework and Java, use classes for this purpose.

With the advent of multi-core processors, many known data structures have concurrent versions that allow multiple computing threads to access the data structure simultaneously.

4. Explain the following with respect to Macros and Macro Processors:

A) Macro Definition and Expansion B) Conditional Macro Expansion

C) Macro Parameters

A) Macro Definition and Expansion

A preprocessor #define directive directs the preprocessor to replace all subsequent occurrences of a macro with specified replacement tokens.

A preprocessor #define directive has the form:

HYPERLINK "http://publib.boulder.ibm.com/infocenter/comphelp/v7v91/topic/com.ibm.vacpp7a.doc/language/ref/clrc09define.htm" \l "skipsyn-97#skipsyn-97"

INCLUDEPICTURE "http://publib.boulder.ibm.com/infocenter/comphelp/v7v91/topic/com.ibm.vacpp7a.doc/language/ref/c.gif" \* MERGEFORMATINET

#define identifier [ ( identifier [, identifier] ... ) ] replacement-tokens

C) Macro Parameters

In Common Lisp, a macro can take apart its argument list by hand. For example, a do-primes macro can extract the loop variable and the range from its first argument like this:

(defmacro do-primes (var-and-range &rest body)
  (let ((var (first var-and-range))
        (start (second var-and-range))
        (end (third var-and-range)))
    `(do ((,var (next-prime ,start) (next-prime (1+ ,var))))
         ((> ,var ,end))
       ,@body)))

In a moment I'll explain how the body generates the correct expansion; for now you can just note that the variables var, start, and end each hold a value, extracted from var-and-range, that's then interpolated into the backquote expression that generates do-primes's expansion.

However, you don't need to take apart var-and-range "by hand" because macro parameter lists are what are called destructuring parameter lists. Destructuring, as the name suggests, involves taking apart a structure--in this case the list structure of the forms passed to a macro.

Within a destructuring parameter list, a simple parameter name can be replaced with a nested parameter list. The parameters in the nested parameter list will take their values from the elements of the expression that would have been bound to the parameter the list replaced. For instance, you can replace var-and-range with a list (var start end), and the three elements of the list will automatically be destructured into those three parameters.

Another special feature of macro parameter lists is that you can use &body as a synonym for &rest. Semantically &body and &rest are equivalent, but many development environments will use the presence of a &body parameter to modify how they indent uses of the macro--typically &body parameters are used to hold a list of forms that make up the body of the macro.

So you can streamline the definition of do-primes and give a hint to both human readers and your development tools about its intended use by defining it like this:

(defmacro do-primes ((var start end) &body body)

`(do ((,var (next-prime ,start) (next-prime (1+ ,var))))

((> ,var ,end))

,@body))

In addition to being more concise, destructuring parameter lists also give you automatic error checking--with do-primes defined this way, Lisp will be able to detect a call whose first argument isn't a three-element list and will give you a meaningful error message just as if you had called a function with too few or too many arguments. Also, in development environments such as SLIME that indicate what arguments are expected as soon as you type the name of a function or macro, if you use a destructuring parameter list, the environment will be able to tell you more specifically the syntax of the macro call. With the original definition, SLIME would tell you do-primes is called like this:

(do-primes var-and-range &rest body)

But with the new definition, it can tell you that a call should look like this:

(do-primes (var start end) &body body)

Destructuring parameter lists can contain &optional, &key, and &rest parameters and can contain nested destructuring lists. However, you don't need any of those options to write do-primes.
