MC0072 SEM3 FEB 2011 SMU


MC0072 – Computer Graphics, Book ID: B0810

Set-1

1. Describe the following:

A) Raster scans display system.

B) Random scan display system.

C) Video controller

Ans:1 Raster Display System: In some graphics systems a separate processor, known as the display processor, is used to interpret the commands in the display file. The display processor accesses the display information and processes it once during every refresh cycle.

In raster scan display systems, the purpose of the display processor is to free the CPU from the graphics routine task. Here, the display processor is provided with a separate memory area, as shown in Fig. 2.3. The main task of the display processor is to digitize a picture definition given in an application program into a set of pixel-intensity values for storage in the frame buffer. This digitization process is known as scan conversion.

Fig. 2.3: Raster scan system with a display processor

Display processors are also designed to perform a number of additional operations. These operations include:

· Generating various line styles (dashed, dotted or solid)

· Displaying color areas

· Performing certain transformations and manipulations on displayed objects


Random Scan Display:

Fig. 2.9 shows the architecture of a random scan display system with a display processor. This architecture is similar to the display-processor-based raster system architecture except for the frame buffer. In a random scan display, no local memory is provided for scan conversion algorithms, since that functionality is typically implemented using PLAs (Programmable Logic Arrays) or microcode.

Fig. 2.9: Random Scan Display System

In random scan displays, the display processor has its own instruction set and instruction address register; hence it is also called the Display Processing Unit (DPU) or Graphics Controller. It performs the instruction fetch, decode and execute cycles found in any computer. To provide a flicker-free display, the display processor has to execute its program 30 to 60 times per second. The program executed by the display processor and the graphics package reside in the main memory, which is shared by the general CPU and the display processor.

Video Controller:

We know that the video controller receives the intensity information of each pixel from the frame buffer and displays it on the screen. Let us see the internal organization of a video controller.

Fig. 2.4 shows the internal organization of a video controller. It consists of a raster-scan generator, x and y address registers and a pixel value register. The raster-scan generator produces the deflection signals that generate the raster scan; it also controls the x and y address registers, which in turn define the memory location to be accessed next. Frame buffer locations, and the corresponding screen positions, are referenced in Cartesian coordinates. For many graphics monitors, the coordinate origin is defined at the lower left screen corner. The screen surface is then represented as the first quadrant of a two-dimensional system, with positive x values increasing to the right and positive y values increasing from bottom to top. (On some personal computers, the coordinate origin is referenced at the upper left corner of the screen, so the y values are inverted.) Scan lines are then labeled from y_max at the top of the screen to 0 at the bottom, and along each scan line, screen pixel positions are labeled from 0 to x_max. During each fetch, the pixel value is read and used to control the intensity of the CRT beam.
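The inverted-origin case mentioned above can be sketched in Python; the function name and the 480-line screen height are hypothetical:

```python
def to_scan_line(y, ymax):
    """Map a Cartesian y (origin at lower-left screen corner) to a
    scan-line index (origin at upper-left), i.e. invert the y values."""
    return ymax - y

# Cartesian y = 0 is the bottom scan line on a 480-line screen:
print(to_scan_line(0, 479))    # 479
print(to_scan_line(479, 479))  # 0
```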


Fig. 2.4: Simple organization of Video Controller

2. What is the DDA line drawing algorithm? Explain it with a suitable example, and discuss the merits and demerits of the algorithm.

Ans:2

DDA Line Algorithm

1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.

[if equal then plot that point and exit]

2. ∆x = x2 – x1 and ∆y = y2 – y1

3. If |∆x| ≥ |∆y| then

Length = |∆x|

else

Length = |∆y|

end if

4. ∆x = (x2 – x1)/Length

∆y = (y2 – y1)/Length

This makes either ∆x or ∆y equal to 1, because Length is either |x2 – x1| or |y2 – y1|; the incremental value for either x or y is 1.

5. x = x1 + 0.5 * sign(∆x)


y = y1 + 0.5 * sign(∆y)

[Here the sign function makes the algorithm work in all quadrants. It returns –1, 0, 1 depending on whether its argument is <0, =0, >0 respectively. The factor 0.5 makes it possible to round the values in the Integer function rather than truncating them.]

6. i=1 [begins the loop, in this loop points are plotted]

7. while (i ≤ Length)

{

Plot (Integer(x), Integer(y))

x= x+∆x

y= y+∆y

i=i+1

}

8. stop
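The eight steps above can be sketched as a Python function. The names `dda_line` and `sign` are hypothetical, and `math.floor` stands in for the Integer function, an assumption consistent with the rounding note in step 5:

```python
import math

def sign(v):
    # Returns -1, 0 or 1 depending on whether v is <0, =0 or >0.
    return (v > 0) - (v < 0)

def dda_line(x1, y1, x2, y2):
    """Simple DDA, following steps 1-8 above: rasterize the line from
    (x1, y1) to (x2, y2) and return the list of plotted pixels."""
    if (x1, y1) == (x2, y2):
        return [(x1, y1)]                   # step 1: coincident end points
    dx, dy = x2 - x1, y2 - y1
    length = max(abs(dx), abs(dy))          # steps 2-3
    xinc, yinc = dx / length, dy / length   # step 4
    # Step 5: the 0.5 offsets make floor() round rather than truncate.
    x = x1 + 0.5 * sign(xinc)
    y = y1 + 0.5 * sign(yinc)
    points = []
    for _ in range(length):                 # steps 6-7
        points.append((math.floor(x), math.floor(y)))
        x += xinc
        y += yinc
    return points

print(dda_line(0, 0, 4, 6))   # [(0, 0), (1, 1), (1, 2), (2, 3), (3, 4), (3, 5)]
```

The call shown reproduces Ex. 3.1 below; the plotted pixels straddle the actual line, illustrating the orientation dependence noted there.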

Let us see a few examples to illustrate this algorithm.

Ex.3.1: Consider the line from (0,0) to (4,6). Use the simple DDA algorithm to rasterize this line.

Sol: Evaluating steps 1 to 5 in the DDA algorithm we have

x1=0, x2= 4, y1=0, y2=6

length = max(|4 – 0|, |6 – 0|) = 6

∆x = 4/6 = 0.67

∆y = 6/6 = 1

Initial values for x and y are


x = 0 + 0.5 * sign(0.67) = 0.5

y = 0 + 0.5 * sign(1) = 0.5

Tabulating the results of each iteration in step 6 we get,

Table 3.1

The results are plotted as shown in Fig. 3.6. It shows that the rasterized line lies on both sides of the actual line, i.e. the algorithm is orientation dependent.

Fig 3.6: Result for a simple DDA

Ex. 3.2: Consider the line from (0,0) to (-6,-6). Use the simple DDA algorithm to rasterize this line.


The results are plotted as shown in Fig. 3.7. It shows that the rasterized line lies on the actual line and it is a 45° line.

Advantages of DDA Algorithm

1. It is the simplest algorithm and it does not require special skills for implementation.

2. It is a faster method for calculating pixel positions than the direct use of equation y=mx + b. It eliminates the multiplication in the equation by making use of raster characteristics, so that appropriate increments are applied in the x or y direction to find the pixel positions along the line path.

Disadvantages of DDA Algorithm

1. Floating point arithmetic in DDA algorithm is still time-consuming.

2. The algorithm is orientation dependent. Hence end point accuracy is poor.

3. Write a short note on:

A) Reflection

B) Shear

C) Rotation about an arbitrary axis

Ans:3

Reflection

A reflection is a transformation that produces a mirror image of an object relative to an axis of

reflection. We can choose an axis of reflection in the xy plane or perpendicular to the xy plane.

The table below gives examples of some common reflections.


Fig. 6.6: Reflection about y axis

 

Shear :

A transformation that slants the shape of an object is called a shear transformation. Two common shearing transformations are used: one shifts x coordinate values and the other shifts y coordinate values. In both cases, however, only one coordinate (x or y) changes and the other preserves its value.

1 X shear :

The x shear preserves the y coordinates, but changes the x values, which causes vertical lines to tilt right or left as shown in Fig. 6.7. The transformation matrix for x shear is given as


 

Fig. 6.7

[x′ y′ 1] = [x y 1] · | 1    0  0 |
                      | Shx  1  0 |
                      | 0    0  1 |    …. (6.8)

i.e. x′ = x + Shx · y and y′ = y

2 Y Shear:

The y shear preserves the x coordinates, but changes the y values which causes horizontal lines to transform into lines which slope up or down, as shown in the Fig. 6.8.


Fig. 6.8

The transformation matrix for y shear is given as

[x′ y′ 1] = [x y 1] · | 1  Shy  0 |
                      | 0   1   0 |
                      | 0   0   1 |    …. (6.19)

i.e. x′ = x and y′ = y + Shy · x

Rotation about Arbitrary Axis :

A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite transformation involving combinations of translation and the coordinate-axes rotations.

In the special case where an object is to be rotated about an axis that is parallel to one of the coordinate axes, we can obtain the resultant coordinates with the following transformation sequence.

1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.

2. Perform the specified rotation about that axis.

3. Translate the object so that the rotation axis is moved back to its original position.


When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we have to perform some additional transformations. The sequence of these transformations is given below.

1. Translate the object so that the rotation axis specified by the unit vector u passes through the coordinate origin. (See Figs. 6.23 (a) and (b))

2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes, usually the z axis. To make the rotation axis coincide with the z axis, we first rotate the unit vector u about the x axis to bring it into the xz plane, and then rotate it about the y axis to make it coincide with the z axis. (See Figs. 6.23 (c) and (d))

3. Perform the desired rotation about the z axis.

4. Apply the inverse rotation about the y axis and then about the x axis to bring the rotation axis back to its original orientation.

5. Apply the inverse translation to move the rotation axis back to its original position.

As shown in Fig. 6.23 (a), the rotation axis is defined by two coordinate points P1 and P2, and the unit vector u along the rotation axis is defined as

u = V / |V| = (a, b, c)

where V is the axis vector defined by the two points P1 and P2 as

V = P2 – P1 = (x2 – x1, y2 – y1, z2 – z1)

The components a, b and c of the unit vector u are the direction cosines of the rotation axis, and they can be defined as

a = (x2 – x1)/|V|,  b = (y2 – y1)/|V|,  c = (z2 – z1)/|V|


Fig. 6.23

As mentioned earlier, the first step in the transformation sequence is to translate the object so that the rotation axis passes through the coordinate origin. This can be accomplished by moving point P1 to the origin. The translation is as given below:

T = | 1    0    0    0 |
    | 0    1    0    0 |
    | 0    0    1    0 |
    | –x1  –y1  –z1  1 |

Now we have to perform the rotation of the unit vector u about the x axis to bring it into the xz plane. The cosine of the rotation angle α can be determined from the dot product of the projection of u onto the yz plane and the unit vector uz(0, 0, 1) along the z axis:

cos α = c / √(b² + c²),  sin α = b / √(b² + c²)

where uz(0, 0, 1) = K.


The inverse of the translation matrix can be given as

T⁻¹ = | 1   0   0   0 |
      | 0   1   0   0 |
      | 0   0   1   0 |
      | x1  y1  z1  1 |

With the transformation matrices T and Rxy we can align the rotation axis with the positive z axis. The specified rotation through angle θ can then be achieved by the rotation transformation Rz(θ).

To complete the required rotation about the given axis, we have to transform the rotation axis back to its original position. This is achieved by applying the inverse transformations T⁻¹ and Rxy⁻¹. The overall transformation matrix for rotation about an arbitrary axis can then be expressed as the concatenation of five individual transformations, i.e.

R(θ) = T · Rxy · Rz(θ) · Rxy⁻¹ · T⁻¹
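The five-step sequence can be sketched with NumPy 4 × 4 matrices. A column-vector convention is assumed here (points as (x, y, z, 1) columns, translation in the last column), unlike the row-vector 2D equations elsewhere in the text, and the function name is hypothetical:

```python
import numpy as np

def rotation_about_axis(p1, p2, theta):
    """Composite matrix for rotation by theta about the axis through
    points p1 and p2, built from the five steps described above."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    a, b, c = (p2 - p1) / np.linalg.norm(p2 - p1)  # direction cosines
    d = np.hypot(b, c)                             # sqrt(b^2 + c^2)

    T = np.eye(4); T[:3, 3] = -p1                  # step 1: P1 -> origin
    Rx = np.eye(4)                                 # step 2a: u into xz plane
    if d != 0:
        Rx[1, 1], Rx[1, 2] = c / d, -b / d
        Rx[2, 1], Rx[2, 2] = b / d,  c / d
    Ry = np.eye(4)                                 # step 2b: onto the z axis
    Ry[0, 0], Ry[0, 2] = d, -a
    Ry[2, 0], Ry[2, 2] = a,  d
    Rz = np.eye(4)                                 # step 3: desired rotation
    Rz[0, 0], Rz[0, 1] = np.cos(theta), -np.sin(theta)
    Rz[1, 0], Rz[1, 1] = np.sin(theta),  np.cos(theta)
    Tinv = np.eye(4); Tinv[:3, 3] = p1             # step 5: undo translation
    # Steps 4-5: apply the inverses (transposes, for rotations) in
    # reverse order around the z-axis rotation.
    return Tinv @ Rx.T @ Ry.T @ Rz @ Ry @ Rx @ T

# Sanity check: rotating (1,0,0) by 90 degrees about the z axis gives (0,1,0).
M = rotation_about_axis([0, 0, 0], [0, 0, 1], np.pi / 2)
print(np.round(M @ np.array([1, 0, 0, 1]), 6))
```

Because Rx and Ry are orthonormal, their inverses are simply their transposes, which is what the final product uses.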


4. Describe the following:

A) Basic Concepts in Line Drawing B) Digital Differential Analyzer Algorithm

C) Bresenham’s Line Drawing Algorithm

Ans:4

Basic Concepts in Line Drawing:

Before discussing specific line drawing algorithms it is useful to note the general requirements for such algorithms. These requirements specify the desired characteristics of a line.

· The line should appear as a straight line and it should start and end accurately.

· The line should be displayed with constant brightness along its length independent of its length and orientation.

· The line should be drawn rapidly.

Let us see the different lines drawn in Fig.3.5.

a) Vertical and horizontal lines b) 45° line

c) Lines with other orientation

Fig. 3.5

As shown in Fig. 3.5 (a), horizontal and vertical lines are straight and have the same width. The 45° line is straight but its width is not constant. On the other hand, a line with any other orientation


is neither straight nor of constant width. Such cases are due to the finite resolution of the display, and we have to accept approximate pixels in such situations, as shown in Fig. 3.5 (c).

The brightness of a line is dependent on its orientation. We can observe that the effective spacing between pixels for the 45° line is greater than for the vertical and horizontal lines; this makes the vertical and horizontal lines appear brighter than the 45° line. Complex calculations are required to provide equal brightness along lines of varying length and orientation. Therefore, to draw lines rapidly, some compromises are made, such as

· Calculate only an approximate line length.

· Reduce the calculations using simple integer arithmetic

· Implement the result in hardware or firmware

DDA Line Algorithm:

1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.

[if equal then plot that point and exit]

2. ∆x = x2 – x1 and ∆y = y2 – y1

3. If |∆x| ≥ |∆y| then

Length = |∆x|

else

Length = |∆y|

end if

4. ∆x = (x2 – x1)/Length

∆y = (y2 – y1)/Length

This makes either ∆x or ∆y equal to 1, because Length is either |x2 – x1| or |y2 – y1|; the incremental value for either x or y is 1.


5. x = x1 + 0.5 * sign(∆x)

y = y1 + 0.5 * sign(∆y)

[Here the sign function makes the algorithm work in all quadrants. It returns –1, 0, 1 depending on whether its argument is <0, =0, >0 respectively. The factor 0.5 makes it possible to round the values in the Integer function rather than truncating them.]

6. i=1 [begins the loop, in this loop points are plotted]

7. while (i ≤ Length)

{

Plot (Integer(x), Integer(y))

x= x+∆x

y= y+∆y

i=i+1

}

8. stop

Let us see a few examples to illustrate this algorithm.

Ex.3.1: Consider the line from (0,0) to (4,6). Use the simple DDA algorithm to rasterize this line.

Sol: Evaluating steps 1 to 5 in the DDA algorithm we have

x1=0, x2= 4, y1=0, y2=6

length = max(|4 – 0|, |6 – 0|) = 6

∆x = 4/6 = 0.67

∆y = 6/6 = 1

Initial values for x and y are


x = 0 + 0.5 * sign(0.67) = 0.5

y = 0 + 0.5 * sign(1) = 0.5

Tabulating the results of each iteration in step 6 we get,

Table 3.1

The results are plotted as shown in Fig. 3.6. It shows that the rasterized line lies on both sides of the actual line, i.e. the algorithm is orientation dependent.

Fig 3.6: Result for a simple DDA

Ex. 3.2: Consider the line from (0,0) to (-6,-6). Use the simple DDA algorithm to rasterize this line.


The results are plotted as shown in Fig. 3.7. It shows that the rasterized line lies on the actual line and it is a 45° line.

Bresenham’s Line Algorithm:

Bresenham’s line algorithm uses only integer addition and subtraction and multiplication by 2, and we know that the computer can perform the operations of integer addition and subtraction very rapidly. The computer is also time-efficient when performing integer multiplication by powers of 2. Therefore, it is an efficient method for scan-converting straight lines.

The basic principle of Bresenham’s line algorithm is to select the optimum raster locations to represent a straight line. To accomplish this, the algorithm always increments either x or y by one unit depending on the slope of line. The increment in the other variable is determined by examining the distance between the actual line location and the nearest pixel. This distance is called decision variable or the error. This is illustrated in the Fig.3.8

Fig. 3.8

As shown in Fig. 3.8, the line does not pass through all raster points (pixels). It passes through raster point (0,0) and subsequently crosses three pixels. It is seen that the intercept of the line with the line x=1 is closer to the line y=0, i.e. pixel (1,0), than to the line y=1, i.e. pixel (1,1). Hence, in this case, the raster point at (1,0) better represents the path of the line than that at (1,1). The intercept of the line with the line x=2 is closer to the line y=1, i.e. pixel (2,1), than to the line y=0, i.e. pixel (2,0). Hence the raster point at (2,1) better represents the path of the line, as shown in Fig. 3.8.

In mathematical terms, the error or decision variable is defined as

e = DB – DA or DA – DB

Let us define e = DB – DA. Now if e > 0, then DB > DA, i.e. the pixel above the line is closer to the true line. If DB < DA (i.e. e < 0), then the pixel below the line is closer to the true line. Thus, by checking only the sign of the error term, it is possible to determine the better pixel to represent the line path.

The error term is initially set as e=2∆y-∆x

Where ∆y=y2-y1, and ∆x=x2-x1


Then according to value of e following actions are taken.

while (e≥0)

{

y=y+1

e=e-2*∆x

}

x=x+1

e=e+2*∆y

While e ≥ 0, the error is updated as e = e – 2∆x; this continues until the error becomes negative, and in each such iteration y is incremented by 1. After the loop, the error is updated as e = e + 2∆y, and in both cases x is incremented by 1. Let us see Bresenham's line drawing algorithm.

Bresenham’s Line Algorithm:

1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.

[if equal then plot that point and exit]

2. ∆x = |x2 – x1| and ∆y = |y2 – y1|

3. [initialize starting point]

x = x1

y = y1

4. e = 2 * ∆y - ∆x

[ Initialize value of decision variable or error to compensate for non zero intercepts]

5. i = 1 [initialize counter]

6. Plot (x, y)

7. while (e ≥ 0)

{


y = y+1

e = e-2 * ∆x

}

x = x+1

e = e+2 * ∆y

8. i = i+1

9. if (i ≤ ∆x) then go to step 6

10. stop
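The algorithm above can be sketched in Python for the first octant (0 ≤ slope ≤ 1, x1 < x2); the function name is hypothetical, and the loop runs ∆x times, mirroring steps 5–10:

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham's line algorithm for the first octant, following
    steps 1-10 above; integer arithmetic only."""
    dx, dy = abs(x2 - x1), abs(y2 - y1)   # step 2
    x, y = x1, y1                         # step 3
    e = 2 * dy - dx                       # step 4: decision variable
    points = []
    for _ in range(dx):                   # steps 5-9: dx iterations
        points.append((x, y))             # step 6
        while e >= 0:                     # step 7: pixel above is closer
            y += 1
            e -= 2 * dx
        x += 1
        e += 2 * dy
    return points

print(bresenham_line(5, 5, 13, 9))
# [(5, 5), (6, 6), (7, 6), (8, 7), (9, 7), (10, 8), (11, 8), (12, 9)]
```

The call shown reproduces Ex. 3.3 below: with e starting at 0, y steps up immediately on the first iteration.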

Ex 3.3: Consider the line from (5,5) to (13,9). Use the Bresenham’s algorithm to rasterize the line.

Sol: Evaluating step 1 through 4 in the Bresenham’s algorithm we have,

∆x = |13-5| = 8

∆y = |9-5| = 4

x = 5, y = 5

e = 2*∆y-∆x = 2*4-8

= 0

Tabulating the results of each iteration in the steps 5 through 10.


The results are plotted as shown in Fig. 3.9.

Fig. 3.9

The Bresenham's algorithm above only works for the first octant. The generalized Bresenham's algorithm requires modification for lines lying in the other octants. Such an algorithm can easily be developed by considering the quadrant in which the line lies and its slope. When the absolute magnitude of the slope of the line is greater than 1, y is incremented by one and Bresenham's decision variable or error is used to determine when to increment x. The x and y incremental values depend on the quadrant in which the line exists. This is illustrated in Fig. 3.10.

Fig 3.10: Conditions for the generalized Bresenham's algorithm


Generalized Bresenham’s Algorithm

1. Read the end points (x1, y1) and (x2, y2) such that they are not equal.

2. ∆x = |x2 – x1| and ∆y = |y2 – y1|

3. Initialize starting point

x = x1

y = y1

4. s1 = sign (x2-x1)

s2 = sign (y2-y1)

[Sign function returns -1, 0, 1 depending on whether its argument is <0, =0, >0 respectively]

5. If ∆y > ∆x then

Exchange ∆x and ∆y

Ex_change =1

else

Ex_change =0

end if

[interchange ∆x and ∆y depending on the slope of the line and set the Ex_change flag accordingly]

6. e= 2 * ∆y- ∆x

[Initialize value of decision variable or error to compensate for non zero intercepts]

7. i=1

8. Plot ( x, y)

9. while (e >=0)

{

If ( Ex_change =1) then


x=x+s1

else

y=y+s2

end if

e=e-2*∆x

}

10. If Ex_change=1 then

y=y+s2

else

x=x+s1

end if

e=e+2*∆y

11. i=i+1

12. If (i ≤ ∆x) then go to step 8

13. stop
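The generalized steps above can be sketched as follows (hypothetical names; the `exchange` flag plays the role of Ex_change):

```python
def sign(v):
    # Returns -1, 0 or 1 depending on whether v is <0, =0 or >0.
    return (v > 0) - (v < 0)

def bresenham_general(x1, y1, x2, y2):
    """Generalized Bresenham, following steps 1-13 above: works in all
    octants by tracking step directions s1, s2 and an exchange flag."""
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    x, y = x1, y1
    s1, s2 = sign(x2 - x1), sign(y2 - y1)      # step 4
    exchange = dy > dx                         # step 5
    if exchange:
        dx, dy = dy, dx                        # drive along the major axis
    e = 2 * dy - dx                            # step 6
    points = []
    for _ in range(dx):                        # steps 7-12
        points.append((x, y))
        while e >= 0:                          # step 9
            if exchange:
                x += s1
            else:
                y += s2
            e -= 2 * dx
        if exchange:                           # step 10
            y += s2
        else:
            x += s1
        e += 2 * dy
    return points
```

For a first-octant line this reproduces the result of the basic algorithm; for a steep or reversed line, the roles of x and y are exchanged and the signed steps s1, s2 handle the direction.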


Set-2

1. Write a short note on the followings:

A) Video mixing

B) Frame buffer

C) Color table

Ans:1

Video Mixing:

The video controller provides the facility of video mixing, in which it accepts information of two images simultaneously: one from the frame buffer and the other from a television camera, recorder or other source. This is illustrated in Fig. 2.8. The video controller merges the two received images to form a composite image.

Fig. 2.8: Video mixing

There are two types of video mixing. In the first, a graphics image is set into a video image. Here mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to indicate that the video signal should be shown instead of the signal from the frame buffer; normally the designated pixel value corresponds to the background color of the frame buffer image.

In the second type of mixing, the video image is placed on top of the frame buffer image. Here, wherever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.
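The first kind of mixing can be sketched in Python on small arrays of pixel values; the flag value 0 and the image contents are hypothetical:

```python
def mix(frame_buffer, video, flag=0):
    """Wherever the frame buffer holds the designated flag value (the
    background colour), show the video signal; elsewhere show the
    frame-buffer pixel. Images are lists of rows of pixel values."""
    return [[v if f == flag else f for f, v in zip(frow, vrow)]
            for frow, vrow in zip(frame_buffer, video)]

fb    = [[0, 7, 0],
         [7, 7, 0]]      # 0 = background flag
video = [[1, 2, 3],
         [4, 5, 6]]
print(mix(fb, video))    # [[1, 7, 3], [7, 7, 6]]
```

The second kind of mixing is the same test with the roles of the two images reversed.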

Frame Buffer:

In raster scan displays a special area of memory is dedicated to graphics only. This memory area is called the frame buffer. It holds the set of intensity values for all the screen points. The stored intensity values are retrieved from the frame buffer and displayed on the screen one row (scan line) at a time.

Usually, the frame buffer is implemented using random-access semiconductor memory. However, the frame buffer can also be implemented using shift registers. Conceptually, a shift register


is operated in first-in-first-out (FIFO) fashion, i.e. similar to a queue. We know that when a queue is full and we want to add a new data bit, the first data bit is pushed out from the bottom and then the new data bit is added at the top. Here, the data ejected out of the queue can be interpreted as the intensity of a pixel on a scan line.

Fig. 3.2 shows the implementation of a frame buffer using shift registers. As shown in the figure, one shift register is required per pixel on a scan line, and the length of each shift register in bits is equal to the number of scan lines. Here, there are 8 pixels per scan line and in all 5 scan lines. Synchronization between the output of the shift registers and the video scan rate is maintained so that the data corresponding to a particular scan line is displayed correctly.

Color tables:

In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. Here it is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries of pixel values in RGB format. With this facility, it is only necessary to read an index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table, and the specified entry is then used to control the intensity or color of the CRT.

Usually, a look-up table has 256 entries. Therefore, the index into the look-up table has 8 bits, and hence for each pixel the frame buffer has to store 8 bits instead of 24. Fig. 2.6 shows the organization of a color (video) look-up table.

Fig. 2.6: Organization of a Video look-up table

There are several advantages in storing color codes in a look-up table. Use of a color table can provide a "reasonable" number of simultaneous colors without requiring large frame buffers. For most applications, 256 or 512 different colors are sufficient for a single picture. Also, table entries can be changed at any time, allowing a user to experiment easily with different color combinations in a design, scene, or graph without changing the attribute settings for the graphics data structure. In visualization and image-processing applications, color tables are


convenient means for setting color thresholds so that all pixel values above or below a specified threshold can be set to the same color. For these reasons, some systems provide both capabilities for color-code storage, so that a user can elect either to use color tables or to store color codes directly in the frame buffer.
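The indexing scheme can be sketched as follows; the table entries and the frame-buffer contents here are hypothetical:

```python
# 256-entry look-up table: each entry is a full 24-bit (R, G, B) triple,
# but the frame buffer stores only the 8-bit index per pixel.
lut = [(0, 0, 0)] * 256
lut[1] = (255, 0, 0)         # hypothetical entries
lut[2] = (0, 255, 0)

frame_buffer = [[1, 1, 2],
                [2, 0, 1]]   # 8 bits per pixel instead of 24

def pixel_color(row, col):
    """Resolve a pixel's colour through the look-up table."""
    return lut[frame_buffer[row][col]]

print(pixel_color(0, 0))     # (255, 0, 0)
```

Changing one `lut` entry retints every pixel that references it, which is the "experiment with color combinations" advantage described above.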

2. Describe the following with respect to methods of generating characters:

A) Stroke method B) Starbust method C) Bitmap method

Ans:2

Stroke method

Fig. 5.19: Stroke method

This method uses small line segments to generate a character. The small series of line segments are drawn like a stroke of pen to form a character as shown in the fig. 5.19.

We can build our own stroke method character generator by calls to the line drawing algorithm. Here it is necessary to decide which line segments are needed for each character and then drawing these segments using line drawing algorithm.

Starbust method:


In this method a fixed pattern of line segments is used to generate characters. As shown in fig. 5.20, there are 24 line segments. Out of these 24 line segments, the segments required to display a particular character are highlighted. This method of character generation is called the starbust method because of its characteristic appearance.

Fig. 5.20

Fig. 5.20 shows the starbust patterns for the characters A and M. The patterns for particular characters are stored in the form of a 24-bit code, each bit representing one line segment. A bit is set to one to highlight the corresponding line segment; otherwise it is set to zero. For example, the 24-bit code for character A is 0011 0000 0011 1100 1110 0001 and for character M is 0000 0011 0000 1100 1111 0011.

This method of character generation has some disadvantages. They are

1. 24 bits are required to represent a character, hence more memory is required.

2. Code conversion software is required to display a character from its 24-bit code.

3. Character quality is poor. It is worst for curve-shaped characters.
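A minimal sketch of decoding such a 24-bit code follows; the segment numbering is an assumption, since the text fixes only the bit width:

```python
def lit_segments(code):
    """Return the indices (1-24) of the line segments that are
    highlighted in a 24-bit starbust character code."""
    bits = format(code, '024b')               # fixed 24-bit string
    return [i + 1 for i, b in enumerate(bits) if b == '1']

# 24-bit code for character 'A' from the text:
code_A = 0b001100000011110011100001
print(len(lit_segments(code_A)))   # 10 segments highlighted
```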

Bitmap method:

The third method for character generation is the bitmap method. It is also called the dot matrix method because characters are represented by an array of dots in matrix form. It is a two-dimensional array having columns and rows. A 5 × 7 array is commonly used to represent characters, as shown in fig 5.21. However, 7 × 9 and 9 × 13 arrays are also used. Higher resolution devices such as inkjet or laser printers may use character arrays that are over 100 × 100.


Fig. 5.21: Character A in 5 × 7 dot matrix format

Each dot in the matrix is a pixel. The character is placed on the screen by copying pixel values from the character array into some portion of the screen's frame buffer. The value of a pixel controls its intensity.
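The copy operation can be sketched as follows; the glyph pattern here is illustrative, not taken from Fig. 5.21:

```python
# Character 'A' as a 5x7 dot matrix (1 = pixel on); illustrative pattern.
CHAR_A = [
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
]

def blit_char(frame_buffer, glyph, top, left, intensity=255):
    """Place a character on screen by copying its dot-matrix values
    into a portion of the frame buffer, as described above."""
    for r, row in enumerate(glyph):
        for c, dot in enumerate(row):
            if dot:
                frame_buffer[top + r][left + c] = intensity

fb = [[0] * 10 for _ in range(10)]
blit_char(fb, CHAR_A, 1, 2)
print(fb[4][2:7])   # the glyph's crossbar row: [255, 255, 255, 255, 255]
```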

3. Discuss the homogeneous coordinates for translation, rotation and scaling.

Ans:3 Homogeneous Coordinates and Matrix Representation of 2D Transformations:

In the design and picture formation process, we may often need to perform translations, rotations and scaling to fit picture components into their proper positions. In the previous section we saw that each of the basic transformations can be expressed in the general matrix form

P′ = P · M1 + M2 …… (6.12)

For translation:

i.e. M1 = Identity matrix

M2 = Translation vector

For rotation:


i.e. M1 = Rotational matrix

M2 = 0

For scaling:

i.e. M1 = Scaling matrix

M2 = 0

To produce a sequence of transformations with the above equations, such as translation followed by rotation and then scaling, we must calculate the transformed coordinates one step at a time. This sequential transformation process is not efficient. A more efficient approach is to combine the sequence of transformations into one transformation, so that the final coordinate positions are obtained directly from the initial coordinates. This eliminates the calculation of intermediate coordinate values.

In order to combine a sequence of transformations we have to eliminate the matrix addition associated with the translation term M2 (refer to equation 6.12). To achieve this we represent the matrix M1 as a 3 × 3 matrix instead of 2 × 2, introducing an additional dummy coordinate W. Here, points are specified by three numbers instead of two. This coordinate system is called the homogeneous coordinate system, and it allows us to express all transformation equations as matrix multiplication.

The homogeneous coordinate is represented by a triplet (xW, yW, W),

where

x = xW / W and y = yW / W

For two dimensional transformations, the homogeneous parameter W can be any non-zero value, but it is convenient to have W = 1. Therefore, each two dimensional position can be represented with homogeneous coordinates as (x, y, 1).

To sum up, homogeneous coordinates allow combined transformations, eliminating the calculation of intermediate coordinate values and thus saving


the time required for transformation and the memory required to store intermediate coordinate values. Let us see the homogeneous coordinates for the three basic transformations.

1 Homogeneous Coordinates for Translation :

The homogeneous coordinates for translation are given as

[x′ y′ 1] = [x y 1] · | 1   0   0 |
                      | 0   1   0 |
                      | tx  ty  1 |    …. (6.13)

Therefore, we have

[x′ y′ 1] = [x + tx   y + ty   1] …. (6.14)


2 Homogeneous Coordinates for Rotation :

The homogeneous coordinates for rotation are given as

[x′ y′ 1] = [x y 1] · |  cosθ  sinθ  0 |
                      | –sinθ  cosθ  0 |
                      |  0     0     1 |    …. (6.15)

Therefore, we have

[x′ y′ 1] = [x·cosθ – y·sinθ   x·sinθ + y·cosθ   1] …. (6.16)

3 Homogeneous Coordinates for Scaling :

The homogeneous coordinates for scaling are given as

[x′ y′ 1] = [x y 1] · | Sx  0   0 |
                      | 0   Sy  0 |
                      | 0   0   1 |  = [x·Sx   y·Sy   1] …. (6.17)
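The three matrices can be sketched in Python with NumPy, using the same row-vector convention (P′ = P · M), to show how a sequence of transformations combines into a single matrix product; the function names are hypothetical:

```python
import numpy as np

def translation(tx, ty):
    # Equation (6.13): translation terms in the bottom row.
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], float)

def rotation(theta):
    # Equation (6.15): counter-clockwise rotation by theta.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], float)

def scaling(sx, sy):
    # Equation (6.17).
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

# Combine translate -> rotate -> scale into ONE matrix: this is the
# whole point of homogeneous coordinates.
M = translation(2, 3) @ rotation(np.pi / 2) @ scaling(2, 2)
P = np.array([1, 0, 1])   # the point (1, 0) in homogeneous form
print(P @ M)              # transformed in a single step; about [-6, 6, 1]
```

Applying the three matrices one at a time gives the same result, but the combined matrix M avoids the intermediate coordinate values entirely.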


4. Describe the following with respect to Projection:

A) Parallel B) Perspective C) Types of Parallel

Projections

Ans:4

Projection:

After converting the description of objects from world coordinates to viewing coordinates, we can project the three-dimensional objects onto the two-dimensional view plane. There are two basic ways of projecting objects onto the view plane: parallel projection and perspective projection.

1 Parallel Projection:

In parallel projection, the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane. The point of intersection is the projection of the vertex. We connect the projected vertices by line segments which correspond to connections on the original object.

Fig. 7.8: Parallel projection of an object to the view plane

As shown in Fig. 7.8, a parallel projection preserves relative proportions of objects but does not produce realistic views.

2 Perspective Projection:

Perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel; instead, they all converge at a single point called the center of projection or projection reference point. The object positions are transformed to the view plane along these converging projection lines, and the projected view of an object is determined by calculating the intersection of the projection lines with the view plane, as shown in Fig. 7.9.


Fig. 7.9: Perspective projection of an object to the view plane

Types of Parallel Projections:

Fig. 7.10

Parallel projections are basically categorized into two types, depending on the relation between the direction of projection and the normal to the view plane. When the direction of projection is normal (perpendicular) to the view plane, we have an orthographic parallel projection. Otherwise, we have an oblique parallel projection. Fig. 7.10 illustrates the two types of parallel projection.
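Both projections of a single vertex can be sketched as follows; the view plane z = 0 and the center of projection at (0, 0, –d) are assumptions for the sketch:

```python
def parallel_project(point):
    """Orthographic parallel projection onto the view plane z = 0:
    simply discard the z coordinate, as described above."""
    x, y, z = point
    return (x, y)

def perspective_project(point, d):
    """Perspective projection onto the plane z = 0 with the center of
    projection at (0, 0, -d): the projectors converge at that point,
    so x and y shrink with distance by similar triangles."""
    x, y, z = point
    f = d / (d + z)        # scale factor from similar triangles
    return (x * f, y * f)

p = (4.0, 2.0, 6.0)
print(parallel_project(p))        # (4.0, 2.0) -- proportions preserved
print(perspective_project(p, 6))  # (2.0, 1.0) -- farther means smaller
```

The two outputs show the trade-off stated above: the parallel projection keeps proportions, while the perspective projection foreshortens the distant vertex.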