MC0072 Computer Graphics Fall 10 Answers


August 2010 Master of Computer Application (MCA) – Semester 3 MC0072 – Computer Graphics – 4 Credits (Book ID: B0810) Assignment Set – 1 (60 Marks)

1. Describe the theory of development of hardware and software for computer graphics.

Ans –

Advances in computer graphics have transformed how we use computers. Computer graphics has given us the "mouse" input device, "what-you-see-is-what-you-get" document preparation systems, the computer-aided design system used to create the Boeing 777, the ability to visualize molecular dynamics and other scientific phenomena, the animation used in educational software and the advertising and entertainment industries, and virtual reality systems whose applications range from architectural prototyping to surgical training to entertainment. Today, every user of a computer benefits from computer graphics, even in applications such as word processors, spreadsheets, databases, and project planners. Because of user-friendly graphical user interfaces, pre-schoolers now routinely use computers, a revolution undreamt of even a few years ago.

While everyone is familiar with the mouse, multiple "windows" on computer screens, and stunningly realistic images of everything from animated logos in television advertisements to NASA animations of spacecraft flying past Saturn, few people realize that these innovations were spawned by federally sponsored university research.

Historical Perspective

From its beginnings in the late 1960s, when a few DARPA- and NSF-sponsored research laboratories were working on relatively obscure graphics-related projects, the computer graphics community has grown to more than a hundred thousand software and hardware engineers and application developers. Early thrusts included graphics support software and rendering algorithms, graphics hardware architectures, graphical user interfaces, and hypermedia. The availability of graphical tools and systems has vastly influenced developments in computer-aided design and manufacturing, including the automotive and aerospace industries, molecular modeling and drug design, medical imaging, architectural design, and the entertainment industry. Today, many scientific and engineering disciplines that were once distinct from computer graphics are inextricably interwoven with it. In these disciplines, visualization is no longer an optional tool but a critical enabling technology.

By far the largest segments of today's computing industry are the personal computer and workstation markets. All major vendors (IBM, HP, Sun, DEC, SGI, Intel, Apple) participate in these segments, which total roughly $50 billion and $15 billion, respectively. And note that these dollar volumes only represent the hardware and system-software portions of these markets; system software is a small share of the total software market, which is dominated by applications, almost all of which make use of computer graphics.

Case Studies

The nation should look back with great pride on its research investments in all major areas of computer graphics. Six of the most significant of these areas are discussed below.

User Interfaces

The power of computers cannot be harnessed without a way to access and control that power; the interface between the user and the machine can determine the success or failure of both hardware and software. Apple's graphical desktop interface for the Macintosh computer (and the Microsoft Windows equivalent for PCs), and more recently the introduction of NCSA Mosaic, a graphical browser for the Internet (which was rapidly followed by Netscape Navigator, Microsoft Internet Explorer, etc.), are excellent examples of how applications of computer graphics research can create new markets and broaden old ones.

Both of these applications of graphics technology -- the desktop metaphor and the Mosaic browser -- had their origins in federally sponsored efforts. In the 1970s at Xerox PARC, Alan Kay built on DARPA-sponsored research from the University of Utah to create the Smalltalk programming environment on the pioneering Alto bitmapped graphics workstation. This environment and PARC's Bravo document editor stimulated the development of the Apple Macintosh (1984) and of bitmapped, graphically based windowing systems and graphical user interfaces. NSF funding made possible the development of the Mosaic browser at the National Center for Supercomputing Applications. Now, K-12 students throughout America use computers as information access devices -- they treat the Internet as a digital library and truly have information "at their fingertips." Mosaic and its derivatives have not only exponentially increased the number of Internet users, but are also spawning many new companies and enterprises and arousing intense corporate interest.

Computer Graphics Hardware

The hardware used in interactive computer graphics has its genesis in federally sponsored university research. The industry leader in rendering hardware is Silicon Graphics, Inc., founded by Jim Clark. Clark received his Ph.D. from the University of Utah, where he and his advisor, Ivan Sutherland, pursued a federally funded program of research in 3D graphics hardware. Joining the faculty at Stanford, Clark received support from the DARPA VLSI Program for his Geometry Engine project, whose goal was to harness modern custom integrated-circuit technology to create cost-effective high-performance graphics systems. It was this Geometry Engine that formed the basis of SGI.

In 1968, Douglas Engelbart of Stanford Research Institute demonstrated his hypertext system, NLS, which was funded by DARPA. Among other things, this system included the first mouse -- now a standard fixture of computer systems everywhere.

Hypertext/Hypermedia

Hypertext and hypermedia have their roots in Vannevar Bush's famous 1945 Atlantic Monthly article, "As We May Think." Bush described how documents might be interlinked in the fashion of human associative memory. These ideas inspired Doug Engelbart at SRI (funded by DARPA) and Andries van Dam of Brown University (funded by NSF) to develop the first hypertext systems in the 1960s. These systems were the forerunners of today's word-processing programs, including simple what-you-see-is-what-you-get capabilities that were further refined in the Xerox Bravo editor. The ideas and concepts were fundamental to such developments as Apple's popular Hypercard and NCSA Mosaic.

Rendering

High-quality rendering has caught the public's eye and is having a vast impact on the entertainment and advertising industries. From Jurassic Park to simulator rides at Disney World and dancing soda cans in TV commercials, the world has been seduced by computer animation, special effects, and photorealistic imagery of virtual environments. How are these pictures created and where did the techniques for creating them come from?

Graphics Software Systems

The combination of the above advances led to many commercial graphics software systems. Rather than develop new systems for each application area -- be it advertising, animation, molecular modeling, or scientific visualization -- it began to make sense to utilize general-purpose systems. Offerings from Wavefront, AVS, SGI (Iris Explorer), and IBM (Data Explorer) were developed with strong influence by former Cornell and Brown students, educated in NSF-sponsored graphics laboratories. PostScript, the de facto standard in page-description languages for laser printers, was developed by Adobe, founded by Utah Ph.D. John Warnock. That graduates of federally sponsored university graphics research laboratories move on to lead industrial projects demonstrates the most effective means of technology transfer between universities and industry.


Virtual Reality

The popular idea of virtual reality saw its first implementation in Ivan Sutherland's ground-breaking work at Harvard in 1968. With funding from both commercial and government sources, including ONR, Bell Laboratories, the US Air Force and the CIA, Sutherland's work included the first head-mounted display as well as stereo and see-through displays, head tracking, and a hand-held 3D cursor. Such devices have now become widespread and are used in areas as diverse as video game systems, rapid prototyping for industrial design and architecture, and scientific visualization. Boeing's new 777 airplane was designed electronically throughout, including CAD/CAM 3D models, windtunnel simulation, and virtual-reality-based accessibility studies. This digital design enabled Boeing to avoid $100 million mockups, and the plane came together with far fewer changes and far greater accuracy than any previous design, enabling Boeing to maintain its competitive edge. Sutherland's early VR work also had influence on flight simulators, and his company, Evans and Sutherland, Inc., pioneered the visual simulation market, now a major business for many companies.

2. Explain the following with the help of relevant real time applications:

A) Classification of Applications

B) Development of Hardware and Software for Computer Graphics

Ans –

A) Classification of Applications

In the last section we saw various uses of computer graphics. These uses can be classified as shown in the figure below. As the figure shows, the use of computer graphics can be classified according to the dimensionality of the object to be drawn: 2D or 3D. It can also be classified according to the kind of picture: symbolic or realistic. Many computer graphics applications are classified by the type of interaction, which determines the user's degree of control over the object and its image; in controllable interaction the user can change the attributes of the image. The role of the picture gives another classification: computer graphics is either used for representation, or it can be an end product in itself, such as a drawing. Pictorial representation gives the final classification; it classifies the use of computer graphics by how the picture is represented, such as line drawing, black and white, colour, and so on.


B) Development of Hardware and Software for Computer Graphics

The development of computer graphics hardware involves the development of input and output device technology. Overall, the development of computer graphics involves developments in three fields:

1. Output technology

2. Input technology and

3. Software technology

1.6.1 Output Technology

The figure below shows the historical development of output technology. In the early days of computing, hardcopy devices such as the teletype printer and the line printer were in use with computer-driven CRT displays. In the mid-fifties, command-and-control CRT display consoles were introduced. The display devices developed in the mid-sixties, and in common use until the mid-eighties, are called vector, stroke, line-drawing or calligraphic displays. The term vector is used as a synonym for line; a stroke is a short line, and characters are made of sequences of such strokes.

Architecture of a Vector Display


Vector Scan CRT

As shown in the figure above, a vector scan CRT display directly traces out only the desired lines on the CRT. If we want a line connecting point A with point B on the display, we simply drive the beam deflection circuitry, which causes the beam to go directly from point A to point B. If we want to move the beam from point A to point B without showing a line between the points, we blank the beam as we move it. To move the beam across the CRT, information about both magnitude and direction is required. This information is generated with the help of a vector graphics generator.

The figure below shows the typical vector display architecture. It consists of a display controller, a central processing unit (CPU), display buffer memory and a CRT. The display controller is connected as an I/O peripheral to the CPU. The display buffer memory stores the computer-produced display list or display program. The program contains point- and line-plotting commands with (x, y) or (x, y, z) end-point coordinates, as well as character-plotting commands. The display controller interprets commands for plotting points, lines and characters and sends digital point coordinates to a vector generator. The vector generator then converts the digital coordinate values to analog voltages for the beam-deflection circuits that displace the electron beam writing on the CRT's phosphor coating.

In vector displays the beam is deflected from end point to end point, hence this technique is also called random scan. As the beam strikes the phosphor it emits light, but the phosphor light decays after a few milliseconds; it is therefore necessary to cycle through the display list to refresh the phosphor at least 30 times per second to avoid flicker. Because the display buffer is used to store the display list and is used for refreshing, the display buffer memory is also called the refresh buffer.
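The display-list idea can be illustrated with a toy interpreter; the command names ("MOVE", "LINE") and the tuple format are my own illustrative assumptions, not from the text.

```python
# Hypothetical sketch of a vector (random-scan) display-list interpreter.
# A "MOVE" blanks the beam and repositions it; a "LINE" draws from the
# current beam position to the given end point.

def interpret_display_list(display_list):
    """Walk a display list and return the line segments the beam traces."""
    segments = []
    beam = (0, 0)  # current beam position
    for command, x, y in display_list:
        if command == "LINE":
            segments.append((beam, (x, y)))
        beam = (x, y)  # a "MOVE" just repositions the (blanked) beam
    return segments

# The controller would re-run this list 30+ times per second to refresh
# the phosphor; here we run it once.
program = [("MOVE", 10, 10), ("LINE", 50, 10), ("LINE", 50, 40), ("MOVE", 0, 0)]
print(interpret_display_list(program))
# Output: [((10, 10), (50, 10)), ((50, 10), (50, 40))]
```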

Architecture of a Raster Display

The figure below shows the architecture of a raster display. It consists of a display controller, central processing unit (CPU), video controller, refresh buffer, keyboard, mouse and CRT.

Architecture of a raster display

As shown in the figure above, the display image is stored in the form of 1s and 0s in the refresh buffer. The video controller reads this refresh buffer and produces the actual image on the screen. It does this by scanning one scan line at a time, from top to bottom and then back to the top.

Raster Scan CRT

In this method, the horizontal and vertical deflection signals are generated to move the beam all over the screen in a pattern shown in the Figure above.


Here, the beam is swept back and forth from left to right across the screen. When the beam is moved from left to right, it is ON; the beam is OFF when it is moved from right to left, as shown by the dotted line in the figure above.

When the beam reaches the bottom of the screen, it is turned OFF and rapidly retraced back to the top to start again. A display produced in this way is called a raster scan display. The raster scanning process is similar to reading successive lines on the page of a book: after the scanning of one line is complete, the electron beam flies back to the start of the next line and the process repeats. In a raster scan display, the screen image is maintained by repeatedly scanning the same image. This process is known as refreshing of the screen.

Vector scan display vs. raster scan display:

1. In a vector scan display the beam is moved between the end points of the graphics primitives. In a raster scan display the beam is moved all over the screen one scan line at a time, from top to bottom and then back to the top.

2. A vector display flickers when the number of primitives in the buffer becomes too large. In a raster display, the refresh process is independent of the complexity of the image.

3. In a vector display, scan conversion is not required. In a raster display, graphics primitives are specified in terms of their end points and must be scan-converted into their corresponding pixels in the frame buffer.

4. A vector display requires no scan conversion hardware. In a raster display, because each primitive must be scan-converted, real-time dynamics is far more computationally demanding and requires separate scan conversion hardware.

5. A vector display draws continuous and smooth lines. A raster display can display mathematically smooth lines, polygons and boundaries of curved primitives only by approximating them with pixels on the raster grid.

6. The cost of a vector display is higher; the cost of a raster display is lower.

7. A vector display only draws lines and characters. A raster display has the ability to display areas filled with solid colors or patterns.

Direct-view storage tubes

In the late sixties, the direct-view storage tube (DVST) was introduced in display technology. The DVST gives an alternative method of maintaining the screen image. A DVST uses a storage grid which stores the picture information as a charge distribution just behind the phosphor-coated screen.


The Figure below shows the general arrangement of the DVST. It consists of two electron guns: a primary gun and a flood gun.

Arrangement of DVST

The primary gun stores the picture pattern and the flood gun maintains the picture display. The primary gun produces high-speed electrons which strike the storage grid to draw the picture pattern. As the electron beam strikes the storage grid at high speed, it knocks electrons out of the storage grid, leaving behind a positive charge. The knocked-out electrons are attracted towards the collector. The net positive charge on the storage grid is precisely the picture pattern. The continuous low-speed electrons from the flood gun pass through the control grid and are attracted to the positively charged areas of the storage grid. The low-speed electrons then penetrate the storage grid and strike the phosphor coating without affecting the positive charge pattern on the storage grid. During this process the collector just behind the storage grid smooths out the flow of flood electrons.

Advantages of DVST

1. Refreshing of CRT is not required.

2. Because no refreshing is required, very complex pictures can be displayed at very high resolution without flicker.

3. It has a flat screen.

Disadvantages of DVST

1. They do not display colors and are available with only a single level of line intensity.

2. Erasing requires removal of the charge on the storage grid; thus the erasing and redrawing process takes several seconds.

3. Selective or partial erasing of the screen is not possible.


4. Erasing the screen produces an unpleasant flash over the entire screen surface, which prevents its use in dynamic graphics applications.

5. It has poor contrast as a result of the comparatively low accelerating potential applied to the flood electrons.

6. The performance of the DVST is somewhat inferior to that of the refresh CRT.

In the early seventies, inexpensive raster graphics displays were developed. Raster displays store the display primitives (such as lines, characters, and solid and filled patterns) in a refresh buffer in terms of their corresponding pixels.

Input Technology

Input technology has also improved greatly over the years. A number of input devices have been developed, including punch cards, light pens, keyboards, tablets, mice and scanners.

Software Technology

As with output and input technology, there has been a lot of development in software technology. In the early days only low-level software was available. Over the years, software technology moved from low-level to device-dependent and then to device-independent packages. Device-independent packages are high-level packages which can drive a wide variety of display and printer devices. The need for device-independent packages led to standardization, and specifications were agreed upon. The first graphics specification to be officially standardized was GKS (the Graphical Kernel System). GKS supports the grouping of logically related primitives such as lines, polygons, and character strings, together with their attributes, into collections called segments. In 1988 a 3D extension of GKS became an official standard, as did a much more sophisticated but even more complex graphics system called PHIGS (Programmer's Hierarchical Interactive Graphics System).

PHIGS, as its name implies, supports nested hierarchical grouping of 3D primitives, called structures. In PHIGS, all primitives are subject to geometric transformations such as scaling, rotation and translation to accomplish dynamic movement. PHIGS also supports a database of structures that the programmer may edit and modify, and it automatically updates the display whenever the database has been modified.


3. Explain the following with respect to Graphics Hardware:

A) Color and Grayscale Levels B) Video Mixing

C) Random scan display processor

Ans –

A) Color and Grayscale Levels

Various color and intensity-level options can be made available to a user, depending on the capabilities and design objectives of a particular system. General purpose raster-scan systems, for example, usually provide a wide range of colors, while random-scan monitors typically offer only a few color choices, if any. Color options are numerically coded with values ranging from 0 through the positive integers. For CRT monitors, these color codes are then converted to intensity level settings for the electron beams.

In a color raster system, the number of color choices available depends on the amount of storage provided per pixel in the frame buffer. Color information can be stored in the frame buffer in two ways: we can store color codes directly in the frame buffer, or we can put the color codes in a separate table and use pixel values as an index into this table. With the direct storage scheme, whenever a particular color code is specified in an application program, the corresponding binary value is placed in the frame buffer for each component pixel in the output primitives to be displayed in that color. A minimum set of colors can be provided in this scheme with 3 bits of storage per pixel, as shown in table 2.5.


Each of the three bit positions is used to control the intensity level (either on or off) of the corresponding electron gun in an RGB monitor. The leftmost bit controls the red gun, the middle bit controls the green gun, and the rightmost bit controls the blue gun. Adding more bits per pixel to the frame buffer increases the number of color choices. With 6 bits per pixel, 2 bits can be used for each gun. This allows four different intensity settings for each of the three color guns, and a total of 64 color values are available for each screen pixel. With a resolution of 1024 by 1024, a full-color (24 bits per pixel) RGB system needs 3 megabytes of storage for the frame buffer. Color tables are an alternate means for providing extended color capabilities to a user without requiring large frame buffers. Lower-cost personal computer systems, in particular, often use color tables to reduce frame-buffer storage requirements.
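The arithmetic in this paragraph can be checked with a short sketch (the function names are illustrative, not from the text):

```python
# Sketch of the storage arithmetic described above: the number of colors
# available for a given number of bits per pixel, and the frame-buffer size
# for a given resolution and bit depth.

def color_count(bits_per_pixel):
    return 2 ** bits_per_pixel

def frame_buffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(color_count(3))                      # 8 colors: 1 bit per RGB gun
print(color_count(6))                      # 64 colors: 2 bits per gun
print(frame_buffer_bytes(1024, 1024, 24))  # 3145728 bytes = 3 MB, as stated
```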

B) Video Mixing

The video controller provides the facility of video mixing, in which it accepts information from two images simultaneously: one from the frame buffer and the other from a television camera, recorder or other source. This is illustrated in fig. 2.7. The video controller merges the two received images to form a composite image.

Fig. 2.8: Video mixing

There are two types of video mixing. In the first, a graphics image is inset into a video image. Here mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to indicate that the video signal should be shown instead of the signal from the frame buffer; normally the designated pixel value corresponds to the background color of the frame buffer image.

In the second type of mixing, the video image is placed on top of the frame buffer image. Here, wherever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.
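The first kind of mixing can be sketched in a few lines; the flag value `BG = 0`, the nested-list image representation and the function name are all illustrative assumptions:

```python
# Minimal sketch of the first type of video mixing: a designated pixel value
# in the frame buffer (here BG = 0) acts as a flag meaning "show the video
# signal instead of the frame-buffer signal".

BG = 0  # designated "background" pixel value in the frame buffer

def mix(frame_buffer, video):
    """Overlay the frame-buffer image on the video image, pixel by pixel."""
    return [
        [v if f == BG else f for f, v in zip(f_row, v_row)]
        for f_row, v_row in zip(frame_buffer, video)
    ]

fb    = [[0, 7, 0],
         [0, 7, 0]]   # a graphics image: a vertical bar of value 7
video = [[1, 1, 1],
         [2, 2, 2]]   # the incoming video signal
print(mix(fb, video))  # [[1, 7, 1], [2, 7, 2]]
```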

C) Random Scan Display Processor

Fig. 2.8 shows the architecture of a random scan display system with a display processor. This architecture is similar to the display-processor-based raster system architecture, except for the frame buffer. In a random scan display no local memory is provided for scan conversion algorithms, since that functionality is typically implemented using PLAs (Programmable Logic Arrays) or microcode.


Fig. 2.9: Random Scan Display System

In random scan displays, the display processor has its own instruction set and instruction address register; hence it is also called a Display Processing Unit (DPU) or Graphics Controller. It performs the instruction fetch, decode and execute cycles found in any computer. To provide a flicker-free display, the display processor has to execute its program 30 to 60 times per second. The program executed by the display processor and the graphics package reside in the main memory, which is shared by the general CPU and the display processor.

4. Describe the theory of scan converting circles and the corresponding algorithms.

Ans –

A circle centred at the origin with radius r is given by the explicit equation y = ±sqrt(r^2 - x^2), or implicitly by x^2 + y^2 = r^2. The straightforward method of drawing a circle by stepping x and approximating the corresponding y values is inefficient (since it involves squaring and taking square roots at every point), and it gives an asymmetric distribution: the plotted pixels bunch together where the circle is nearly horizontal and spread apart where it is nearly vertical.


Figure: Straight forward scan converting a circle

We can make use of the 8-fold symmetry, so we only have to draw 1/8 of the circle, say the first octant from x = 0 to x = y.

Figure: 8-fold symmetry of the circle
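Each computed point (x, y) in the first octant yields eight circle points by symmetry. A minimal sketch (the function name is my own):

```python
# Mirror one first-octant point of a circle centred at the origin into all
# eight octants, exploiting the circle's 8-fold symmetry.

def eight_points(x, y):
    return [( x,  y), ( y,  x), (-x,  y), (-y,  x),
            (-x, -y), (-y, -x), ( x, -y), ( y, -x)]

# Computing only the octant from (0, r) down to x = y therefore covers the
# whole circle.
print(eight_points(1, 5))
```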

Scan Converting Circles

Given:

Center: (h,k)

Radius: r

Equation:

(x - h)^2 + (y - k)^2 = r^2


To simplify, we'll translate the origin to the center.

Simplified Equation:

x^2 + y^2 = r^2

The circle has 8-fold symmetry, so we only need to plot points in the 1st octant. In this octant Δx > Δy, so we step in the x direction.

Midpoint Circle Algorithm

Figure: Midpoint circle algorithm

The midpoint circle algorithm of Bresenham is analogous to that for straight lines and goes as follows. Starting at (x, y) = (0, r) with decision variable p = 1 - r, at each step we test the sign of p. If p < 0, the midpoint lies inside the circle, so we keep y and update

p := p + 2x + 3

and in the next step, if p >= 0, the midpoint lies outside the circle, so we decrement y and update

p := p + 2(x - y) + 5.

In both cases x is incremented by 1. One can speed things up by calculating the linear increments recursively.
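The algorithm can be written out as runnable Python, assuming the common integer formulation with initial decision value p = 1 - r:

```python
# Sketch of the midpoint circle algorithm for a circle of radius r centred
# at the origin. One octant is computed and mirrored into all eight octants.

def midpoint_circle(r):
    points = set()
    x, y = 0, r
    p = 1 - r  # initial decision parameter
    while x <= y:
        # mirror the octant point into all eight octants
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (-x, -y), (-y, -x), (x, -y), (y, -x)]:
            points.add((px, py))
        x += 1
        if p < 0:                 # midpoint inside the circle: keep y
            p += 2 * x + 1
        else:                     # midpoint outside: step y down
            y -= 1
            p += 2 * (x - y) + 1
    return points

pts = midpoint_circle(5)
print((5, 0) in pts and (3, 4) in pts and (0, -5) in pts)  # True
```

Because `x += 1` happens before the update, `2 * x + 1` here equals the `2x + 3` of the derivation above, and likewise `2 * (x - y) + 1` equals `2(x - y) + 5`.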

August 2010 Master of Computer Application (MCA) – Semester 3 MC0072 – Computer Graphics – 4 Credits (Book ID: B0810) Assignment Set – 2 (60 Marks)

1. Describe the theory of Polygon and Pattern filling along with their corresponding algorithms.

Ans –

Introduction to Polygons

There are different types of polygons:

• Simple convex

• Simple concave

• Non-simple (self-intersecting)

• With holes

• Convex

A region S is convex if for any x1 and x2 in S, the straight line segment connecting x1 and x2 is also contained in S. The convex hull of an object S is the smallest convex set H such that S is contained in H.
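The convexity definition above can be turned into a small test: a simple polygon is convex exactly when the cross products of consecutive edge vectors never change sign. A hedged sketch (function name is my own):

```python
# Convexity test for a simple polygon given as a vertex list. At every
# vertex we take the cross product of the incoming and outgoing edge
# vectors; if the turn direction ever flips, the polygon is concave.

def is_convex(vertices):
    n = len(vertices)
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn direction changed: concave
    return True

print(is_convex([(0, 0), (4, 0), (4, 4), (0, 4)]))  # True  (square)
print(is_convex([(0, 0), (4, 0), (1, 1), (0, 4)]))  # False (concave)
```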

Scan Line Polygon Fill Algorithms

• A standard output primitive in a general graphics package is a solid-color or patterned polygon area.

• There are two basic approaches to filling on raster systems.


• Determine overlap intervals for scan lines that cross the area.

• Start from a given interior point and paint outward from this point until we encounter the boundary

• The first approach is mostly used in general graphics packages, while the second is used in applications having complex boundaries and in interactive painting systems.

Seed Fill Algorithm

• These algorithms assume that at least one pixel interior to a polygon or region is known

• Regions may be interior-defined or boundary-defined

A Simple Seed Fill Algorithm

• Push the seed pixel onto the stack

• While the stack is not empty

• Pop a pixel from the stack


• Set the pixel to the required value

• For each of the 4-connected pixels adjacent to the current pixel, check whether it is a boundary pixel or whether it has already been set to the required value.

• In either case ignore it; otherwise push it onto the stack.

• The algorithm can also be implemented using 8-connected pixels

• It also works with holes in the polygons
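The steps above can be sketched as a runnable stack-based 4-connected fill; the grid encoding (0 empty, 1 boundary, 2 filled) is an illustrative assumption:

```python
# Stack-based seed fill for a boundary-defined region: BOUNDARY pixels
# stop the fill, FILL marks pixels already set to the required value.

BOUNDARY, FILL = 1, 2

def seed_fill(grid, seed):
    """Fill the region containing `seed` with FILL, stopping at BOUNDARY."""
    rows, cols = len(grid), len(grid[0])
    stack = [seed]                      # push the seed pixel onto the stack
    while stack:                        # while the stack is not empty
        x, y = stack.pop()              # pop a pixel from the stack
        if not (0 <= x < cols and 0 <= y < rows):
            continue
        if grid[y][x] in (BOUNDARY, FILL):
            continue                    # boundary or already set: ignore it
        grid[y][x] = FILL               # set the pixel to the required value
        # push the 4-connected neighbours
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
seed_fill(grid, (1, 1))
print(grid[1][1], grid[2][2])  # 2 2
```

Extending `stack.extend` with the four diagonal neighbours gives the 8-connected variant mentioned above.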

Scan Line Polygon Fill Algorithm

• For each scan line crossing the polygon, the intersections of the scan line with the polygon edges are determined.

• These intersection points are then sorted from left to right, and the corresponding frame buffer positions between each intersection pair are set to the specified color.

• In the given example, four pixel intersections define stretches from x = 10 to x = 14 and x = 18 to x = 24.

Some scan-line intersections at polygon vertices require special handling:

• A scan line passing through a vertex intersects two polygon edges at that position, adding two points to the list of intersections for the scan line.

• In the given example, scan line y intersects five polygon edges, while scan line y' intersects four edges even though it also passes through a vertex.

• y' correctly identifies the internal pixel spans, but scan line y needs some extra processing.

Figure: interior pixels along a scan line passing through a polygon area (intersections at x = 10, 14, 18 and 24)


• One way to resolve this is to shorten some polygon edges, splitting those vertices that should be counted as one intersection.

• When the endpoint y coordinates of the two edges are monotonically increasing, the y value of the upper endpoint of the current edge is decreased by 1.

• When the endpoint y values are monotonically decreasing, we decrease the y coordinate of the upper endpoint of the edge following the current edge.

The topological difference between scan line y and scan line y' is as follows:

• For scan line y, the two intersecting edges sharing a vertex are on opposite sides of the scan line.

• But for scan line y', the two intersecting edges are both above the scan line.

• Thus, the vertices that require additional processing are those that have connecting edges on opposite sides of the scan line.

• We can identify these vertices by tracing around the polygon boundary either in clock-wise or anti-clockwise order and observing the relative changes in vertex y coordinates as we move from one edge to the next.

Adjusting endpoint values for a polygon, as we process edges in order around the polygon perimeter. The edge currently being processed is indicated as a solid line. In (a), the y coordinate of the upper endpoint of the current edge is decreased by 1. In (b), the y coordinate of the upper endpoint of the next edge is decreased by 1.


• If the endpoint y values of two consecutive edges monotonically increase or decrease, we need to count the middle vertex as a single intersection point for any scan line passing through that vertex.

• Otherwise, the shared vertex represents a local extremum (min. or max.) on the polygon boundary, and the two edge intersections with the scan line passing through that vertex can be added to the intersection list
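The vertex rule above can be sketched in Python (an illustrative sketch, not from the course text; the function name is my own, and horizontal edges are assumed to have been excluded beforehand):

```python
# Sketch: classify polygon vertices for scan-line intersection counting.
# A vertex shared by two edges counts once if y is monotonic across it
# (edges on opposite sides of the scan line), and twice if it is a
# local extremum (both edges on the same side).

def classify_vertices(points):
    """For each vertex, return 1 (monotonic: count once) or
    2 (local min/max: count twice). `points` is a list of (x, y)
    tuples in boundary order; horizontal edges need separate handling."""
    n = len(points)
    counts = []
    for i in range(n):
        y_prev = points[(i - 1) % n][1]
        y_here = points[i][1]
        y_next = points[(i + 1) % n][1]
        if (y_prev < y_here < y_next) or (y_prev > y_here > y_next):
            counts.append(1)   # edges on opposite sides: single intersection
        else:
            counts.append(2)   # local extremum: keep both intersections
    return counts

# A diamond: its top and bottom vertices are local extrema in y,
# while its left and right vertices lie on monotonic chains.
diamond = [(0, 0), (2, 2), (4, 0), (2, -2)]
print(classify_vertices(diamond))   # [1, 2, 1, 2]
```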

The scan-conversion algorithm works as follows:

• Intersect each scanline with all edges

• Sort intersections in x

• Calculate parity of intersections to determine in/out

• Fill the “in” pixels

• Special cases to be handled:

• Horizontal edges should be excluded

• For vertices lying on scanlines,

Figure 3-36: Intersection points along the scan lines that intersect polygon vertices. Scan line y generates an odd number of intersections, but scan line y1 generates an even number of intersections that can be paired to correctly identify the interior pixel spans.


• count twice for a change in slope.

• Shorten edge by one scanline for no change in slope

• Coherence between scanlines tells us that

• Edges that intersect scanline y are likely to intersect y + 1

• x changes predictably from scanline y to y + 1

We have two data structures: the Edge Table (ET) and the Active Edge Table (AET)

• Traverse Edges to construct an Edge Table

• Eliminate horizontal edges

• Add edge to linked-list for the scan line corresponding to the lower vertex.

• Store the following:

• y_upper: last scanline to consider

• x_lower: starting x coordinate for edge

• 1/m: increment for x per scanline, computed as (x2 − x1)/(y2 − y1)

• Construct the Active Edge Table (AET) during scan conversion. The AET is a linked list of the edges active on the current scanline, y. Each active edge entry stores the following information:

• y_upper: last scanline to consider

• x_lower: edge’s intersection with current y

• 1/m: x increment

• The active edges are kept sorted by x


Algorithm

1. Set y to the smallest y coordinate that has an entry in the ET; i.e., y for the first nonempty bucket.

2. Initialize the AET to be empty.

3. Repeat until the AET and ET are empty:

3.1 Move from the ET bucket y to the AET those edges whose y_min = y (entering edges).

3.2 Remove from the AET those entries for which y = y_max (edges not involved in the next scanline), then sort the AET on x (made easier because the ET is presorted).

3.3 Fill in the desired pixel values on scanline y by using pairs of x coordinates from the AET.

3.4 Increment y by 1 (to the coordinate of the next scanline).

3.5 For each nonvertical edge remaining in the AET, update x for the new y.
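The steps above can be sketched compactly in Python (an illustrative sketch, not the course's implementation; it returns interior spans per scanline instead of setting pixels, and all names are my own):

```python
# Scan-line polygon fill using an edge table (ET) bucketed by each
# edge's lower y, and an active edge list. Horizontal edges are
# dropped, and each edge covers scanlines y_min .. y_max - 1, which
# shortens edges at shared vertices on monotonic chains.

def scanline_fill(points):
    """Return {y: [(x_start, x_end), ...]} of interior spans for a
    polygon given as integer (x, y) vertices in boundary order."""
    n = len(points)
    et = {}                                   # y_min -> list of edges
    for i in range(n):
        (x1, y1), (x2, y2) = points[i], points[(i + 1) % n]
        if y1 == y2:
            continue                          # exclude horizontal edges
        if y1 > y2:
            (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
        # each edge stores [y_max, current x, 1/m]
        et.setdefault(y1, []).append([y2, float(x1), (x2 - x1) / (y2 - y1)])

    spans, ael = {}, []
    y = min(et)
    while et or ael:
        ael += et.pop(y, [])                  # 3.1 entering edges
        ael = [e for e in ael if e[0] > y]    # 3.2 drop edges ending here
        ael.sort(key=lambda e: e[1])          #     and sort on x
        xs = [e[1] for e in ael]
        spans[y] = [(xs[i], xs[i + 1])        # 3.3 parity pairing: in/out
                    for i in range(0, len(xs) - 1, 2)]
        for e in ael:
            e[1] += e[2]                      # 3.5 x += 1/m (coherence)
        y += 1                                # 3.4 next scanline
    return spans

print(scanline_fill([(1, 1), (5, 1), (5, 4), (1, 4)])[2])   # [(1.0, 5.0)]
```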

Extensions:

a. Multiple overlapping polygons: priorities

b. Color, patterns, Z for visibility

2. Describe the following with respect to clipping in graphics:

A) Clipping Lines B) Clipping circles and ellipses

Ans –

A) Line Clipping

Figure 6-7 illustrates possible relationships between line positions and a standard rectangular clipping region. A line-clipping procedure involves several parts. First, we can test a given line segment to determine whether it lies completely inside the clipping window. If it does not, we try to determine whether it lies completely outside the window. Finally, if we cannot identify a line as completely inside or completely outside, we must perform intersection calculations with one or more clipping boundaries. We process lines through the "inside-outside" tests by checking the line endpoints. A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved. A line with both endpoints outside any one of the clip boundaries (line P3P4 in Fig. 6-7) is outside the window.


All other lines cross one or more clipping boundaries, and may require calculation of multiple intersection points. To minimize calculations, we try to devise clipping algorithms that can efficiently identify outside lines and reduce intersection calculations. For a line segment with endpoints (x1, y1) and (x2, y2) and one or both endpoints outside the clipping rectangle, the parametric representation

x = x1 + u(x2 − x1),  y = y1 + u(y2 − y1),  0 ≤ u ≤ 1

could be used to determine values of the parameter u for intersections with the clipping boundary coordinates. If the value of u for an intersection with a rectangle boundary edge is outside the range 0 to 1, the line does not enter the interior of the window at that boundary. If the value of u is within the range 0 to 1, the line segment does indeed cross into the clipping area. This method can be applied to each clipping boundary edge in turn to determine whether any part of the line segment is to be displayed. Line segments that are parallel to window edges can be handled as special cases. Clipping line segments with these parametric tests requires a good deal of computation, and faster approaches to clipping are possible. A number of efficient line clippers have been developed. Some algorithms are designed explicitly for two-dimensional pictures, and some are easily adapted to three-dimensional applications.

A few line-clipping algorithms are:

1. Cohen-Sutherland Line Clipping

2. Liang-Barsky Line Clipping

3. Nicholl-Lee-Nicholl Line Clipping
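The parametric test described above is the basic idea behind the Liang-Barsky approach. A minimal Python sketch (illustrative only; the function name and argument layout are my own):

```python
# For each window boundary, solve for the parameter u of the
# intersection and narrow the visible sub-range [u1, u2] of the
# segment. Parallel-and-outside segments are rejected immediately.

def clip_segment(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped endpoints, or None if the segment is invisible."""
    dx, dy = x2 - x1, y2 - y1
    u1, u2 = 0.0, 1.0
    # (p, q) pairs for the left, right, bottom and top boundaries
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:
            if q < 0:
                return None          # parallel to and outside this boundary
        else:
            u = q / p
            if p < 0:
                u1 = max(u1, u)      # entering intersection
            else:
                u2 = min(u2, u)      # leaving intersection
        if u1 > u2:
            return None              # no visible portion remains
    return (x1 + u1 * dx, y1 + u1 * dy, x1 + u2 * dx, y1 + u2 * dy)

# A segment crossing the right edge of a 10x10 window:
print(clip_segment(5, 5, 15, 5, 0, 0, 10, 10))   # (5.0, 5.0, 10.0, 5.0)
```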


B) Clipping circles and ellipses

Areas with curved boundaries can be clipped with methods similar to those discussed in the previous sections. Curve-clipping procedures will involve nonlinear equations, however, and this requires more processing than for objects with linear boundaries. The bounding rectangle for a circle or other curved object can be used first to test for overlap with a rectangular clip window. If the bounding rectangle for the object is completely inside the window, we save the object. If the rectangle is determined to be completely outside the window, we discard the object. In either case, no further computation is necessary. But if the bounding rectangle test fails, we can look for other computation-saving approaches. For a circle, we can use the coordinate extents of individual quadrants and then octants for preliminary testing before calculating curve-window intersections. For an ellipse, we can test the coordinate extents of individual quadrants. Figure 6-27 illustrates circle clipping against a rectangular window. Similar procedures can be applied when clipping a curved object against a general polygon clip region. On the first pass, we can clip the bounding rectangle of the object against the bounding rectangle of the clip region. If the two regions overlap, we will need to solve the simultaneous line-curve equations to obtain the clipping intersection points.
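The bounding-rectangle pre-test described above can be sketched in a few lines of Python (an illustrative sketch; the function name and return values are invented for this example):

```python
# Trivially accept or reject a circle against a rectangular clip
# window using its bounding rectangle, before any curve-window
# intersection math is attempted.

def circle_box_test(cx, cy, r, xmin, ymin, xmax, ymax):
    """Return 'inside', 'outside', or 'partial' for the circle's
    bounding rectangle against the clip window."""
    bx1, by1, bx2, by2 = cx - r, cy - r, cx + r, cy + r
    if bx1 >= xmin and bx2 <= xmax and by1 >= ymin and by2 <= ymax:
        return 'inside'     # save the whole circle, no more work
    if bx2 < xmin or bx1 > xmax or by2 < ymin or by1 > ymax:
        return 'outside'    # discard the whole circle, no more work
    return 'partial'        # fall through to quadrant/octant tests

print(circle_box_test(5, 5, 2, 0, 0, 10, 10))   # inside
```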

3. Describe the following with respect to Homogeneous Coordinates:

A) for Translation B) for Rotation C) for Scaling

Ans –

In many computer graphics applications we require a sequence of transformations. For example, in animation, at each step an object may have to be translated and then scaled. Similarly, in games, an object at a particular moment may have to be rotated as well as translated. That means we have to


perform a sequence of matrix operations, but the matrices we have seen in the previous lecture have forms that prevent them from being composed as a simple sequence of multiplications. However, with a slight reformulation we can bring them into a form where they can easily be combined in any sequence, and thus efficiency can be achieved.

Homogeneous Coordinates

All three basic transformations (translation, rotation and scaling) can be expressed by the following equation:

P' = M1 · P + M2

with coordinate positions P and P' represented as column vectors. Matrix M1 is a 2 by 2 array containing multiplicative factors, and M2 is a two-element column matrix containing translation terms. For translation, M1 is the identity matrix. For rotation or scaling, M2 contains the translational terms associated with the pivot point or scaling fixed point. To produce a sequence of transformations with these equations, such as scaling followed by rotation and then translation, we must calculate the transformed coordinates one step at a time. First, coordinate positions are scaled, then these scaled coordinates are rotated, and finally the rotated coordinates are translated.

Now the question is: can we find a way to eliminate the matrix addition associated with translation? Yes, we can, but for that M1 will have to be rewritten as a 3×3 matrix and the coordinate positions will have to be expressed as a homogeneous coordinate triple:

(x, y) as (xh, yh, h), where

x = xh / h,  y = yh / h

We can choose the h as any non-zero value. However, a convenient choice is 1, thus (x, y) has homogeneous coordinates as (x, y, 1). Expressing positions in homogeneous coordinates allows us to represent all geometric transformation equations as matrix multiplications. Coordinates are represented with three-element column vectors, and transformation operations are written as 3 by 3 matrices.

a) Translation with Homogeneous Coordinates

Translation can now be expressed using homogeneous coordinates as:

[x']   [1  0  tx] [x]
[y'] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]

Abbreviated as: P' = T(tx, ty) · P

b) Rotation with Homogeneous Coordinates

Rotation (by an angle θ about the origin) can now be expressed using homogeneous coordinates as:

[x']   [cos θ  −sin θ  0] [x]
[y'] = [sin θ   cos θ  0] [y]
[1 ]   [0       0      1] [1]


Abbreviated as: P' = R(θ) · P

c) Scaling with Homogeneous Coordinates

Scaling (with factors sx and sy relative to the origin) can now be expressed using homogeneous coordinates as:

[x']   [sx  0   0] [x]
[y'] = [0   sy  0] [y]
[1 ]   [0   0   1] [1]

Abbreviated as: P' = S(sx, sy) · P

Matrix representations are standard methods for implementing transformations in graphics systems. In many systems, rotation and scaling functions produce transformations with respect to the coordinate origin, as expressed in the equations above. Rotations and scalings relative to other reference positions are then handled as a succession of transformation operations.
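As a sketch of why this reformulation helps, the following Python (standard library only; all names are illustrative, not from the course text) builds the 3×3 homogeneous matrices for column vectors and shows a rotation and a translation collapsing into a single matrix product, with no separate addition step:

```python
import math

# Homogeneous 3x3 forms of T, R and S acting on column vectors
# (x, y, 1): a whole transformation sequence becomes one matrix.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, x, y):
    # multiply m by the column vector (x, y, 1), then dehomogenize
    xh, yh, h = (sum(row[j] * v for j, v in enumerate((x, y, 1)))
                 for row in m)
    return xh / h, yh / h

# Rotate 90 degrees about the origin, then translate by (10, 0),
# all as one combined matrix (rightmost transform applies first):
m = matmul(translate(10, 0), rotate(math.pi / 2))
print(apply(m, 1, 0))    # approximately (10.0, 1.0)
```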


4. Describe the theory and applications of Homogeneous Coordinates and

Matrix representation of 2D Transformations

Ans –

Homogeneous Coordinates:

One of the many purposes of using homogeneous coordinates is to capture the concept of infinity. In the Euclidean coordinate system, infinity is something that does not exist. Mathematicians have discovered that many geometric concepts and computations can be greatly simplified if the concept of infinity is used. This will become very clear when we move to curves and surfaces design. Without the use of homogeneous coordinates system, it would be difficult to design certain classes of very useful curves and surfaces in computer graphics and computer-aided design.

Let us consider two real numbers, a and w, and compute the value of a/w. Let us hold the value of a fixed and vary the value of w. As w gets smaller, the value of a/w gets larger. If w approaches zero, a/w approaches infinity. Thus, to capture the concept of infinity, we use two numbers a and w to represent a value v, with v = a/w. If w is not zero, the value is exactly a/w. Otherwise, we identify the infinite value with (a, 0). Therefore, the concept of infinity can be represented with a number pair like (a, w), or as a quotient a/w.

Let us apply this to the xy-coordinate plane. If we replace x and y with x/w and y/w, a function f(x, y) = 0 becomes f(x/w, y/w) = 0. If f(x, y) is a polynomial, multiplying it by w^n will clear all denominators, where n is the degree of the polynomial.

For example, suppose we have a line Ax + By + C = 0. Replacing x and y with x/w and y/w yields A(x/w) + B(y/w) + C = 0. Multiplying by w changes it to

Ax + By + Cw = 0.

Let the given equation be a second-degree polynomial Ax^2 + 2Bxy + Cy^2 + 2Dx + 2Ey + F = 0. After replacing x and y with x/w and y/w and multiplying the result by w^2, we have

Ax^2 + 2Bxy + Cy^2 + 2Dxw + 2Eyw + Fw^2 = 0

If you look at these two polynomials carefully, you will see that the degrees of all terms are equal. In the case of the line, the terms x, y and w are of degree one, while in the second-degree polynomial, all terms (i.e., x^2, xy, y^2, xw, yw and w^2) are of degree two.


Given a polynomial of degree n, after introducing w, all terms are of degree n. Consequently, these polynomials are called homogeneous polynomials and the coordinates (x,y,w) the homogeneous coordinates.

Given a degree n polynomial in a homogeneous coordinate system, dividing the polynomial by w^n and replacing x/w and y/w with x and y, respectively, will convert the polynomial back to a conventional one. For example, if the given degree 3 homogeneous polynomial is the following:

x^3 + 3xy^2 − 5y^2w + 10w^3 = 0

the result is

x^3 + 3xy^2 − 5y^2 + 10 = 0

This works in three dimensions as well. One can replace a point (x, y, z) with (x/w, y/w, z/w) and multiply the result by w raised to the appropriate power. The resulting polynomial is a homogeneous one. Converting a degree n homogeneous polynomial in x, y, z and w back to the conventional form is exactly identical to the two-variable case.
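A quick numeric check of the degree-3 example above (an illustrative Python sketch; the function names are my own): the homogenized cubic satisfies f(kx, ky, kw) = k^3 · f(x, y, w), and setting w = 1 recovers the conventional polynomial.

```python
# The homogenized cubic from the text, and its conventional form.

def f_hom(x, y, w):
    return x**3 + 3*x*y**2 - 5*y**2*w + 10*w**3

def f_conv(x, y):
    return x**3 + 3*x*y**2 - 5*y**2 + 10

x, y, w, k = 1.0, 2.0, 3.0, 2.0
# Every term has degree 3, so scaling all coordinates by k scales
# the value by k**3 (the defining property of homogeneity):
print(f_hom(k*x, k*y, k*w) == k**3 * f_hom(x, y, w))   # True
# Dividing by w**3 and substituting x/w -> x, y/w -> y is the same
# as evaluating at w = 1:
print(f_hom(x, y, 1.0) == f_conv(x, y))                # True
```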

The Dimensionality of Homogeneous Coordinates

You perhaps have discovered that homogeneous coordinates need 3 and 4 components to represent a point in the xy-plane and a point in space, respectively. Therefore, a point in space (resp., the xy-plane) in homogeneous coordinates actually has four (resp., three) components. Adding a fourth (resp., third) component whose value is 1 to the coordinates of a point in space (resp., the xy-plane) converts it to its corresponding homogeneous coordinates.

Ideal Points or Points at Infinity

As mentioned at the beginning of this section, homogeneous coordinates can easily capture the concept of infinity. Let a point (x, y) be fixed and convert it to homogeneous coordinates by multiplying with 1/w, giving (x/w, y/w, 1/w). Let the value of w approach zero; then (x/w, y/w) moves farther and farther away in the direction of (x, y). When w becomes zero, (x/w, y/w) moves to infinity. Therefore, we say that the homogeneous coordinate (x, y, 0) is the ideal point, or point at infinity, in the direction of (x, y).

Let us take a look at an example. Let (3,5) be a point in the xy-plane. Consider (3/w,5/w). If w is not zero, this point lies on the line y = (5/3) x. Or, if you like the vector form, (3/w,5/w) is a point on the line O +


(1/w)d, where the base point O is the coordinate origin and d is the direction vector <3,5>. Therefore, as w approaches zero, the point moves to infinity on the line. This is why we say (x,y,0) is the ideal point or the point at infinity in the direction of (x,y).

The story is the same for points in space, where (x,y,z,0) is the ideal point or point at infinity in the direction of (x,y,z).
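The (3, 5) example can be checked numerically (an illustrative Python sketch, not from the source text):

```python
# As w shrinks, the Cartesian point (3/w, 5/w) recedes along the
# line y = (5/3)x, so the homogeneous triple (3, 5, 0) names the
# ideal point in the direction <3, 5>.

for w in (1.0, 0.01, 0.0001):
    x, y = 3 / w, 5 / w
    # the point stays on the line while its distance from the origin grows
    print(round(x, 1), round(y, 1), abs(y - (5 / 3) * x) < 1e-6)
```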

The concept of homogeneous coordinates, and of points at infinity in a certain direction, will become very important when we discuss representations of curves and surfaces.

2-D Geometric Transformations:

We have seen that basic transformations can be expressed in matrix form. But many graphics applications involve sequences of geometric transformations. Hence we need a general matrix form to represent such transformations. This can be expressed as:

P' = P · T1 + T2

where P and P' represent row vectors, T1 is a 2 by 2 array containing multiplicative factors, and T2 is a two-element row matrix containing translation terms.

We can combine the multiplicative and translational terms for 2D geometric transformations into a single matrix representation by expanding the 2 by 2 matrix representations to 3 by 3 matrices. This allows us to express all transformation equations as matrix multiplications, provided that we also expand the matrix representations for coordinate positions. To express any 2D transformation as a matrix multiplication, we represent each Cartesian coordinate position (x, y) with the homogeneous coordinate triple (xh, yh, h), such that

x = xh / h,  y = yh / h

Thus, a general homogeneous coordinate representation can also be written as (h·x, h·y, h). For 2D geometric transformations, we can choose the homogeneous parameter h to be any nonzero value. Thus, there is an infinite number of equivalent homogeneous representations for each coordinate point (x, y). A convenient choice is simply to set h = 1. Each 2D position is then represented with homogeneous coordinates (x, y, 1). Other values for parameter h are needed, for example, in matrix formulations of 3D viewing transformations.

Expressing positions in homogeneous coordinates allows us to represent all geometric transformation equations as matrix multiplications. Coordinates are represented with three element row vectors and transformation operations are written as 3 by 3 matrices.


For translation, we have

[x' y' 1] = [x y 1] · [1   0   0]
                      [0   1   0]
                      [tx  ty  1]

or P' = P · T(tx, ty).

Similarly, for the rotation transformation, we have

[x' y' 1] = [x y 1] · [ cos θ  sin θ  0]
                      [−sin θ  cos θ  0]
                      [ 0      0      1]

or P' = P · R(θ).

Finally, for the scaling transformation, we have

[x' y' 1] = [x y 1] · [sx  0   0]
                      [0   sy  0]
                      [0   0   1]

or P' = P · S(sx, sy).
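The row-vector convention of this section can be sketched as follows (illustrative Python; the helper name is my own). Note that with row vectors the translation terms sit in the bottom row of the matrix, the transpose of the column-vector form used earlier:

```python
# Row-vector convention: P' = P . M with P = [x, y, 1].

def row_apply(p, m):
    # multiply the 1x3 row vector p by the 3x3 matrix m
    return [sum(p[k] * m[k][j] for k in range(3)) for j in range(3)]

T = [[1,  0, 0],
     [0,  1, 0],
     [4, -2, 1]]                     # translate by (4, -2)
print(row_apply([1, 1, 1], T))       # [5, -1, 1]
```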