
Point Graphics Renderer

Ye Pei

BSc Computer Science (hons)

2004


Point Graphics Renderer

Submitted by Ye Pei

COPYRIGHT

Attention is drawn to the fact that copyright of this thesis rests with its author. The Intellectual Property Rights of the products produced as part of the project belong to the University of Bath (see http://www.bath.ac.uk/ordinances/#intelprop). This copy of the thesis has been supplied on condition that anyone who consults it is understood to recognise that its copyright rests with its author and that no quotation from the thesis and no information derived from it may be published without the prior written consent of the author.

Declaration

This dissertation is submitted to the University of Bath in accordance with the requirements of the degree of Bachelor of Science in the Department of Computer Science. No portion of the work in this dissertation has been submitted in support of an application for any other degree or qualification of this or any other university or institution of learning. Except where specifically acknowledged, it is the work of the author.


Abstract

Modern laser range and 3D scanning technologies have enabled the production of models containing hundreds of millions of point samples. Traditional algorithms are impractical for managing such huge amounts of data and cannot render them efficiently. Point-based rendering algorithms are better suited to handling large data sets and are particularly effective for rendering objects with uniformly sized geometric detail. We present a simple point-based renderer and demonstrate its efficiency in producing pictures from points as the input.

With many thanks to my supervisor

Professor Phil Willis


Contents

1 Introduction
  1.1 Why use points as rendering primitives?
  1.2 The goal
2 Literature Review
  2.1 A brief history
  2.2 Point-based rendering vs. image-based rendering
3 Point-based Rendering
  3.1 The algorithm
  3.2 Data structure
  3.3 Other issues
    3.3.1 Splat shape
    3.3.2 How to deal with the lack of connectivity
4 Our Point Graphics Rendering System
  4.1 Evolution of the system
  4.2 Perspective projections
  4.3 The z-buffer algorithm
  4.4 Experimental results
  4.5 Possible future work
5 Conclusions
Bibliography
Appendix A


1 Introduction

1.1 Why use points as rendering primitives?

In computer graphics there are many possible surface representations, for example implicit surfaces, polygons, triangles and points. Complex primitives such as Bézier patches are better suited to modelling, whilst simpler primitives are suitable for rendering. Currently, triangles are the most common rendering primitive.

Advances in modern laser range and image-based scanning technologies have enabled the production of some of the most complex models to date. One of the challenges with these techniques is handling the huge volume of point samples they generate. A commonly used approach is to generate triangle meshes from the point data and render them using mesh reduction techniques [Hoppe92]. However, current workstations cannot display meshes containing the hundreds of millions of samples which are practically attainable by 3D scanning systems. The running time and space requirements of traditional mesh simplification and progressive display algorithms make this approach impractical for scanned meshes of more than a few million samples. Moreover, many such techniques focus on optimising the placement of individual edges and vertices. They expend a relatively large amount of effort per vertex, while in scanned data the locations of the large number of vertices are often imprecise due to noise. Some applications cannot tolerate the inherent loss in geometric accuracy and texture fidelity which comes from polygon reduction.

This suggests an alternative approach that takes points as rendering primitives and treats individual points as relatively unimportant and unconnected to each other. As a consequence, less effort is spent per primitive. Recent research has developed algorithms (see Chapter 2) employing this paradigm. These algorithms with low per-primitive cost do not treat range data as exact, and in fact do not preserve the 3D locations of any samples of the original mesh.

1.2 The goal

In this paper we describe point rendering in detail and present a simple point-based rendering system that demonstrates some basic ideas of point rendering algorithms. The system was developed incrementally over some time: it started by rendering only basic points, and was eventually able to produce good pictures in reasonable time.


2 Literature Review

2.1 A brief history

The idea of using points as rendering primitives has evolved over time. The use of points to model solid objects was first suggested over two decades ago by Csuri et al. [Csuri79]. In 1985 Levoy and Whitted briefly introduced the use of points as a display primitive for continuous surfaces [Levoy85]. A few years later Cline et al. generated surface points from computed tomography data in order to quickly render parts of the human anatomy [Cline88]. In 1995 Max and Ohsaki used point samples, stored with colour, depth and normal information and obtained from orthographic views, to model and render trees [Max95]. Point sample rendering was revisited by Grossman and Dally in 1998 [Grossman98].

Recent efforts have focused on direct rendering techniques for point samples without connectivity. These use hierarchical data structures and forward warping to store and render the point data efficiently. Rusinkiewicz and Levoy described QSplat, a system for representing and progressively displaying meshes that combines a multi-resolution hierarchy based on bounding spheres with a rendering system based on points [Rusinkiewicz00]. One year later, Zwicker et al. proposed a new point rendering technique called surface splatting, focusing on high-quality texture filtering [Zwicker01].

2.2 Point-based rendering vs. image-based rendering

Image-based rendering is a field with many diverse approaches and is difficult to categorise. One common thread that runs through all the methods is pre-calculation: a representation of the scene is computed in advance, and images are derived from it at run-time, which is where these methods gain their speed.

As Foley et al. define in [Foley96],

“Images, … , are (at the most basic level) arrays of values, where a value is a collection of numbers describing the attributes of a pixel in the image.”

Image-based rendering can be very expensive in memory, since a large number of images are required to sample an object properly. Scenes containing multiple objects are therefore particularly difficult to render with image-based techniques; this memory cost is an obvious problem that one encounters with the image-based rendering paradigm.

Another basic problem in image-based rendering is that we may end up with geometric artifacts when reconstructing a novel view of an object from existing views without knowing anything about the object's geometry. The reason is that, in image-based graphics, views of an object are sampled as collections of view-dependent image pixels. Moreover, another side effect of rendering without the complete geometry information available in a polygon system appears: dynamic lighting cannot be used. This may be seen as an advantage, since static lighting is free and requires no computation at render time, while dynamic lighting may cause problems during implementation. However, there are still certain applications which demand dynamic lighting.

Unlike image-based rendering, point sample rendering can avoid the above problems without sacrificing rendering speed, because the views are sampled as a collection of infinitesimal view-independent surface points. For each point we can store the relevant information in its data structure. By storing an exact depth for each point, geometric artifacts can be avoided. By storing the normal, shininess and specular colour, dynamic lighting can be implemented. Finally, memory requirements can be greatly reduced by the use of a view-independent primitive: instead of sampling a given surface element from all directions, as image-based techniques do, we can simply sample it once and then compute its appearance from an arbitrary direction using the current lighting model.

Point-based rendering can therefore be considerably more efficient than image-based rendering, and it has been incorporated into commercial products in recent years.


3 Point-based Rendering

3.1 The algorithm

There are many algorithms for different point-based rendering systems, focusing on different parts of the modelling or rendering process. QSplat [Rusinkiewicz00] is a fast, efficient, purely point-based rendering system for large models containing millions of triangles. Our system is based on this multi-resolution point rendering system, so it is essential to present some basic QSplat algorithms and data structures here. Fig 3.1, from [1], shows the basic structure of QSplat: the original triangular mesh goes through a preprocessing step that constructs a hierarchical data structure, and the preprocessed data together with view information is then used to produce the final image.

Original Mesh -> (preprocessing) -> Preprocessed Data; Preprocessed Data + View Information -> Final Image

Figure 3.1: Basic structure of QSplat

3.2 Data structure

The data structure is an essential part of the system. QSplat adopts a hierarchy of bounding spheres [Rubin80] for visibility culling, level-of-detail control and rendering.

3.3 Other issues


3.3.1 Splat shape

The splat shape is the kernel used to represent a rendered point sample, and it has a significant effect on the quality of the final image. A small dot, a square, a rectangle, a circle, an ellipse: all of these can be used to draw a splat during rendering, but they result in different rendered image quality. In Fig 3.2 [Rusinkiewicz00], Rusinkiewicz and Levoy first choose a non-antialiased OpenGL point, which is rendered as a square; this is the simplest and fastest option. A second choice is an opaque circle, which may be rendered as a group of small triangles or as a single texture-mapped polygon. The third picture uses a fuzzy spot, with an alpha that falls off radially with a Gaussian or some approximation to one. These comparisons are made at both constant splat size and constant running time.

Figure 3.2 : Choice for splat shapes

The splat shape can also change with the normal at each node. In the above example the splats are always round (or square in the case of OpenGL points) or elliptical, but the normal at each node can be used to determine the eccentricity and orientation of the ellipse. When the normal points towards the viewer, the splat is circular. Otherwise, as mentioned in [Rusinkiewicz00], the minor axis of each ellipse points along the projection of the normal onto the viewing plane, and the ratio of minor to major axes equals n·v, where n is the normal of the splat and v is a vector pointing towards the viewer. Changing the splat shape with the direction of the normal brings real benefits: the quality of silhouette edges is improved compared to circular splats, and noise and the thickening of silhouettes are reduced.

3.3.2 How to deal with the lack of connectivity?

As mentioned in Chapter 1, there are no connections between points in point rendering systems. This may cause problems during rendering: holes may be left between splats if the splats are not large enough. QSplat makes the size of the sphere at a vertex equal to the maximum size of the bounding spheres of all triangles that touch that vertex, which guarantees that neighbouring splats overlap and no holes appear.


4 Our Point Graphics Rendering System

4.1 Evolution of the system

The system we present here has evolved over some time, although it is still not very mature. We started with the shape of a cylinder and worked in the following steps:

• Generate the 3D coordinates of points on the surface of a cylinder.
• Perform back-face culling, eliminating those points which would not be seen.
• Use perspective projection to generate 2D coordinates of the cylinder.
• Render the 2D points with the aid of a pixel-based graphics library called Gigalib.

Figure 4.1: Cylinder1

Fig 4.1 shows the red points, which form a perfect cylinder. Since the eye position and the bottom of the cylinder both lie on the z-axis, the cylinder looks flat at the bottom but curved at the top in a perspective view. In the second stage, we predicted the location of the patch that covers each point. For simplicity, a rectangle was chosen as the patch shape, since the positions of the four points belonging to the rectangle can easily be calculated and the patch can be drawn by connecting those four points.


Figure 4.2: Cylinder 2

In Fig 4.2, the points drawn in black are those used for forming patches. We can easily see that the patches are centred on the red points. Fig 4.3 shows the effect of rendering those patches in grey.

Figure 4.3: Cylinder 3

We chose to render in grey scale for simplicity. In the next step, we wanted to show the intensity of the surface decreasing as the normal turns away from the viewer; a change of intensity results in a different level of grey. A grey colour is composed of equal values in the red, green and blue components, so only one value, rather than three, needs to be determined, which greatly simplifies the problem. This value ranges from 0 to 255.

The ratio of intensity ranges from 0 to 1: as on a grey-scale plate, it runs from black to white, and every other grey value lies somewhere in between. When the normal of a patch points exactly towards the viewer, the patch reaches its highest intensity and appears white. Otherwise the patch appears darker and darker as θ, the angle between the surface normal and the vector from the centre of the cylinder towards the viewer, gets larger and larger. In mathematical terms, the ratio of intensity is proportional to cos θ.

Figure 4.4: Cylinder 4

Fig 4.4 shows the effect of different grey scales.

Figure 4.5: Cylinder 5

Based on the existing system, some refinements were made to produce Fig 4.5. The number of points in the cylinder was doubled. Moreover, the scale and shift applied to the cylinder and the eye position were all changed to make a bigger image. The colour of the boundary lines between patches was set to the patch colour, which makes those lines invisible.

4.2 Perspective projections


In general, projections transform points in a coordinate system of dimension n into points in a coordinate system of dimension less than n. What we use in our system is the projection from 3D to 2D. The projection of a 3D object is defined by straight projection rays, called projectors, emanating from a centre of projection (COP), passing through each point of the object and intersecting a projection plane to form the projection, as shown in Fig 4.6. (xp, yp) is the image that P = (x, y, z) projects onto the projection plane.

Figure 4.6: Perspective Projections

We know x, y and z are the 3D coordinates of P, and d is the distance from the COP to the projection plane. The projection plane is the xy-plane at z = 0 and the centre of projection is at z = -d. To calculate Pp = (xp, yp, 0), the perspective projection of (x, y, z), similarity of triangles gives us

    xp / d = x / (d + z),    yp / d = y / (d + z).    (1)

Multiplying through by d, we get

    xp = d·x / (d + z) = x / ((z/d) + 1),    yp = d·y / (d + z) = y / ((z/d) + 1).    (2)

This is the function we used to calculate the 2D coordinates of 3D points.

4.2.1 Splat shape

A rectangle is used as the splat shape in our system. There are many other options, such as an opaque circle, a fuzzy spot, etc., but the graphics library we used is pixel-based and supports only limited structured drawing routines. However, the shape of a patch is not fixed; it changes with the patch normal, which actually gives quite a nice 3D effect.


Furthermore, the grey-level intensity of individual patches varies as well. It is determined by whether the patch faces the viewer and by the angle between the patch normal and the horizontal vector from the cylinder centre to the eye. In the future we may try different splat shapes, the patch colour could be varied beyond grey, and lighting effects could be included. This would certainly produce more realistic pictures.

4.3 The z-buffer algorithm

The z-buffer, or depth-buffer, image-precision algorithm is one of the simplest visible-surface algorithms to implement in either software or hardware. It requires a frame buffer, in which colour values are stored, and a z-buffer with the same number of entries, in which a z value is stored for each pixel. The frame buffer is initialised to the background colour; during the scan-conversion process a pixel's colour is replaced whenever a point with a smaller z value arrives, and the z-buffer records that smaller z value as well. We adopt the z-buffer algorithm to implement hidden-surface removal. The z-buffer is initialised to a value larger than any depth in the scene, and holds for each pixel (x, y) the smallest z value so far encountered. For each input point (x', y', z') falling on pixel (x, y), z' is compared with the stored z; the record of the point with the smaller z is kept and the other is discarded. The following is pseudocode for this simplified z-buffer algorithm:

void zBuffer() {
    /* Initialise the z-buffer to the far plane. */
    for (y = 0; y < YMAX; y++)
        for (x = 0; x < XMAX; x++)
            WriteZ(x, y, ZFAR);

    /* Examine each point; keep only the nearest one per pixel. */
    for (each point (x', y', z'))
        if (z' < ReadZ(x', y'))
            WriteZ(x', y', z');
}


4.4 Experimental results

Due to the time limit, we only tried increasing the number of points in the cylinder to test the practicality and efficiency of our system. In Fig 4.7, the picture on the left was drawn with 128 points on each layer of the cylinder, whereas the one on the right was drawn with 256 points per layer. The program still compiled and ran quickly without errors, and produced quite satisfactory pictures.

Figure 4.7: Cylinder when CP=128 and CP=256

4.5 Possible future work

Much work could be done on the basis of our system. We could try models other than the cylinder. The renderer could be separated from the program and redesigned to accept large data sets. The data structure could be made richer, including position, normal, the width of a normal cone, colour, etc.; the renderer could then make good use of that information and implement better and more efficient algorithms. The patch shape could also vary: different splat shapes affect the quality of the pictures produced, and some may speed the program up while others allow more realistic results. We could add lighting effects and work on the colour of individual patches as well, calculating the intensities of the red, green and blue components separately.


5 Conclusions

Point rendering has been incorporated into commercial products and is used in more specialised contexts such as rendering fire, smoke and trees.


Bibliography

Journals:

[Hoppe92] Hoppe, H., DeRose, T., Duchamp, T., McDonald, J. and Stuetzle, W., "Surface Reconstruction from Unorganized Points", SIGGRAPH 92 Proceedings, Chicago, July 1992.

[Cline88] Cline, H.E., Lorensen, W.E., Ludke, S., Crawford, C.R. and Teeter, B.C., "Two Algorithms for the Three-dimensional Reconstruction of Tomograms", Medical Physics, Vol. 15, No. 3, May-June 1988, pp. 320-327.

[Csuri79] Csuri, C., Hackathorn, R., Parent, R., Carlson, W. and Howard, M., "Towards an Interactive High Visual Complexity Animation System", Computer Graphics (SIGGRAPH '79 Proceedings), Vol. 13, No. 2, August 1979, pp. 289-299.

[Grossman98] Grossman, J.P. and Dally, W., "Point Sample Rendering", in Rendering Techniques '98, pp. 181-192, Springer, Vienna, Austria, July 1998.

[Levoy85] Levoy, M. and Whitted, T., "The Use of Points as a Display Primitive", Technical Report TR 85-022, Department of Computer Science, University of North Carolina at Chapel Hill, 1985.

[Max95] Max, N. and Ohsaki, K., "Rendering Trees from Precomputed Z-Buffer Views", 6th Eurographics Workshop on Rendering, June 1995, pp. 45-54.

[Rusinkiewicz00] Rusinkiewicz, S. and Levoy, M., "QSplat: A Multiresolution Point Rendering System for Large Meshes", in Proceedings of SIGGRAPH 2000.

[Rubin80] Rubin, S.M. and Whitted, T., "A 3-Dimensional Representation for Fast Rendering of Complex Scenes", Proc. SIGGRAPH, 1980.

[Zwicker01] Zwicker, M., Pfister, H., van Baar, J. and Gross, M., "Surface Splatting", in Proceedings of SIGGRAPH 2001.


Books:

[Foley96] Foley, J.D., van Dam, A., Feiner, S.K. and Hughes, J.F., Computer Graphics: Principles and Practice, 2nd edition, 1996, p. 816.

Webography

[1] Nguyen, M.X., "Point Rendering Systems: QSplat", presentation slides (May 2004).
URL: http://www-users.itlabs.umn.edu/classes/Fall-2001/csci5980/notes/point.ppt


Appendix A

The attached CD includes v1.c, v2.c, v3.c, v4.c and v5.c, together with the .tga files they produced and the graphics library needed to build the programs.

/* This is v5.c; it produced the file cylinder5.tga. */
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "./lib_G_c.h"

#define Pi 3.14159265
#define Eye 20
#define R 3
#define CP 32   /* Increased number of cylinder points. */

/* Colours */
/*                       A R G B  */
#define Black     (0x00000000)
#define Dark_Grey (0x00999999)
#define Grey      (0x00DDDDDD)
#define White     (0x00FFFFFF)
#define Blue      (0x000000FF)
#define Green     (0x0000FF00)
#define Red       (0x00FF0000)
#define Dark_Red  (0x00990000)
#define Yellow    (0x00FFFF00)
#define Magenta   (0x00FF00FF)

struct point3D { float x; float y; float z; };
struct point2D { float x; float y; };


int main ()
{
    int n, i, k, NPOINTS, m, num;
    float j, theta, alpha, radius, beta;
    WORD32 drawing, brush;
    WORD32 size = 256, depth = 24;
    PIXEL colour, r;
    struct point3D cylinder[1000];
    struct point2D perspec[1000];
    struct point3D patch[1000];
    CO_ORD patch2D[1000];
    CO_ORD array[4];

    /* Define points of the cylinder in 3D space; cull back faces. */
    i = 0;
    for (n = CP/4; n <= CP*3/4; n++) {
        theta = (2*Pi/CP)*n;
        for (j = 0.0; j <= 5.0; j += 0.5) {
            cylinder[i].x = R*sin(theta);
            cylinder[i].y = j;
            cylinder[i].z = R*cos(theta) + 6.0;
            ++i;
        }
    }
    NPOINTS = i;

    num = 0;
    alpha = 2*Pi/(CP*2);
    radius = R/cos(alpha);
    for (n = CP*2/4 - 1; n <= CP*2*3/4 + 1; n += 2)
        for (j = -0.25; j < 5.5; j += 0.5) {
            patch[num].x = radius*sin(alpha*n);
            patch[num].y = j;
            patch[num].z = radius*cos(alpha*n) + 6.0;
            ++num;
        }

    /* Use perspective projection to obtain 2D data. */
    /* Scale to enlarge the image and move it to the centre. */
    for (k = 0; k < NPOINTS; k++) {
        perspec[k].x = 30*(cylinder[k].x)/(1.0 + cylinder[k].z/Eye) + size/2.0;
        perspec[k].y = 30*(cylinder[k].y)/(1.0 + cylinder[k].z/Eye) + size/3.0;
        printf("x1= %f y1= %f \n", perspec[k].x, perspec[k].y);
    }


    for (k = 0; k < num; k++) {
        patch2D[k].x = (int)(30*(patch[k].x)/(1.0 + patch[k].z/Eye) + size/2.0);
        patch2D[k].y = (int)(30*(patch[k].y)/(1.0 + patch[k].z/Eye) + size/3.0);
        printf("x2= %d y2= %d \n", patch2D[k].x, patch2D[k].y);
    }

    printf("\nSimple cylinder plot programme\n\n");
    (void) g_start();
    drawing = g_open_canvas("cylinder5.tga", RECYCLE, size, size, depth);
    brush = g_open_brush();
    if (drawing < 0) {
        printf("failed to get the drawing canvas %d\n", drawing);
        exit(EXIT_FAILURE);
    }
    if (brush < 0) {
        printf("failed to create a new brush package %d\n", brush);
        exit(EXIT_FAILURE);
    }
    (void) g_set_fill_type(brush, SOLID_FILL);
    g_erase_canvas(drawing, White);   /* Make the canvas of uniform colour. */

    /* Draw points of the cylinder. */
    for (m = 0; m < NPOINTS; m++)
        g_pixel_write(drawing, perspec[m].x, perspec[m].y, (PIXEL) Red);

    /* Draw points which are used for building up the facets of the cylinder. */
    for (m = 0; m < num; m++)
        g_pixel_write(drawing, patch2D[m].x, patch2D[m].y, (PIXEL) Dark_Grey);

    /* Draw cylinder facets centred on cylinder points. */
    /* The intensities of the facets vary. */
    for (k = 1; k <= CP/2 + 1; k++) {
        beta = Pi/2 - (2*Pi/CP)*(k-1);


        if (beta < 0)
            beta = -beta;
        printf("beta= %f\n", beta);
        r = (0xFF) * cos(beta);
        printf("r= %x\n", r);
        colour = r + r*(0xFF00/0xFF) + r*(0xFF0000/0xFF);
        printf("colour= %x\n", colour);
        (void) g_set_fill_colour(brush, colour);
        (void) g_set_line_colour(brush, colour);
        for (m = (k-1)*12; m <= 10 + (k-1)*12; m++) {
            array[0] = patch2D[m];
            array[1] = patch2D[m+1];
            array[2] = patch2D[m+13];
            array[3] = patch2D[m+12];
            g_polygon(drawing, brush, array, 4);
        }
    }

    (void) g_close_brush(brush);
    printf("\nclose status= %d \n", g_close_canvas(drawing));
    (void) g_end();
    return (EXIT_SUCCESS);
}

/* This is v4.c; it produced the file cylinder4.tga. */
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "./lib_G_c.h"

#define Pi 3.14159265
#define Eye 20
#define R 3
#define CP 16

/* Colours */
/*                       A R G B  */


#define Black     (0x00000000)
#define Dark_Grey (0x00999999)
#define Grey      (0x00DDDDDD)
#define White     (0x00FFFFFF)
#define Blue      (0x000000FF)
#define Green     (0x0000FF00)
#define Red       (0x00FF0000)
#define Dark_Red  (0x00990000)
#define Yellow    (0x00FFFF00)
#define Magenta   (0x00FF00FF)

struct point3D { float x; float y; float z; };
struct point2D { float x; float y; };

int main ()
{
    int n, i, k, NPOINTS, m, num;
    float j, theta, alpha, radius, beta;
    WORD32 drawing, brush;
    WORD32 size = 256, depth = 24;
    PIXEL colour, r;
    struct point3D cylinder[500];
    struct point2D perspec[500];
    struct point3D patch[1000];
    CO_ORD patch2D[1000];
    CO_ORD array[4];

    /* Define points of the cylinder in 3D space; cull back faces. */
    i = 0;
    for (n = CP/4; n <= CP*3/4; n++) {
        theta = (2*Pi/CP)*n;
        for (j = 0.0; j <= 5.0; j += 0.5) {
            cylinder[i].x = R*sin(theta);
            cylinder[i].y = j;
            cylinder[i].z = R*cos(theta) + 6.0;
            ++i;
        }
    }


    NPOINTS = i;

    num = 0;
    alpha = 2*Pi/(CP*2);
    radius = R/cos(alpha);
    for (n = CP*2/4 - 1; n <= CP*2*3/4 + 1; n += 2)
        for (j = -0.25; j < 5.5; j += 0.5) {
            patch[num].x = radius*sin(alpha*n);
            patch[num].y = j;
            patch[num].z = radius*cos(alpha*n) + 6.0;
            ++num;
        }

    /* Use perspective projection to obtain 2D data. */
    /* Scale to enlarge the image and move it to the centre. */
    for (k = 0; k < NPOINTS; k++) {
        perspec[k].x = 20*(cylinder[k].x)/(1.0 + cylinder[k].z/Eye) + size/2.0;
        perspec[k].y = 20*(cylinder[k].y)/(1.0 + cylinder[k].z/Eye) + size/2.0;
        printf("x1= %f y1= %f \n", perspec[k].x, perspec[k].y);
    }

    for (k = 0; k < num; k++) {
        patch2D[k].x = (int)(20*(patch[k].x)/(1.0 + patch[k].z/Eye) + size/2.0);
        patch2D[k].y = (int)(20*(patch[k].y)/(1.0 + patch[k].z/Eye) + size/2.0);
        printf("x2= %d y2= %d \n", patch2D[k].x, patch2D[k].y);
    }

    printf("\nSimple cylinder plot programme\n\n");
    (void) g_start();
    drawing = g_open_canvas("cylinder4.tga", RECYCLE, size, size, depth);
    brush = g_open_brush();
    if (drawing < 0) {
        printf("failed to get the drawing canvas %d\n", drawing);
        exit(EXIT_FAILURE);
    }
    if (brush < 0) {
        printf("failed to create a new brush package %d\n",


               brush);
        exit(EXIT_FAILURE);
    }
    (void) g_set_fill_type(brush, SOLID_FILL);
    (void) g_set_line_colour(brush, Grey);
    g_erase_canvas(drawing, White);   /* Make the canvas of uniform colour. */

    /* Draw points of the cylinder. */
    for (m = 0; m < NPOINTS; m++)
        g_pixel_write(drawing, perspec[m].x, perspec[m].y, (PIXEL) Red);

    /* Draw points which are used for building up the facets of the cylinder. */
    for (m = 0; m < num; m++)
        g_pixel_write(drawing, patch2D[m].x, patch2D[m].y, (PIXEL) Dark_Grey);

    /* Draw cylinder facets centred on cylinder points. */
    /* The intensity of the facets varies. */
    for (k = 1; k < 10; k++) {
        beta = Pi/2 - (2*Pi/16)*(k-1);
        if (beta < 0)
            beta = -beta;
        printf("beta= %f\n", beta);
        r = (0xFF) * cos(beta);
        printf("r= %x\n", r);
        colour = r + r*(0xFF00/0xFF) + r*(0xFF0000/0xFF);
        printf("colour= %x\n", colour);
        (void) g_set_fill_colour(brush, colour);
        for (m = (k-1)*12; m <= 10 + (k-1)*12; m++) {
            array[0] = patch2D[m];
            array[1] = patch2D[m+1];
            array[2] = patch2D[m+13];
            array[3] = patch2D[m+12];
            g_polygon(drawing, brush, array, 4);
        }
    }

    (void) g_close_brush(brush);
    printf("\nclose status= %d \n", g_close_canvas(drawing));
    (void) g_end();


    return (EXIT_SUCCESS);
}

/* This is v3.c; it produced the file cylinder3.tga. */

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "./lib_G_c.h"

#define Pi 3.14159265
#define Eye 20
#define R 3

/* Colours */
/*                       A R G B  */
#define Black     (0x00000000)
#define Dark_Grey (0x00999999)
#define Grey      (0x00DDDDDD)
#define White     (0x00FFFFFF)
#define Blue      (0x000000FF)
#define Green     (0x0000FF00)
#define Red       (0x00FF0000)
#define Dark_Red  (0x00990000)
#define Yellow    (0x00FFFF00)
#define Magenta   (0x00FF00FF)
#define gAmbientColor (0xFF444444)

struct point3D { float x; float y; float z; };
struct point2D { float x; float y; };

int main ()
{
    int n, i, k, NPOINTS, m, num;


    float j, theta, alpha, radius;
    WORD32 drawing, brush;
    WORD32 size = 256, depth = 24;
    struct point3D cylinder[500];
    struct point2D perspec[500];
    struct point3D patch[1000];
    CO_ORD patch2D[1000];
    CO_ORD array[4];

    /* Define points of the cylinder in 3D space; cull back faces. */
    i = 0;
    for (n = 4; n <= 12; n++) {
        theta = (2*Pi/16)*n;
        for (j = 0.0; j <= 5.0; j += 0.5) {
            cylinder[i].x = R*sin(theta);
            cylinder[i].y = j;
            cylinder[i].z = R*cos(theta) + 6.0;
            ++i;
        }
    }
    NPOINTS = i;

    num = 0;
    alpha = 2*Pi/32;
    radius = R/cos(alpha);
    for (n = 7; n <= 25; n += 2)
        for (j = -0.25; j < 5.5; j += 0.5) {
            patch[num].x = radius*sin(alpha*n);
            patch[num].y = j;
            patch[num].z = radius*cos(alpha*n) + 6.0;
            ++num;
        }

    /* Use perspective projection to obtain 2D data. */
    /* Scale to enlarge the image and move it to the centre. */
    for (k = 0; k < NPOINTS; k++) {
        perspec[k].x = 20*(cylinder[k].x)/(1.0 + cylinder[k].z/Eye) + size/2.0;
        perspec[k].y = 20*(cylinder[k].y)/(1.0 + cylinder[k].z/Eye) + size/2.0;
        printf("x1= %f y1= %f \n", perspec[k].x, perspec[k].y);
    }

    for (k = 0; k < num; k++) {
        patch2D[k].x = (int)(20*(patch[k].x)/(1.0 + patch[k].z/Eye) + size/2.0);
        patch2D[k].y = (int)(20*(patch[k].y)/(1.0 + patch[k].z/Eye)


                              + size/2.0);
        printf("x2= %d y2= %d \n", patch2D[k].x, patch2D[k].y);
    }

    printf("\nSimple cylinder plot programme\n\n");
    (void) g_start();
    drawing = g_open_canvas("cylinder3.tga", RECYCLE, size, size, depth);
    brush = g_open_brush();
    if (drawing < 0) {
        printf("failed to get the drawing canvas %d\n", drawing);
        exit(EXIT_FAILURE);
    }
    if (brush < 0) {
        printf("failed to create a new brush package %d\n", brush);
        exit(EXIT_FAILURE);
    }
    (void) g_set_fill_type(brush, SOLID_FILL);
    (void) g_set_fill_colour(brush, Grey);
    g_erase_canvas(drawing, White);   /* Make the canvas of uniform colour. */

    for (m = 0; m < NPOINTS; m++)
        g_pixel_write(drawing, perspec[m].x, perspec[m].y, (PIXEL) Red);

    for (m = 0; m < num; m++)
        g_pixel_write(drawing, patch2D[m].x, patch2D[m].y, (PIXEL) Dark_Grey);

    /* Draw patches by connecting every four points. */
    for (k = 1; k < 10; k++)
        for (m = (k-1)*12; m <= 10 + (k-1)*12; m++) {
            array[0] = patch2D[m];
            array[1] = patch2D[m+1];
            array[2] = patch2D[m+13];
            array[3] = patch2D[m+12];
            g_polygon(drawing, brush, array, 4);
        }


    (void) g_close_brush(brush);
    printf("\nclose status= %d \n", g_close_canvas(drawing));
    (void) g_end();
    return (EXIT_SUCCESS);
}

