
Aliasing

• The jaggies: a form of aliasing.

• Aliasing occurs because pixels are displayed in a fixed rectangular array.

Aliasing (2)

• Each pixel's color is determined by a single sample taken at its center.

• If the rectangle covers a pixel's center, the entire pixel area is set to black; otherwise it is left white.

Sampling Effects

• Small objects can disappear entirely if the object lands between pixel centers (left).

• An object can blink on and off in an animation (right).

Why Aliasing Occurs

• A rapidly varying signal is sampled infrequently, causing the appearance of a lower “alias” frequency.

Anti-Aliasing Techniques

• Anti-aliasing techniques involve blurring to smooth the image.

• For a black rectangle against a white background, the sharp transition from black to white is softened by using a mixture of gray pixels near the rectangle's border.

• Viewed from afar, the eye blends the gradually varying shades of gray together and sees a smoother edge.

Anti-Aliasing Techniques (2)

• Three approaches to anti-aliasing are commonly used:
– prefiltering
– supersampling
– postfiltering

Prefiltering

• Prefiltering techniques compute pixel colors based on an object’s coverage: the fraction of the pixel area that is covered by the object.

• A pixel that is half-covered by the polygon should be given the intensity 1/2; one that is one-third covered should be given the intensity 1/3; and so forth.
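• A minimal sketch of this coverage computation for the axis-aligned rectangle case (the Rect type and function names are illustrative assumptions, not from the text):

#include <algorithm>

// Sketch: prefiltering an axis-aligned rectangle. Each pixel is
// treated as the unit square [px, px+1] x [py, py+1].
struct Rect { double left, top, right, bottom; };

// Fraction of the pixel square at (px, py) covered by r, in [0, 1].
// Following the text's convention, this coverage is used directly as
// the intensity: half-covered gives 1/2, one-third covered gives 1/3.
double pixelCoverage(const Rect& r, int px, int py)
{
    double w = std::max(0.0, std::min(r.right,  px + 1.0) - std::max(r.left, (double)px));
    double h = std::max(0.0, std::min(r.bottom, py + 1.0) - std::max(r.top,  (double)py));
    return w * h;                      // pixel square has area 1
}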

Prefiltering (2)

• Prefiltering with 16 shades of grey:

Prefiltering (3)

• Prefiltering operates on the detailed geometric shape of the object(s) being scan converted and computes an average intensity for each pixel based on the objects found lying within each pixel's area.

• For shapes other than polygons, it can be expensive computationally.

Supersampling

• Since aliasing arises from sampling an object at too few points, we can try to reduce its effects by sampling more often than one sample per pixel.

• This is called supersampling: taking more intensity samples of the scene than are displayed.

Supersampling (2)

• Each display pixel value (square) is formed as the average of several samples (x).

Supersampling (3)

• Each final display pixel can be formed as the average of the nine neighbor samples: the center one and the eight surrounding ones.

Supersampling (4)

• The pixel at A has six samples within the bar and three samples of background.

• Its color is set to two-thirds the bar's color + one-third the background’s color.

Supersampling (5)

• Left: a scene displayed at a resolution of 300-by-400 pixels. The jaggies are readily apparent.

• Right: the same scene sampled at a resolution of 600-by-800 samples. Each of the 300-by-400 display pixels is an average of nine neighbors. The jaggies have been softened considerably.

Supersampling (6)

• Supersampling computes Ns scene samples in both x and y for each display pixel, averaging some number of neighbor samples to form each display pixel value.
– Supersampling with Ns = 4, for example, averages 16 samples for each display pixel.
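• A sketch of this scheme, assuming an illustrative scene() function that returns the intensity at any sample point:

#include <vector>

// Assumed: returns the scene intensity at an arbitrary sample point.
double scene(double x, double y);

// Supersampling with Ns samples per pixel in each direction: each
// display pixel is the average of Ns * Ns scene samples.
std::vector<double> render(int width, int height, int Ns)
{
    std::vector<double> image(width * height);
    double step = 1.0 / Ns;
    for (int py = 0; py < height; py++)
        for (int px = 0; px < width; px++)
        {
            double sum = 0.0;
            for (int j = 0; j < Ns; j++)
                for (int i = 0; i < Ns; i++)
                    sum += scene(px + (i + 0.5) * step, py + (j + 0.5) * step);
            image[py * width + px] = sum / (Ns * Ns);
        }
    return image;
}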

Supersampling With Ns = 1

• The scene is sampled at the corner of each pixel.

• Each pixel is set to the average of the four samples taken at its corners.

• Some softening of the jaggies is still observed even though there is no supersampling.
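• A sketch of this corner-sampling scheme, again assuming the illustrative scene() function: corner samples are shared between adjacent pixels, so only (width + 1)-by-(height + 1) samples are taken.

#include <vector>

double scene(double x, double y);   // assumed, as in the earlier sketch

// Ns = 1: sample the scene at pixel corners and set each pixel to the
// average of the four samples at its corners.
std::vector<double> renderCornerSampled(int width, int height)
{
    std::vector<double> corner((width + 1) * (height + 1));
    for (int y = 0; y <= height; y++)
        for (int x = 0; x <= width; x++)
            corner[y * (width + 1) + x] = scene(x, y);

    std::vector<double> image(width * height);
    for (int py = 0; py < height; py++)
        for (int px = 0; px < width; px++)
            image[py * width + px] =
                (corner[py * (width + 1) + px]       + corner[py * (width + 1) + px + 1] +
                 corner[(py + 1) * (width + 1) + px] + corner[(py + 1) * (width + 1) + px + 1]) / 4.0;
    return image;
}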

Postfiltering

• Postfiltering computes each display pixel as a weighted average of an appropriate set of neighboring samples of the scene.

Postfiltering (2)

• Each value represents the intensity of a scene sample, the ones in gray indicating the centers of the various display pixels.

• The square mask or window function of weights is laid over each gray square in turn.

Postfiltering (3)

• Each weight is multiplied by its corresponding sample; the nine products are summed to form the pixel intensity.

• The weights must always sum to 1.

Postfiltering (4)

• Example: when the mask shown is laid over the sample of intensity 30, the weighted average is found to be (30)/2 + (28 + 16 + 4 + 42 + 17 + 53 + 60 + 62)/16 = 32.625, which rounds to intensity 33.
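• A sketch of this weighted average, using the mask from the example (center weight 1/2, each of the eight neighbors 1/16); the function name and the row-major sample layout are illustrative assumptions:

#include <vector>

// Postfilter one display pixel: weighted average of the 3-by-3
// neighborhood of samples centered at (cx, cy), assumed not to lie on
// the border of the sample array of width sw.
double postfilter(const std::vector<double>& samples, int sw, int cx, int cy)
{
    static const double mask[3][3] = {
        { 1/16.0, 1/16.0, 1/16.0 },
        { 1/16.0,  1/2.0, 1/16.0 },
        { 1/16.0, 1/16.0, 1/16.0 }   // weights sum to 1
    };
    double sum = 0.0;
    for (int j = -1; j <= 1; j++)
        for (int i = -1; i <= 1; i++)
            sum += mask[j + 1][i + 1] * samples[(cy + j) * sw + (cx + i)];
    return sum;   // center sample 30 with neighbors summing to 282 gives 32.625
}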

Postfiltering (5)

• Supersampling is a special case of postfiltering, in which all the weights have value 1/9.

• Larger masks, 5-by-5 or even 7-by-7, look farther into the neighborhood of the center sample and can provide additional smoothing.

• Postfiltering can be performed for any value of oversampling Ns.
– If Ns = 4 is used, a 5-by-5, 7-by-7, or even 9-by-9 mask is appropriate.
– If Ns = 1, a 3-by-3 mask that weights the center pixel most heavily works best.

Anti-aliasing for Textures

• Mapped textures are particularly prone to aliasing effects.

• Above: aliased texture.

• Below: anti-aliased texture.

Anti-aliasing for Textures (2)

• Texture is defined as a function texture(s, t) in texture space, which undergoes a complex sequence of mappings before it is finally depicted on the display.

• The rendering task is to work the other way, and, for each given display pixel at coordinates (x, y), find the corresponding color in the texture() function.

Anti-aliasing for Textures (3)

• The figure shows a pixel at (x, y) being rendered, and the value (s*, t*) in texture space that is accessed.

Anti-aliasing for Textures (4)

• Let T() be the mapping from pixel space to texture space, so (s*, t*) = T(x, y).

• Pixels have area. The whole pixel at (x, y) maps to texture space as a quadrilateral.
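• A small sketch of this mapping, assuming an illustrative T() with the signature below: the four corners of the pixel's square map to the four corners of its texture quad.

struct Point2 { double s, t; };

// Assumed: the pixel-space to texture-space mapping, (s*, t*) = T(x, y).
Point2 T(double x, double y);

// The corners of the unit pixel square at (x, y) map to the corners
// of the pixel's texture quad.
void textureQuad(int x, int y, Point2 quad[4])
{
    quad[0] = T(x,     y);
    quad[1] = T(x + 1, y);
    quad[2] = T(x + 1, y + 1);
    quad[3] = T(x,     y + 1);
}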

Anti-aliasing for Textures (5)

• We call this the “texture quad” for the screen pixel in question.

• The texture space is covered with such quads, each arising from a screen pixel.

• The size and shape of each texture quad depends on the nature of T() and can be costly to find.

• If texture(s, t) varies inside the quad, yet the screen pixel is colored using only the single sample texture(s*, t*), significant information is missed, and there is substantial aliasing.

Anti-aliasing for Textures (6)

• To reduce the effects of aliasing, we should color each screen pixel with some average of the colors lying in the corresponding texture quad.

• Finding the area of each texel that lies inside the texture quad is very slow.

• We need some approximate techniques.

Anti-aliasing for Textures (7)

• Elliptical weighted average (EWA) filter:
– Covers each screen pixel with a circularly symmetric filter function.
– The concentric circles indicate different weighting levels and map the filter function into texture space.
– Once in texture space, the levels become a form of ellipse that roughly resembles the shape of the texture quad.

• Samples of the filter function, stored in a look-up table, are used to weight different points within the ellipse, and these weighted values are summed to form the average.

Anti-aliasing for Textures (8)

• This can all be done incrementally and very efficiently at the cost of a few arithmetic operations per texel.
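• A condensed sketch in the spirit of Heckbert's EWA formulation; the Jacobian inputs, look-up table, and texel() accessor are illustrative assumptions, and the incremental forward-difference evaluation mentioned above is omitted for clarity.

#include <cmath>

const int LUT_SIZE = 256;
extern double gaussLUT[LUT_SIZE];   // precomputed filter samples (e.g. a Gaussian)
double texel(int s, int t);         // assumed texel accessor

double ewa(double s0, double t0,        // texture point (s*, t*)
           double dsdx, double dtdx,    // change in (s, t) per pixel step in x
           double dsdy, double dtdy)    // change in (s, t) per pixel step in y
{
    // Implicit ellipse A*u*u + B*u*v + C*v*v = F in texture space,
    // derived from the Jacobian of the mapping (assumed non-degenerate).
    double A = dtdx * dtdx + dtdy * dtdy;
    double B = -2.0 * (dsdx * dtdx + dsdy * dtdy);
    double C = dsdx * dsdx + dsdy * dsdy;
    double F = dsdx * dtdy - dsdy * dtdx;
    F *= F;

    // Bounding box of the ellipse.
    double d    = 4.0 * A * C - B * B;
    double uMax = 2.0 * std::sqrt(C * F / d);
    double vMax = 2.0 * std::sqrt(A * F / d);

    double num = 0.0, den = 0.0;
    for (int t = (int)std::ceil(t0 - vMax); t <= (int)std::floor(t0 + vMax); t++)
        for (int s = (int)std::ceil(s0 - uMax); s <= (int)std::floor(s0 + uMax); s++)
        {
            double u = s - s0, v = t - t0;
            double q = A * u * u + B * u * v + C * v * v;
            if (q < F)   // texel lies inside the ellipse
            {
                double w = gaussLUT[(int)(q / F * (LUT_SIZE - 1))];
                num += w * texel(s, t);
                den += w;
            }
        }
    return den > 0.0 ? num / den : texel((int)s0, (int)t0);
}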

Anti-aliasing for Textures (9)

• Stochastic sampling avoids difficult calculations in forming an average texture color by sampling texels in the quad in a randomized pattern and averaging the results.

Anti-aliasing for Textures (10)

• Stochastic sampling uses the average

average = (1/N) · Σ_{k=1}^{N} texture(s* + α_k, t* + β_k)

• α_k and β_k are small random quantities that are easy to create using a random number generator.

• Their distribution can be tuned, if desired, to the general size of the texture quad.
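• A sketch of this average, assuming an illustrative texture() accessor; the jitter amplitude r stands in for tuning the offsets to the quad size.

#include <cstdlib>

// Assumed: the texture function in texture space.
double texture(double s, double t);

// Small random offset in [-r, r].
double jitterOffset(double r)
{
    return r * (2.0 * std::rand() / RAND_MAX - 1.0);
}

// Stochastic sampling: average N randomly offset texture samples
// around (s, t).
double stochasticSample(double s, double t, int N, double r)
{
    double sum = 0.0;
    for (int k = 0; k < N; k++)
        sum += texture(s + jitterOffset(r), t + jitterOffset(r));
    return sum / N;
}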

Anti-aliasing in OpenGL

• Anti-aliasing in OpenGL uses the accumulation buffer, an extra area that OpenGL can create and draw into.

• It is created by glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_ACCUM | GLUT_DEPTH);

• Clear it using glClear(GL_ACCUM_BUFFER_BIT);

Anti-aliasing in OpenGL (2)

• The anti-aliasing method resembles stochastic sampling.
– The scene is drawn 8 times, each time translating the camera in x and y by a small displacement stored in an array jitter[ ] of vectors.
– Each new drawing is scaled by 1/8 and added pixel by pixel to the accumulation buffer using glAccum(GL_ACCUM, 1/8.0).

Anti-aliasing in OpenGL (3)

– When the eight renditions have been drawn, the accumulation buffer is copied into the frame buffer using glAccum(GL_RETURN, 1.0).

• The result is that each displayed pixel is an average of the eight jittered renditions.

Anti-aliasing in OpenGL: Code

glClear(GL_ACCUM_BUFFER_BIT);                       // clear the accumulation buffer
for (int i = 0; i < 8; i++)
{
    cam.slide(f * jitter[i].x, f * jitter[i].y, 0); // slide the camera
    display();                                      // draw the scene
    glAccum(GL_ACCUM, 1/8.0);                       // add 1/8 of this drawing to the accumulation buffer
}
glAccum(GL_RETURN, 1.0);                            // copy the accumulation buffer into the frame buffer

Anti-aliasing in OpenGL: Code (2)

• The jitter[ ] array contains eight points whose x and y components lie between -0.5 and 0.5.

• The header file jitter.h uses the values (-0.3348, 0.4353), (0.2864, -0.3934), (0.4594, 0.1415), (-0.4144, -0.1928), (-0.1837, 0.0821), (-0.0792, -0.3173), (0.1022, 0.2991), (0.1642, -0.0549).

• These mimic eight randomly chosen offsets from a circularly symmetric probability distribution, reminiscent of the EWA method described earlier.

• jitter.h also contains other jitter vectors, both shorter and longer, that can be used to try different levels of anti-aliasing.

Example

• Left: no anti-aliasing; right: eight jittered versions averaged in the accumulation buffer. Performance is reduced, since the scene is rendered 8 times for each frame.

Creating More Shades and Colors

• 32 bits per pixel is common now.

• Considering how to produce many colors from a much smaller number of bits helps us better understand how the human visual system interacts with an image.

Halftoning

• Halftoning (used for pictures in newspapers) trades spatial resolution for color resolution. Only black ink is used, yet an image appears to have many levels of gray.

• This is achieved by using smaller or larger blobs of black ink spaced closely together.
– Areas where most of the blobs are large appear darker to the eye because the average level of blackness is higher.
– Places where the blobs are smaller appear as a lighter shade of gray.

Halftoning (2)

– The eye combines the blobs, and perceives an average darkness over small regions.

• The spatial resolution of a newspaper picture is much less than that of a photograph, however, because it is made up of distinct blobs, which cannot be arbitrarily small.

Computer Halftoning

• Digital halftoning, or patterning, uses arrays of small dots instead of variable-sized blobs.

• The figure shows 2-by-2 arrays of dots (each dot is 0 or 1) being used to simulate larger blobs having five possible intensity levels.

• The eye sees the average intensity in each 2-by-2 blob, and so can see five levels.

[Figure: 2-by-2 dot patterns representing intensity levels 0 through 4, with two alternative patterns a) and b) for level 2.]

Computer Halftoning (2)

• Example use: an original image is a 100-by-100 array of pixels whose intensity values range from 0 to 4.

• We have only a bi-level display available, so we display the image using a 200-by-200 pixel area.

• We shade each 2-by-2 block of pixels appropriately to create a semblance of one of the gray shades 0,...,4. Again spatial resolution is exchanged for intensity resolution.
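• A sketch of this expansion, with an illustrative choice of 2-by-2 patterns (level k turns on k of the four elements):

#include <vector>

// 2-by-2 patterns for the five levels; the placements are an
// illustrative choice made as irregular as possible.
static const int pattern[5][2][2] = {
    { {0,0}, {0,0} },   // level 0
    { {1,0}, {0,0} },   // level 1
    { {1,0}, {0,1} },   // level 2: diagonal, to avoid stripes
    { {1,1}, {0,1} },   // level 3
    { {1,1}, {1,1} }    // level 4
};

// Expand a width-by-height image of levels 0..4 into a bi-level
// bitmap twice as large in each direction.
std::vector<int> patternize(const std::vector<int>& img, int width, int height)
{
    std::vector<int> out(4 * width * height);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            int level = img[y * width + x];
            for (int j = 0; j < 2; j++)
                for (int i = 0; i < 2; i++)
                    out[(2 * y + j) * (2 * width) + (2 * x + i)] = pattern[level][j][i];
        }
    return out;
}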

Computer Halftoning (3)

• The positions of the black elements in the cell were chosen to be as irregular as possible.

• If, instead, either of the patterns shown on the right were used for level 2, the image might have horizontal or vertical stripes in certain patterns.


Computer Halftoning (4)

• Left: 256 shades of gray; right: a bi-level display when 2-by-2 patterning is used.

Computer Halftoning (5)

• Larger cell sizes can be used to create a larger number of gray levels.

• An n-by-n cell of zeros and ones can produce n² + 1 gray levels.

Computer Halftoning (6)

• Patterning is most applicable when the original image is of lower resolution than the display device to be used.

Error Diffusion

• Error diffusion provides another technique for displaying multi-level pixmaps on a display that supports only a small number of colors.

• Suppose each pixel of the original pixmap has intensities between 0 and 255, and that we need to replace each pixel by 0 or 1.

Error Diffusion (2)

• If a pixel has intensity A, we replace it by 0 if A < 128, and by 1 if A ≥ 128.

• When A is anything other than exactly 0 or 255, this produces some error between the truth and the displayed values.

• If A = 42, for instance, we set the display pixel to 0 which is too low by the amount 42.

• If A = 167, we display a 1 (the highest intensity, corresponding to a pixel value of 255), which is too high by the amount 255 − 167 = 88.

Error Diffusion (3)

• In error diffusion we try to compensate for the unavoidable errors by subtracting them from some of the neighboring pixels in the pixmap.

• We pass portions of the error on to neighboring pixels that haven’t been thresholded yet, so that when they get thresholded later it’s the new adjusted value that is tested against the threshold.

• In this way the error diffuses through the image, maintaining proper values of average intensity.

Error Diffusion (4)

• The figure shows part of the original (multi-level) pixmap.

• It is processed top to bottom and left to right.

• The shaded pixels have been processed. Pixel p (actual value A) has just been compared with 128, and either 0 or 1 has been output.

[Figure: the current pixel p, its neighbor a to the right, and neighbors b, c, d on the row below (lower left, directly below, lower right); the upper-left corner of the bitmap is shown.]

Error Diffusion (5)

• If A is less than 128 the display pixel is set to 0 and the error E is -A (we are displaying a value A too low).

• If A is greater than or equal to 128 the display pixel is set to 1 and the error E is 255-A (we are displaying a value too high by the amount 255-A).

Error Diffusion (6)

• Fractions of the resulting error E are now passed to pixels a, b, c, and d. Old values are replaced with:

• a = a - fa E {adjust pixel to the right}

• b = b - fb E {adjust pixel at lower left}

• c = c - fc E {adjust pixel below}

• d = d - fd E {adjust pixel at lower right}

Error Diffusion (7)

• A typical choice of the fractions is (fa, fb, fc, fd) = (7/16, 3/16, 5/16, 1/16).

• These values sum to one, so the entire amount of error has been passed off to neighbors of p.

• This acts to preserve the average intensity of a region.

Error Diffusion (8)

• When the end of a scan line is reached, the errors that would go to a and d do not get passed on to the start of the next scan line.
– They can be discarded, or the algorithm can diffuse the entire error to pixels b and c.

• Experience shows that it is best to alternate the direction in which successive scan lines are processed: first left to right, then right to left.

Error Diffusion (9)

• The pattern in the figure reverses on the next line (e.g., a is to the left).

• The snake-like shape of this scanning has caused it to be called a serpentine raster pattern.

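• A sketch of the whole procedure, using the serpentine raster and the fractions (7/16, 3/16, 5/16, 1/16); working on a floating-point copy of the pixmap is an illustrative choice.

#include <vector>

// Error-diffuse a width-by-height image of intensities 0..255 down to
// a bi-level image, scanning in a serpentine raster. The float copy
// lets unprocessed pixels carry the diffused errors.
std::vector<int> errorDiffuse(const std::vector<double>& src, int width, int height)
{
    std::vector<double> img(src);
    std::vector<int> out(width * height);
    for (int y = 0; y < height; y++)
    {
        bool leftToRight = (y % 2 == 0);            // serpentine raster
        for (int k = 0; k < width; k++)
        {
            int x  = leftToRight ? k : width - 1 - k;
            int dx = leftToRight ? 1 : -1;          // "a" lies ahead in the scan direction
            double A = img[y * width + x];
            int bit = (A >= 128.0) ? 1 : 0;
            out[y * width + x] = bit;
            double E = bit ? 255.0 - A : -A;        // error, as defined in the text

            // Pass fractions of E to the unprocessed neighbors,
            // clipping at the image edges (errors off the edge are discarded).
            if (x + dx >= 0 && x + dx < width)
                img[y * width + (x + dx)] -= (7.0/16.0) * E;            // a
            if (y + 1 < height)
            {
                if (x - dx >= 0 && x - dx < width)
                    img[(y + 1) * width + (x - dx)] -= (3.0/16.0) * E;  // b
                img[(y + 1) * width + x] -= (5.0/16.0) * E;             // c
                if (x + dx >= 0 && x + dx < width)
                    img[(y + 1) * width + (x + dx)] -= (1.0/16.0) * E;  // d
            }
        }
    }
    return out;
}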

Example

• The figure shows a 512 x 512 pixmap after error diffusion, viewed on an outdated display.

• The error diffusion method used the serpentine raster and the coefficients (7/16, 3/16, 5/16, 1/16).

Error Diffusion (10)

• Extending this technique to displays that support more than two colors is easy.
– At each pixel the closest displayable level is found, and the resulting error is passed on exactly as described.
– For color images, each of the three color components is error-diffused independently.