Lab 7. The Z-Buffer Algorithm

M. Stone and A. Pshenichkin. 11.30.04.

A New Pixel Model

Given this lab's reliance on iterative algorithms and our general frustration with some malfunctions in our drawing code, we made numerous modifications to the pixel structure. Individual Pixels, which now store depth information and blend much more cleanly, are grouped into PixelRegions, which represent areas of an image. These are currently implemented as square regions of the image, but could be represented by, say, scanline polygons. A PixelBridge associates pixel regions with the image and also acts as a buffer of sorts, caching changes that are then all committed together. We have modified our existing code to take PixelIterators, which work much like STL iterators. These greatly simplify certain operations and generally speed up the programs, because they can also act as bounding areas.
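Roughly, these pieces fit together as in the C++ sketch below; only the class names (Pixel, PixelRegion, PixelBridge, PixelIterator) come from the description above, and every member and signature here is an illustrative assumption rather than the actual interface.

    #include <utility>
    #include <vector>

    // Minimal per-pixel data; the depth-aware A-buffer version is sketched later.
    struct Pixel {
        float r = 0, g = 0, b = 0, a = 0;
    };

    // A square region of the image (could equally be, say, a scanline polygon).
    struct PixelRegion {
        int x0, y0, width, height;
    };

    // Walks every pixel inside a PixelRegion, much like an STL iterator,
    // which also lets it serve as a bounding area for the drawing routines.
    class PixelIterator {
    public:
        explicit PixelIterator(const PixelRegion& r) : region_(r), x_(r.x0), y_(r.y0) {}
        bool done() const { return y_ >= region_.y0 + region_.height; }
        void advance() {
            if (++x_ >= region_.x0 + region_.width) { x_ = region_.x0; ++y_; }
        }
        int x() const { return x_; }
        int y() const { return y_; }
    private:
        PixelRegion region_;
        int x_, y_;
    };

    // Associates pixel regions with the image and buffers writes, committing
    // all cached changes to the image together.
    class PixelBridge {
    public:
        PixelBridge(std::vector<Pixel>& image, int imageWidth)
            : image_(image), width_(imageWidth) {}
        void set(int x, int y, const Pixel& p) {       // cached, not yet applied
            pending_.push_back({y * width_ + x, p});
        }
        void commit() {                                // apply everything at once
            for (const std::pair<int, Pixel>& change : pending_)
                image_[change.first] = change.second;
            pending_.clear();
        }
    private:
        std::vector<Pixel>& image_;
        int width_;
        std::vector<std::pair<int, Pixel>> pending_;
    };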

Because our line drawing algorithm inherently implements anti-aliasing, and uses alpha transparency to do so, each Pixel must now be an A-buffer.
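In rough terms (Pixel and Color are the names used in our code, but the fields here are an assumption for illustration), an A-buffered pixel simply accumulates every fragment written to it instead of keeping a single color:

    #include <vector>

    // One fragment: a color with alpha/coverage and a depth value.
    struct Color {
        float r, g, b;
        float a;        // alpha, which also carries the anti-aliasing coverage
        float depth;    // stored as 1/z under a perspective projection (see below)
    };

    // An A-buffered pixel: drawing never clobbers what is already there;
    // it just records another fragment to be resolved later.
    struct Pixel {
        std::vector<Color> fragments;
        void add(const Color& c) { fragments.push_back(c); }
    };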

The Z and A Buffers

When drawing multiple three-dimensional objects, figuring out their relative positions is a non-trivial task. To do this, we turn to a Z-buffer algorithm, which adds depth values to our two-dimensional projections. We do this with what is, in essence, a modified scanline fill. Each of the edges in our active edge list is assigned a z intersect value and a dz per scanline (dz/dy). For each row, there is also a dz per column (dz/dx). These partials allow us to track the depth of the polygon as we move through the image. For a perspective projection, z varies inversely with x and y, so we actually store 1/z in all of our image data; there is no such issue with parallel projection, where we can just use z directly. As we draw an object, we calculate the depth of each pixel and store it as a floating-point value within our pixel structure. A standard Z-buffer algorithm would then compare the z value of each new pixel with the value already stored in the output pixel and only draw the new pixel over the old one if it were the nearer of the two. To allow for transparency, however, we have had to make certain modifications to this system.
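A minimal sketch of that inner loop, assuming active edges that carry an x intersect with dx/dy and a 1/z intersect with dz/dy (the structures and names below are illustrative, not our actual code), and showing the standard opaque Z-buffer test:

    #include <vector>

    struct ActiveEdge {
        float x, dxdy;    // x intersect and its per-scanline increment
        float z, dzdy;    // 1/z intersect and its per-scanline increment
    };

    struct DepthPixel {
        float z = 0.0f;           // stored 1/z; 0 means infinitely far away
        unsigned int color = 0;
    };

    // Step an edge down to the next scanline.
    void nextScanline(ActiveEdge& e) { e.x += e.dxdy; e.z += e.dzdy; }

    // Fill one span between a pair of active edges on scanline y.
    void fillSpan(std::vector<DepthPixel>& image, int imageWidth, int y,
                  const ActiveEdge& left, const ActiveEdge& right,
                  unsigned int color) {
        int xStart = static_cast<int>(left.x);
        int xEnd   = static_cast<int>(right.x);
        if (xEnd <= xStart) return;

        // Per-column depth increment (dz/dx) for this row.
        float dzdx = (right.z - left.z) / (right.x - left.x);
        float z = left.z;

        for (int x = xStart; x < xEnd; ++x, z += dzdx) {
            DepthPixel& p = image[y * imageWidth + x];
            // Standard Z-buffer test: since we store 1/z, larger means nearer.
            if (z > p.z) {
                p.z = z;
                p.color = color;
            }
        }
    }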

Though we keep most of the Z-buffer implementation, ours does not, in fact, clobber pixel values when it writes to a pixel. Instead, the Pixel class stores a list of so-called Colors, which store depth in addition to RGBA values. Once all objects in the image have been drawn, the Flush function in PixelBridge is called to calculate the actual pixel values for the image. We obtain these by blending each pixel's Colors onto one another in depth order. This allows us to draw images with transparency or, in the case of images containing only opaque polygons, to do anti-aliasing by running our line drawing algorithm on the edges.
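For one pixel, that resolution step could look something like the sketch below, assuming straight (non-premultiplied) colors composited back to front over an opaque background; Flush and Color are the names used above, and the rest is illustrative:

    #include <algorithm>
    #include <vector>

    struct Color {
        float r, g, b, a;
        float depth;                  // stored 1/z; larger means nearer
    };

    struct RGBA { float r = 0, g = 0, b = 0, a = 0; };

    // Resolve one pixel's fragment list into a final color.
    RGBA flushPixel(std::vector<Color> fragments, const RGBA& background) {
        // Sort back to front: smallest 1/z (farthest) first.
        std::sort(fragments.begin(), fragments.end(),
                  [](const Color& lhs, const Color& rhs) { return lhs.depth < rhs.depth; });

        RGBA out = background;        // treat the background as opaque
        for (const Color& c : fragments) {
            // Standard "source over" blend of each fragment onto what lies behind it.
            out.r = c.a * c.r + (1.0f - c.a) * out.r;
            out.g = c.a * c.g + (1.0f - c.a) * out.g;
            out.b = c.a * c.b + (1.0f - c.a) * out.b;
        }
        out.a = 1.0f;
        return out;
    }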

Sample Images

This is an example of the intersection of two finite rectangular planes, drawn semi-transparently to demonstrate the A-buffering.

When we disarrange the depth values so that our points are no longer coplanar, the scanline fill algorithm still gladly interpolates between them and draws the resultant image... with odd results that match the odd input, of course.

The A-buffer is fully integrated with the rendering pipeline. Our subsequent pages contain more visuals of the A-buffer in action.

