
EUROGRAPHICS '2000 Tutorial

Volume Visualization and Volume Rendering Techniques

M. Meißner, H. Pfister, R. Westermann, and C. M. Wittenbrink

Abstract

There is a wide range of devices and scientific simulations generating volumetric data. Visualizing such data, ranging from regular datasets to scattered data, is a challenging task. This course gives an introduction to the volume rendering transport theory and the issues involved, such as interpolation, illumination, classification, and others. Different volume rendering techniques are presented, illustrating their fundamental features and differences as well as their limitations. Furthermore, acceleration techniques are presented, including pure software optimizations as well as the use of special purpose hardware such as VolumePro, but also dedicated hardware such as polygon graphics subsystems.

1. Introduction

Volume rendering is a key technology with increasing importance for the visualization of 3D sampled, computed, or modeled datasets. The task is to display volumetric data as a meaningful two-dimensional image which reveals insights to the user. In contrast to conventional computer graphics, where one has to deal with surfaces, volume visualization takes structured or unstructured 3D data which is then rendered into a two-dimensional image. Depending on the structure and type of data, different rendering algorithms can be applied and a variety of optimization techniques are available. Within these algorithms, several rendering stages can be used to achieve a variety of different visualization results at different cost. These stages might change their order from algorithm to algorithm or might even not be used by certain approaches.

In the following section, we will give a general introduction to volume rendering and the involved issues. Section 3 then presents a scheme to classify different approaches to volume rendering in categories. Acceleration techniques to speed up the rendering process are presented in section 4. Sections 3 and 4 are a modified version of tutorial notes from R. Yagel, which we would like to thankfully acknowledge. A side by side comparison of the four most common volume rendering algorithms is given in section 5. Special purpose hardware achieving interactive or real-time frame rates is presented in section 6, while section 7 focuses on applications based on 3D texture mapping. Finally, we present rendering techniques and approaches for volume data not represented on rectilinear cartesian grids but on curvilinear and unstructured grids.

Author affiliations:
WSI/GRIS, University of Tübingen, Auf der Morgenstelle 10/C9, Germany, e-mail: [email protected]
MERL - A Mitsubishi Electric Research Laboratory, 201 Broadway, Cambridge, MA 02139, USA, e-mail: [email protected]
Computer Graphics Group, University of Stuttgart, something road 4711, 74711 Stuttgart, Germany, e-mail: [email protected]
Hewlett-Packard Laboratories, Palo Alto, CA 94304-1126, USA, e-mail: [email protected]

2. Volume rendering

Volume rendering differs from conventional computer graphics in many ways but also shares rendering techniques such as shading or blending. Within this section, we will give a short introduction to the types of data and where they originate from. Furthermore, we present the principle of volume rendering, the different rendering stages, and the issues involved when interpolating data or color.

2.1. Volume data acquisition

Volumetric data can be computed, sampled, or modeled, and there are many different areas where volumetric data is available. Medical imaging is one area where volumetric data is frequently generated. Using different scanning techniques, internals of the human body can be acquired using MRI, CT, PET, or ultrasound. Volume rendering can be applied to color the usually scalar data and to visualize different structures as transparent, semi-transparent, or opaque, and hence can give useful insights. Different applications evolved within this area, such as cancer detection, visualization of aneurysms, surgical planning, and even real-time monitoring during surgery.

Nondestructive material testing and rapid prototyping are other examples where volumetric data is frequently generated. Here, the structure of an object is of interest, either to verify the quality or to reproduce the object. Industrial CT scanners and ultrasound are mainly used for these applications.

The disadvantage of the above described acquisition devices is the missing color information, which needs to be added during the visualization process since each acquisition technique generates scalar values representing density (CT), oscillation (MRI), echoes (ultrasound), and others. For educational purposes, where destroying the original object is acceptable, one can slice the material and take images of each layer. This reveals color information which so far cannot be captured by other acquisition devices. A well-known example is the visible human project, where this technique has been applied to a male and a female cadaver.

Microscopic analysis is yet another application field of volume rendering. With confocal microscopes, it is possible to get high-resolution optical slices of a microscopic object without having to disturb the specimen.

Geoseismic data is probably one of the sources that generates the largest chunks of data. Usually, at least 1024³ voxels (1 GByte and more) are generated and need to be visualized. The most common application field is oil exploration, where the costs can be reduced tremendously by finding the right location where to drill the hole.

Another large source of volumetric data is physical simulations where fluid dynamics are simulated. This is often done using particles or sample points which move around following physical laws, resulting in unstructured points. These points can either be visualized directly or resampled into any grid structure, possibly sacrificing quality.

Besides all the above mentioned areas, there are many others. For further reading we recommend 61.

2.2. Grid structures

Depending on the source where volumetric data comes from, it might be given as a cartesian rectilinear grid, as a curvilinear grid, or maybe even completely unstructured. While scanning devices mostly generate rectilinear grids (isotropic or anisotropic), physical simulations mostly generate unstructured data. Figure 1 illustrates these different grid types for the 2D case. For the different grid structures, different algorithms can be used to visualize the volumetric data. Within the next sections, we will focus on rectilinear grids before presenting approaches for the other grid types in section 8.


Figure 1: Different grid structures: rectilinear (a), curvilinear (b), and unstructured (c).

    2.3. Absorption and emission

In contrast to conventional computer graphics, where objects are represented as surfaces with material properties, volume rendering does not directly deal with surfaces, even though surfaces can be extracted from volumetric data in a pre-processing step.

Each element of the volumetric data (voxel) can emit light as well as absorb light. The emission of light can be quite different depending on the model used. For instance, one can implement models where voxels simply emit their own light or where they additionally realize single scattering or even multiple scattering. Depending on the model used, different visualization effects can be realized. Generally, scattering is much more costly to realize than a simple emission and absorption model, which is one of the reasons why it is hardly used in interactive or real-time applications. While the emission determines the color and intensity a voxel is emitting, the absorption can be expressed as the opacity of a voxel. Only a certain amount of light will be passed through a voxel, which can be expressed by 1 − opacity and is usually referred to as the transparency of a voxel.

The parameters of the emission (color and intensity) as well as the parameters of the absorption (opacity/transparency) can be specified on a per voxel-value basis using classification. This is described in more detail in the following section. For different optical models for volume rendering refer to 66.

    2.4. Classification

Classification enables the user to find structures within volume data without explicitly defining the shape and extent of that structure. It allows the user to see inside an object and explore its inside structure instead of only visualizing the surface of that structure, as done in conventional computer graphics.

In the classification stage, certain properties are assigned to a sample, such as color, opacity, and other material properties. Also, shading parameters indicating how shiny a structure should appear can be assigned. The assignment of opacity to a sample can be a very complex operation and has a major impact on the final 2D image generated. In order to assign these material properties, it is usually helpful to use histograms illustrating the distribution of voxel values across the dataset.

The actual assignment of color, opacity, and other properties can be based on the sample value only, but other values can be taken as input parameters as well. Using the gradient magnitude as a further input parameter, samples within homogeneous regions can be interpreted differently from those in heterogeneous regions. This is a powerful technique for geoseismic data, where the scalar values only change noticeably in between different layers in the ground.
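The idea of such a two-dimensional classification can be illustrated with a small C++ sketch; the 256×256 table size, the RGBA struct, and the classify() accessor are hypothetical and only show one possible way to organize the lookup.

#include <cstdint>

// Hypothetical RGBA entry of a transfer-function table.
struct RGBA { float r, g, b, a; };

// A 2D transfer function indexed by sample value and gradient magnitude,
// both assumed to be pre-scaled to [0,255].
struct TransferFunction2D {
    RGBA table[256][256];   // filled interactively by the user

    // Classify a sample: homogeneous regions (low gradient magnitude) and
    // boundary regions (high gradient magnitude) can be mapped differently.
    RGBA classify(std::uint8_t value, std::uint8_t gradMag) const {
        return table[value][gradMag];
    }
};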

    2.5. Segmentation

Empowering the user to see a certain structure using classification is not always possible. A structure can be some organ or tissue but is represented as a simple scalar value. When looking at volumetric data acquired with a CT scanner, different types of tissue will result in the same density values due to the nature of CT. Therefore, no classification of density values can be found such that structures which absorb X-rays similarly could be separated. To separate such structures, they need to be labeled or segmented such that they can be differentiated from each other. Depending on the acquisition method and the scanned object, it can be relatively easy, hard, or even impossible to segment some of the structures automatically. Most algorithms are semi-automatic or optimized for segmenting a specific structure. Once a volumetric dataset is segmented, a certain classification can be assigned to each segment and applied during rendering.

    2.6. Shading

Shading or illumination refers to a well-known technique used in conventional computer graphics to greatly enhance the appearance of a geometric object that is being rendered. Shading tries to model effects like shadows, light scattering, and absorption that occur in the real world when light falls on an object. Shading can be classified into global methods, direct methods, and local methods. While global illumination computes the light being exchanged between all objects, direct illumination only accounts for the light that directly falls onto an object. Unfortunately, both methods depend on the complexity of the objects to be rendered and are usually not interactive. Therefore, the local illumination method has been widely used. Figure 2 shows a skull rendered with local and with direct illumination. While direct illumination takes into account how much light is present at each sample (figure 2(b)), local illumination is much cheaper to compute but still achieves reasonable image quality (figure 2(a)). Local illumination consists of an ambient, a diffuse, and a specular component. While the ambient component is available everywhere, the diffuse component can be computed using the angle between the normal vector at the given location and the vector to the light. The specular component depends on the angle to the light and the angle to the eye position. All three components can be combined by weighting each of them differently using material properties. While tissue is less likely to have specular components, teeth might reflect more light. Figure 3 shows a skull without and with shading.

Figure 2: Comparison of shading: local illumination (a) and direct illumination (b).


Figure 3: Comparison of shading: no shading (a) and local shading (b).
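As a minimal sketch of such a local illumination model, the following C++ fragment evaluates an ambient, diffuse, and specular (Phong-style) term for one sample, using the normalized gradient as the surface normal; the material weights ka, kd, ks and the shininess exponent are hypothetical parameters.

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Local (Phong-style) illumination at one sample.
// n: normalized gradient used as normal, l: direction to the light,
// v: direction to the eye; all are assumed to be unit vectors.
float shadeSample(const Vec3& n, const Vec3& l, const Vec3& v,
                  float ka, float kd, float ks, float shininess) {
    float ambient = ka;                                // constant everywhere
    float ndotl   = dot(n, l);
    float diffuse = kd * std::fmax(0.0f, ndotl);       // angle normal/light
    // Reflect l about n for the specular term: r = 2(n.l)n - l
    Vec3 r = { 2 * ndotl * n.x - l.x,
               2 * ndotl * n.y - l.y,
               2 * ndotl * n.z - l.z };
    float specular = ks * std::pow(std::fmax(0.0f, dot(r, v)), shininess);
    return ambient + diffuse + specular;               // weighted sum
}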

For further reading refer to 61, 66.

2.7. Gradient computation

As mentioned in the previous section, a normal is required to be able to integrate shading effects. However, volumetric data itself does not explicitly consist of surfaces with associated normals but of sampled data available at grid positions. This grid of scalar values can be considered as a grey level volume, and several techniques have been investigated in the past to compute grey-level gradients from volumetric data.

A frequently used gradient operator is the central difference operator. For each dimension of the volume, the central difference of the two neighbouring voxels is computed, which gives an approximation of the local change of the gray value. It can be written as the convolution kernel Gradient_{x,y,z} = [-1 0 1]. Generally, the central difference operator is not necessarily the best one, but it is very cheap to compute since it requires only six voxels and three subtractions. A disadvantage of the central difference operator is that it produces anisotropic gradients.

The intermediate difference operator is similar to the central difference operator but has a smaller kernel. It can be written as Gradient_{x,y,z} = [-1 1]. The advantage of this operator is that it detects high frequencies which can be lost when using the central difference operator. However, when flipping the orientation, a different gradient is computed for the identical voxel position, which can cause undesired effects.

A much better gradient operator is the Sobel operator, which uses all 26 voxels that surround one voxel. This gradient operator was developed for 2D imaging, but volume rendering borrows many techniques from image processing and the Sobel operator can easily be extended to 3D. A nice property of this operator is that it produces nearly isotropic gradients, but it is fairly complex to compute 61.
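The following C++ sketch, assuming a volume stored as a flat array with a vol() accessor (an assumption for illustration), shows how the central difference and intermediate difference gradients of this section could be computed at an interior voxel; boundary handling is omitted.

#include <cstddef>
#include <vector>

// Minimal volume wrapper; the dimensions and data layout are assumptions.
struct Volume {
    int dimX, dimY, dimZ;
    std::vector<float> data;                       // x varies fastest
    float vol(int x, int y, int z) const {
        return data[(std::size_t(z) * dimY + y) * dimX + x];
    }
};

struct Vec3 { float x, y, z; };

// Central difference: six neighbours, three subtractions, kernel [-1 0 1].
Vec3 centralDifference(const Volume& v, int x, int y, int z) {
    return { v.vol(x + 1, y, z) - v.vol(x - 1, y, z),
             v.vol(x, y + 1, z) - v.vol(x, y - 1, z),
             v.vol(x, y, z + 1) - v.vol(x, y, z - 1) };
}

// Intermediate difference: smaller kernel [-1 1], orientation dependent.
Vec3 intermediateDifference(const Volume& v, int x, int y, int z) {
    float c = v.vol(x, y, z);
    return { v.vol(x + 1, y, z) - c,
             v.vol(x, y + 1, z) - c,
             v.vol(x, y, z + 1) - c };
}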

    2.8. Compositing

All samples taken during rendering need to be combined into a final image, which means that for each pixel of the image we need to combine the colors of the contributing samples. This could be done in arbitrary order if only opaque samples were involved, but since we deal with semi-transparent data, the blending needs to be performed in sorted order, which can be accomplished in two ways: front to back or back to front. For front to back, the discrete ray casting integral can then be written as:

Trans = 1.0;                          // accumulated transparency, initially full
Inten = I[0];                         // initial value: contribution of the first sample
for (i = 1; i < numSamples; i++) {
    Trans = Trans * (1 - alpha[i-1]); // attenuate by the opacity of the samples in front
    Inten = Inten + Trans * I[i];     // I[i] is the i-th sample's opacity-weighted color
}
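A minimal back-to-front counterpart, under the same assumptions about I[i] and alpha[i] as in the sketch above, simply blends each new sample over the result accumulated behind it:

Inten = I[numSamples - 1];                 // start with the farthest sample
for (i = numSamples - 2; i >= 0; i--) {
    Inten = I[i] + (1 - alpha[i]) * Inten; // blend current sample over what lies behind
}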


2.9. Interpolation

With nearest neighbour interpolation, the achievable image quality is fairly low and, when using magnification, a blobby structure appears.

Trilinear interpolation assumes a linear relation between neighbouring voxels and is separable. Therefore, it can be decomposed into seven linear interpolations. The achievable image quality is much higher than with nearest neighbour interpolation. However, when using large magnification factors, three-dimensional diamond structures or crosses appear due to the nature of the trilinear kernel.

Better quality can be achieved using even higher order interpolation methods such as cubic convolution or B-spline interpolation. However, there is a trade-off between quality and computational cost as well as memory bandwidth. These filters require a neighbourhood of 64 voxels and a significantly larger amount of computations than trilinear interpolation. It depends on the application and its requirements which interpolation scheme should be used.
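As a minimal sketch, the following C++ function performs trilinear interpolation as seven linear interpolations; the Volume wrapper and its data layout are the same illustrative assumptions as in the gradient sketch, and x, y, z are continuous sample coordinates inside the volume.

#include <cmath>
#include <cstddef>
#include <vector>

// Minimal volume access, same layout assumption as before.
struct Volume {
    int dimX, dimY, dimZ;
    std::vector<float> data;                       // x varies fastest
    float vol(int x, int y, int z) const {
        return data[(std::size_t(z) * dimY + y) * dimX + x];
    }
};

// Linear interpolation between a and b with weight t in [0,1].
static float lerp(float a, float b, float t) { return a + t * (b - a); }

// Trilinear interpolation decomposed into seven linear interpolations:
// four along x, two along y, one along z.
float sampleTrilinear(const Volume& v, float x, float y, float z) {
    int   x0 = int(std::floor(x)), y0 = int(std::floor(y)), z0 = int(std::floor(z));
    float fx = x - x0, fy = y - y0, fz = z - z0;

    float c00 = lerp(v.vol(x0, y0,     z0    ), v.vol(x0 + 1, y0,     z0    ), fx);
    float c10 = lerp(v.vol(x0, y0 + 1, z0    ), v.vol(x0 + 1, y0 + 1, z0    ), fx);
    float c01 = lerp(v.vol(x0, y0,     z0 + 1), v.vol(x0 + 1, y0,     z0 + 1), fx);
    float c11 = lerp(v.vol(x0, y0 + 1, z0 + 1), v.vol(x0 + 1, y0 + 1, z0 + 1), fx);

    float c0 = lerp(c00, c10, fy);
    float c1 = lerp(c01, c11, fy);
    return lerp(c0, c1, fz);          // the seventh linear interpolation
}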

    2.10. Color filtering

The previously mentioned techniques can be performed in different order, resulting in different image quality as well as being prone to certain artifacts. Interpolation of scalar values is usually prone to aliasing since, depending on the classification used, high frequency details might be missed. On the other hand, color interpolation, i.e. classification and shading of the available voxel values followed by interpolation of the resulting color values, is prone to color bleeding when interpolating color and α-value independently from each other 119. A simple example of this is bone surrounded by flesh, where the bone is classified opaque white and the flesh is transparent but red. When sampling this color volume, one needs to choose the appropriate interpolation scheme. Simply interpolating the neighbouring color and opacity values results in color bleeding, as illustrated in figure 5. To obtain the correct color and opacity, one needs to multiply each color with the corresponding opacity value before interpolating the color. While this can easily be noticed in figure 5 due to the chosen color scheme, it is less obvious in monochrome images.

Figure 5: Color bleeding: independent interpolation of color and opacity values (left) and opacity weighted color interpolation (right).
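The following C++ fragment contrasts naive interpolation with opacity-weighted interpolation of two classified samples; the RGBA struct and the interpolation weight t are illustrative assumptions.

struct RGBA { float r, g, b, a; };

// Naive: interpolating color and opacity independently causes color bleeding.
RGBA interpNaive(const RGBA& p, const RGBA& q, float t) {
    return { p.r + t * (q.r - p.r), p.g + t * (q.g - p.g),
             p.b + t * (q.b - p.b), p.a + t * (q.a - p.a) };
}

// Opacity-weighted: multiply each color by its opacity before interpolating,
// then carry the result as opacity-weighted (associated) color.
RGBA interpOpacityWeighted(const RGBA& p, const RGBA& q, float t) {
    RGBA out;
    out.a = p.a + t * (q.a - p.a);
    out.r = p.r * p.a + t * (q.r * q.a - p.r * p.a);
    out.g = p.g * p.a + t * (q.g * q.a - p.g * p.a);
    out.b = p.b * p.a + t * (q.b * q.a - p.b * p.a);
    return out;
}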

Figure 6 illustrates another example where darkening artifacts can be noticed. This example is a volumetric dataset from image based rendering that originates as color (red, green, and blue) at each grid position. Therefore, it illustrates what would happen when visualizing the visible human with and without opacity weighted color interpolation. The artifacts are quite severe and disturbing.

Figure 6: Darkening and aliasing: independent interpolation of color and opacity values (left) and opacity weighted color interpolation (right).

    2.11. Summary

Within this section, we provided an overview of the different types of grids as well as sources of volume data. Using classification, gradient estimation, shading, and compositing, extremely different visualization results can be achieved. Also, the selection of the filter used to interpolate data or color has a strong influence on the resulting image. Therefore, one has to choose carefully, depending on the application requirements, which of the described techniques should be integrated and which interpolation scheme should be used.

3. Volume Viewing Algorithms

The task of the rendering process is to display the primitives used to represent the 3D volumetric scene onto a 2D screen. Rendering is composed of a viewing process, which is the subject of this section, and the shading process. The projection process determines, for each screen pixel, which objects are seen by the sight ray cast from this pixel into the scene. The viewing algorithm is heavily dependent on the display primitives used to represent the volume and on whether volume rendering or surface rendering is employed. Conventional viewing algorithms and graphics engines can be utilized to display geometric primitives, typically employing surface rendering. However, when volume primitives are displayed directly, a special volume viewing algorithm should be employed. This algorithm should capture the contents of the voxels on the surface as well as the inside of the volumetric object being visualized. This section surveys and compares previous work in the field of direct volume viewing.


3.1. Introduction

The simplest way to implement viewing is to traverse the whole volume, regarding each voxel as a 3D point that is transformed by the viewing matrix and then projected onto a Z-buffer and drawn onto the screen. Some methods have been suggested in order to reduce the amount of computations needed in the transformation by exploiting the spatial coherency between voxels. These methods are described in more detail in Section 4.1.

The back-to-front (BTF) algorithm is essentially the same as the Z-buffer method, with one exception that is based on the observation that the voxel array is presorted in a fashion that allows scanning of its components in order of decreasing or increasing distance from the observer. Exploiting this presortedness of the voxel arrays, traversal of the volume in the BTF algorithm is done in order of decreasing distance to the observer. This avoids the need for a Z-buffer for hidden voxel removal: the painter's algorithm is applied by simply drawing the current voxel on top of previously drawn voxels, or by compositing the current voxel with the screen value 24.

The front-to-back (FTB) algorithm is essentially the same as BTF, only that now the voxels are traversed in increasing distance order. Front-to-back has the potential of a more efficient implementation by employing a dynamic data structure for screen representation 87, in which only un-lit pixels are processed and newly-lit pixels are efficiently removed from the data structure. It should be observed that the basic Z-buffer method cannot support the rendition of semi-transparent materials, since voxels are mapped to the screen in arbitrary order. Compositing is based on a computation that simulates the passage of light through several materials, and in this computation the order of materials is crucial. Therefore, translucency can easily be realized in both BTF and FTB, in which objects are mapped to the screen in the order in which the light traverses the scene.

Another method of volumetric projection is based on first transforming each slice from voxel-space to pixel-space using a 3D affine transformation (shearing) 33 90 52 and then projecting it to the screen in an FTB fashion, blending it with the projection formed by previous slices 22. Shear-warp rendering 52 is currently the fastest software algorithm. It achieves 1.1 Hz on a single 150 MHz R4400 processor for a 256 × 256 × 225 volume, with 65 seconds of pre-processing time 51. However, the 2D interpolation may lead to aliasing artifacts if the voxel values or opacities contain high frequency components 50.

Westover 107 108 has introduced the splatting technique, in which each voxel is transformed into screen space and then shaded. Blurring, based on 2D lookup tables, is performed to obtain a set of points (footprint) that spreads the voxel's energy across multiple pixels. These are then composited with the image array. Sobierajski et al. have described 94 a simplified splatting for interactive volume viewing in which only voxels comprising the object's surface are maintained. Rendering is based on the usage of a powerful transformation engine that is fed with multiple points per voxel. Additional speedup is gained by culling voxels that have a normal pointing away from the observer and by adaptive refinement of image quality.

The ray casting algorithm casts a ray from each pixel on the screen into the volume data along the viewing vector until it accumulates an opaque value 4 99 100 56. Levoy 59 57 has used the term ray tracing of volume data to refer to ray casting and compositing of evenly-spaced samples along the primary viewing rays. However, more recently, ray tracing refers to the process where reflected and transmitted rays are traced, while ray casting solely considers primary rays and hence does not aim for "photorealistic" imaging. Rays can be traced through a volume of color as well as data. Ray casting has been applied to volumetric datasets, such as those arising in biomedical imaging and scientific visualization applications (e.g., 22 100).

We now turn to classify and compare existing volume viewing algorithms. In section 4 we survey recent advances in acceleration techniques for forward viewing (section 4.1), backward viewing (section 4.2), and hybrid viewing (section 4.3).

3.2. Classification of Volume Viewing Methods

Projection methods differ in several aspects which can be used for their classification in various ways. First, we have to observe whether the algorithm traverses the volume and projects its components onto the screen (also called forward, object-order, or voxel-space projection) 24 107 29, whether it traverses the pixels and solves the visibility problem for each one by shooting a sight ray into the scene (also called backward, image-order, or pixel-space projection) 46 54 88 99 100 120 124, or whether it performs some kind of hybrid traversal 40 100 52 50.

Volume rendering algorithms can also be classified according to the partial voxel occupancy they support. Some algorithms 86 35 87 99 125 124 assume uniform (binary) occupancy, that is, a voxel is either fully occupied by some object or it is devoid of any object presence. In contrast to uniform voxel occupancy, methods based on partial voxel occupancy utilize intermediate voxel values to represent partial voxel occupancy by objects of homogeneous material. This provides a mechanism for the display of objects that are smaller than the acquisition grid or that are not aligned with it. Partial volume occupancy can be used to estimate occupancy fractions for each of a set of materials that might be present in a voxel 22. Partial volume occupancy is also assumed whenever the gray-level gradient 36 is used as a measure for the surface inclination. That is, voxel values in the neighborhood of a surface voxel are assumed to reflect the relative average of the various surface types in them.

Volume rendering methods also differ in the way they regard the material of the voxels. Some methods regard all materials as opaque 27 29 37 87 98 99, while others allow each voxel to have an opacity attribute 22 54 88 100 107 120 124 52. Supporting variable opacities models the appearance of semi-transparent jello and requires composition of multiple voxels along each sight ray.

Yet another aspect of distinction between rendering methods is the number of materials they support. Early methods supported scenes consisting of binary-valued voxels, while more recent methods usually support multi-valued voxels. In the first case, objects are represented by occupied voxels while the background is represented by void voxels 24 35 87 99. In the latter approach, multi-valued voxels are used to represent objects of non-homogeneous material 27 36 98. It should be observed that, given a set of voxels having multiple values, we can either regard them as fully occupied voxels of various materials (i.e., each value represents a different material) or we can regard the voxel value as an indicator of partial occupancy by a single material; however, we cannot have both. In order to overcome this limitation, some researchers adopt the multiple-material approach as a basis for a classification process that attaches a material label to each voxel. Once each voxel has a material label, these researchers regard the original voxel values as partial occupancy indicators for the labeled material 22.

Finally, volume rendering algorithms can also be classified according to whether they assume a constant value across the voxel extent 46 or assume (trilinear) variation of the voxel value 54.

A severe problem of the voxel-space projection is that at some viewing points, holes might appear in the scene. To solve this problem, one can regard each voxel as a group of points (depending on the viewpoint) 95 or maintain a ratio of 1 : √3 between voxel and pixel size 13. Another solution is based on a hybrid of voxel-space and pixel-space projections that traverses the volume in a BTF fashion but computes pixel colors by intersecting the voxel with a scanline (plane) and then integrating the colors in the resulting polygon 100. Since this computation is relatively time consuming, it is more suitable for small datasets. It is also possible to apply to each voxel a blurring function to obtain a 2D footprint that spreads the sample's energy onto multiple image pixels, which are later composited into the image 108. A major disadvantage of the splatting approach is that it tends to blur the edges of objects and reduce the image contrast. Another deficiency of the voxel-space projection method is that it must traverse and project all the voxels in the scene. Sobierajski et al. have suggested the use of normal-based culling in order to reduce (possibly by half) the amount of processed voxels 95. On the other hand, since voxel-space projection operates in object space, it is most suitable for various parallelization schemes based on object-space subdivision 28 79 107.

The main disadvantages of the pixel-space projection scheme are aliasing (especially when assuming a uniform value across the voxel extent) and the difficulty of parallelizing it. While the computation involved in tracing rays can be performed in parallel, memory becomes the bottleneck. Since rays traverse the volume in arbitrary directions, there seems to be no way to distribute voxels between memory modules to guarantee contention-free access 55.

Before presenting a side by side comparison of the four most popular volume rendering algorithms, we will introduce general acceleration techniques that can be applied to forward and backward viewing algorithms.

4. Acceleration Techniques

Both forward projection and backward projection require the scanning of the volume buffer, which is a large buffer of size proportional to the cube of the resolution. Consequently, volume rendering algorithms can be very time-consuming. This section focuses on techniques for expediting these algorithms.

    4.1. Expediting Forward Viewing

The Z-buffer projection algorithm, although surprisingly simple, is inherently very inefficient and, when naively implemented, produces low quality images. The inefficiency of this method is rooted in the N³ vector-by-matrix multiplications it calculates and the N³ accesses to the Z-buffer it requires. Inferior image quality is caused by this method's inability to support compositing of semitransparent materials, due to the arbitrary order in which voxels are transformed. In addition, transforming a set of discrete points is a source of various sampling artifacts such as holes and jaggies.

Some methods have been suggested to reduce the amount of computations needed for the transformation by exploiting the spatial coherency between voxels. These methods are: recursive "divide and conquer" 27 69, pre-calculated tables 24, and incremental transformation 44 65.

The first method exploits coherency in voxel space by representing the 3D volume by an octree. A group of neighboring voxels having the same value (or similar values, up to a threshold) may, under some restrictions, be grouped into a uniform cubic subvolume. This aggregate of voxels can be transformed and rendered as a uniform unit instead of processing each of its voxels. In addition, since each octree node has eight equally-sized octants, given the transformation of the parent node, the transformation of its sub-octants can be efficiently computed. This method requires, in 3D, three divisions and six additions per coordinate transformation.

The table-driven transformation method 24 is based on the observation that volume transformation involves the multiplication of the matrix elements with integer values which are always in the range [1 ... N], where N is the volume resolution. Therefore, in a short preprocessing stage, each matrix element t_ij is stored in a table tab_ij[N] such that tab_ij[k] = t_ij · k for 1 ≤ k ≤ N. During the transformation stage, coordinate-by-matrix multiplication is replaced by table lookup. This method requires, in 3D, nine table lookup operations and nine additions per coordinate transformation.
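A minimal C++ sketch of this table-driven transformation, assuming integer voxel coordinates in [1, N] and a 3×4 transformation matrix (linear part plus translation), could look as follows.

#include <vector>

// Precomputed products tab[i][j][k-1] = t[i][j] * k for k = 1..N.
struct TableTransform {
    int N;
    std::vector<float> tab[3][3];
    float trans[3];                   // translation column, added directly

    TableTransform(const float t[3][4], int n) : N(n) {
        for (int i = 0; i < 3; ++i) {
            trans[i] = t[i][3];
            for (int j = 0; j < 3; ++j) {
                tab[i][j].resize(N);
                for (int k = 1; k <= N; ++k)
                    tab[i][j][k - 1] = t[i][j] * k;   // preprocessing stage
            }
        }
    }

    // Transform voxel coordinates (x,y,z), each in [1,N]:
    // nine table lookups and nine additions in total.
    void apply(int x, int y, int z, float out[3]) const {
        for (int i = 0; i < 3; ++i)
            out[i] = tab[i][0][x - 1] + tab[i][1][y - 1]
                   + tab[i][2][z - 1] + trans[i];
    }
};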

Finally, the incremental transformation method is based on the observation that the transformation of a voxel can be computed incrementally given the transformed vector of a neighboring voxel. To begin the incremental process we need one matrix-by-vector multiplication to compute the updated position of the first grid point. The remaining grid points are incrementally transformed, requiring three additions per coordinate. However, to employ this approach, all volume elements, including the empty ones, have to be transformed. This approach is therefore more suitable for parallel architectures where it is desired to keep the computation pipeline busy 65.
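The sketch below, under the same 3×4 matrix assumption as above, transforms one row of grid points incrementally: after one full matrix-vector product for the first point, each further point along x only needs to add the first matrix column (three additions per grid point).

// Incrementally transform a row of N grid points starting at (x0, y0, z0),
// stepping along x. t is a 3x4 matrix (linear part plus translation).
void transformRowIncremental(const float t[3][4],
                             float x0, float y0, float z0,
                             int N, float* out /* N*3 floats */) {
    // One full matrix-by-vector multiplication for the first grid point.
    float p[3];
    for (int i = 0; i < 3; ++i)
        p[i] = t[i][0] * x0 + t[i][1] * y0 + t[i][2] * z0 + t[i][3];

    for (int k = 0; k < N; ++k) {
        out[3 * k + 0] = p[0];
        out[3 * k + 1] = p[1];
        out[3 * k + 2] = p[2];
        // Three additions per transformed grid point: add the first column.
        p[0] += t[0][0];
        p[1] += t[1][0];
        p[2] += t[2][0];
    }
}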

So far we have been looking at methods that ease the computational burden associated with the transformation. However, consulting the Z-buffer N³ times is also a source of significant slowdown. The back-to-front (BTF) algorithm is essentially the same as the Z-buffer method with one exception: the order in which voxels are scanned. It is based on the observation that the voxel array is spatially presorted. This attribute allows the renderer to scan the volume in order of decreasing distance from the observer. By exploiting this presortedness of the voxel arrays, one can draw the volume in back-to-front order, that is, in order of decreasing distance to the observer. This avoids the need for a Z-buffer for hidden voxel removal by applying the painter's algorithm: the current voxel is simply drawn on top of previously drawn voxels. If compositing is performed, the current voxel is composited with the screen value 23 24. The front-to-back (FTB) algorithm is essentially the same as BTF, only that now the voxels are traversed in increasing distance order.

As mentioned above, in the basic Z-buffer method it is impossible to support the rendition of semitransparent materials because voxels are mapped to the screen in an arbitrary order. In contrast, translucency can easily be realized in both BTF and FTB because in these methods objects are mapped to the screen in viewing order.

Another approach to forward projection is based on first transforming the volume from voxel-space to pixel-space by employing a decomposition of the 3D affine transformation into five 1D shearing transformations 33. Then, the transformed voxel is projected onto the screen in FTB order, which supports the blending of voxels with the projection formed by previous (farther) voxels 22. The major advantage of this approach is its ability (using simple averaging techniques) to overcome some of the sampling problems causing the production of low quality images. In addition, this approach replaces the 3D transformation by five 1D transformations which require only one floating-point addition each.

Another solution to the image quality problem mentioned above is splatting 108, in which each voxel is transformed into screen space and then shaded. Blurring, based on 2D lookup tables, is performed to obtain a set of points (a cloud), called a footprint, that spreads the voxel's energy across multiple pixels. These are then composited with the image array. However, this algorithm, which requires extensive filtering, is time consuming.

Sobierajski et al. have described 94 a simplified approximation to the splatting method for interactive volume viewing in which only voxels comprising the object's surface are maintained. Each voxel is represented by several 3D points (a 3D footprint). Rendering is based on the usage of a contemporary geometry engine that is fed with those multiple points per voxel. Additional speedup is gained by culling voxels that have a normal pointing away from the observer. Finally, adaptive refinement of image quality is also supported: when the volume is manipulated, only one point per voxel is rendered, interactively producing a low quality image. When the volume remains stationary and unchanged for some short period, the rendering system renders the rest of the points to increase image quality.

Another efficient implementation of the splatting algorithm, called hierarchical splatting 53, uses a pyramid data structure to hold a multiresolution representation of the volume. For a volume of N³ resolution, the pyramid data structure consists of a sequence of log N volumes. The first volume contains the original dataset; the next volume in the sequence is half the resolution of the previous one, and each of its voxels contains the average of eight voxels in the higher resolution volume. According to the desired image quality, this algorithm scans the appropriate level of the pyramid in a BTF order. Each element is splatted using the appropriately sized splat. The splats themselves are approximated by polygons which can efficiently be rendered by graphics hardware.
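As a minimal sketch of building one coarser pyramid level, the following C++ function averages each 2×2×2 block of the finer level; the flat std::vector layout and even dimensions are assumptions for brevity.

#include <cstddef>
#include <vector>

// Build the next (half-resolution) pyramid level by averaging 2x2x2 blocks.
// fine has dimensions dim x dim x dim with x varying fastest; dim is even.
std::vector<float> downsampleLevel(const std::vector<float>& fine, int dim) {
    int half = dim / 2;
    std::vector<float> coarse(std::size_t(half) * half * half);
    auto at = [&](int x, int y, int z) {
        return fine[(std::size_t(z) * dim + y) * dim + x];
    };
    for (int z = 0; z < half; ++z)
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                float sum = 0.0f;
                for (int dz = 0; dz < 2; ++dz)        // average the eight
                    for (int dy = 0; dy < 2; ++dy)    // finer-level voxels
                        for (int dx = 0; dx < 2; ++dx)
                            sum += at(2 * x + dx, 2 * y + dy, 2 * z + dz);
                coarse[(std::size_t(z) * half + y) * half + x] = sum / 8.0f;
            }
    return coarse;
}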

    4.2. Expediting Backward Viewing

Backward viewing of volumes, based on casting rays, has three major variations: parallel (orthographic) ray casting, perspective ray casting, and ray tracing. The first two are variations of ray casting, in which only primary rays, that is, rays from the eye through the screen, are followed. These two methods have been widely applied to volumetric datasets, such as those arising in biomedical imaging and scientific visualization applications (e.g., 22 100). Levoy 57 58 has used the term ray tracing of volume data to refer to ray casting and compositing of evenly-spaced samples along the primary viewing rays.

Ray casting can further be divided into methods that support only parallel viewing, that is, when the eye is at infinity and all rays are parallel to one viewing vector. This viewing scheme is used in applications that do not benefit from perspective distortion, such as biomedicine. Alternatively, ray casting can be implemented to also support perspective viewing.

Since ray casting follows only primary rays, it does not directly support the simulation of light phenomena such as reflection, shadows, and refraction. As an alternative, Yagel et al. have developed the 3D raster ray tracer (RRT) 122 that recursively considers both primary and secondary rays and thus can create "photorealistic" images. It exploits the voxel representation for the uniform representation and ray tracing of sampled and computed volumetric datasets, traditional geometric scenes, or intermixings thereof.

An examination of existing methods for speeding up the process of ray casting reveals that most of them rely on one or more of the following principles: (1) pixel-space coherency, (2) object-space coherency, (3) inter-ray coherency, and (4) space-leaping.

We now turn to describe each of those in more detail.

1. Pixel-space coherency: There is a high coherency between pixels in image space. That is, it is highly probable that between two pixels having identical or similar color we will find another pixel having the same (or similar) color. Therefore, it might be possible to avoid casting a ray for such obviously identical pixels.

2. Object-space coherency: The extension of pixel-space coherency to 3D states that there is coherency between voxels in object space. Therefore, it should be possible to avoid sampling in 3D regions having uniform or similar values.

3. Inter-ray coherency: There is a great deal of coherency between rays in parallel viewing, that is, all rays, although having different origins, have the same slope. Therefore, the sets of steps these rays take when traversing the volume are similar. This coherency can be exploited to avoid the computation involved in navigating the ray through voxel space.

4. Space-leaping: The passage of a ray through the volume is two-phased. In the first phase, the ray advances through the empty space searching for an object. In the second phase, the ray integrates colors and opacities as it penetrates the object (in the case of multiple or concave objects these two phases can repeat). Commonly, the second phase involves one or a few steps, depending on the object's opacity. Since the passage through empty space does not contribute to the final image, skipping the empty space can provide significant speedup without affecting image quality.

Adaptive image supersampling exploits the pixel-space coherency. It was originally developed for traditional ray tracing 7 and later adapted to volume rendering 57 60. First, rays are cast from only a subset of the screen pixels (e.g., every other pixel). "Empty pixels" residing between pixels with similar values are assigned an interpolated value. In areas of high image gradient, additional rays are cast to resolve ambiguities.

Van Walsum et al. 104 have used the voxel-space coherency. In their method, the ray starts sampling the volume at low frequency (i.e., with large steps between sample points). If a large value difference is encountered between two adjacent samples, additional samples are taken between them to resolve ambiguities in these high frequency regions. Recently, this basic idea was extended to efficiently lower the sampling rate either in areas where only small contributions of opacities are made, or in regions where the volume is homogeneous 20. This method efficiently detects regions of low presence or low variation by employing a pyramid of volumes that decode the minimum and maximum voxel value in a small neighborhood, as well as the distance between these measures.

The template-based method 120 124 utilizes the inter-ray coherency. Observing that, in parallel viewing, all rays have the same form, it was realized that there is no need to reactivate the discrete line algorithm for each ray. Instead, we can compute the form of the ray once and store it in a data structure called a ray-template. All rays can then be generated by following the ray template. The rays, however, differ in the exact positioning of the appropriate portion of the template, an operation that has to be performed very carefully. For this purpose, a plane that is parallel to one of the volume faces is chosen to serve as a base-plane for the template placement. The image is projected by sliding the template along that plane, emitting a ray at each of its pixels. This placement guarantees complete and uniform tessellation of the volume. The regularity and simplicity of this efficient algorithm make it very attractive for hardware implementation 121.

So far we have seen methods that exploit some type of coherency to expedite volumetric ray casting. However, the most prolific and effective branch of volume rendering acceleration techniques involves the utilization of the fourth principle mentioned above: speeding up ray casting by providing efficient means to traverse the empty space.

The hierarchical representation (e.g., octree) decomposes the volume into uniform regions that can be represented by nodes in a hierarchical data structure. An adjusted ray traversal algorithm skips the (uniform) empty space by maneuvering through the hierarchical data structure 57 89. It was also observed that traversing the hierarchical data structure is inefficient compared to the traversal of regular grids. A combination of the advantages of both representations is the uniform buffer. The "uniformity information" decoded by the octree can be stored in the empty space of a regular 3D raster. That is, voxels in the uniform buffer contain either a data value or information indicating to which size of empty octant they belong. Rays which are cast into the volume encounter either a data voxel, or a voxel containing "uniformity information" which instructs the ray to perform a leap forward that brings it to the first voxel beyond the uniform region 16. This approach saves the need to perform a tree search for the appropriate neighbor, an operation that is the most time consuming and the major disadvantage of the hierarchical data structure.

When a volume consists of one object surrounded by empty space, a common and simple method to skip most of this empty space uses the well known technique of bounding boxes. The object is surrounded by a tightly fit box (or another easy-to-intersect object such as a sphere). Rays are intersected with the bounding object and start their actual volume traversal from this intersection point as opposed to starting from the volume boundary. The PARC (Polygon Assisted Ray Casting) approach 3 strives for a better fit by allowing a convex polyhedral envelope to be constructed around the object. PARC utilizes available graphics hardware to render the front faces of the envelope (to determine, for each pixel, the ray entry point) and the back faces (to determine the ray exit point). The ray is then traversed from entry to exit point. A ray that does not hit any object is not traversed at all.

It is obvious that the empty space does not have to be sampled; it only has to be crossed as fast as possible. Therefore, Yagel et al. have proposed 123 122 to utilize one fast and crude line algorithm in the empty space (e.g., a 3D integer-based 26-connected line algorithm) and another, slower but more accurate one (e.g., a 6-connected integer or 3D DDA floating point line algorithm), in the vicinity and interior of objects. The effectiveness of this approach depends on its ability to efficiently switch back and forth between the two line algorithms, and on its ability to efficiently detect the proximity of occupied voxels. This is achieved by surrounding the occupied voxels by a one-voxel-deep "cloud" of flag-voxels, that is, all empty voxels neighboring an occupied voxel are assigned, in a preprocessing stage, a special "vicinity flag". A crude ray algorithm is employed to rapidly traverse the empty space until it encounters a vicinity voxel. This flags the need to switch to a more accurate ray traversal algorithm. Encountering later an empty voxel (i.e., unoccupied and not carrying the vicinity flag) can signal a switch back to the rapid traversal of empty space.

The proximity-clouds method 16 128 is based on extending this idea even further. Instead of having a one-voxel-deep vicinity cloud, this method computes, in a preprocessing stage, for each empty voxel the distance to the closest occupied voxel. When a ray is sent into the volume it can either encounter an occupied voxel, to be handled as usual, or a "proximity voxel" carrying a value n, which indicates that the ray can take an n-step leap forward, being assured that there is no object in the skipped span of voxels. The effectiveness of this algorithm obviously depends on the ability of the line traversal algorithm to efficiently jump an arbitrary number of steps 16.
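The following C++ fragment sketches the traversal side of such a proximity cloud, assuming a precomputed distance volume where 0 marks an occupied voxel and any other value gives the number of voxels that may safely be skipped; the DistanceVolume layout, the unit ray direction, and the hit counting are illustrative assumptions, and bounds checking is omitted.

#include <cstddef>
#include <vector>

// Distance volume: 0 marks an occupied voxel, otherwise the number of voxels
// that can safely be skipped (distance to the nearest occupied voxel).
struct DistanceVolume {
    int dim;
    std::vector<int> d;                              // x varies fastest
    int dist(int x, int y, int z) const {
        return d[(std::size_t(z) * dim + y) * dim + x];
    }
};

struct Vec3 { float x, y, z; };

// March a ray through the volume using proximity-cloud leaps.
// Occupied voxels would normally be shaded and composited; here we only count them.
int castRay(const DistanceVolume& v, Vec3 pos, Vec3 dir, int maxSteps) {
    int hits = 0;
    for (int step = 0; step < maxSteps; ) {
        int n = v.dist(int(pos.x), int(pos.y), int(pos.z));
        if (n == 0) {          // occupied voxel: process it, then advance by one
            ++hits;
            n = 1;
        }
        pos.x += n * dir.x;    // leap n voxels along the ray; the distance value
        pos.y += n * dir.y;    // guarantees no object lies in the skipped span
        pos.z += n * dir.z;
        step += n;
    }
    return hits;
}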

Yagel and Shi 127 have reported on a method for speeding up the process of volume rendering a sequence of images. It is based on exploiting coherency between consecutive images to shorten the path rays take through the volume. This is achieved by providing each ray with the information needed to leap over the empty space and commence volume traversal in the vicinity of meaningful data. The algorithm starts by projecting the volume into a C-buffer (coordinate buffer) which stores, at each pixel location, the object-space coordinates of the first non-empty voxel visible from that pixel. For each change in the viewing parameters, the C-buffer is transformed accordingly. In the case of rotation, the transformed C-buffer goes through a process of eliminating coordinates that possibly became hidden 30. The remaining values in the C-buffer serve as an estimate of the point where the new rays should start their volume traversal.

    4.3. Hybrid Viewing

The most efficient rendering algorithm uses a ray-casting technique with hybrid object/image-order data traversal based on the shear-warp factorization of the viewing matrix 124 91 52. The volume data is defined in object coordinates (u, v, w), which are first transformed to isotropic object coordinates by a scale and shear matrix L. This makes it possible to automatically handle anisotropic datasets, in which the spacing between voxels differs in the three dimensions, and gantry-tilted datasets, in which the slices are sheared, by adjusting the warp matrix. A permutation matrix P transforms the isotropic object to permuted coordinates (x, y, z). The origin of permuted coordinates is the vertex of the volume nearest to the image plane, and the z axis is the edge of the volume most parallel to the view direction. A shear matrix S represents the rendering operation that projects points in the permuted volume space onto points on the base plane, which is the face of the volume data that is most parallel to the viewing plane.

In the shear-warp implementation by Lacroute and Levoy 52, the volume is stored three times, run-length encoded along the major viewing direction. The projection is performed using bi-linear interpolation and back-to-front compositing of volume slices parallel to the base plane. Pfister et al. 83 perform the projection using ray-casting. This prevents view-dependent artifacts when switching base planes and accommodates supersampling of the volume data. Instead of casting rays from image space, rays are sent into the dataset from the base plane. This approach guarantees that there is a one-to-one mapping of sample points to voxels 124 91.

The base plane image is transformed to the image plane using the warp matrix W = M · L⁻¹ · P⁻¹ · S⁻¹, where M denotes the viewing matrix. To resample the image, one can use 2D texture mapping with bi-linear interpolation on a companion graphics card. The additional 2D image resampling results in a slight degradation of image quality. It enables, however, an easy mapping to an arbitrary user-specified image size.


The main advantage of the shear-warp factorization is that voxels can be read and processed in planes of voxels, called slices, that are parallel to the base plane. Slices are processed in positive z direction. Within a slice, scanlines of voxels (called voxel beams) are read from memory in top-to-bottom order. This leads to regular, object-order data access. In addition, it allows parallelism by having multiple rendering pipelines work on several voxels in a beam at the same time.

    4.4. Progressive Refinement

One practical solution to the rendering time problem is the generation of partial images that are progressively refined as the user interacts with the crude image. Both the forward and the backward approach can support progressive refinement. In the case of forward viewing, this technique is based on a pyramid data structure. First, the smaller volume in the pyramid is rendered using large-footprint splats. Later, higher resolution components of the pyramid are rendered 53.

Providing progressive refinement in backward viewing is achieved by first sampling the screen at low resolution. The regions in the screen where no rays were emitted receive a value interpolated from some close pixels that were assigned rays. Later, more rays are cast and the interpolated value is replaced by the more accurate result 60. Additionally, rays that are intended to cover large screen areas can be traced in the lower-resolution components of a pyramid 57.

Not only the screen-space resolution can be progressively increased. The sampling rate and stopping criteria can also be refined. An efficient implementation of this technique was reported by Danskin and Hanrahan 20.

5. The Four Most Popular Approaches

As we have seen in the previous sections, there are numerous approaches that can be taken in volume visualization. A side by side comparison of all these approaches would cover many pages and would probably not give many insights due to the overwhelming amount of information and the large parameter set. Generally, there are two avenues that can be taken:

1. The volumetric data are first converted into a set of polygonal iso-surfaces (e.g., via Marching Cubes 63) and subsequently rendered with polygon rendering hardware. This is referred to as indirect volume rendering (IVR).

2. The volumetric data are directly rendered without the intermediate conversion step. This is referred to as direct volume rendering (DVR) 20 88 100.

The former assumes (i) that a set of extractable iso-surfaces exists, and (ii) that with the infinitely thin surface the polygon mesh models the true object structures at reasonable fidelity. Neither is always the case, as illustrative examples may serve: (i) amorphous cloud-like phenomena, (ii) smoothly varying flow fields, or (iii) structures of varying depth (and varying transparencies of an isosurface) that attenuate traversing light corresponding to the material thickness. But even if both of these assumptions are met, the complexity of the extracted polygonal mesh can overwhelm the capabilities of the polygon subsystem, and a direct volume rendering may prove more efficient 81, especially when the object is complex or large, or when the isosurface is interactively varied and the repeated polygon extraction overhead must be figured into the rendering cost 5.

Within this section, we concern ourselves solely with the direct volume rendering approach, in which four techniques have emerged as the most popular: raycasting 99 54, splatting 108, shear-warp 52, and 3D texture-mapping hardware-based approaches 9.

5.1. Introduction

Over the years, many researchers have worked independently on refining these four methods, and due to this multifarious effort, all methods have now reached a high level of maturity. Most of this development, however, has evolved along separate paths (although some fundamental scientific progress has benefited all methods, such as advances in filter design 10 6 73 or efficient shading 103 105). A number of frequently used and publicly available datasets exist (e.g., the UNC CT / MRI heads or the CT lobster); however, due to the large number of parameters that were not controlled across presented research, it has so far been difficult to assess the benefits and shortcomings of each method in a decisive manner. The generally uncontrolled parameters include (apart from hardware architecture, available cache, and CPU clock speed): shading model, viewing geometry, scene illumination, transfer functions, image sizes, and magnification factors. Further, so far, no common set of evaluation criteria exists that enables fair comparisons of proposed methods with existing ones. Within this section, we will address this problem and present an appropriate setup for benchmarking and evaluating different direct volume rendering algorithms. Some work in this direction has already been done in the past: Bartz 5 has compared DVR using raycasting with IVR using marching cubes for iso-surface extraction, while Tiede 97 has compared gradient filters for raycasting and marching cubes. However, a clear answer to which algorithm is best cannot be provided for the general case, but the results presented here are aimed at providing certain guidelines to determine under what conditions and premises each volume rendering algorithm is most adequately chosen and applied.

5.2. Common Theoretical Framework

We can write all four investigated volume rendering methods as approximations of the well-known low-albedo volume rendering integral, VRI 8 48 41 66. The VRI analytically computes I_λ(x,r), the amount of light of wavelength λ coming from ray direction r that is received at location x on the image plane:

    I_\lambda(x, r) = \int_0^{L} C_\lambda(s)\, \mu(s)\, e^{-\int_0^s \mu(t)\, dt}\, ds   (1)

Here, L is the length of ray r. If we think of the volume as being composed of particles with certain densities (or light extinction coefficients 66) µ, then these particles receive light from all surrounding light sources and reflect this light towards the observer according to their specular and diffuse material properties. In addition, the particles may also emit light on their own. Thus, in (1), C_λ is the light of wavelength λ reflected and/or emitted at location s in the direction of r. To account for the higher reflectance of particles with larger densities, we must weigh the reflected color by the particle density. The light scattered at s is then attenuated by the densities of the particles between s and the eye according to the exponential attenuation function.

At least in the general case, the VRI cannot be computed analytically [66]. Hence, practical volume rendering algorithms discretize the VRI into a series of sequential intervals i of width ∆s:

$$I_\lambda(x, r) = \sum_{i=0}^{L/\Delta s} C_\lambda(s_i)\, \mu(s_i)\, \Delta s \prod_{j=0}^{i-1} e^{-\mu(s_j)\, \Delta s} \qquad (2)$$

Using a Taylor series approximation of the exponential term and dropping all but the first two terms, we get the familiar compositing equation [57]:

$$I_\lambda(x, r) = \sum_{i=0}^{L/\Delta s} C_\lambda(s_i)\, \alpha(s_i) \prod_{j=0}^{i-1} \bigl(1 - \alpha(s_j)\bigr) \qquad (3)$$

We denote this expression as discretized VRI (DVRI), where α = 1.0 - transparency. Expression (3) represents a common theoretical framework for all surveyed volume rendering algorithms. All algorithms obtain colors and opacities in discrete intervals along a linear path and composite them in front-to-back order or back-to-front order, see Section 2.8. However, the algorithms can be distinguished by the process in which the colors C(s_i) and opacities α(s_i) are calculated in each interval i, and how wide the interval width ∆s is chosen. The position of the shading operator in the volume rendering pipeline also affects C(s_i) and α(s_i). For this purpose, we distinguish the pre-shaded from the post-shaded volume rendering pipeline. In the pre-shaded pipeline, the grid samples are classified and shaded before the ray sample interpolation takes place. We denote this as Pre-DVRI (pre-shaded DVRI) and its mathematical expression is identical to formula (3). Pre-DVRI generally leads to blurry images, especially in zoomed viewing, where fine object detail is often lost [39, 77].
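Equation (3) lends itself to an iterative front-to-back evaluation. The following minimal sketch keeps a running transparency product; colorAt and alphaAt are hypothetical callbacks standing in for whichever interpolation and classification scheme a particular algorithm uses to produce C(s_i) and α(s_i).

#include <cstddef>

struct RGB { float r, g, b; };

// Iterative front-to-back evaluation of the discretized VRI (equation 3).
// colorAt(i) and alphaAt(i) return C(s_i) and alpha(s_i) for interval i.
template <typename ColorFn, typename AlphaFn>
RGB compositeRay(std::size_t numIntervals, ColorFn colorAt, AlphaFn alphaAt)
{
    RGB   accum        = {0.0f, 0.0f, 0.0f};
    float transparency = 1.0f;                 // running product of (1 - alpha(s_j))
    for (std::size_t i = 0; i < numIntervals; ++i) {
        RGB   c = colorAt(i);
        float a = alphaAt(i);
        accum.r += transparency * c.r * a;     // C(s_i) * alpha(s_i) * prod_j (1 - alpha(s_j))
        accum.g += transparency * c.g * a;
        accum.b += transparency * c.b * a;
        transparency *= (1.0f - a);
    }
    return accum;
}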

The blurriness is eliminated by switching the order of classification/shading and ray sample interpolation. Then, the original density volume f is interpolated and the resulting sample values f(i) are classified, via transfer functions, to yield material, opacity, and color. All blurry parts of the edge image can be clipped away using the appropriate classification function [77]. Shading follows immediately after classification and requires the computation of gradients from the density grid. The resulting expression is termed Post-DVRI (post-shaded DVRI) and is written as follows:

$$I_\lambda(x, r) = \sum_{i=0}^{L/\Delta s} C_\lambda\bigl(f(s_i)\bigr)\, \alpha\bigl(f(s_i)\bigr) \prod_{j=0}^{i-1} \Bigl(1 - \alpha\bigl(f(s_j)\bigr)\Bigr) \qquad (4)$$

C and α are now transfer functions, commonly implemented as lookup tables. Since in Post-DVRI the raw volume densities are interpolated and used to index the transfer functions for color and opacity, fine detail in these transfer functions is readily expressed in the final image. One should note, however, that Post-DVRI is not without problems: due to the partial volume effect, a density might be interpolated that is classified as a material not really present at the sample location, which can lead to false colors in the final image. This can be avoided by prior segmentation, which, however, can add severe staircasing artifacts due to the introduced high frequencies. Based on formulas (3) and (4), we will now present the four surveyed algorithms in detail.
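As a small illustration of the Post-DVRI classification step, the sketch below maps an already interpolated density through a color/opacity lookup table; the 256-entry table and the 8-bit density range are assumptions made purely for this example.

#include <array>

struct RGBA { float r, g, b, a; };

// Transfer functions for Post-DVRI, modeled as a 256-entry lookup table
// (assuming an 8-bit density range). The density is interpolated first
// and only then classified.
struct TransferFunctions {
    std::array<RGBA, 256> table;

    RGBA classify(float interpolatedDensity) const {
        int idx = static_cast<int>(interpolatedDensity);
        if (idx < 0)   idx = 0;                // clamp to the table range
        if (idx > 255) idx = 255;
        return table[idx];
    }
};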

5.3. Distinguishing Features of the different algorithms

Our comparison will focus on the conceptual differences between the algorithms, and not so much on ingenious measures that speed runtime. Since numerous implementations for each algorithm exist – mainly providing acceleration – we will select the most general implementation for each, employing the most popular components and parameter settings. More specific implementations can then use the benchmarks introduced later to compare the impact of their improvements. We have summarized the conceptual differences of the four algorithms in Table 1.

    5.3.1. Raycasting

Of all volume rendering algorithms, Raycasting has seen the largest body of publications over the years. Researchers have used Pre-DVRI [57, 54] as well as Post-DVRI [2, 38, 97]. The density and gradient (Post-DVRI), or color and opacity (Pre-DVRI), in each DVRI interval are generated via point sampling, most commonly by means of a trilinear filter from neighboring voxels (grid points) to maintain computational efficiency, and subsequently composited. Most authors space the ray samples apart in equal distances ∆s, but some approaches exist that jitter the sampling positions to eliminate patterned sampling artifacts, or apply space-leaping [20, 127] for accelerated traversal of empty regions. For strict iso-surface rendering, recent research analytically computes the location of the iso-surface when the ray steps into a voxel that is traversed by one [81]. But in the general case, the Nyquist theorem needs to be followed, which states that we should choose ∆s ≤ 1.0 (i.e., one voxel length) if we do not know anything about the frequency content in the sample's local neighborhood.

Then, for Pre-DVRI and Post-DVRI raycasting, the C(s_i), α(s_i), and f(s_i) terms in equations (3) and (4), respectively, are written as:

$$C_\lambda(s_i) = C_\lambda(i\,\Delta s), \qquad \alpha(s_i) = \alpha(i\,\Delta s), \qquad f(s_i) = f(i\,\Delta s) \qquad (5)$$

Note that α needs to be normalized for ∆s ≠ 1.0 [62]. The implementation used here employs early ray termination, a powerful acceleration method of raycasting in which rays are terminated as soon as the accumulated opacity has reached a value close to unity. Furthermore, all samples and corresponding gradient components are computed by trilinear interpolation of the respective grid data.
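To illustrate these two points, the sketch below extends the generic compositing loop with early ray termination and with the usual opacity normalization for ∆s ≠ 1.0; sampleDensity and classify are hypothetical helpers (given placeholder bodies so the sketch compiles) that stand in for trilinear interpolation and transfer-function lookup.

#include <cmath>

struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Placeholder helpers: a real raycaster would trilinearly interpolate the
// density grid and index the transfer functions here.
static float sampleDensity(const Vec3&) { return 0.0f; }
static RGBA  classify(float)            { return {0.0f, 0.0f, 0.0f, 0.0f}; }

// Ray marching with opacity normalization and early ray termination.
RGBA castRay(Vec3 origin, Vec3 dir, float rayLength, float ds)
{
    RGBA  out          = {0.0f, 0.0f, 0.0f, 0.0f};
    float transparency = 1.0f;
    for (float t = 0.0f; t < rayLength; t += ds) {
        Vec3 p = { origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z };
        RGBA s = classify(sampleDensity(p));
        // Normalize the stored opacity to a step size other than one voxel length.
        float a = 1.0f - std::pow(1.0f - s.a, ds);
        out.r += transparency * s.r * a;
        out.g += transparency * s.g * a;
        out.b += transparency * s.b * a;
        transparency *= (1.0f - a);
        if (transparency < 0.01f)              // early ray termination
            break;
    }
    out.a = 1.0f - transparency;
    return out;
}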

    5.3.2. Splatting

Splatting was proposed by Westover [108], and it works by representing the volume as an array of overlapping basis functions, commonly Gaussian kernels with amplitudes scaled by the voxel values. An image is then generated by projecting these basis functions to the screen. The screen projection of these radially symmetric basis functions can be efficiently achieved by the rasterization of a precomputed footprint lookup table. Here, each footprint table entry stores the analytically integrated kernel function along a traversing ray. A major advantage of splatting is that only voxels relevant to the image must be projected and rasterized. This can tremendously reduce the volume data that needs to be both processed and stored [78]. However, depending on the zooming factor, each splat can cover up to hundreds of pixels which need to be processed.

The preferred splatting approach [108] summed the voxel kernels within volume slices most parallel to the image plane. This was prone to severe brightness variations in animated viewing and also did not allow the variation of the DVRI interval distance ∆s. Mueller [76] eliminated these drawbacks by processing the voxel kernels within slabs of width ∆s, aligned parallel to the image plane – hence the approach was termed image-aligned splatting: all voxel kernels that overlap a slab are clipped to the slab and summed into a sheet buffer, followed by compositing the sheet with the sheets before it. Efficient kernel slice projection is achieved by analytical pre-integration of an array of kernel slices and using fast slice footprint rasterization methods [78]. Both Pre-DVRI and Post-DVRI [77] are possible, and the C(s_i), α(s_i), and f(s_i) terms in equations (3) and (4) are now written as:

$$C_\lambda(s_i) = \frac{1}{\Delta s}\int_{i\,\Delta s}^{(i+1)\,\Delta s} C_\lambda(s)\, ds, \qquad \alpha(s_i) = \frac{1}{\Delta s}\int_{i\,\Delta s}^{(i+1)\,\Delta s} \alpha(s)\, ds, \qquad f(s_i) = \frac{1}{\Delta s}\int_{i\,\Delta s}^{(i+1)\,\Delta s} f(s)\, ds \qquad (6)$$

We observe that splatting replaces the point sample of raycasting by a sample average across ∆s. This introduces an additional low-pass filtering step that helps to reduce aliasing, especially in isosurface renderings and when ∆s > 1.0. Splatting also typically uses rotationally symmetric Gaussian kernels, which have better anti-aliasing characteristics than linear filters, with the side effect of performing some signal smoothing. However, when splatting data values and not color, classification is an unsolved problem since the original data value, i.e., density, is smoothed, accumulated, and then classified. This aggravates the so-called partial volume effect, and solving it remains future research.

Splatting can use a concept similar to early ray termination: early splat elimination, based on a dynamically computed screen occlusion map, that (conservatively) culls invisible splats early from the rendering pipeline [78]. The main operations of splatting are the transformation of each relevant voxel center into screen space, followed by an index into the occlusion map to test for visibility, and, in case it is visible, the rasterization of the voxel footprint into the sheet buffer. The dynamic construction of the occlusion map requires a convolution operation after each sheet-buffer composite, which, however, can be limited to buffer tiles that have received splat contributions in the current slab [78]. It should be noted that, although early splat elimination saves the cost of footprint rasterization for invisible voxels, their transformation must still be performed to determine their occlusion. This is different from early ray termination, where the ray can be stopped and subsequent voxels are not processed.
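A minimal sketch of the visibility test that drives early splat elimination, under the assumption of a tile-based occlusion map storing the minimum accumulated opacity per screen tile (the tile size and opacity threshold are illustrative choices, not values from the text):

#include <vector>

// Tile-based screen occlusion map for early splat elimination. A splat whose
// projected center falls into a tile that is already (conservatively) opaque
// can skip footprint rasterization; its transformation has already been paid.
struct OcclusionMap {
    int tileSize;                   // e.g., 8x8 pixels per tile
    int tilesX;                     // number of tiles per image row
    std::vector<float> minOpacity;  // minimum accumulated opacity per tile

    bool isOpaque(int px, int py, float threshold = 0.99f) const {
        int tile = (py / tileSize) * tilesX + (px / tileSize);
        return minOpacity[tile] >= threshold;
    }
};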

    5.3.3. Shear-Warp

Shear-warp was proposed by Lacroute and Levoy [52] and has been recognized as the fastest software renderer to date. It achieves this by employing a clever volume and image encoding scheme, coupled with a simultaneous traversal of volume and image that skips opaque image regions and transparent voxels. In a pre-processing step, voxel runs are RLE-encoded based on pre-classified opacities. This requires the construction of a separate encoded volume for each of the three major viewing directions. The rendering is performed using a raycasting-like scheme, which is simplified by shearing the appropriate encoded volume such that the rays are perpendicular to the volume slices. The rays obtain their sample values via bilinear interpolation within the traversed volume slices. A final warping step transforms the volume-parallel base plane image into the screen image. The DVRI interval distance ∆s is view-dependent, since the interpolation of sample values only occurs in sheared volume slices. It varies from 1.0 for axis-aligned views, to 1.41 for edge-on views, to 1.73 for corner-on views, and it cannot be varied to allow for supersampling along the ray. Thus the Nyquist theorem is potentially violated for all but the axis-aligned views.

                       Raycasting              Splatting                 Shear-warp              3D texture mapping
Sampling rate          freely selectable       freely selectable         fixed (1.0 - 1.73)      freely selectable
Interpolation kernel   trilinear               Gaussian                  bilinear                trilinear
Acceleration           early ray termination   early splat elimination   RLE opacity encoding    graphics hardware
Voxels considered      all                     relevant                  mostly relevant         all

Table 1: Distinguishing features and commonly used parameters of the four different algorithms.

The Volpack distribution from Stanford (a volume rendering package that uses the shear-warp algorithm) only provides for Pre-DVRI (with opacity-weighted colors), but conceptually Post-DVRI is also feasible, however without opacity classification if shear-warp's fast opacity-based encoding is used. The C(s_i), α(s_i), and f(s_i) terms in equations (3) and (4) are written similarly to raycasting, but with the added constraint that ∆s is dependent on the view direction:

$$C_\lambda(s_i) = C_\lambda(i\,\Delta s), \qquad \alpha(s_i) = \alpha(i\,\Delta s), \qquad f(s_i) = f(i\,\Delta s) \qquad (7)$$

$$\Delta s = \sqrt{\left(\frac{dx}{dz}\right)^{2} + \left(\frac{dy}{dz}\right)^{2} + 1} \qquad (8)$$

where (dx, dy, dz)^T is the normalized viewing vector, reordered such that dz is the major viewing direction. In Volpack, the number of rays sent through the volume is limited to the number of pixels in the base plane (i.e., the resolution of the volume slices in view direction). Larger viewports are achieved by bilinear interpolation of the resulting image (after back-warping of the base plane), resulting in a very low image quality if the size of the viewport is significantly larger than the volume resolution. This can be fixed by using a scaled volume with a higher volume resolution.
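In code, equation (8) is a one-liner; the sketch below assumes the view vector components have already been permuted so that dz is the major axis.

#include <cmath>

// Sample distance along a sheared ray (equation 8). The view vector is
// assumed to be permuted so that dz is the major viewing direction.
float shearWarpSampleDistance(float dx, float dy, float dz)
{
    float sx = dx / dz;   // shear in x per unit step along z
    float sy = dy / dz;   // shear in y per unit step along z
    return std::sqrt(sx * sx + sy * sy + 1.0f);
}
// Axis-aligned views give 1.0, edge-on views about 1.41, corner-on views about 1.73.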

    5.3.4. 3D Texture-Mapping Hardware

This is a short introduction to 3D texture mapping; more details are given in Section 7. The use of 3D texture mapping was popularized by Cabral [9] for non-shaded volume rendering. The volume is loaded into texture memory and the hardware rasterizes polygonal slices parallel to the view plane. The slices are then blended back to front, due to the missing accumulation buffer for α. The interpolation filter is a trilinear function (on SGI's RE2 and IR architectures, quadlinear interpolation is also available, where it additionally interpolates between two mipmap levels), and the slice distance ∆s can be chosen freely. A number of researchers have added shading capabilities [19, 70, 26, 106], and both Pre-DVRI [26] and Post-DVRI [19, 70, 106] are possible. Usually, the rendering is brute-force, without any opacity-based termination acceleration, but some researchers have done this [19]. The drawback of 3D texture mapping is that larger volumes require the swapping of volume bricks in and out of the limited-size texture memory (usually a few MBytes for smaller machines). Fortunately, 3D texture mapping has recently become popular in PC-based graphics hardware. Texture-mapping hardware interpolates samples in similar ways to raycasting, and hence the C(s_i), α(s_i), and f(s_i) terms in equations (3) and (4) are written as:

$$C_\lambda(s_i) = C_\lambda(i\,\Delta s), \qquad \alpha(s_i) = \alpha(i\,\Delta s), \qquad f(s_i) = f(i\,\Delta s) \qquad (9)$$
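The slice-blending idea can be sketched in a few lines of legacy OpenGL. The fragment below assumes a current GL context with the volume already uploaded and bound as a 3D texture; for brevity, the slices are drawn perpendicular to the z axis rather than parallel to the view plane, which is a simplification of the approach described above.

#include <GL/gl.h>

// Back-to-front blending of textured slices (viewer assumed on the +z axis).
// Because there is no accumulation buffer for alpha, the over operator is
// realized with standard frame-buffer blending.
void drawVolumeSlices(int numSlices)
{
    glEnable(GL_TEXTURE_3D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < numSlices; ++i) {              // far slices first
        float z = (i + 0.5f) / numSlices;              // texture coordinate in [0,1]
        float w = 2.0f * z - 1.0f;                     // slice position in world space
        glBegin(GL_QUADS);
        glTexCoord3f(0.0f, 0.0f, z); glVertex3f(-1.0f, -1.0f, w);
        glTexCoord3f(1.0f, 0.0f, z); glVertex3f( 1.0f, -1.0f, w);
        glTexCoord3f(1.0f, 1.0f, z); glVertex3f( 1.0f,  1.0f, w);
        glTexCoord3f(0.0f, 1.0f, z); glVertex3f(-1.0f,  1.0f, w);
        glEnd();
    }

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}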

    5.4. Comparison

For a fair comparison of the presented four algorithms, one needs to define identical viewing and rendering parameters. The first can be accomplished using a common camera model, e.g., that of OpenGL, while the latter includes multiple parameters such as filtering method, classification, shading, blending, and so forth. Some of these properties are illustrated in Table 1. Others, such as shading, classification, and blending, need to be adopted across the algorithms such that all use the same operations.

It is difficult to evaluate rendering quality in a quantitative manner. Often, researchers simply put images of competing algorithms side by side, appointing the human visual system (HVS) to be the judge. It is well known that the HVS is less sensitive to some errors (stochastic noise) and more to others (regular patterns), and interestingly, sometimes images with larger numerical errors, e.g., RMS, are judged no worse by a human observer than images with lower numerical errors. So it seems that the visual comparison is more appropriate than the numerical one, since after all we produce images for the human observer and not for error functions. In that respect, an error model that involves the HVS characteristics would be more appropriate than a purely numerical one. But nevertheless, to perform such a comparison we still need the true volume rendered image, obtained by analytically integrating the volume via equation (1) (neglecting the prior reduction of the volume rendering task to the low-albedo case). As was pointed out by Max [66], analytical integration can be done when assuming that C(s) and µ(s) are piecewise linear. This is, however, somewhat restrictive on our transfer functions, so in the following we employ visual quality assessment only.


[Figure 7 column labels: Raycasting (post-shaded DVRI), Splatting (post-shaded DVRI), Shear-warp (pre-shaded DVRI, opacity-weighted color interpolation), 3D texture mapping (pre-shaded DVRI, not opacity-weighted color interpolation).]

Figure 7: Comparison of the four algorithms. Columns from left to right: raycasting, splatting, shear-warp, and 3D texture mapping. Rows from top to bottom: CT scan of a human skull, zoomed view of the teeth, CT scan of brain arteries showing an aneurism, Marschner-Lobb function containing high frequencies, and simulation of the potential distribution of electrons around atoms.


    5.5. Results

All presented results were generated on the same platform (an SGI Octane). The graphics hardware was only used by the 3D texture mapping approach. Figure 7 shows representative still frames of the different data sets that we rendered. We observe that the images achieved with texture mapping hardware show severe color-bleeding artifacts due to interpolation of colors independently from the α-value [119] (see Section 2.10), as well as staircasing. Furthermore, highly transparent classification results in darker images, due to the limited precision of the RGBA channels of the hardware (8 bit).

Volpack shear-warp performs much better, with quality similar to raycasting and splatting whenever the resolution of the image matches the resolution of the base plane. For the other images, the rendered base plane image was of lower resolution than the screen image and had to be magnified using bilinear interpolation in the warping step. This leads to excessive blurring, especially for the Marschner-Lobb data set, where the magnification is very high. A more fundamental drawback can be observed in the 45 degree neghip view in Figure 7, where – in addition to the blurring – significant aliasing in the form of staircasing is present. This is due to the ray sampling rate being less than 1.0, and can be disturbing in animated viewing of some data sets, but is less noticeable in still images. The Marschner-Lobb data set renderings for raycasting and splatting demonstrate the differences of point sampling (raycasting) and sample averaging (splatting). While raycasting's point sampling misses some detail of the function at the crests of the sinusoidal waves, splatting averages across the waves and renders them as blobby rims. For the other data sets the averaging effect is more subtle, but still visible. For example, raycasting renders the skull and the magnified blood with crisper detail than splatting does, but can suffer from aliasing artifacts if the sampling rate is not chosen appropriately (Nyquist rate). However, the quality is quite comparable for all practical purposes.

    5.6. Summary

Generally, 3D texture mapping and shear-warp have sub-second rendering times for moderately sized data sets. While the quality of the display obtained with our mainstream texture mapping approach is limited and can be improved as demonstrated in Section 7, the quality of shear-warp rivals that of the much more expensive raycasting and splatting when the object magnification is about unity. Handling higher magnifications is possible by relaxing the condition that the number of rays must match the resolution of the volume. Although higher interpolation costs will be the result, the rendering frame rate will most likely still be high (especially if view frustum culling is applied). A more serious concern is the degradation of image quality at off-axis views. In these cases, one could use a volume with extra interpolated slices, which is Volpack's standard solution for higher image resolutions. But the fact that shear-warp requires an opacity-encoded volume makes interactive transfer function variation a challenge. In applications where these limitations do not apply, shear-warp proves to be a very useful algorithm for volume rendering. The side-by-side comparison of splatting and raycasting yielded interesting results as well: we saw that image-aligned splatting offers a rendering quality similar to that of raycasting. It, however, produces smoother images due to the z-averaged kernel and the anti-aliasing effect of the larger Gaussian filter. It is hence less likely to miss high-frequency detail. Raycasting is faster than splatting for data sets with a low number of non-contributing samples. On the other hand, splatting is better for data sets with a small number of relevant voxels and sheet buffers. Since the quality is so similar and the same transfer functions yield similar rendering results, one could build a renderer that applies either raycasting or splatting, depending on the number of relevant voxels and the level of compactness of the data set. One could even use different renderers in different portions of the volume, or for the rendering of disconnected objects of different compactness.

More details on this comparison can be found in [71].

6. The VolumePro Real-Time Ray-Casting System

Software-based volume rendering approaches can be accelerated such that interactive frame rates can be achieved. However, this requires certain trade-offs in quality or in the parameters that can be changed interactively. In order to achieve interactive or even real-time frame rates at highest quality and full flexibility, dedicated hardware is necessary. Within this section we will have a closer look at special purpose hardware, i.e., the VolumePro system.

6.1. Introduction

Special purpose hardware for volume rendering has been proposed by various researchers, but only a few machines have been implemented. VIRIM was built at the University of Mannheim, Germany [31]. The hardware consists of four VME boards and implements ray-casting. VIRIM achieves 2.5 frames/sec for 256³ volumes.

The first PCI-based volume rendering accelerator has been built by the University of Tübingen, Germany. Their VIZARD system implements true perspective ray-casting and consists of two PCI accelerator cards [47]. An FPGA-based system achieves up to 10 frames/sec for 256³ volumes. To circumvent the lossy data compression and the limitations of changes in classification or shading parameters due to the pre-processing, a follow-up system is currently under development [72, 21]. This VIZARD II system will be capable of up to 20 frames per second for data sets of 256³ voxels. The strength of this system is its ray traversal engine with an optimized memory interface that allows for fly-throughs, which is mandatory for immersive applications. The system uses a single ray processing pipeline (RPU) and exploits algorithmic optimizations such as early ray termination and space leaping.

Within this section, we will describe VolumePro, the first single-chip real-time volume rendering system for consumer PCs [83]. The first VolumePro board was operational in April 1999 (see Figure 8).

Figure 8: The VolumePro PCI card.

The VolumePro system is based on the Cube-4 volume rendering architecture developed at SUNY Stony Brook [84]. Mitsubishi Electric licensed the Cube-4 technology and developed the Enhanced Memory Cube-4 (EM-Cube) architecture [80]. The VolumePro system, an improved commercial version of EM-Cube, has been commercially available since May 1999 at a price comparable to high-end PC graphics cards. Figure 9 shows several images rendered on the VolumePro hardware at 30 frames/sec.

6.2. Rendering Algorithm

VolumePro implements ray-casting [54], one of the most commonly used volume rendering algorithms. Ray-casting offers high image quality and is easy to parallelize. The current version of VolumePro supports parallel projections of isotropic and anisotropic rectilinear volumes with scalar voxels.

VolumePro is a highly parallel architecture based on the hybrid ray-casting algorithm shown in Figure 10 [124, 91, 52]. Rays are sent into the dataset from each pixel on a base plane, which is co-planar to the face of the volume data that is most parallel and nearest to the image plane. Because the image plane is typically at some angle to the base plane, the resulting base-plane image is warped onto the image plane.

Figure 10: Template-based ray-casting.

The main advantage of this algorithm is that voxels can be read and processed in planes of voxels (so-called slices) that are parallel to the base plane. Within a slice, voxels are read from memory a scanline of voxels at a time, in top-to-bottom order. This leads to regular, object-order data access.

In contrast to the shear-warp implementation by Lacroute and Levoy [52], VolumePro performs tri-linear interpolation and allows rays to start at sub-pixel locations. This prevents view-dependent artifacts when switching base planes and accommodates supersampling of the volume data.

6.3. VolumePro System Architecture

VolumePro is implemented as a PCI card for PC class computers. The card contains one volume rendering ASIC (called the vg500) and 128 or 256 MBytes of volume memory. The warping and display of the final image is done on an off-the-shelf 3D graphics card with 2D texture mapping. The vg500 volume rendering ASIC, shown in Figure 11, contains four identical rendering pipelines, arranged side by side, running at 125 MHz each. It is an application specific integrated circuit (ASIC) with approximately 3.2 million random logic transistors and 2 Mbits of on-chip SRAM. The vg500 also contains interfaces to voxel memory, pixel memory, and the PCI bus.

Figure 11: The vg500 volume rendering ASIC with four identical ray-casting pipelines (each consisting of interpolation, gradient estimation, shading and classification, and compositing stages), slice buffers, and interfaces to voxel memory, pixel memory, and the PCI bus.

Each pipeline communicates with voxel and pixel memory and with its two neighboring pipelines. Pipelines on the far left and right are connected to each other in a wrap-around fashion (indicated by grey arrows in Figure 11). A main characteristic of VolumePro is that each voxel is read from volume memory exactly once per frame. Voxels and intermediate results are cached in so-called slice buffers so that they become available for calculations precisely when needed.

Each rendering pipeline implements ray-casting, and sample values along rays are calculated using tri-linear interpolation. A 3D gradient is computed using central differences between tri-linear samples. The gradient is used in the shader stage, which computes the sample intensity according to the Phong illumination model. Lookup tables in the classification stage assign color and opacity to each sample point. Finally, the illuminated samples are accumulated into base plane pixels using front-to-back compositing.

Figure 9: Several volumes rendered on the VolumePro hardware at 30 frames per second.

Volume memory uses 16-bit wide synchronous DRAMs (SDRAMs) for up to 256 MBytes of volume storage. 2 × 2 × 2 cells of neighboring voxels, so-called miniblocks, are stored linearly in volume memory. Miniblocks are read and written in bursts of eight voxels using the fast burst mode of SDRAMs. In addition, VolumePro uses a linear skewing of miniblocks [45]. Skewing guarantees that the rendering pipelines always have access to four adjacent miniblocks in any of the three slice orientations. A miniblock with position (x, y, z) in the volume is assigned to the memory module k as follows:

$$k = \left( \left\lfloor \frac{x}{2} \right\rfloor + \left\lfloor \frac{y}{2} \right\rfloor + \left\lfloor \frac{z}{2} \right\rfloor \right) \bmod 4 \qquad (10)$$
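In code, this skewing rule reduces to integer arithmetic on the voxel position:

// Memory module assignment for the miniblock containing voxel (x, y, z),
// following equation (10). Integer division by 2 yields the miniblock
// coordinates; the sum modulo 4 selects one of the four memory modules.
int miniblockModule(int x, int y, int z)
{
    return ((x / 2) + (y / 2) + (z / 2)) % 4;
}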

    6.4. VolumePro PCI Card

The VolumePro board is a PCI Short Card with a 32-bit, 66 MHz PCI interface (see Figure 8). The board contains a single vg500 rendering ASIC, twenty 64 Mbit SDRAMs with 16-bit datapaths, clock generation logic, and a voltage converter to make it 3.3 volt or 5 volt compliant. Figure 12 shows a block diagram of the components on the board and the busses connecting them.

Figure 12: VolumePro PCI board diagram (vg500 ASIC, PCI bus, V-Bus, P-Bus, and S-Bus; four SDRAM voxel memory modules, pixel memory, section memory, EEPROM, and clock generation logic).

The vg500 ASIC interfaces directly to the system PCI bus. Access to the vg500's internal registers and to the off-chip memories is accomplished through the 32-bit, 66 MHz PCI bus interface. The peak burst data rate of this interface is 264 MB/sec. Some of this bandwidth is consumed by image upload, some of it by other PCI system traffic.

    6.5. Supersampling

Supersampling [32] improves the quality of the rendered image by sampling the volume data set at a higher frequency than the voxel spacing. In the case of supersampling in the x and y directions, this would result in more samples per beam and more beams per slice, respectively. In the z direction, it results in more sample slices per volume.

VolumePro supports supersampling in hardware only in the z direction. Additional slices of samples are interpolated between existing slices of voxels. The software automatically corrects the opacity according to the viewing angle and sample spacing by reloading the opacity table.
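The correction that such a reloaded table typically encodes is the standard opacity renormalization for a new sample spacing ∆s′ relative to the reference spacing ∆s (stated here as the common textbook formula, not as VolumePro's exact table contents):

$$\alpha' = 1 - (1 - \alpha)^{\Delta s' / \Delta s}$$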

Figure 13 shows the CT scan of a foot (152 × 261 × 200) rendered with no supersampling (left) and with supersampling in z by a factor of 3 (right). The artifacts in the left image stem from the insufficient sampling rate to capture the high frequencies of the foot surface. Notice the reduced artifacts in the supersampled image. VolumePro supports up to eight times supersampling.

Figure 13: No supersampling (left) and supersampling in z (right).

6.6. Supervolumes and Subvolumes

Volumes of arbitrary dimensions can be stored in voxel memory without padding. Because of limited on-chip buffers, however, the VolumePro hardware can only render volumes with a maximum of 256 voxels in each dimension in one pass. In order to render a larger volume (called a supervolume), software must first partition the volume into smaller blocks. Each block is rendered independently, and their resulting images are combined in software.

The VolumePro software automatically partitions supervolumes, takes care of the data duplication between blocks, and blends intermediate base planes into the final image. Blocks are automatically swapped to and from host memory if a supervolume does not fit into the 128 MB of volume memory on the VolumePro PCI card. There is no limit to the size of a supervolume, although, of course, rendering time increases due to the limited PCI download bandwidth.
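As a rough sketch of what such a partitioning involves, the helper below counts the blocks needed along one axis; the one-voxel overlap between neighboring blocks is an assumption made so that interpolation across block boundaries has data available, since the exact duplication scheme used by the VolumePro software is not specified here.

// Number of blocks needed along one axis when partitioning a supervolume
// into blocks of at most 256 voxels per dimension. A one-voxel overlap
// between neighboring blocks is assumed for boundary interpolation.
int blocksPerAxis(int volumeDim, int maxBlockDim = 256, int overlap = 1)
{
    if (volumeDim <= maxBlockDim)
        return 1;
    int stride = maxBlockDim - overlap;                  // unique voxels per block
    return (volumeDim - overlap + stride - 1) / stride;  // ceiling division
}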

Volumes with less than 256 voxels in each dimension are called subvolumes. VolumePro's memory controller allows reading and writing single voxels, slices, or any rectangular slab to and from voxel memory. Multiple subvolumes can be pre-loaded into volume memory. Subvolumes can be updated in-between frames. This allows dynamic and partial updates of volume data to achieve 4D animation effects. It also enables loading sections of a larger volume in pieces, allowing the user to effectively pan through a volume. Subvolumes increase rendering speed to the point where the frame rate is limited by the base plane pixel transfer and driver overhead, which is currently at 30 frames/sec.

    6.7. Cropping and Cut Planes

VolumePro provides two features for clipping the volume data set, called cropping and cut planes. These make it possible to visualize slices, cross-sections, or other portions of the volume, thus providing the user an opportunity to see inside in creative ways. Figure 14(a) shows an example of cropping on the CT foot of the visible man. Figure 14(b) shows a cut plane through the engine data.


Figure 14: (a) Cropping. (b) Cut plane.

    6.8. VolumePro Performance

Each of the four SDRAMs provides burst-mode access at up to 125 MHz, for a sustained memory bandwidth of 4 × 125 × 10⁶ = 500 million 16-bit voxels per second. Each rendering pipeline operates at 125 MHz and can accept a new voxel from its SDRAM memory every cycle. 500 million tri-linear samples per second is sufficient to render 256³ volumes at 30 frames per second.
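A quick check of these figures, using the fact that each voxel is read exactly once per frame:

$$4 \times 125 \times 10^{6}\ \tfrac{\text{voxels}}{\text{s}} = 5.0 \times 10^{8}\ \tfrac{\text{voxels}}{\text{s}}, \qquad 256^{3}\ \text{voxels} \times 30\ \tfrac{\text{frames}}{\text{s}} \approx 5.0 \times 10^{8}\ \tfrac{\text{voxels}}{\text{s}}$$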

    6.9. VLI - The Volume Library Interface

Figure 15 shows the software infrastructure of the VolumePro system. The VLI API is a set of C++ classes that provide full access to the vg500 chip features. VLI does not replace an existing graphics API. Rather, VLI works cooperatively with a 3D graphics library, such as OpenGL, to manage the rendering of volumes and the display of the results in a 3D scene. Higher-level toolkits (such as vtk, the Visualization Toolkit) and scene graphs on top of the VLI will likely become the primary interface layer to applications. The VLI classes can be grouped as follows:

• Volume data handling. VLIVolume manages voxel data storage, voxel data format, and transformations of the volume data such as shearing, scaling, and positioning in world space.
• Rendering elements. There are several VLI classes that provide access to the VolumePro features, such as color and opacity lookup tables, cameras, lights, cut planes, clipping, and more.
• Rendering context. The VLI class VLIContext is a container object for all attributes needed to render the volume. It is used to specify the volume data set and all rendering parameters (such as classification, illumination, and blending) for the current frame.

Figure 15: Software infrastructure of the VolumePro system.

The VLI automatically computes reflectance maps based on light placement, sets up α-correction based on viewing angle and sample spacing, supports anisotropic and gantry-tilted data sets by correcting the viewing and image warp matrices, and manages supervolumes, supersampling, and partial updates of volume data. In addition, there are VLI functions that provide initialization, configuration, and termination for the VolumePro hardware.

    6.10. Summary

This section describes the algorithm, architecture, and features of VolumePro, the world's first single-chip real-time volume rendering system. The rendering capabilities of VolumePro – 500 mil