A Versatile Optical Model for Hybrid Rendering of Volume Data

Fei Yang, Qingde Li, Dehui Xiang, Yong Cao, and Jie Tian, Fellow, IEEE

Abstract—In volume rendering, most optical models currently in use are based on the assumptions that a volumetric object is a collection of particles and that the macro behavior of particles, when they interact with light rays, can be predicted based on the behavior of each individual particle. However, such models are not capable of characterizing the collective optical effect of a collection of particles, which dominates the appearance of the boundaries of dense objects. In this article, we propose a generalized optical model that combines particle elements and surface elements together to characterize both the behavior of individual particles and the collective effect of particles. The framework based on the new model provides a more powerful and flexible tool for hybrid rendering of iso-surfaces and transparent clouds of particles in a single scene. It also provides a more rational basis for shading, so the problem of normal-based shading in homogeneous regions encountered in conventional volume rendering can be easily avoided. The model can be seen as an extension to the classical model. It can be implemented easily, and most of the advanced numerical estimation methods previously developed specifically for the particle-based optical model, such as pre-integration, can be applied to the new model to achieve high-quality rendering results.

Index Terms—direct volume rendering, optical models, isosurfaces, pre-integration, ray casting, transfer function

1 INTRODUCTION

Unlike traditional polygon-based rendering, there is generally no such thing as geometric primitives in direct volume rendering. Instead, objects contained in a volume dataset are usually considered as collections of particles that absorb, transmit and emit light. The series of optical models proposed by Max [1] are based on this concept. It can be clearly seen that these models tend to assume that light-object interaction can be reduced to light's interaction with individual particles, and that the macro optical behavior can be predicted based on the density distribution of the particles. For materials formed by loosely coupled particles, such as dust or mist, these assumptions are obviously reasonable.

However, the particle-based optical models are limited when describing light-object interaction, especially around the boundary of a dense object. For a dense object, there is a strong collective effect between particles of the same type. In this case, light's interaction with the object behaves more like a direct interaction with the boundary surfaces rather than with the individual particles (see Fig. 1). In reality, this is the foundation of

• Fei Yang and Dehui Xiang are with the Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; E-mail: {fei.yang,dehui.xiang}@ia.ac.cn, [email protected].

• Jie Tian is with the Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China. Telephone: 8610-82628760; Fax: 8610-62527995. Email: [email protected]; [email protected]. Website: http://www.mitk.net; http://www.3dmed.net.

• Qingde Li is with the Department of Computer Science, University of Hull, Hull, HU6 7RX, UK; Email: [email protected].

• Yong Cao is with the Department of Computer Science, Virginia Polytechnic Institute and State University, 2202 Kraft Drive, Blacksburg, VA 24060, USA; Email: [email protected].

Fig. 1. Particle and surface. (a) Illustration of isolated particles. (b) Illustration of the formation of a surface from the collective effect of particles. (c) Illustration of light-particle interaction. (d) Illustration of light-surface interaction.

the most frequently used optical models in traditional surface-based graphics. Most of the traditional shading models, such as Blinn-Phong shading, were developed to simulate the light-surface interaction, and cannot readily be used to model the light-particle interaction.

Motivated by these observations, we propose a hybrid model that combines particle elements and surface elements together to characterize both the behaviors of individual particles and the collective effect of particles. For a scalar volume, the scalar value is used for two purposes. First, by using a transfer function, it is mapped to the density of the particles. Second, as a field, it defines a set

Digital Object Identifier 10.1109/TVCG.2011.113 1077-2626/11/$26.00 © 2011 IEEE

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.

of isosurfaces, which are used as ideal geometric shapes to capture the collective effect of the particles. Particles and surfaces behave differently both in the way light is propagated to the viewer and the way it is attenuated along its direction. For the particles, light is randomly scattered to the viewer, and the attenuation is related to the density of the particles and the distance a light ray travels. For the surfaces, light may be reflected and refracted, and the attenuation depends on the opacities of the surfaces and the number of surfaces a light ray passes. For a sufficiently short ray segment, the total light propagated to the viewer can be flexibly modeled as a linear combination of the amount of light from the particles and that from the surfaces.

Based on the hybrid model, a unified rendering framework is developed. It benefits both the integration of the surface elements and local illumination, which are two important aspects of volume rendering. On one hand, it provides a more flexible tool for hybrid rendering of iso-surfaces and transparent clouds of particles in a single scene. The display of iso-surfaces can be controlled in a continuous way using transfer functions rather than having them discretely placed at finite isovalues, which makes it possible to generate soft boundaries that suffer less from image-space aliasing. On the other hand, the new model provides a more rational basis for local illumination. In conventional volume rendering, one often encounters difficulty in illuminating homogeneous regions, a problem usually said to be due to the lack of a "well defined" normal vector in such regions. However, this problem can be easily avoided using the proposed hybrid model. In the model, directed reflection is bound to the light-surface interaction, just as in surface graphics. A surface (the isosurface) is always defined before shading is applied, so the problem of finding normal vectors never occurs. Compared with solutions that resolve this problem based on numerical hints such as the gradient magnitude, our method tackles the issue at the optical-model level, which makes it physically sound.

As a generalization of the conventional model, the hybrid model inherits all of the simplicity of the conventional particle-based model. It is simple to implement and easy to use with well-established volume rendering techniques concerning rendering efficiency and effectiveness, such as pre-integration, shadowing and ambient occlusion, although some subtle modifications might be required. In this paper, we will only describe the way we applied the pre-integration algorithms introduced in [2] and [3] to the new model.

Although more transfer functions are introduced into the framework, not much additional effort is required from the user to manipulate them, because each set of transfer functions has a distinct physical meaning, which allows the user to incrementally add controllers to the transfer functions with a well-defined objective in mind. In practice, this is a more efficient approach to adjusting the visual appearance.

In the remainder of this paper, we will discuss in detail various issues related to the development and applications of our model. In Section 2, we give a brief overview of related techniques previously used for direct volume rendering. In Section 3, we describe the concepts of the new model and the new volume rendering integral. In Section 4, two estimation algorithms for the model are given. In Section 5, implementation issues, including transfer function specification and some simplifications, are discussed. The experimental results are presented in Section 6. Some potential applications and possible extensions of the model are discussed in Section 7.

2 RELATED WORK

2.1 Conventional Volume Rendering

In conventional volume rendering, the optical behavior of a volumetric object is modeled using the concept of particles. Max [1] proposed a series of models based on discussions about "geometric optics effects of the individual particles." As long as shadows and multiple scattering are not taken into account, the basic behaviors of light-particle interaction, such as absorption, emission and single scattering, can be expressed using the volume rendering integral (Equation 1).

I = I_0 \exp\left(-\int_0^{Far} \mu(l)\,dl\right) + \int_0^{Far} c(l)\,\mu(l)\,\exp\left(-\int_0^{l} \mu(t)\,dt\right) dl \quad (1)

The integration is performed along a viewing ray cast from the viewpoint, where l = 0, to the far end, where l = Far. I_0 is the light coming from the background, μ is the per-unit-length extinction coefficient, and c is a wavelength-dependent intensity factor of emission or scattering, which is also called the color of the particles.
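Equation (1) can be estimated numerically by splitting the ray into short segments and compositing them front to back. The following minimal sketch is our own illustration; `mu` and `c` are hypothetical callables standing in for the extinction and color lookups:

```python
import math

def classical_vri(mu, c, I0, far, n=1000):
    """Riemann-sum estimate of the classical volume rendering
    integral (Eq. 1); mu(l) and c(l) are illustrative callables."""
    dl = far / n
    T = 1.0            # transparency accumulated so far: exp(-integral of mu)
    I = 0.0
    for i in range(n):
        l = (i + 0.5) * dl
        alpha = 1.0 - math.exp(-mu(l) * dl)   # opacity of this segment
        I += T * c(l) * alpha                 # scattered light, attenuated
        T *= 1.0 - alpha
    return I + T * I0                         # attenuated background

# For a homogeneous medium (mu = c = 1, I0 = 0, Far = 1)
# this yields 1 - exp(-1), matching the closed form.
```

For constant μ the per-segment opacities compose exactly to the closed-form transparency, so the estimate is exact in that case.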

In object-order rendering techniques, particles can sometimes also be generated explicitly [4]. At the optical-model level, this is basically the same as the continuous integration form.

2.2 Shading Problem and Existing Solutions

The Blinn-Phong illumination model [5] was originally developed for surface shading in polygon-based rendering, and has been popularly used for volume rendering since it was introduced in this area by Levoy [6]. Even though Phong shading is well suited for rendering material boundaries, its application to homogeneous regions can be a problem, which is said to be due to the lack of a "well defined" normal vector in such regions. This has been widely observed by researchers such as Kniss [7] and Hadwiger [8], and different solutions have been given.

Kniss [7] tackled this problem using a "surface scalar" term S to interpolate between shaded and unshaded effects, where S is between 0 and 1 and is positively correlated with the gradient magnitude.
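The blending can be sketched as follows. This is our own paraphrase of the idea; the ramp thresholds `g0` and `g1` are illustrative values, not taken from [7]:

```python
def surface_scalar_shade(c_unshaded, c_shaded, grad_mag, g0=0.05, g1=0.15):
    """Interpolate between unshaded and shaded colors with a
    'surface scalar' S derived from the gradient magnitude."""
    # S ramps from 0 (homogeneous region) to 1 (strong boundary)
    S = min(max((grad_mag - g0) / (g1 - g0), 0.0), 1.0)
    return tuple((1.0 - S) * u + S * s for u, s in zip(c_unshaded, c_shaded))
```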


It has also been suggested that, by considering multiple scattering, ambient occlusion and the attenuation of incident light, surface shading can be avoided in some cases [8] [9] [10]. However, the specular reflection feature provided by surface rendering is still important for many applications. In such cases, the "surface scalar" method is still often used.

With clipping surfaces, the shading problem of particle-based volume rendering methods can become even more obvious, partially due to the inconsistency between the normal vectors calculated from the volume and those of the clipping surface. Weiskopf [11] suggested that the normal vector of a clipping surface should be used in the vicinity of the clipping surface. The situation is quite similar when segmentation is applied [12] before volume rendering.

However, we find that none of the previous works attempts to analyze the cause of the problem and solve it at the optical-model level. Although the "surface scalar" method can sometimes generate plausible rendering results, it is hard to identify the optical mechanism behind such a formulation. To fundamentally solve the problem, an optical-model-based solution will be provided in this paper.

2.3 Surface Elements in Volume Rendering

In order to simulate light's behavior at the boundaries in volume rendering, it is crucial that the boundary surfaces be properly represented in the first place. Various techniques for representing the surfaces have been proposed. Polygons were introduced into volume rendering by Kreeger [13]. Rotteger [14] suggested that an opaque isosurface can be implicitly inserted by adding a Dirac delta function with infinite amplitude to the extinction transfer function. The contribution of a semitransparent isosurface should be estimated every time the isosurface is passed. This can be done efficiently using pre-integration (for isosurfaces) proposed by Engel [2]. Kraus [15] pointed out that the limit of an infinite number of semitransparent isosurfaces is equivalent to an "integral in data space." In this paper, a similar idea is used for modeling the surface response. Surface elements are also extremely important for the simulation of light refraction [16] [17].

Our work is significantly inspired by these research ideas. We found that shading is smoothly integrated when surface elements are used, and the problem of shading homogeneous regions never occurs in these frameworks. An interesting finding is that the model behind these surface elements differs from the particle model used for volume rendering both in physical meaning and in mathematical expression, but it is possible to merge the different models.

In this paper, we extend the existing hybrid rendering frameworks by establishing a versatile optical model including both the particle aspect and the surface aspect of volume data, to allow multiple semi-transparent (possibly soft) isosurfaces to be rendered together with particle-like contents within a unified framework.

2.4 Pre-integrated Volume Rendering

Pre-integrated volume rendering [2] is an algorithm for reducing the required sampling frequency along a ray by pre-integrating ray segments for all possible combinations of starting and ending values at transfer-function precision. Some improvements to the original method were made by Rotteger [18] and Lum [3]. In Lum's work, a fast algorithm for calculating the look-up tables without problematic approximations and a method for smoothly shading multiple transparent isosurfaces were given.

Any modification to the rendering model can be a challenge to the applicability of pre-integration techniques. Fortunately, we find that they can be applied to the new model proposed in this paper with some subtle modifications, with all of their good properties well preserved.

2.5 Other Volume Rendering Models

Apart from the basic particle model mentioned above, many other models have also been proposed for volume rendering, especially in recent years. Global illumination for volume rendering dates back to Sobierajski (1994) [19]. Various approximation methods, including shadowing [20] [7] [21], ambient occlusion [8], multiple scattering [10] [22] [23], and the Kubelka-Munk model [24] [25], have been used. Note that using the model proposed in this paper does not prevent one from using the above techniques. Our technique can be useful whenever surface shading is required.

As for modeling color in volume rendering, there are two main approaches. One is based on the trichromatic theory using the RGB color space. The other is to use a spectral representation of color [26] [27] [24]. Theoretically, modeling color using the concept of wavelength can be more flexible and physically appropriate. However, representing color in RGB mode is much simpler in terms of implementation if physical correctness regarding color is not important. As a result, we chose to describe our model using wavelength while implementing the model in RGB color space.

3 THE HYBRID MODEL

3.1 Assumptions

Before illustrating the details of our proposed model, let us first describe some simple assumptions of this work. As Fig. 2 shows, a volumetric object is considered to be composed of two kinds of contents interlacing each other: particles, which characterize the individual behavior of particles, and surfaces, which characterize the collective behavior of particles.

For the particle part, just like the conventional model, the attenuation of light is proportional to the density


Fig. 2. Microstructure of the hybrid model

of the particles and the distance a light ray travels. The response from particles can be described by μ_p, the extinction per unit Δl, and c_p, the scattering intensity of the particles.

For the surface part, the attenuation of a light ray is proportional to the number of surfaces it passes and the opacities of the surfaces. Therefore, instead of using the extinction per unit Δl, we define μ_s as the average extinction per unit Δt, so that μ_s is independent of the direction of the light ray. We use c_s for the reflection intensity of the surfaces, which can be calculated from a surface shading model. Note that these two parameters still make sense when the space between neighboring surfaces tends to zero, although the number of surfaces tends to infinity. Refraction is not considered in this paper.

3.2 Total Response of the Mixture

We begin with an ideal situation where the surfaces are simply a set of parallel planes with finite space between neighboring layers. The parameters of both the surfaces and the particles can be assumed to be uniform in a sub-volume when it is sufficiently small. Using μ_p, the opacity of each ray segment among the particles is

\alpha_p^n(\theta) = 1 - \exp\left(-\frac{\mu_p \Delta l(\theta)}{n}\right).

Similarly, the opacity of each surface layer should be

\alpha_s^n = 1 - \exp\left(-\frac{\mu_s \Delta t}{n}\right).

It can be easily seen that the total opacity along a viewing ray going through the mixture is:

\alpha(\theta) = 1 - (1 - \alpha_s^n)^n (1 - \alpha_p^n(\theta))^n = 1 - \exp(-\mu_s \Delta t - \mu_p \Delta l(\theta)) \quad (2)

The average extinction coefficient per unit Δl is:

\mu(\theta) = -\ln(1 - \alpha(\theta)) / \Delta l(\theta) = \mu_s \cos\theta + \mu_p \quad (3)

It can also be derived that, when we let n tend to infinity and the space between surfaces Δt/n tend to zero, the total intensity from a viewing ray can be expressed in the following form:

c(\theta)\,\mu(\theta) = c_s \mu_s \cos\theta + c_p \mu_p \quad (4)

See appendix for more details.
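Notably, the product form in Equation (2) already equals its closed-form limit for every finite n, because the n-th power of each per-layer transparency collapses to a single exponential. The following snippet is our own verification sketch; the parameter values are arbitrary:

```python
import math

def layered_opacity(mu_s, dt, mu_p, dl, n):
    """Total opacity of n surface layers interleaved with n particle
    segments, before taking the limit (Section 3.2)."""
    a_s = 1.0 - math.exp(-mu_s * dt / n)   # per-layer surface opacity
    a_p = 1.0 - math.exp(-mu_p * dl / n)   # per-segment particle opacity
    return 1.0 - (1.0 - a_s) ** n * (1.0 - a_p) ** n

# Closed form of Eq. (2) with mu_s=0.7, dt=0.5, mu_p=1.3, dl=0.8:
closed = 1.0 - math.exp(-(0.7 * 0.5 + 1.3 * 0.8))
# layered_opacity(0.7, 0.5, 1.3, 0.8, n) agrees for any n >= 1.
```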

3.3 New Volume Rendering Integral

Based on Equation (3) and Equation (4), which describe the local features of the hybrid model, we will derive a new volume rendering integral for the case of a single scalar volume in this subsection.

Let f(x, y, z) be the continuous spatial function interpolated from the scalar volume. Mathematically, the distribution density of isosurfaces should be proportional to the gradient magnitude of f to keep the isosurfaces consistent. Hence the extinction coefficient of surfaces, μ_s, should be considered as the product of the density |∇f| and the per-surface extinction coefficient μ_sg, as Equation (5) shows. μ_sg is considered a constant of an isosurface, which can be specified for each scalar value.

μs = μsg |∇f | (5)

A viewing ray is a parameterized path through the volume, which can be expressed as (x(l), y(l), z(l)), where x(l), y(l) and z(l) are linear functions of l. The speed vector of the viewing ray, r(l) = (x'(l), y'(l), z'(l)), is constant for the viewing ray, and the scalar value along the viewing ray is v(l) = f(x(l), y(l), z(l)).

Using the normalized gradient vector ∇f/|∇f| as the normal vector of the surface, the term cos θ in Equations (3) and (4) can be replaced by a dot product of the speed vector r and the vector ∇f/|∇f|, as Equations (6) and (7) show.

\cos\theta = |r \cdot N| = \frac{|r \cdot \nabla f|}{|\nabla f|} = \left|\frac{dv}{dl}\right| \Big/ |\nabla f| \quad (6)

\mu_s \cos\theta = \mu_{sg} \left|\frac{dv}{dl}\right| \quad (7)

Equations (3) and (4) can be transformed into Equation (8).

\begin{cases} \mu = \mu_{sg} \left|\frac{dv}{dl}\right| + \mu_p \\ c\mu = c_s \mu_{sg} \left|\frac{dv}{dl}\right| + c_p \mu_p \end{cases} \quad (8)

A new volume rendering integral can then be formulated as Equation (9).

I = I_0 \exp\left(-\int_0^{Far} \left(\mu_{sg}(l)\,|dv(l)| + \mu_p(l)\,dl\right)\right) + \int_0^{Far} \left(c_s(l)\,\mu_{sg}(l)\,|dv(l)| + c_p(l)\,\mu_p(l)\,dl\right) \times \exp\left(-\int_0^{l} \left(\mu_{sg}(t)\,|dv(t)| + \mu_p(t)\,dt\right)\right) \quad (9)

3.4 Surface Shading

Just as in polygon-based rendering, the reflection intensity of a surface, c_s, is calculated from some normal-vector-based shading model, such as the Blinn-Phong model. To clarify the wavelength dependency of parameters, we subscript all wavelength-dependent terms with λ. Using surface shading, a primary color \bar{c}_{s\,\lambda} should be defined for the surface. Using a shading model, the surface intensity c_{s\,\lambda} can then be calculated, which often


takes a form such as c_{s\,\lambda} = k_\lambda \bar{c}_{s\,\lambda} + b_\lambda, where k_λ and b_λ are spatial functions dependent on the position and posture of the volumetric object as well as the properties of the light sources.

3.5 Different Kinds of Particles

The new model prevents the use of surface shading for individual particles. However, to extend the flexibility to render different kinds of materials, the individual behavior of particles can be assumed to be a mixture of different kinds of particles, with the total particle response described by μ_p and c_p. In this section, we give the details about what kinds of particles can be defined and how they can be mixed. Here, we define three kinds of different particles, each having unique behaviors that cannot be covered by the others, and then combine them into a general form.

• Isotropic Scattering. Isotropic scattering particles scatter light from the light source evenly in all directions, and the attenuation of light is equal for all wavelengths. Such behavior is equal to that of the absorption-plus-emission particles described in [1]. We use a wavelength-dependent term c_{p\,s\,\lambda} and a wavelength-independent term μ_{p\,s} to describe the contribution of this kind of particle.

• Selective Absorptive. Selective absorptive particles are purely absorptive, which means they neither emit nor scatter light. However, the attenuation of light is wavelength-dependent. Such behavior is called selective absorption in spectral volume rendering [24]. We describe it using the term μ_{p\,a\,\lambda}.

• Pure Emissive. Pure emissive particles only emit light, without extinction. Since the light is self-emitted, it does not need to be weighted by the attenuation. We use the term e_{p\,\lambda} to describe it.

The mixed effect of these three kinds of particles can be formulated as Equation (10).

\begin{cases} \mu_{p\,\lambda} = \mu_{p\,s} + \mu_{p\,a\,\lambda} \\ c_{p\,\lambda}\,\mu_{p\,\lambda} = c_{p\,s\,\lambda}\,\mu_{p\,s} + e_{p\,\lambda} \end{cases} \quad (10)
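Equation (10) amounts to summing extinctions and opacity-weighted intensities per wavelength. A minimal sketch of the mixing (our own illustration, using RGB triples as a stand-in for wavelength sampling):

```python
def mix_particles(mu_ps, c_ps, mu_pa, e_p):
    """Combine the three particle kinds into the total response of
    Eq. (10): mu_ps is wavelength independent; c_ps, mu_pa and e_p
    are per-wavelength (here RGB) triples."""
    mu_p = tuple(mu_ps + a for a in mu_pa)                  # mu_p,lambda
    c_mu = tuple(c * mu_ps + e for c, e in zip(c_ps, e_p))  # c_p,lambda * mu_p,lambda
    return mu_p, c_mu
```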

3.6 Special Cases

By setting some parameters in Equation (9) to zero, two extreme cases can be reached. On one hand, if we set μ_sg to zero, we get a particle-only case, which is the same as the traditional volume rendering integral without considering the collective effect. On the other hand, if we set μ_p to zero, we get a surface-only case, which is the same as the "integral in data space" described in [15].

In principle, two transfer functions should be defined for the isosurfaces, TF_{μsg}(v) and TF_{\bar{c}s\,λ}(v). There is yet another special case, which addresses the transfer function TF_{μsg}(v). If we add a Dirac delta function term μ_{δ\,i}\,δ(v − v_{δ\,i}) (the limit case of a pulse function) to the transfer function, a discrete isosurface is inserted, and the opacity of the discrete isosurface is α_i = 1 − e^{−μ_{δ\,i}}. This case actually has the same optical model as the volume-polygon hybrid rendering techniques. A pre-integrated algorithm should be used; otherwise the required sampling rate will be infinite.
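The relation between the delta weight and the resulting opacity of such a discrete isosurface is a one-liner; our own illustration:

```python
import math

def isosurface_opacity(mu_delta):
    """Opacity of a discrete isosurface inserted as a Dirac delta of
    weight mu_delta in the extinction transfer function (Section 3.6)."""
    return 1.0 - math.exp(-mu_delta)

# A weight of ln(2) gives a half-transparent isosurface (alpha = 0.5).
```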

4 ESTIMATION ALGORITHMS

Estimating the new volume rendering integral (Equation 9) is similar to estimating the conventional volume rendering integral. However, some subtle changes are needed, especially when trying to apply a pre-integrated algorithm to achieve high-quality rendering.

Graphics hardware has seen a boost in its programmability and processing power in recent years. As a result, GPU-based ray casting has become a popular volume rendering technique. The advantages of using ray casting over slice-based rendering can be found in [28] [17]. We will discuss three types of ray casting algorithms for our newly proposed optical model: one is the direct ray casting algorithm, which uses a straightforward sampling and summing scheme; the other two are pre-integrated ray casting schemes evolved from [2] and [3]. In this section, we assume that the parameters μ_sg, \bar{c}_{s\,λ}, μ_{p\,λ} and c_{p\,λ} relating to surfaces and particles can be obtained using the corresponding transfer functions TF_{μsg}, TF_{\bar{c}s\,λ}, TF_{μp\,λ} and TF_{cp\,λ}.

4.1 Direct Ray Casting

In our direct ray casting scheme, scalar samples are taken along each viewing ray. Using the transfer functions, we have μ_sg, \bar{c}_{s\,λ}, μ_{p\,λ} and c_{p\,λ}. Applying Blinn-Phong shading to \bar{c}_{s\,λ}, we get c_{s\,λ}. Normal vectors of the isosurfaces can be estimated using either gradient interpolation or on-the-fly gradient estimation. For the estimation of dv/dl, the previous scalar sample is kept, and the next scalar sample is preloaded. Letting v_0, v_1, v_2 be the previous, the current and the next sample in each step of a ray, respectively, we pass the value of v_1 to v_0 and v_2 to v_1, and load the next sample value into v_2. Using a sample distance Δl, dv/dl is estimated by a central difference, as Equation (11) shows.

\frac{dv}{dl} \approx \frac{v_2 - v_0}{2\Delta l} \quad (11)

The opacity of a sample is estimated as Equation (12), letting \bar{\mu}_\lambda \Delta l denote \mu_{sg}\frac{|v_2 - v_0|}{2} + \mu_{p\,\lambda}\Delta l.

\alpha_\lambda \approx 1 - \exp(-\mu_\lambda \Delta l) = 1 - \exp\left(-\left(\mu_{sg}\left|\frac{dv}{dl}\right| + \mu_{p\,\lambda}\right)\Delta l\right) \approx 1 - \exp\left(-\mu_{sg}\frac{|v_2 - v_0|}{2} - \mu_{p\,\lambda}\Delta l\right) = 1 - \exp(-\bar{\mu}_\lambda \Delta l) \quad (12)

The opacity-weighted intensity of the sample is estimated as Equation (13).


c_\lambda \alpha_\lambda \approx c_\lambda \mu_\lambda \Delta l \,\frac{1 - \exp(-\bar{\mu}_\lambda \Delta l)}{\bar{\mu}_\lambda \Delta l} = \left(c_{s\,\lambda}\mu_{sg}\left|\frac{dv}{dl}\right| + c_{p\,\lambda}\mu_{p\,\lambda}\right)\Delta l \,\frac{1 - \exp(-\bar{\mu}_\lambda \Delta l)}{\bar{\mu}_\lambda \Delta l} \approx \left(c_{s\,\lambda}\mu_{sg}\frac{|v_2 - v_0|}{2} + c_{p\,\lambda}\mu_{p\,\lambda}\Delta l\right) \frac{1 - \exp(-\bar{\mu}_\lambda \Delta l)}{\bar{\mu}_\lambda \Delta l} \quad (13)

Then, the composition is done sample by sample using Equation (14), where $A_\lambda(i)$ is the accumulated opacity and $C_\lambda(i)$ is the accumulated intensity of the $i$th step.

$$ \begin{cases} A_\lambda(i) = A_\lambda(i-1) + (1 - A_\lambda(i-1))\,\alpha_\lambda(i) \\ C_\lambda(i) = C_\lambda(i-1) + (1 - A_\lambda(i-1))\,\alpha_\lambda(i)\,c_\lambda(i) \end{cases} \quad (14) $$

Assuming there are $n$ steps when casting a ray, the background intensity $I_{0\lambda}$ is added using Equation (15).

$$ I_\lambda = C_\lambda(n) + (1 - A_\lambda(n))\,I_{0\lambda} \quad (15) $$
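The sampling and compositing loop of Equations (11) through (15) can be sketched as follows for a single wavelength channel. This is an illustrative CPU sketch, not the paper's CUDA kernel; the `tf_*` callables are hypothetical stand-ins for the transfer functions, and Blinn-Phong shading of the surface color is omitted for brevity.

```python
import math

def shade_direct_ray(samples, tf_mu_sg, tf_c_s, tf_mu_p, tf_c_p, dl, i0=0.0):
    """Composite one viewing ray with the hybrid model, single channel.

    samples: scalar values v along the ray; dl: sample distance; i0: background.
    """
    A, C = 0.0, 0.0
    for i in range(1, len(samples) - 1):
        v0, v1, v2 = samples[i - 1], samples[i], samples[i + 1]
        mu_sg, c_s = tf_mu_sg(v1), tf_c_s(v1)
        mu_p, c_p = tf_mu_p(v1), tf_c_p(v1)
        # Eq. (12): mu_bar * dl = mu_sg * |v2 - v0| / 2 + mu_p * dl
        mu_dl = mu_sg * abs(v2 - v0) / 2.0 + mu_p * dl
        alpha = 1.0 - math.exp(-mu_dl)
        # Eq. (13): opacity-weighted intensity; note (1 - e^-x)/x == alpha/mu_dl
        if mu_dl > 0.0:
            c_alpha = (c_s * mu_sg * abs(v2 - v0) / 2.0
                       + c_p * mu_p * dl) * alpha / mu_dl
        else:
            c_alpha = 0.0
        # Eq. (14): front-to-back compositing (C before A, both use A(i-1))
        C += (1.0 - A) * c_alpha
        A += (1.0 - A) * alpha
    # Eq. (15): add the attenuated background intensity
    return C + (1.0 - A) * i0
```

With particles only and constant extinction, the loop reduces to the familiar $1 - \exp(-n\mu\Delta l)$ accumulation, which is a convenient sanity check.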

4.2 Ray Casting with Pre-Integration

In volume rendering, the transfer functions often need to contain sharp structures such as edges or spikes, which lead to an extremely high sampling-frequency requirement for direct ray casting. The pre-integration algorithm is an efficient tool for tackling this kind of problem.

In order to use the existing pre-integration algorithms for the new model while preserving their good properties, some subtle modifications are required. When using the hybrid model, shading is applied only to the isosurfaces, not to the particles. Thus, two sets of look-up tables are required: one for the isosurfaces, the other for the particles. Note that shading can only be done during ray casting, because normal vectors are not available while the look-up tables are being built.

The following subsections discuss the application of the two pre-integrated ray casting schemes. The first is based on the approximative pre-integration given in [2], while the second is based on the scheme given in [3].

4.2.1 Approximative pre-integration

The approximative pre-integration scheme was proposed for a fast estimation of the look-up tables. By neglecting the extinction within the segments, some integral functions are pre-computed, and the integration over a segment is estimated using the difference of the integral functions.

The integral functions are computed using Equation (16).

$$ \begin{aligned} M_s(x) &= \int_{-\infty}^{x} TF_{\mu sg}(v)\,dv \\ C_{s\lambda}(x) &= \int_{-\infty}^{x} TF_{cs\lambda}(v)\,TF_{\mu sg}(v)\,dv \\ M_{p\lambda}(x) &= \int_{-\infty}^{x} TF_{\mu p\lambda}(v)\,dv \\ C_{p\lambda}(x) &= \int_{-\infty}^{x} TF_{cp\lambda}(v)\,TF_{\mu p\lambda}(v)\,dv \end{aligned} \quad (16) $$

Opacities and intensities of the segments are estimated using the differences of the integral functions, as shown in Equation (17). The intensities $c_{s\lambda}(v_0,v_1)$ and $c_{p\lambda}(v_0,v_1)$ here are opacity weighted.

$$ \begin{cases} \alpha_{s\lambda}(v_0,v_1) = \left| M_s(v_1) - M_s(v_0) \right| \dfrac{\alpha_\lambda(v_0,v_1)}{M_\lambda(v_0,v_1)} \\[6pt] c_{s\lambda}(v_0,v_1) = \left| C_{s\lambda}(v_1) - C_{s\lambda}(v_0) \right| \dfrac{\alpha_\lambda(v_0,v_1)}{M_\lambda(v_0,v_1)} \end{cases} $$

$$ \begin{cases} \alpha_{p\lambda}(v_0,v_1) = \left( M_{p\lambda}(v_1) - M_{p\lambda}(v_0) \right) \dfrac{\Delta l}{v_1 - v_0} \dfrac{\alpha_\lambda(v_0,v_1)}{M_\lambda(v_0,v_1)} \\[6pt] c_{p\lambda}(v_0,v_1) = \left( C_{p\lambda}(v_1) - C_{p\lambda}(v_0) \right) \dfrac{\Delta l}{v_1 - v_0} \dfrac{\alpha_\lambda(v_0,v_1)}{M_\lambda(v_0,v_1)} \end{cases} $$

where

$$ \begin{aligned} M_\lambda(v_0,v_1) &= \left| M_s(v_1) - M_s(v_0) \right| + \left( M_{p\lambda}(v_1) - M_{p\lambda}(v_0) \right) \frac{\Delta l}{v_1 - v_0} \\ \alpha_\lambda(v_0,v_1) &= 1 - \exp\left(-M_\lambda(v_0,v_1)\right) \end{aligned} \quad (17) $$

In comparison with the equations given in [2], Equation (17) differs slightly in that we added a correcting factor $\frac{\alpha_\lambda(v_0,v_1)}{M_\lambda(v_0,v_1)}$ to avoid over-saturation of the intensity. Note that for the surface part there is no factor $\frac{\Delta l}{v_1-v_0}$.
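Discretized over scalar bins, each integral function of Equation (16) is a single cumulative sum, and the segment lookup of Equation (17) then only takes differences. The following single-channel numpy sketch assumes a hypothetical discretization (unit bin width, transfer functions sampled per bin); the function names are ours, not the paper's.

```python
import numpy as np

def build_integral_tables(tf_mu_sg, tf_c_s, tf_mu_p, tf_c_p, n=256):
    """Integral functions M_s, C_s, M_p, C_p of Eq. (16), one value per bin."""
    v = np.arange(n, dtype=float)
    Ms = np.cumsum(tf_mu_sg(v))
    Cs = np.cumsum(tf_c_s(v) * tf_mu_sg(v))
    Mp = np.cumsum(tf_mu_p(v))
    Cp = np.cumsum(tf_c_p(v) * tf_mu_p(v))
    return Ms, Cs, Mp, Cp

def segment_lookup(tables, v0, v1, dl):
    """Opacity and opacity-weighted colour of one segment per Eq. (17)."""
    Ms, Cs, Mp, Cp = tables
    dv = (v1 - v0) if v1 != v0 else 1           # guard the constant-value case
    M = abs(Ms[v1] - Ms[v0]) + (Mp[v1] - Mp[v0]) * dl / dv
    alpha = 1.0 - np.exp(-M)
    corr = alpha / M if M > 0 else 1.0          # correcting factor alpha/M
    a_s = abs(Ms[v1] - Ms[v0]) * corr           # surface part: no dl/(v1-v0)
    c_s = abs(Cs[v1] - Cs[v0]) * corr
    a_p = (Mp[v1] - Mp[v0]) * dl / dv * corr
    c_p = (Cp[v1] - Cp[v0]) * dl / dv * corr
    return a_s, c_s, a_p, c_p
```

By construction, $\alpha_{s\lambda} + \alpha_{p\lambda}$ equals the segment opacity $\alpha_\lambda$, since the correcting factor rescales the two parts by exactly $\alpha_\lambda / M_\lambda$.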

Normal vectors needed for shading are estimated by interpolating the gradient vectors between the two ends of each segment; the result is then normalized. Engel [2] suggested using a weighted average. For the case of a single isosurface, the weights of the two ends are decided by the relationship between the isovalue and the values at the two ends. Let $v_0$ be the starting value, $v_1$ the ending value and $v_C$ the isovalue of the isosurface; the weights are determined by Equation (18).

$$ w_0 = \frac{v_1 - v_C}{v_1 - v_0}, \qquad w_1 = \frac{v_C - v_0}{v_1 - v_0} \quad (18) $$

However, in the generalized case there is not necessarily an isovalue. Thus, instead of using an isovalue, we redefine $v_C$ as the "barycenter" of the segment, a visibility-weighted average scalar value. As shown in Equation (19), it introduces another look-up table.

$$ \begin{aligned} V(x) &= \int_{-\infty}^{x} v\,TF_{\mu sg}(v)\,dv \\ v_C(v_0,v_1) &= \frac{V(v_1) - V(v_0)}{M_s(v_1) - M_s(v_0)} \end{aligned} \quad (19) $$

The numerical evaluation of these integrals is straightforward. Note that $TF_{\mu sg}(v)$ may contain Dirac delta function items; they are converted into step function items by the integrations before being sampled.
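The barycenter table of Equation (19) and the gradient weights of Equation (18) can be sketched in the same discretized setting (hypothetical unit bins; our own function names). The barycenter $v_C$ simply replaces the isovalue in the weight computation when no single isovalue exists.

```python
import numpy as np

def barycenter_tables(tf_mu_sg, n=256):
    """V(x) and Ms(x) of Eq. (19) as cumulative sums over scalar bins."""
    v = np.arange(n, dtype=float)
    mu = tf_mu_sg(v)
    return np.cumsum(v * mu), np.cumsum(mu)

def segment_barycenter(V, Ms, v0, v1):
    """Visibility-weighted average scalar value of a segment, Eq. (19)."""
    return (V[v1] - V[v0]) / (Ms[v1] - Ms[v0])

def gradient_weights(v0, v1, vc):
    """Eq. (18): weights for interpolating the gradients at the segment ends."""
    return (v1 - vc) / (v1 - v0), (vc - v0) / (v1 - v0)
```

For a constant $TF_{\mu sg}$ the barycenter degenerates to the segment midpoint, and the two weights always sum to one.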

4.2.2 Full pre-integration with improved shading

Neglecting the extinction within the segments can introduce ordering problems, especially when the value


of extinction is large. For quality considerations, such an approximation should be avoided and a full pre-integration performed. By brute force, a full pre-integration requires too much calculation for interactive transfer function manipulation. Lum [3] proposed an algorithm called incremental subrange integration that can calculate an $n \times n$ lookup table in $O(n^2)$ time without problematic approximations. The numerical process of this algorithm can be applied straightforwardly to the new model proposed in this paper.

Another problem with approximative pre-integration is the gradient interpolation method used in the algorithm, which can cause discontinuities. To tackle this problem, Lum [3] proposed an algorithm called "interpolated pre-integrated lighting". This idea can also be borrowed for implementing the new model. For the sake of clarity, we reformulate the idea in the new context.

For a segment of a viewing ray from $l_0$ to $l_1$, its partial integration can be written as Equation (20).

$$ \begin{aligned} \alpha_\lambda(l_0,l_1) &= 1 - \exp\left(-\int_{l_0}^{l_1} \left( \mu_{sg}(l)\,|dv(l)| + \mu_{p\lambda}(l)\,dl \right) \right) \\ c_\lambda(l_0,l_1) &= \int_{l_0}^{l_1} \left( \tilde{c}_{s\lambda}(l)\,\mu_{sg}(l)\,|dv(l)| + c_{p\lambda}(l)\,\mu_{p\lambda}(l)\,dl \right) \\ &\qquad \times \exp\left(-\int_{l_0}^{l} \left( \mu_{sg}(t)\,|dv(t)| + \mu_{p\lambda}(t)\,dt \right) \right) \end{aligned} \quad (20) $$

As described in subsection 3.4, a surface shading model can usually be written in the form $\tilde{c}_{s\lambda} = k_\lambda c_{s\lambda} + b_\lambda$, where $k_\lambda$ and $b_\lambda$ are space-dependent functions that are unavailable during the generation of the look-up tables. Following the idea of "interpolated pre-integrated lighting", we consider $k_\lambda(l)$ and $b_\lambda(l)$ to be linear between $l_0$ and $l_1$, and the term $\tilde{c}_{s\lambda}(l)\,\mu_{sg}(l)$ in Equation (20) can be expanded as Equation (21) shows. Thus, the space-dependent factors can be decoupled from the segment integration.

$$ \begin{aligned} k_\lambda(l) &= \frac{(l_1-l)\,k_\lambda(l_0) + (l-l_0)\,k_\lambda(l_1)}{l_1-l_0} \\ b_\lambda(l) &= \frac{(l_1-l)\,b_\lambda(l_0) + (l-l_0)\,b_\lambda(l_1)}{l_1-l_0} \\ \tilde{c}_{s\lambda}(l)\,\mu_{sg}(l) &= k_\lambda(l_0)\,\frac{(l_1-l)\,c_{s\lambda}(l)\,\mu_{sg}(l)}{l_1-l_0} + k_\lambda(l_1)\,\frac{(l-l_0)\,c_{s\lambda}(l)\,\mu_{sg}(l)}{l_1-l_0} \\ &\quad + b_\lambda(l_0)\,\frac{(l_1-l)\,\mu_{sg}(l)}{l_1-l_0} + b_\lambda(l_1)\,\frac{(l-l_0)\,\mu_{sg}(l)}{l_1-l_0} \end{aligned} \quad (21) $$

In addition, as in all pre-integration schemes, $v(l)$ is assumed to be linear between $l_0$ and $l_1$, with $v_0 = v(l_0)$ and $v_1 = v(l_1)$ sampled at the two ends. The task now is to build a pair of look-up-table sets for the isosurfaces (a front-weighted and a back-weighted one), and another look-up-table set for the particles, as Equation (22) shows.

$$ \begin{cases} \alpha_{s\,fw\,\lambda}(v_0,v_1) = \displaystyle\int_{v_0}^{v_1} \frac{(v_1-v)\,TF_{\mu sg}(v)}{v_1-v_0} \left[ 1 - \alpha_\lambda(v_0, v, v_1-v_0) \right] |dv| \\[8pt] c_{s\,fw\,\lambda}(v_0,v_1) = \displaystyle\int_{v_0}^{v_1} \frac{(v_1-v)\,TF_{cs\lambda}(v)\,TF_{\mu sg}(v)}{v_1-v_0} \left[ 1 - \alpha_\lambda(v_0, v, v_1-v_0) \right] |dv| \end{cases} $$

$$ \begin{cases} \alpha_{s\,bw\,\lambda}(v_0,v_1) = \displaystyle\int_{v_0}^{v_1} \frac{(v-v_0)\,TF_{\mu sg}(v)}{v_1-v_0} \left[ 1 - \alpha_\lambda(v_0, v, v_1-v_0) \right] |dv| \\[8pt] c_{s\,bw\,\lambda}(v_0,v_1) = \displaystyle\int_{v_0}^{v_1} \frac{(v-v_0)\,TF_{cs\lambda}(v)\,TF_{\mu sg}(v)}{v_1-v_0} \left[ 1 - \alpha_\lambda(v_0, v, v_1-v_0) \right] |dv| \end{cases} $$

$$ \begin{cases} \alpha_{p\lambda}(v_0,v_1) = \displaystyle\int_{v_0}^{v_1} \frac{TF_{\mu p\lambda}(v)\,\Delta l}{v_1-v_0} \left[ 1 - \alpha_\lambda(v_0, v, v_1-v_0) \right] dv \\[8pt] c_{p\lambda}(v_0,v_1) = \displaystyle\int_{v_0}^{v_1} \frac{TF_{cp\lambda}(v)\,TF_{\mu p\lambda}(v)\,\Delta l}{v_1-v_0} \left[ 1 - \alpha_\lambda(v_0, v, v_1-v_0) \right] dv \end{cases} \quad (22) $$

where

$$ \alpha_\lambda(v_0, v_1, \Delta v) = 1 - \exp\left( -\int_{v_0}^{v_1} \left( TF_{\mu sg}(v)\,|dv| + \frac{TF_{\mu p\lambda}(v)\,\Delta l}{\Delta v}\,dv \right) \right) $$

Then, during ray casting, $c_{s\,fw\,\lambda}$ is shaded with $k_\lambda(l_0)$, $b_\lambda(l_0)$, and $c_{s\,bw\,\lambda}$ is shaded with $k_\lambda(l_1)$, $b_\lambda(l_1)$, as Equation (23) shows.

$$ \begin{aligned} \alpha_\lambda(l_0,l_1) &= \alpha_{s\,fw\,\lambda}(v(l_0), v(l_1)) + \alpha_{s\,bw\,\lambda}(v(l_0), v(l_1)) + \alpha_{p\lambda}(v(l_0), v(l_1)) \\ c_\lambda(l_0,l_1) &= k_\lambda(l_0)\,c_{s\,fw\,\lambda}(v(l_0), v(l_1)) + b_\lambda(l_0)\,\alpha_{s\,fw\,\lambda}(v(l_0), v(l_1)) \\ &\quad + k_\lambda(l_1)\,c_{s\,bw\,\lambda}(v(l_0), v(l_1)) + b_\lambda(l_1)\,\alpha_{s\,bw\,\lambda}(v(l_0), v(l_1)) \\ &\quad + c_{p\lambda}(v(l_0), v(l_1)) \end{aligned} \quad (23) $$
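During ray casting, combining the table entries with the shading coefficients at the segment ends is then a handful of multiply-adds per Equation (23). A minimal sketch, with `luts` as a hypothetical dictionary of the 2D tables of Equation (22), indexed here by integer bin values at the segment ends:

```python
def shade_segment(luts, v0, v1, k0, b0, k1, b1):
    """Eq. (23): the front-weighted surface table is shaded with k, b at the
    front end, the back-weighted table with k, b at the back end, and the
    (unshaded) particle table is added as-is."""
    alpha = (luts['a_s_fw'][v0, v1] + luts['a_s_bw'][v0, v1]
             + luts['a_p'][v0, v1])
    color = (k0 * luts['c_s_fw'][v0, v1] + b0 * luts['a_s_fw'][v0, v1]
             + k1 * luts['c_s_bw'][v0, v1] + b1 * luts['a_s_bw'][v0, v1]
             + luts['c_p'][v0, v1])
    return alpha, color
```

The table names (`a_s_fw`, `c_p`, etc.) are our own shorthand for the six quantities of Equation (22); in the paper's GPU implementation the equivalent values come from 2D texture fetches.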

A fast algorithm to numerically estimate Equation (22) can be derived following the ideas provided in Lum [3]. The difference between our process and theirs lies mainly in the integrals to be estimated. In their case, only the particle lookup tables were involved; these consisted of a front-weighted part and a back-weighted part, so a method for the "weighted look-up tables" was given. In our case, a similar strategy is used for the estimation of the surface lookup tables, while the particle lookup tables are similar to the unweighted case (see the appendix for more details).

Also, Dirac delta function items can be converted into step function items via the integrations prior to sampling.


5 IMPLEMENTATION ISSUES

Over the past few years, GPUs have undergone tremendous development and evolved into extremely flexible and powerful processors in terms of programmability. In the meantime, various GPGPU APIs and techniques have been developed to harness the processing power of modern GPUs. Our implementations were based on CUDA [29], and the code was written in CUDA C.

The implementation is generally straightforward, without any acceleration techniques such as empty-space skipping. Even early ray termination is not used when the attenuation is wavelength dependent. The volume dataset and the lookup tables are simply stored as textures. The overall scheme goes as follows. First, some necessary preparations are performed on the CPU side, including the calculation of the lookup tables when the transfer functions are modified. Then the rendering kernel is launched, with each GPU thread in charge of one ray cast through a pixel. The ray is first transformed to the voxel coordinate system and truncated by the bounding box of the volume. Then sample values are fetched along the ray from the volume texture with a uniform sampling distance. For direct ray casting, optical properties are fetched from the 1D LUTs, then shaded and merged as described in Section 4.1. For pre-integrated ray casting, the two neighboring samples are used together to obtain the optical properties of the segment from the 2D LUTs, which are then shaded and merged as described in Section 4.2. The normal vector needed for shading is calculated on the fly. The merged values are stored in a GPU buffer, which is mapped to an OpenGL pixel buffer object for display in an OpenGL environment.
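The ray truncation step can be sketched with the standard slab method. This is a generic CPU illustration, not the paper's CUDA kernel, which performs the equivalent clipping in voxel coordinates before uniform sampling.

```python
def clip_ray_to_box(origin, direction, box_min, box_max):
    """Slab-method clipping of a ray against an axis-aligned bounding box.

    Returns (t_near, t_far) along the ray, or None if the ray misses the box.
    t_near is clamped to 0 so segments behind the eye are discarded.
    """
    t_near, t_far = 0.0, float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return None            # ray parallel to and outside this slab
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return (t_near, t_far) if t_near <= t_far else None
```

Sample positions are then `origin + t * direction` for `t` stepped uniformly from `t_near` to `t_far` with the sampling distance $\Delta l$.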

Apart from the overall scheme, in this section we mainly focus on two implementation issues. The first concerns conversions of the parameters needed to make the specification of the transfer functions more intuitive. The second concerns simplifications we made to reduce the number of transfer functions and look-up tables.

5.1 Conversion of Parameters

In Section 4, the transfer functions needed for the model are assumed to be available. In practice, however, some parameters are not intuitive enough. In this section, we define a series of conversions aimed at producing a friendly interface that allows the user to tune the effects.

First, $\mu_{p\lambda}$ and $c_{p\lambda}$ represent the collective behavior of the different kinds of particles; they are calculated from $\mu_{ps}$, $c_{ps\lambda}$, $\mu_{pa\lambda}$ and $e_{p\lambda}$ according to Equation (10).

Second, the value range of the extinction coefficients $\mu_{ps}$, $\mu_{pa\lambda}$ and $\mu_{sg}$ is infinite, so a direct specification of these parameters is not easy. It is better to set an extinction coefficient indirectly using an opacity term ranging from zero to one. However, doing so requires a certain context for the opacity values to be converted to extinction values.

Fig. 3. Conversion of parameters: the user-specified terms are converted via Equations (24), (26) and (27), and combined via Equation (10).

For particles, an opacity value can be defined by using a "standard sampling distance" $\Delta l_{std}$ (a similar concept can be found in [30]). An opacity $\alpha$ can generally be converted into a corresponding extinction value $\mu$ using Equation (24).

$$ \mu = \frac{-\ln(1-\alpha)}{\Delta l_{std}} \quad (24) $$

Using conversions such as this, $\mu_{ps}$ can be calculated from $\alpha_{ps}$, and $\mu_{pa\lambda}$ from $\alpha_{pa\lambda}$.

Similarly, for isosurfaces an opacity value $\alpha_s$ can be defined by using a "standard scalar difference" $\Delta v_{std}$. The corresponding extinction value $\mu_{sg}$ can be calculated using Equation (25).

$$ \mu_{sg} = \frac{-\ln(1-\alpha_s)}{\Delta v_{std}} \quad (25) $$

Third, some parameters may still not be intuitive enough. For selective absorptive particles, the parameter $\alpha_{pa\lambda}$ is a wavelength-dependent opacity. It can be specified more intuitively using a transmission color term $c_{pa\lambda}$ and a wavelength-independent density term $\alpha_{pa}$, as Equation (26) shows.

$$ \alpha_{pa\lambda} = \alpha_{pa}\,(1 - c_{pa\lambda}) \quad (26) $$

The parameter of pure emissive particles $e_{p\lambda}$ is specified using a wavelength-independent term $e_p$ and an emission color term $c_{pe\lambda}$, as Equation (27) shows. There is actually no upper limit for the term $e_p$; however, because of the saturating nature of the light emission effect, the value is usually not very large. We used a range from 0 to 2.

$$ e_{p\lambda} = \frac{c_{pe\lambda}\,e_p}{\Delta l_{std}} \quad (27) $$
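The conversions of Equations (24) through (27) are one-liners; a sketch, with each function operating on one color channel (the function names are ours):

```python
import math

def opacity_to_extinction(alpha, dl_std):
    """Eq. (24): convert a [0,1) opacity to an extinction coefficient,
    relative to the standard sampling distance."""
    return -math.log(1.0 - alpha) / dl_std

def surface_opacity_to_mu_sg(alpha_s, dv_std):
    """Eq. (25): the same conversion for isosurfaces, relative to the
    standard scalar difference."""
    return -math.log(1.0 - alpha_s) / dv_std

def absorptive_opacity(alpha_pa, c_pa):
    """Eq. (26): wavelength-dependent opacity of selective absorptive
    particles from a density term and one transmission colour channel."""
    return alpha_pa * (1.0 - c_pa)

def emissive_coefficient(e_p, c_pe, dl_std):
    """Eq. (27): emission coefficient from an intensity term and one
    emission colour channel."""
    return c_pe * e_p / dl_std
```

The conversion of Equation (24) round-trips by construction: marching a distance $\Delta l_{std}$ through a medium with the computed extinction reproduces the specified opacity.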

In addition, we may want to include Dirac delta function items in $TF_{\mu sg}(v)$. A series of discrete scalar values $v_{\delta i}$ can be used directly to specify the positions of the spikes, while the corresponding magnitude terms $\mu_{\delta i}$ can be specified indirectly through the opacities of the discrete isosurfaces $\alpha_{\delta i}$, converted simply as $\mu_{\delta i} = -\ln(1 - \alpha_{\delta i})$. Optionally, colors of the spikes can be specified using a series of colors $c_{\delta i}$, replacing the values of $TF_{cs\lambda}(v)$ at $v_{\delta i}$.

There are four pairs of transfer functions that can be set by the user. Each pair contains a wavelength-dependent term and a wavelength-independent term.


From these transfer functions, the four transfer functions used in Section 4 can be built indirectly, as Fig. 3 shows. Although the number of transfer functions increases, their meanings are more intuitive, so they can be set up more easily.

• $TF_{\alpha s}(v)$: opacities for isosurfaces.
• $TF_{cs\lambda}(v)$: base colors for isosurfaces.
• $TF_{\alpha ps}(v)$: opacities for isotropic scattering particles.
• $TF_{cps\lambda}(v)$: colors for isotropic scattering particles.
• $TF_{\alpha pa}(v)$: densities for selective absorptive particles.
• $TF_{cpa\lambda}(v)$: transmission colors for selective absorptive particles.
• $TF_{ep}(v)$: wavelength-independent intensities for pure emissive particles.
• $TF_{cpe\lambda}(v)$: colors for pure emissive particles.

Moreover, there are two global parameters, $\Delta l_{std}$ and $\Delta v_{std}$, to be set. The first can be used to adjust the densities of all particles, and the second to adjust the opacities of all isosurfaces.

In the experimental section, we show some examples of tuning the effects using the above set of transfer functions.

5.2 Simplifications

So far, we have discussed issues related to the new model in general. One issue arising from the generality of the model is that too many look-up tables may be required in the implementation, which may affect performance.

To avoid an excessive number of look-up tables, we made several simplifications in the implementation of the new model. First, we did not implement full-spectrum rendering; instead, the RGB color space was used. That means we sampled the spectrum at only three wavelengths, corresponding to the standard red, green and blue colors, so the concept of wavelength maps directly to the RGB color channels. Second, we did not use all kinds of particles together, and in some cases the same color transfer function was used for all the color terms, so the four color transfer functions $TF_{cs\lambda}(v)$, $TF_{cps\lambda}(v)$, $TF_{cpa\lambda}(v)$ and $TF_{cpe\lambda}(v)$ could be reduced to a single shared color transfer function $TF_{c\lambda}(v)$.

6 EXPERIMENTAL RESULTS

6.1 A Model-Oriented Solution to the Shading Problem

In conventional shaded volume rendering, shading artifacts are frequently seen where the gradient magnitude is small. Such artifacts can be avoided using our new model. In Fig. 4(a), the engine dataset is rendered using conventional shaded volume rendering. The transfer function is set to extract some particles from the volume,


Fig. 4. Rendering the engine dataset using different methods. (a) Conventional shaded volume rendering; (b) conventional unshaded volume rendering; (c) our new model, using exactly the same transfer functions; (d) our new model, with a small modification to the surface transfer function.

which are mostly distributed in homogeneous areas. As can be seen from the figure, the shading result is very poor; compared with its unshaded counterpart in Fig. 4(b), it provides little extra information. In contrast, when the new model is used with exactly the same transfer function for both the particles and the surfaces, some surface features can be clearly seen, as Fig. 4(c) shows. Even better results can be achieved by using a slightly modified surface transfer function (Fig. 4(d)). Two more experimental results are shown in Fig. 5 to further demonstrate the strength of our model. This time, a clipping plane is placed in the middle of the volume. As can be seen from the figure, details in homogeneous regions cannot be properly rendered using the conventional model, while they can be clearly seen using ours.

6.2 Soft Boundary Surfaces

Image space aliasing is frequently seen in surface rendering and volume-surface hybrid rendering. With the use of transfer functions, our hybrid optical model allows isosurfaces corresponding to different isovalues to be specified in such a way that they transit smoothly from one isosurface to another instead of being discretely placed. As a result, soft boundary surfaces can be displayed so that they look the same as discretely


Fig. 6. Anti-aliasing by soft boundary surfaces. (a) Hybrid rendering of isotropic scattering particles with a discretely placed isosurface. (b) Hybrid rendering of isotropic scattering particles with a soft boundary surface.


Fig. 5. Comparison of conventional shaded volume rendering with our new model using a clipping plane. (a) The Visible Human head CT dataset rendered using conventional shaded volume rendering. (b) The Visible Human head CT dataset rendered using our new model. (c) An MRI head dataset rendered using conventional shaded volume rendering. (d) An MRI head dataset rendered using our new model.

placed isosurfaces, except that the aliasing at the boundary is smoothed. A comparison between the effects of a discretely placed isosurface and a soft boundary surface is shown in Fig. 6. In Fig. 6(a) the isosurface is defined

Fig. 7. An explanation of the UI design and the terms used (histogram; TFs for shaded isosurfaces, isotropic scattering particles and pure emissive particles; controllers and delta function controllers).

by a delta function, while in Fig. 6(b) the isosurface is defined by a simple triangular pulse with finite width.
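The two boundary definitions compared in Fig. 6 can be illustrated as two variants of $TF_{\mu sg}$ over hypothetical scalar bins: a Dirac spike, which the integration of Section 4 turns into a step of height $-\ln(1-\alpha_\delta)$ in $M_s(x)$ (the conversion given in Section 5.1), versus a triangular pulse of finite width. The function names below are ours.

```python
import numpy as np

def delta_spike_as_step(n, v_delta, alpha_delta):
    """A Dirac spike at bin v_delta in TF_musg, represented directly as the
    step it contributes to the integral function Ms(x): the
    discretely-placed-isosurface case."""
    Ms = np.zeros(n)
    Ms[v_delta:] = -np.log(1.0 - alpha_delta)
    return Ms

def triangular_pulse(n, center, half_width, height):
    """A finite triangular pulse in TF_musg: the soft-boundary alternative,
    spreading the surface smoothly over a range of isovalues."""
    v = np.arange(n, dtype=float)
    return height * np.clip(1.0 - np.abs(v - center) / half_width, 0.0, None)
```

Both definitions concentrate the surface extinction around the same scalar value; the pulse simply trades the infinitely sharp boundary for a smooth transition, which is what suppresses the image space aliasing.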

6.3 Tuning the Effects

As described in Section 5, four pairs of transfer functions are provided to control the different components of the model, but they are seldom all used together. At the initial stage, all of the transfer functions are empty, so nothing is displayed. In our implementation, we use controllers to set up the transfer functions. A controller is simply a draggable GUI shape that assigns a pair of optical properties to a scalar value, so that the optical properties between controllers can be interpolated. The transfer function pair is visualized as a colored bar, with the wavelength-dependent property mapped to the color channels and the wavelength-independent property mapped to the alpha channel. Dirac delta function items and the corresponding colors can be assigned with a special kind of controller that we call a delta function controller, each corresponding to one delta function item. See Fig. 7 for the GUI components mentioned.

When a volume dataset is to be visualized, the user may first run a quick test of critical isovalues


Fig. 8. Rendering the CT heart dataset with different combinations of contents. (a) Isotropic scattering particles only; (b) a family of shaded isosurfaces added to (a); (c) another family of shaded isosurfaces added to (b); (d) some pure emissive particles added to (c); (e) selective absorptive particles with a family of shaded isosurfaces; (f) another family of shaded isosurfaces added to (e).

by temporarily placing and dragging a delta function


Fig. 9. Rendering the same segmented liver CT dataset using pure emissive particles and shaded surfaces. (a) shows the logarithmic histogram of the dataset and the transfer functions; the first transfer function is for the shaded surfaces, and the second is for the pure emissive particles. With the transfer functions fixed, from (b) to (f) the global parameter $\Delta v_{std}$ is 600, 40, 13, 13 and 13, and $\Delta l_{std}$ is 4, 4, 4, 12 and 600, respectively.

controller. Then, according to the requirements of the visualization, controllers can be successively added to the transfer functions. To visualize an isosurface, the user can use a delta function controller, or set up a finite pulse in the TF for shaded isosurfaces to generate a soft boundary. To visualize dust-like content within a value range, the user simply puts some controllers into the TF for isotropic scattering particles to define that range and its properties. If some color-filtering material (something like a dyestuff) is needed, controllers can be added to the TF for selective absorptive particles. If the feature of interest is obscured by other content, adding controllers to the TF for pure emissive particles may help increase the penetration of that feature. The resulting image is a mixture of all of these factors.

The example in Fig. 8 shows how different kinds of particles and surface elements can be successively added to the scene. In Fig. 8(a), a simple transfer function for isotropic scattering particles is set. In Fig. 8(b), controllers are added to the transfer function for isosurfaces to display the boundary of the vessels. In Fig. 8(c), more


Fig. 10. The Visible Human head CT dataset rendered using the hybrid model with a sagittal clipping plane in the middle. (a) shows the logarithmic histogram of the dataset and the transfer functions; the first transfer function is for the shaded surfaces, and the second is for isotropic scattering particles. With the transfer functions fixed, the global parameter $\Delta l_{std}$ in (b) to (d) is 220, 22 and 7, respectively.

controllers are added around the scalar value of the boundary of the heart, so the boundary can be clearly seen; as a side effect, some of the inner structures are concealed. In Fig. 8(d), some controllers are added to the transfer function for pure emissive particles to enhance the display of the vessels. In Fig. 8(e) and (f), the isotropic scattering particles are replaced with selective absorptive particles. Compared with the former, these particles appear more transparent.

The global parameters $\Delta v_{std}$ and $\Delta l_{std}$ are manipulated using sliders on an exponential scale. They can be used to adjust the global transparency of the surface and particle elements, respectively, while leaving the other kind of element unaffected. In the example shown in Fig. 9, we first set up the transfer function for the pure emissive particles; then some controllers are added to the transfer function for isosurfaces, so the boundary of the vessels appears. From Fig. 9(b) to Fig. 9(d), $\Delta v_{std}$ is gradually decreased, making the isosurfaces more opaque; from Fig. 9(d) to Fig. 9(f), $\Delta l_{std}$ is gradually increased, so that the intensity of the pure


Fig. 11. Rendering a synthetic dataset using the hybrid model. All images are rendered with the same transfer functions but different estimation algorithms. (a) Logarithmic histogram of the dataset and the transfer functions used; (b) rendered with direct ray casting; (c) rendered with approximative pre-integrated ray casting; (d) rendered with fully pre-integrated ray casting.

emissive particles is reduced. Overall, a gradual change from particle-only rendering to surface-only rendering is exhibited.

Fig. 10 shows another example. This time, we first put controllers into the isosurface transfer function to extract surface features; then the transfer function for isotropic scattering particles is also set up. The figure demonstrates the effect of the global parameter $\Delta l_{std}$, which controls the global transparency of the particles while keeping the surfaces unchanged.

6.4 Using Advanced Estimation Algorithms

As described in Section 4, we adapted two pre-integration algorithms to the new model with subtle modifications. Here, we verify that these algorithms work as well as they do for the particle-only model. Sampling artifacts are most evident when the sampling rate is low and the transfer functions are sharp; with pre-integration, such artifacts should be dramatically reduced. The results are shown in Fig. 11. The synthetic dataset ($64^3$) is rendered using three different algorithms, all based on the hybrid model and using identical transfer functions. Here we use only isotropic scattering particles. When the dataset is rendered with direct ray casting, obvious artifacts are generated due to the inadequate


TABLE 1. Timing Results

Datasets:

Name                 Size         Data Type
Engine               256x256x128  8-bit unsigned integer
Visible Human Head   128x256x256  8-bit unsigned integer
Head MRI             181x217x181  8-bit unsigned integer
Head CT              208x256x225  8-bit unsigned integer
Segmented Liver CT   400x310x100  16-bit signed integer
Heart CT             299x289x218  16-bit unsigned integer
Synthetic Data       64x64x64     32-bit float

Timing:

Figure     Dataset              Image Size  Rendering Time (ms)
Fig.4(a)   Engine               528x599     114.19
Fig.4(b)   Engine               528x599     116.11
Fig.4(c)   Engine               528x599     114.71
Fig.4(d)   Engine               528x599     115.30
Fig.5(a)   Visible Human Head   543x490     90.88
Fig.5(b)   Visible Human Head   543x490     97.90
Fig.5(c)   Head MRI             558x449     98.68
Fig.5(d)   Head MRI             558x449     106.04
Fig.6(a)   Head CT              558x526     141.62
Fig.6(b)   Head CT              558x526     141.52
Fig.9(b)   Segmented Liver CT   658x674     154.23
Fig.9(c)   Segmented Liver CT   658x674     153.03
Fig.9(d)   Segmented Liver CT   658x674     152.44
Fig.9(e)   Segmented Liver CT   658x674     152.01
Fig.9(f)   Segmented Liver CT   658x674     152.98
Fig.10(b)  Visible Human Head   554x507     113.71
Fig.10(c)  Visible Human Head   554x507     113.29
Fig.10(d)  Visible Human Head   554x507     112.40
Fig.8(a)   Heart CT             367x476     151.03
Fig.8(b)   Heart CT             367x476     148.23
Fig.8(c)   Heart CT             367x476     145.77
Fig.8(d)   Heart CT             367x476     145.48
Fig.8(e)   Heart CT             367x476     159.76
Fig.8(f)   Heart CT             367x476     145.50
Fig.11(b)  Synthetic Data       558x560     40.18
Fig.11(c)  Synthetic Data       558x560     35.71
Fig.11(d)  Synthetic Data       558x560     30.52

sampling rate, as Fig. 11(b) shows. When it is rendered with the approximative pre-integrated ray casting algorithm, the artifacts are reduced, but some can still be seen on close observation, as Fig. 11(c) shows; the remaining artifacts are due to the approximations used. When the dataset is rendered with fully pre-integrated ray casting, the sampling artifacts are removed entirely, as Fig. 11(d) shows.

6.5 Performance Results

To allow a better assessment of the performance, we give the timing results for the above cases in this section. Note that we are not trying to achieve an optimized performance result; indeed, no acceleration technique is used. The tests were performed on a standard PC platform with an Intel Core 2 Duo E6550 CPU at 2.33 GHz and a PCI-E graphics card powered by an NVIDIA GeForce GT240 GPU. The timing results are shown in Table 1. Most of the cases are rendered at an interactive frame rate. From the Fig. 5 cases, we can see some performance impact from the relatively complicated hybrid model, but it is not as significant as one might expect.

7 DISCUSSION

In this article, we proposed a novel model for volume rendering that combines particle elements and surface elements to characterize both the behavior of individual particles and the collective effect of particles. The proposed hybrid optical model originates from an analysis of the microstructure of the volume that interacts with the light rays. The main advantage of the model is its flexibility in smoothly combining different optical elements into a single scene in a physically explainable way. Beyond that, it benefits volume rendering in several respects. On one hand, compared to existing methods that address the shading problem, it enables the rendering of isosurfaces with a proper attenuation pattern that is consistent with the surface assumption (view-independent transparency). On the other hand, compared to polygon-based methods, the proposed method does not require the reconstruction of polygons during rendering when the isovalue is changed, and it provides the soft boundary feature, which mitigates the image space aliasing problem.

In this paper, we have only discussed single-scalar datasets. However, it is also possible to use this idea for visualizing multi-variable datasets, or for integrating scalar fields from different sources such as implicit surface modeling [31]. In that case, we can potentially assign the scalar values different meanings by mapping them to different roles in the model. For example, we can use one scalar value to define a set of isosurfaces and another for the color map, just like rendering the surfaces with a texture attached. The pre-integration schemes may not be applicable in that case, so more effort is required to develop a suitable numerical estimation algorithm.

The biggest challenge in using this model is perhaps the adjustment of the extra transfer functions it introduces. Since each set of transfer functions has a distinct physical meaning, the user can adjust them with a well-defined objective in mind, so the task is not as difficult as it appears. Nevertheless, we believe there is still room to further simplify the manipulation of the transfer functions. Automatic transfer function generation or adaptation techniques may also be considered in future applications.

ACKNOWLEDGMENTS

This work is supported by the National Basic Research Program of China (973 Program) under Grant 2011CB707700, the National Natural Science Foundation of China under Grant No. 81071218, 60910006,

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.

30873462, and the Knowledge Innovation Project of the Chinese Academy of Sciences under Grant No. KSCX2-YW-R-262. We would like to thank the anonymous reviewers for their valuable comments.

REFERENCES

[1] N. Max, “Optical models for direct volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 1, no. 2, pp. 99–108, June 1995.

[2] K. Engel, M. Kraus, and T. Ertl, “High-quality pre-integrated volume rendering using hardware-accelerated pixel shading,” in Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware. New York, NY, USA: ACM, 2001, pp. 9–16.

[3] E. Lum, B. Wilson, and K. Ma, “High-quality lighting and efficient pre-integration for volume rendering,” in Proceedings of the Joint Eurographics-IEEE TVCG Symposium on Visualization 2004 (VisSym04), 2004, pp. 25–34.

[4] N. Sakamoto, T. Kawamura, K. Koyamada, and K. Nozaki, “Technical section: Improvement of particle-based volume rendering for visualizing irregular volume data sets,” Comput. Graph., vol. 34, pp. 34–42, February 2010. [Online]. Available: http://dx.doi.org/10.1016/j.cag.2009.12.001

[5] B. T. Phong, “Illumination for computer generated pictures,” Commun. ACM, vol. 18, no. 6, pp. 311–317, 1975.

[6] M. Levoy, “Display of surfaces from volume data,” IEEE Computer Graphics and Applications, vol. 8, no. 5, pp. 29–37, May 1988.

[7] J. Kniss, G. Kindlmann, and C. Hansen, “Multidimensional transfer functions for interactive volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 3, pp. 270–285, 2002.

[8] M. Hadwiger, P. Ljung, C. R. Salama, and T. Ropinski, “Advanced illumination techniques for GPU volume raycasting,” in SIGGRAPH Asia ’08: ACM SIGGRAPH ASIA 2008 Courses. New York, NY, USA: ACM, 2008, pp. 1–166.

[9] J. Kniss, S. Premoze, C. Hansen, and D. Ebert, “Interactive translucent volume rendering and procedural modeling,” in IEEE Visualization Conference, 2002, pp. 109–116.

[10] J. Kniss, S. Premoze, C. Hansen, P. Shirley, and A. McPherson, “A model for volume lighting and modeling,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 2, pp. 150–162, 2003.

[11] D. Weiskopf, K. Engel, and T. Ertl, “Interactive clipping techniques for texture-based volume visualization and volume shading,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, pp. 298–312, 2003.

[12] D. Xiang, J. Tian, F. Yang, Q. Yang, X. Zhang, Q. Li, and X. Liu, “Skeleton cuts - an efficient segmentation method for volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 99, no. PrePrints, 2010.

[13] K. Kreeger and A. Kaufman, “Mixing translucent polygons with volumes,” in Proceedings of the Conference on Visualization ’99: Celebrating Ten Years. Los Alamitos, CA, USA: IEEE Computer Society Press, 1999, pp. 191–198.

[14] S. Rottger, M. Kraus, and T. Ertl, “Hardware-accelerated volume and isosurface rendering based on cell-projection,” in Proceedings of the Conference on Visualization ’00. Los Alamitos, CA, USA: IEEE Computer Society Press, 2000, pp. 109–116.

[15] M. Kraus, “Scale-invariant volume rendering,” in Proceedings of IEEE Visualization. Academic Press, 2005, pp. 295–302.

[16] S. Li and K. Mueller, “Accelerated, high-quality refraction computations for volume graphics,” International Workshop on Volume Graphics, vol. 0, pp. 73–229, 2005.

[17] S. Stegmaier, M. Strengert, T. Klein, and T. Ertl, “A simple and flexible volume rendering framework for graphics-hardware-based raycasting,” International Workshop on Volume Graphics, vol. 0, pp. 187–241, 2005.

[18] S. Rottger and T. Ertl, “A two-step approach for interactive pre-integrated volume rendering of unstructured grids,” in Proceedings of the 2002 IEEE Symposium on Volume Visualization and Graphics. Piscataway, NJ, USA: IEEE Press, 2002, pp. 23–28.

[19] L. M. Sobierajski, “Global illumination models for volume rendering,” Ph.D. dissertation, Stony Brook, NY, USA, 1994.

[20] R. Ratering and U. Behrens, “Adding shadows to a texture-based volume renderer,” in IEEE Symposium on Volume Visualization and Graphics, 1998, pp. 39–46.

[21] T. Ropinski, J. Kasten, and K. H. Hinrichs, “Efficient shadows for GPU-based volume raycasting,” in Proceedings of the 16th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2008), 2008, pp. 17–24.

[22] T. Ropinski, C. Doring, and C. Rezk-Salama, “Interactive volumetric lighting simulating scattering and shadowing,” in IEEE Pacific Visualization Symposium (PacificVis 2010), Mar. 2010, pp. 169–176.

[23] C. R. Salama, “GPU-based Monte Carlo volume raycasting,” in Pacific Conference on Computer Graphics and Applications, vol. 0, 2007, pp. 411–414.

[24] A. Abdul-Rahman and M. Chen, “Spectral volume rendering based on the Kubelka-Munk theory,” Computer Graphics Forum, vol. 24, no. 3, pp. 413–422, 2005.

[25] M. Strengert, T. Klein, R. Botchen, S. Stegmaier, M. Chen, and T. Ertl, “Spectral volume rendering using GPU-based raycasting,” The Visual Computer, vol. 22, no. 8, pp. 550–561, 2006.

[26] H. Noordmans, H. van der Voort, and A. Smeulders, “Spectral volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 6, no. 3, pp. 196–207, 2000.

[27] S. Bergner, T. Moller, M. S. Drew, and G. D. Finlayson, “Interactive spectral volume rendering,” in VIS ’02: Proceedings of the Conference on Visualization ’02. Washington, DC, USA: IEEE Computer Society, 2002, pp. 101–108.

[28] K. Engel, M. Hadwiger, J. M. Kniss, A. E. Lefohn, C. R. Salama, and D. Weiskopf, “Real-time volume graphics,” in SIGGRAPH ’04: ACM SIGGRAPH 2004 Course Notes. New York, NY, USA: ACM, 2004, p. 29.

[29] NVIDIA, “NVIDIA CUDA programming guide version 2.3,” NVIDIA Corporation, July 2009.

[30] M. Chen and J. Tucker, “Constructive volume geometry,” Computer Graphics Forum, vol. 19, no. 4, pp. 281–293, 2000.

[31] Q. Li and J. Tian, “2D piecewise algebraic splines for implicit modeling,” ACM Trans. Graph., vol. 28, pp. 13:1–13:19, May 2009. [Online]. Available: http://doi.acm.org/10.1145/1516522.1516524

Fei Yang received the B.E. degree in biomedical engineering from Beijing Jiaotong University, Beijing, China, in 2009. He is currently a Ph.D. candidate in the Medical Image Processing Group at the Institute of Automation, Chinese Academy of Sciences, Beijing. His research interests include volume rendering, medical image analysis, and GPU computing.

Qingde Li received the Ph.D. degree in computer science from the University of Hull, Hull, U.K., in 2002. From 1998 to 2002, he was a Professor in the School of Mathematics and Computer Science, Anhui Normal University, Anhui, China. He was a visiting scholar in the Department of Applied Statistics, University of Leeds, U.K., from Oct 1990 to May 1992, and in the Department of Computing, University of Bradford, Bradford, U.K., from Sept 1996 to Aug 1997. He is currently a lecturer in the Department of Computer Science, University of Hull, Hull, U.K. His most recent research interests include GPU-based volume data visualization, implicit geometric modeling, and computer graphics.

Dehui Xiang received the B.E. degree in automation from Sichuan University, Sichuan, China, in 2007. He is currently a Ph.D. candidate in the Medical Image Processing Group at the Institute of Automation, Chinese Academy of Sciences, Beijing, China. His current research interests include volume rendering, medical image analysis, computer vision, and pattern recognition.

Yong Cao received the Ph.D. degree in computer science from the University of California at Los Angeles, Los Angeles, USA, in 2005. He is currently an Assistant Professor of Computer Science at the Virginia Polytechnic Institute and State University and the director of the Animation and Gaming Research Lab. His research focuses on animation, high performance computing, and video game based learning.

Jie Tian (M'02, SM'06, F'10) received the Ph.D. degree (with honors) in artificial intelligence from the Institute of Automation, Chinese Academy of Sciences, Beijing, China, in 1992. From 1995 to 1996, he was a Postdoctoral Fellow at the Medical Image Processing Group, University of Pennsylvania, Philadelphia. Since 1997, he has been a Professor at the Institute of Automation, Chinese Academy of Sciences, where he has been involved in research in the Medical Image Processing Group. His research interests include medical image processing and analysis and pattern recognition. Dr. Tian is the Beijing Chapter Chair of the IEEE Engineering in Medicine and Biology Society. He was elected an IEEE Fellow in 2010.
