Virtual Reality: History
Course 6
Procedural modeling, Virtual Humans,
Audio modeling, Denavit–Hartenberg
Procedural modeling
VEs tend to become more and more complex, as 3D graphics hardware improves and allows additional detail to be specified.

There are circumstances (data size, degree of repetition of the elements, particular object structures – e.g. vegetation) where it is inconceivable to model the VE using manual or scanning techniques.

High-level modeling involves developing techniques that provide a level of abstraction of the model into classes, allowing high-level specifications.

In general, procedural (or parametric) techniques use algorithms and procedures to encode and abstract the details of the model, freeing the graphics engine from the need for detailed specifications.
[Diagram: Analysis – real model → abstraction → procedures + parameters; Synthesis – procedures + parameters → virtual model]
Procedural modelling: pros & cons

PROs
- Simulation of complex models with fewer parameters to specify
- "Data Amplification" through parameter control
- Optimal for structures with a certain degree of repetitiveness
- Optimal for modeling a variety of similar but not identical entities, which share certain properties
- Allows on-demand modeling/rendering, thus avoiding the storage of data that is not needed (Lazy Evaluation)
CONs
- The techniques are strongly tied to specific applications
- The details regarding the procedures and the required set of parameters are often not easy to identify, understand, conceive and design
- It is very difficult to keep sufficient control over the results
Paradigms of procedural modeling
[Diagram: the user supplies parameters and procedures to the procedural model, which synthesizes the virtual model; in the semi-automatic case the user also intervenes on the virtual model directly]
SEMI-AUTOMATIC modeling:
The procedures and parameters are defined, but they do not cover the whole generation process. Therefore, in some stages the manual intervention of the modeler is necessary or advisable.
Procedural modeling
Procedural modeling of textures
Procedural textures are generated algorithmically, instead of being the result of a sampling process, of images or of drawings.

There is a multitude of approaches, using different procedures or parameters:
- Blinn & Newell propose Fourier synthesis
- Fournier, Fussel & Carpenter propose fractal subdivision
- Gagalowicz developed statistical methods to analyze the properties of natural textures and managed to reproduce them
- Perlin proposes the use of noise lattices, i.e. random numbers or pseudo-gradients generated on a grid/matrix
Noise lattices
Value Noise: + simple, − high bandwidth
Gradient Noise: + low bandwidth, − artefacts (pattern)
Value-Gradient Noise: combination of the two
http://en.wikipedia.org/wiki/Lattice_Boltzmann_methods
http://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2004/rapporter04/eriksson_erik_04161.pdf
Perlin Noise
E.g. 2D Perlin turbulence:

value = 0
for (f = MINFREQ; f < MAXFREQ; f *= 2)
    value += 1/f * noise(P * f)
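The loop above can be made concrete with a small value-noise sketch. The integer hashing scheme and smoothstep interpolation below are illustrative choices, not the ones from Perlin's original paper; only the octave-summing loop mirrors the slide's pseudocode.

```python
import math

def lattice_noise(ix, iy):
    # Illustrative integer hash yielding a pseudo-random value in [0, 1)
    n = ix * 374761393 + iy * 668265263
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def smoothstep(t):
    return t * t * (3 - 2 * t)

def value_noise(x, y):
    # Bilinear interpolation of hashed lattice values (value noise)
    ix, iy = math.floor(x), math.floor(y)
    sx, sy = smoothstep(x - ix), smoothstep(y - iy)
    v00 = lattice_noise(ix, iy)
    v10 = lattice_noise(ix + 1, iy)
    v01 = lattice_noise(ix, iy + 1)
    v11 = lattice_noise(ix + 1, iy + 1)
    top = v00 + sx * (v10 - v00)
    bot = v01 + sx * (v11 - v01)
    return top + sy * (bot - top)

def turbulence(x, y, min_freq=1.0, max_freq=64.0):
    # The slide's loop: each octave f contributes noise at frequency f
    # with amplitude 1/f
    value, f = 0.0, min_freq
    while f < max_freq:
        value += value_noise(x * f, y * f) / f
        f *= 2
    return value
```

Because the texture is a pure function of (x, y), it can be evaluated on demand at any resolution — no storage and no tiling seams.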
Procedural textures
Pros:
- Bandwidth and memory saving (no storage needed)
- No tiling
- High detail independent from zoom

Procedures:
- IBR (Image-Based Rendering): textures are generated starting from samples
- Texture synthesis: textures are generated starting from material properties

IBR pro: no need to identify different procedures for different materials; only small samples are stored.
IBR
IBR + genetic algorithms
http://www.youtube.com/watch?v=Fp9kzoAxsA4
Procedural Texturing – Animation

Real-time animated textures allow producing complex animations on images rather than on polygons (e.g. facial animation, fire or liquids, clouds etc.).
Procedural synthesis of geometry

L-Systems: a formal grammar proposed by Lindenmayer (1968) and adapted to graphics by Alvy Ray Smith (1984).

An L-System is based upon:
- An alphabet (e.g. "F", "+", "-")
- A set of production rules (e.g. F → F+F--F+F)

Productions are applied in parallel, i.e. the largest number of symbols is substituted at each step (the output is independent of the order in which the rules are applied).
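As a sketch, parallel rewriting can be implemented by rebuilding the whole string in one pass per iteration, so every symbol is substituted simultaneously (the rule F → F+F--F+F is the one from the slide):

```python
def lsystem(axiom, rules, iterations):
    # Parallel rewriting: every symbol of the current string is
    # replaced in the same pass, so rule application order is moot.
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

print(lsystem("F", {"F": "F+F--F+F"}, 1))  # F+F--F+F
```

Symbols without a production (here "+" and "-") are copied unchanged, which is the usual identity-rule convention.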
Procedural synthesis of geometry
Symbols can have geometrical meanings. E.g. in Logo:
- "F" draws a segment
- "+" rotates θ degrees counterclockwise
- "-" rotates θ degrees clockwise
http://en.wikipedia.org/wiki/Fractal
Procedural synthesis of geometry
L-systems become more flexible by introducing push and pop operations (symbols "[" and "]").

This allows realizing very complex structures:
F → F [+F] F [-F] F
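A minimal turtle-interpreter sketch for such bracketed strings — "[" saves the turtle state and "]" restores it, which is what makes branching structures like the rule above possible (the step length and turn angle below are arbitrary illustration values):

```python
import math

def turtle_interpret(s, step=1.0, theta=25.7):
    # Turtle interpretation: "F" draws a segment, "+"/"-" turn by
    # theta degrees, "[" pushes the state, "]" pops it (branching).
    x, y, ang = 0.0, 0.0, 90.0          # start at origin, facing "up"
    stack, points = [], [(0.0, 0.0)]
    for c in s:
        if c == "F":
            x += step * math.cos(math.radians(ang))
            y += step * math.sin(math.radians(ang))
            points.append((x, y))
        elif c == "+":
            ang += theta
        elif c == "-":
            ang -= theta
        elif c == "[":
            stack.append((x, y, ang))
        elif c == "]":
            x, y, ang = stack.pop()
    return points
```

Feeding it the expansion of F → F [+F] F [-F] F produces the classic branching "plant" skeleton.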
Procedural synthesis of geometry

Passing to 3D:
- "+" and "–" for yaw
- "^" and "&" for pitch
- "\" and "/" for roll

In 3D, segments may be represented by cylinders or cone frustums.
Virtual Reality: History
The typical modeling data flow is:The typical modeling data flow is:– The The useruser specifies a conceptual model to the specifies a conceptual model to the modelermodeler– The The modelermodeler converts the conceptual model in an intermediate converts the conceptual model in an intermediate
representation, suitable for being processed and renderedrepresentation, suitable for being processed and rendered– The The renderer renderer takes the representation and synthesize an imagetakes the representation and synthesize an image
Procedural synthesis relies on two different paradigms to Procedural synthesis relies on two different paradigms to specify the intermediate representation:specify the intermediate representation:– Data AmplificationData Amplification– Lazy EvaluationLazy Evaluation
Paradigms of procedural synthesisParadigms of procedural synthesis
UTENTEUTENTE MODELERMODELER RENDERERRENDERERConceptConcept GeometryGeometry
Data amplification

A high geometrical detail is specified starting from little input information.

L-Systems are a typical example of data amplification (e.g. a poplar tree: 16 KB (concept) → 6.7 MB (polygons)).

Data explosion: high memory storage requirements.

[Diagram: USER –articulation→ MODELING (amplification) –geometry→ RENDERER]
Lazy evaluation

This paradigm avoids the intermediate representation, generating the synthesis on the fly only when needed for rendering.

Low storage requirements, high real-time demands.

[Diagram: USER –concept→ MODELING (server) ⇄ coordinates / geometry ⇄ RENDERER (client)]
Virtual Reality: History
Parametric L-systems can change the behaviour depending on the passed Parametric L-systems can change the behaviour depending on the passed parameters. This allows to alter transformations and to allow recursionparameters. This allows to alter transformations and to allow recursion
Es. Es. inductive instancinginductive instancingdefine grass(0) < define grass(0) < iarbăiarbă > >
define grass(n) <define grass(n) <grass (n-1)grass (n-1)
grass (n-1) translate 2^n * (0.1, 0, 0)grass (n-1) translate 2^n * (0.1, 0, 0)
grass (n-1) translate 2^n * (0, 0, 0.1)grass (n-1) translate 2^n * (0, 0, 0.1)
grass (n-1) translate 2^n * (0.1, 0, 0.1) grass (n-1) translate 2^n * (0.1, 0, 0.1)
>>
Parametric L-SystemsParametric L-Systems
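The data-amplification effect of such a definition can be sketched by expanding the recursion and counting placed instances — n levels of the four-copy rule yield 4^n blades (positions below are in the x–z plane, matching the translate vectors; the function name is illustrative):

```python
def grass_instances(n):
    # grass(n) places four copies of grass(n-1), offset on a grid
    # scaled by 2^n, following the inductive definition above.
    if n == 0:
        return [(0.0, 0.0)]  # base case: a single blade at the origin
    step = 0.1 * 2 ** n
    offsets = [(0.0, 0.0), (step, 0.0), (0.0, step), (step, step)]
    return [(x + dx, z + dz) for dx, dz in offsets
            for x, z in grass_instances(n - 1)]

print(len(grass_instances(3)))  # 64: four grammar lines amplify to 4^n blades
```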
Procedural Geometric Instancing

Global coordinates: an instance may vary its geometry depending on its world position/orientation.

E.g. tropism (http://en.wikipedia.org/wiki/Tropism):
- top → down: gravity
- bottom → up: searching sunlight
- sideways: wind, etc.
Virtual Humans
Avatar

"Icon or interactive representation of a user in a shared environment"
- "avatar" comes from Hinduism, describing an embodiment of the god Vishnu
- in text-based VR (such as MUDs) an avatar is a text description provided to other users looking at the user
- in graphics VR, an avatar is a 3D model of a virtual human (or character) directly driven by a human user
Agents

An agent is an autonomous VH whose actions are not driven by a human being but rather by a computer.

VH actions can be guided by:
- an autonomous sensory system
- behavioural rules
- predefined scripts
Virtual Humans: requirements

- Graphics simulation:
  - Rigid bodies
  - Deformable bodies
- Physical simulation:
  - Physically based modeling
  - Inverse and forward kinematics
- Behavioural simulation
Graphical representation of VHs

Layered modeling:
- Simplest case: rigid bodies hierarchy (no deformations)
- Usually at least two layers: skeleton and skin
- More layers: anatomy-based approach

VH animation:
- Skeleton motion
- Consequent skin blending
Graphical representation of VHs

Skeleton + skinning
Skinning

A simple deformation algorithm allowing a skeleton to continuously animate an associated skin mesh.

Each skin vertex is influenced by one or more bones of the skeleton, depending on a system of weights.

To connect the skin to the bones (rigging), envelopes of predefined shape and size can be used, with an internal and an external boundary:
- Vertices inside the internal boundary are given weight = 1
- Vertices outside the external boundary are given weight = 0
- Vertices between the two boundaries are given a weight between 0 and 1
- A vertex assumes the position related to the envelope it is included into
- If multiple envelopes include a vertex, it will assume an intermediate position given by the weighted average of the positions

Blending formula:

v_blend = v_1 w_1 + … + v_{n-1} w_{n-1} + v_n (1 − Σ_i w_i)
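The blending formula can be sketched directly: v_i is the vertex position as already transformed by bone i, and the last bone takes the residual weight so that all weights sum to 1.

```python
def blend_vertex(bone_positions, weights):
    # v_blend = v1*w1 + ... + v_{n-1}*w_{n-1} + v_n*(1 - sum(w_i))
    # weights holds w_1 .. w_{n-1}; the last bone gets the residual.
    w = list(weights) + [1.0 - sum(weights)]
    return tuple(sum(p[k] * wi for p, wi in zip(bone_positions, w))
                 for k in range(3))

# A vertex pulled by two bones: 25% toward bone 1, 75% toward bone 2
print(blend_vertex([(0, 0, 0), (2, 0, 0)], [0.25]))  # blended position (1.5, 0, 0)
```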
Rigid bodies vs. skinning

Rigid bodies hierarchy:
- Pro: simple, lightweight
- Con: imprecise, does not simulate deformations

It can be considered a borderline case of blending, where each vertex is associated with one bone only, with weight = 1.
Skinning – Bind Pose

When the skin is linked to the skeleton, the "bind pose" is saved: the status, in world-space, of the transformation matrices of the bones and of the skin, which will be used in the blending stage.

During blending, each skin vertex:
- Is transformed from the skin local-space into world-space
- Then from world-space into the local space of each bone it is connected to
- Undergoes all the animations of each bone it is connected to, and for each one a corresponding new position is computed
- All the new positions are transformed into world-space and then weighted
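The four steps above can be sketched with 4×4 homogeneous matrices (NumPy for the algebra; the function and parameter names are illustrative, and the inverse bind matrices would normally be precomputed once rather than inverted per vertex):

```python
import numpy as np

def skin_vertex(v_local, skin_bind, bone_binds, bone_anims, weights):
    # 1. skin local-space -> world-space (bind pose)
    v_world = skin_bind @ np.append(np.asarray(v_local, float), 1.0)
    blended = np.zeros(4)
    for bind, anim, w in zip(bone_binds, bone_anims, weights):
        # 2. world-space -> bone local-space (inverse of the bind matrix)
        v_bone = np.linalg.inv(bind) @ v_world
        # 3./4. apply the bone's animation (back to world-space) and weight
        blended += w * (anim @ v_bone)
    return blended[:3]
```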
Skinning – Vertex shader

When dealing with complex skins and skeletons, blending can be CPU intensive.

The same operation can be performed on the GPU by programming a suitable vertex shader.

Important: in order to produce correct lighting, the normals must be blended too!
Animation techniques
Forward kinematics

A human body can be sketched as a connected set of:
- links (arm, forearm, etc.)
- joints (connecting links: elbow, shoulder etc.)

In order to animate a skeleton, the joint angles must be altered:
- In a 2-layer representation, only the bones are animated, composing each frame a particular posture, corresponding to a particular configuration of the joint angles array
- The skin is then blended accordingly

Needed info:
- Anatomy-based joint angle limits
- Masses of the links (only for PBM)
http://www.youtube.com/watch?feature=endscreen&NR=1&v=3ZcYSKVDlOc
http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters
http://www.youtube.com/watch?v=VjsuBT4Npvk&NR=1&feature=endscreen
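For intuition, forward kinematics for a planar chain can be sketched in a few lines — each joint angle is relative to the previous link, and walking along the links yields the end-effector position (link lengths and angles below are illustrative):

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    # Accumulate the relative joint angles and advance along each
    # link; returns the end-effector position (x, y).
    x = y = total_angle = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        total_angle += angle
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y
```

E.g. a two-link arm with the elbow bent 90° reaches (1, 1): `forward_kinematics([1.0, 1.0], [0.0, math.pi / 2])`.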
FK: Motion Capture

Joint angles can be computed (Motion Synthesis) or sampled using sensors (Motion Capture).

Motion capture pros:
- Realistic animation
- Little work for modelers (only fine tuning and junctions)
- Captures animation details that are imperceptible to the eye or difficult to synthesize

Cons:
- Requires expensive devices
- Data are samples, which are more difficult to process
Inverse kinematics

The (forward) kinematics of a connected structure is the process of computing the position in space of the structure's end-effector, given the joint angles.

Inverse kinematics (IK) is the opposite process: given the end-effector position (goal or target), it retrieves the configuration of the related joint angles. It is a process widely used in robotics.

Pros:
- Allows computing the joints starting only from some position information (typically hands, feet, head), reducing the number of needed sensors

Cons:
- In some "singularity" points more than one solution is possible. Although anatomical limits can help, it is not always possible to find the correct one.
http://demonstrations.wolfram.com/ForwardKinematics/
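A minimal IK sketch for a planar two-link arm: the closed-form solution below returns one of the two mirror ("elbow-up"/"elbow-down") configurations, illustrating the ambiguity mentioned above, and reports unreachable targets (link lengths are illustrative defaults):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    # Analytic IK for a planar 2-link arm: law of cosines for the
    # elbow, then the shoulder angle. Returns one of the two mirror
    # solutions, or None if the target is out of reach.
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return None  # target outside the reachable annulus
    theta2 = math.acos(c2)  # elbow angle (the mirror solution is -theta2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Running the result back through forward kinematics reproduces the target, which is a handy consistency check.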
Shape Interpolation

Two or more 3D reference meshes representing human body postures are morphed.

Generally the needed steps are:
1. Meshes are morphed
2. Texture coordinates are morphed
3. Texture maps are morphed

If the meshes are related to the same basic shape, steps 1 and 2 are basically linear interpolations. Step 3 is needed only for some visual effects.
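Since steps 1 and 2 reduce to linear interpolation when the meshes share topology, a sketch is just a per-vertex lerp:

```python
def morph(mesh_a, mesh_b, t):
    # Per-vertex linear interpolation between two postures with the
    # same vertex count and ordering (t = 0 -> mesh_a, t = 1 -> mesh_b).
    return [tuple((1.0 - t) * a + t * b for a, b in zip(va, vb))
            for va, vb in zip(mesh_a, mesh_b)]
```

Animating t from 0 to 1 over time produces the morph between the two reference postures.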
Keyframe Interpolation

Sometimes in the literature this is a synonym of Shape Interpolation; our definition refers to Skeletal Animation:
- Reference values are not directly meshes, but rather skeleton postures related to particular keyframes
- Interpolation takes place on these values, determining new skeleton postures
- The skin is blended frame by frame based on the new interpolated skeleton postures
Physically based modeling

Animations based on kinematics do not take into account physical effects like gravity or inertia.

Rather than setting up a kinematics problem, we can set up a dynamics problem, also considering masses and forces.

Inverse dynamics is also possible (for each joint, compute the forces and torques generating a desired movement).

Modeling dynamics is DESIRABLE to correctly manage the interaction between the VH and the VE (collision detection and management etc.).
Behavioural modeling

VHs must be able to access information about the VE, either directly (unrealistic) or by means of a system of virtual sensors.

This information will drive their actions (behavioural modeling):
- locomotion driven by sight
- object manipulation
- feedback to acoustic stimuli
- etc.

Behaviours can be scripted or procedurally evaluated.

Other behavioural issues:
- Interaction among VHs
- Interaction between VHs and real humans
Standards
H-Anim

Humanoid Animation:
- Target: creation of a library of interchangeable humanoids and of authoring tools to create new humanoids and animations
- Supports keyframe, IK, FK, etc.
- http://www.h-anim.org/Specifications/H-Anim1.1

Features:
- VRML 97 compliant
- Flexibility, no assumption on the application type
- Simple (deals only with VHs and no other articulated figures)
H-Anim – File format example

...
DEF hanim_l_shoulder Joint { name "l_shoulder"
  center 0.167 1.36 -0.0518
  children [
    DEF hanim_l_elbow Joint { name "l_elbow"
      center 0.196 1.07 -0.0518
      children [
        DEF hanim_l_wrist Joint { name "l_wrist"
          center 0.213 0.811 -0.0338
          children [
            DEF hanim_l_hand Segment { name "l_hand"
              ... } ] }
        DEF hanim_l_forearm Segment { name "l_forearm"
          ... } ] }
    DEF hanim_l_upperarm Segment { name "l_upperarm"
      ... } ] } ...
H-Anim - Hierarchy
MPEG-4: body animation

FBA: Facial and Body Animation in MPEG-4
- An MPEG-4 body is a collection of nodes
- The root, BodyNode, contains 3 nodes:
  - BAP (Body Animation Parameters): 296 parameters describing skeleton properties
  - Rendered Body: holds the DEFAULT skin information (shape + textures)
  - If a specific body must be rendered, the BDP (Body Definition Parameters) node is added, replacing the default of the Rendered Body. Skinning parameters may be specified.

The H-Anim group, coordinated with the FBA group of MPEG-4, has standardized the VH specifications, so as to produce coherent results in both environments.
Virtual Crowds
Virtual Crowds

- VHs are needed to populate VEs
- Some VEs (such as cities) need a high number of VHs
- Crowd simulation, because of its intrinsic complexity, cannot be managed as a sum of individual VHs
- Applications: entertainment, study of crowd flows (panic, disasters etc.)
- Requirements:
  - Graphics modeling
  - Behavioural modeling
Virtual Crowds
Virtual Crowds example

- Real-time oriented
- Allows rendering ~100K different VHs
- VHs rendered as precomputed impostors
- Several VH types, possibility of modulating colors and textures
- Real-time shadowing using shadow maps
Virtual Crowds

Behavioural algorithms:
- Collision avoidance
- Height check
- Interest attractors
- Exit search, visibility, flow inertia etc.
Acoustical Environment

When a sound is generated, it propagates as waves in a medium. The properties of the medium, and of the surrounding environment, influence how the sound is perceived.

A complete acoustical field is composed of:
- Sound sources
- Listener
- Environment
Acoustical environment

Sound source: an object producing sound waves, which are transmitted (usually) along a preferential direction.

Environment: once produced, the wave propagates in the environment, which can modify the wave's properties.

Listener: an object receiving sound waves. By processing these waves, it can retrieve information about the sound source and the environment.
Environmental effects

Falloff: decay of the sound intensity as the distance between the listener and the sound source increases.
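A common falloff sketch is the inverse-distance model; the clamping at a reference distance is an assumption here, mirroring what typical 3D-audio APIs do:

```python
def falloff_gain(distance, ref_distance=1.0):
    # Inverse-distance falloff: full gain up to the reference
    # distance, then the gain halves each time the distance doubles.
    return ref_distance / max(distance, ref_distance)

print(falloff_gain(2.0))  # 0.5
```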
Environmental effects

Reflection: when a sound wave changes propagation medium, a share of the wave is transmitted into the new medium and the rest is reflected (related phenomena: echo, …).
Environmental effects

Diffraction: when a sound wave meets an obstacle, it bends its path so as to move around the obstacle (like water waves).
Environmental effects

Reverberation: depending on their shapes and materials, the objects inside an environment are characterized by reflection and absorption parameters. These modify the sound wave; the human brain can perceive first-order reflections individually, while higher-order reflections are perceived as combined and form the reverberation. It is an important acoustical phenomenon, as there is only one direct path and many indirect ones; usually most of the acoustic energy reaching a listener comes from reflections.
Reverberation

Reverberation provides important perceptual information about the type and the size of the environment:
- Short duration → sound almost all direct → clue of a small environment
- Long duration → well distinct from the original sound → clue of a big environment
- Much energy in high frequencies → clue of a reflective environment (not absorbing high frequencies)
- Little energy in high frequencies → clue of a soundproof environment

Reverberation also provides positional information: when the sound source moves away, the direct component decreases whilst the reverberation remains unaltered.
Sound localization

Interaural Intensity Difference (IID): the sound is perceived as more intense by the ear closer to the sound source, not only because of the distance but also because of head masking.
Sound localization

Interaural Time Difference (ITD): the sound is perceived earlier by the ear closer to the sound source.
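The ITD can be approximated with Woodworth's spherical-head formula; the head radius and speed of sound below are typical assumed values, not taken from the slides:

```python
import math

def itd_seconds(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    # Woodworth approximation: ITD = (r / c) * (theta + sin(theta)),
    # with theta the source azimuth relative to straight ahead.
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))
```

A source directly ahead gives 0; a source at the side gives roughly two thirds of a millisecond, which is the cue range the brain exploits.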
Virtual Reality: History
Sound localizationSound localization
The human brain uses ITD + The human brain uses ITD + IID effects in order to IID effects in order to determine the position of a determine the position of a sound source inside a cone.sound source inside a cone.
Although useful, ITD and IID have important limits:Although useful, ITD and IID have important limits: Internalization: using headphones the listener correctly Internalization: using headphones the listener correctly
perceives lateralization but the sound appears to be inside perceives lateralization but the sound appears to be inside the headthe head
They do not help in perceiving “up-down” and “rear-front” They do not help in perceiving “up-down” and “rear-front” differences.differences.
Each listener perceives his own “version” of a sound, due to his head/body shape. To correctly perceive 3D localization, these anatomical parameters must be taken into account.
HRTF
We must move to a head-centered acoustic system and compute its transfer function.
HRTF = Head Related Transfer Function
Given input signals (sound sources) and the transfer function, output signals (perceived sounds) can be calculated
The HRTF is unique for each human being (an “ear-print”); however, an “average” approximated HRTF can be computed and used
(Figure: sound source → HRTF → perceived sound)
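In the time domain the transfer function corresponds to an impulse response per ear (HRIR), and the perceived signals are obtained by convolution. A minimal sketch, with made-up 3-tap impulse responses standing in for measured data:

```python
import numpy as np

# Hypothetical, tiny HRIRs for one source position; real measured
# HRIRs have hundreds of taps per ear.
hrir_left  = np.array([0.9, 0.3, 0.1])
hrir_right = np.array([0.5, 0.4, 0.2])

source = np.array([1.0, 0.0, -1.0, 0.0])  # non-directional input signal

# Perceived (localized) signals = source convolved with each ear's HRIR.
perceived_left  = np.convolve(source, hrir_left)
perceived_right = np.convolve(source, hrir_right)
print(perceived_left)   # [ 0.9  0.3 -0.8 -0.3 -0.1  0. ]
```

Feeding the two filtered signals to the left and right headphone channels is what makes the mono source appear to come from that position.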
HRTF: analysis
A possible process to calculate an HRTF:
– Two microphones are put close to the L and R ear canals (either of the user or using a mannequin)
– A loudspeaker is put in a known position P.
– A known signal is played.
– The sound is recorded using the mics.
– By comparing the original sound wave with the resulting output, the HRTF is computed for THAT P.
– The process is repeated, moving P onto a sphere.
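The "comparison" step can be done in the frequency domain: dividing the spectrum of the recorded signal by the spectrum of the played signal recovers the transfer function for that position. A sketch in which a hypothetical 3-tap filter stands in for the real head/ear acoustic path:

```python
import numpy as np

rng = np.random.default_rng(0)
played = rng.standard_normal(256)           # the known test signal

# Simulated "measurement": a made-up impulse response plays the role
# of the real acoustic path from P to the eardrum.
true_ir = np.array([0.0, 0.8, 0.3])
recorded = np.convolve(played, true_ir)     # what the microphone captures

# HRTF for THIS position = recorded spectrum / played spectrum.
n = len(recorded)
H = np.fft.rfft(recorded) / np.fft.rfft(played, n)
estimated_ir = np.fft.irfft(H, n)
print(np.round(estimated_ir[:3], 3))        # recovers ~ [0.0, 0.8, 0.3]
```

In practice the played signal is chosen (sweeps, noise) so that its spectrum has no near-zero bins, keeping the division well conditioned.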
HRTF: analysis
HRTF: synthesis
The described process produces an HRTF table related to a particular head
This HRTF must be synthesized in order to be used in real time in a VR application
To this purpose, special real-time DSP (Digital Signal Processing) algorithms are realized, implementing filters to be applied to non-directional input signals which are, in this way, “localized”
Audio Modeling
Virtual Environment Workflow
(Workflow diagram: Modelling — Synthesis, Sampling, Behaviour, Properties → Management → Rendering / Interaction ↔ USER, within the VIRTUAL ENVIRONMENT)
Sound Modeling
Techniques:
– Playback of sampled signals
Pros: maximum fidelity of THAT single event
Cons: static, non-reactive, repetitive
– Signal-based synthesis (subtractive, additive, FM…)
Pros: better flexibility, computationally undemanding
Cons: limited, “artificial” sound
– Physical-based modeling
Pros: realistic, highly reactive
Cons: complex (both for calculation and control), often “too perfect”
These techniques stress how the sound is produced.
Alternative: stress how we perceive the sound (modal synthesis)
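As a toy example of the signal-based family, classic FM synthesis; the carrier/modulator frequencies and modulation index below are arbitrary illustrative choices:

```python
import math

def fm_synth(duration_s=0.5, sr=8000, f_carrier=440.0, f_mod=110.0, index=2.0):
    """Chowning-style FM: y(t) = sin(2*pi*fc*t + I * sin(2*pi*fm*t)).
    Varying the modulation index I changes the brightness of the timbre."""
    n = int(duration_s * sr)
    return [math.sin(2 * math.pi * f_carrier * t / sr
                     + index * math.sin(2 * math.pi * f_mod * t / sr))
            for t in range(n)]

samples = fm_synth()
print(len(samples))  # 4000 samples of a synthetic tone
```

Cheap to compute and easy to parameterize, but the result sounds characteristically "electronic" — the Cons listed above.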
Audio Rendering
SW for 3D Audio Rendering
Basic concepts of 3D Audio APIs are very similar to those of 3D Graphics.
To define a virtual acoustic scenario we must define:
– Listener: corresponding to the camera. It is defined by a position and an orientation (top and front vectors)
– Sound Sources: like light sources, they can be omnidirectional, directional or spot. Minimum and maximum distances (like the far and near planes) exist.
– Environment: defines properties of reflections, absorption etc. In basic APIs it is often neglected
Examples of 3D Audio APIs:
– DirectSound 3D: Microsoft only; HW support now discontinued.
– OpenAL: cross-platform, HW support, 3D only
– EAX: available for DS3D and OpenAL; manages environment effects.
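The minimum/maximum source distances drive attenuation. A sketch of the inverse-distance-clamped gain model (the formula OpenAL uses for its AL_INVERSE_DISTANCE_CLAMPED mode; the parameter values here are made up):

```python
def source_gain(distance, ref_distance=1.0, max_distance=50.0, rolloff=1.0):
    """Inverse-distance-clamped model:
    gain = ref / (ref + rolloff * (d - ref)),
    with d clamped to the [ref_distance, max_distance] range."""
    d = min(max(distance, ref_distance), max_distance)
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))

print(source_gain(0.5))    # inside the reference distance: full gain 1.0
print(source_gain(2.0))    # 1 / (1 + 1*(2-1)) = 0.5
print(source_gain(100.0))  # clamped at max_distance: 1/50 = 0.02
```

The reference distance plays the role of the near plane (no further amplification when closer) and the maximum distance that of the far plane (no further attenuation beyond it).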