Mocap Basics for Science

Page 1: Mocap Basics for Science

Motion Capture Technology:

The Basics

This paper is an informal introduction to motion capture for use in science – i.e. for motion analysis. We examine the process, the purpose, and the technologies. It can also serve as a buyer's guide before you purchase a motion capture system, so that you get the right system for your purposes; otherwise you can spend a huge amount of money and still not get what you need. This paper will give you the big picture of what is possible given the state of technology today (June 2010). We will try to keep it updated as the technologies change.

MOCAP in the news: Picture from the Onion http://www.theonion.com/content/news/obama_outfitted_with_238_motion

Page 2: Mocap Basics for Science

Table of Contents

Science not Entertainment ................................................................................................. 3

Who needs Motion Capture? .............................................................................................. 4

How Does 3D Motion Capture Work? ................................................................................... 5

Working Backwards Trick ................................................................................................... 6

Analysis Software ............................................................................................................. 7

Motion Analysis Fundamentals ........................................................................................... 7

Modeling ......................................................................................................................... 9

Types of Motion Capture Systems ...................................................................................... 10

Markerless Systems ..................................................................................................... 10

Marker Based Systems ................................................................................................. 11

Passive Marker Systems ................................................................................................... 12

Calibration, Linearization, and Tracking .............................................................................. 12

Active Marker Systems ..................................................................................................... 14

Electromagnetic and Inertial Sensors ................................................................................. 14

The Motion Analysis Process ............................................................................................. 15

Motion Capture – Calibration Trial or Standing Trial .......................................................... 15

Signal and Event Processing ......................................................................................... 16

Modeling .................................................................................................................... 16

Analysis ..................................................................................................................... 16

Choosing a System .......................................................................................................... 17

Page 3: Mocap Basics for Science

Science not Entertainment

If you are looking to make 3D animated movies or computer/video games, this paper will likely be interesting – but it will not help you. We will be looking at the tools for scientific motion analysis, not motion simulation. While many of the same technologies are used to capture movement – video cameras, infrared cameras, 3D sensor arrays, and accelerometer-based systems – the key difference is in how you capture data and what you do with the data after you have captured it (ignoring real-time processing for now).

In the animation/entertainment world, the goal is to translate human (or animal) movement into an animation package for use as an avatar, or to simulate a complex movement so that it can be modified or enhanced in some manner. Thus, various standards have been developed over the years for getting real-life movement data into animation software products. The BVH file format, the Biped Model, the Quadruped Model, and other standards are used in this industry. Since the goal of animation is to make a digital version of a smooth and/or realistic movement, these systems automatically filter, smooth, stretch, and fit data into formats that an artist can work with.

For scientific and biomechanical studies, animation formats and standards simply will not work. Scientific motion analysis strives for accuracy in data collection, and each potential use for motion capture very likely requires a specialized process for data collection – making standard processes almost impossible without imposing severe limitations on the potential for analysis. The need for precision and accuracy makes the selection of tools (motion capture hardware and analysis software) quite important, and despite what some vendors may represent, no one system or manufacturer has a solution for every situation.

With this context in mind, we will continue by examining where motion capture technologies may be useful, what kinds of systems work in various situations, and how the analysis might be done. Then we show how to work backwards – from expected result to data collection – in order to pick a good system and analysis solution.

Page 4: Mocap Basics for Science

Who needs Motion Capture?

From dance choreography to shark swimming, there is tremendous value in studying natural movement. The role of motion capture systems is to translate that movement into data that can be analyzed. While most often associated with movie special effects and building characters for video games, motion capture data has a very valuable role in hundreds of applications. With it we can copy, analyze, and react to movements in literally thousands of different ways. For example, by accurately measuring human movements, quantitative assessments are made to promote effective physical therapies, reduce pain, or prevent sports injuries. The list below illustrates just a tiny sample of existing motion capture applications.

Animation: Special effects, video game avatars, animated movies, choreography
Industrial: Machine controls, robotics, VR CAD, crash testing, backpack design
Commercial: Sports equipment design and performance, tool manufacture, shoes
Sports: Performance assessments, training, troubleshooting, equipment selection
Sports Medicine: Injury prevention, rehabilitation, pre-surgical assessments, concussions
Physical Therapy: Gait analysis, performance assessments, training, rehabilitation effectiveness
Orthopedics: Spine studies, functional capacity, design/fitting of prosthetics and orthotics
Engineering: Structural assessments, robotics, exoskeletons, ship design, load testing
Ergonomics: Usage/usability studies, home and office assessments, repetition analysis
Psychology: Facial recognition, VR interactions, behavior studies, animal studies
Surgery: Pre- and post-operation assessments, virtual surgery, stroke assessments
Neuroscience: Cognitive/motor assessments, traumatic brain injury assessments, VR
Veterinary: Injury prevention, training, rehabilitation, swimming
Legal / Insurance: Disability assessments, functional capacity testing

By collecting movement data from large populations, we can build databases of good (aka "normal") data to use for comparing and evaluating disabilities, performance, and injuries, and for preventing injuries or physical problems. Applied mathematics may be used to predict the effects of a movement – how changes to some aspect of a movement will affect the whole – thus providing a valuable tool for pre-surgical and post-surgical assessments, sports training, and injury prevention. Neuroscientists have also found that changes in human movement can be correlated to strokes and traumatic brain injuries, and so motion capture has the potential for rapid screening and predictive assessments. Motion capture data has been used in sports medicine and physical therapy for decades and has shown tremendous value. However, the main obstacles to its widespread acceptance have been the difficulties in collecting data, the high cost, and complex result reporting. Fortunately, many of these hurdles have now been overcome.

Page 5: Mocap Basics for Science

How Does 3D Motion Capture Work?

The study of human movement is complex, but it is not new. Measuring it, however, is extremely difficult because human movement happens in 3 dimensions. A video camera can capture movement, but it records only a 2-dimensional image. We want to be able to "get behind" the image, to move it around, and to examine a movement from multiple viewpoints. For that, we need some sort of physical system to record movement as it happens from several perspectives – like having 2 eyes to perceive depth. So, a 3-dimensional capture of movement requires multiple perspectives, and thus specialized tools.

Of course, recording movement data from multiple angles is just the first step of the process. We must then correlate the data to make a 3D model, and from that model we can analyze movements to derive information. These steps are mentioned here as a foreshadowing of the differences we will see between the practice of motion capture and motion analysis.

It is critical to understand the basic principles of motion capture before selecting any motion capture system. You need to know the way movement is observed and tracked, how the data obtained is interpreted and stored, and what the limitations are of the technological approach used. Otherwise, you could spend big money to no avail, or you might be able to get data that results in a seriously flawed analysis because of technical limitations. For example, we occasionally see data from new users who spent lots of money and time studying all the electronics and specifications of a motion capture system, only to discover later that the analysis tools and marker placement strategies were far more important. Thus they have extremely high quality data that has no value. In the animation world, it is good to have small odd movements smoothed out automatically, since no analysis is done and marker placement isn't all that important. However, that same approach would hide key data when trying to identify a disability or to evaluate a therapy, if you didn't realize that the data was filtered and/or missing key information due to restrictions or bad marker placement.

A great many types of systems have been developed to capture motion and record 3D movements. A summary of the popular technologies commonly used is below, but it is by no means all-inclusive – there also exist several other systems such as hybrid, ultrasound, laser, and radio systems. Also, there are less accurate historical approaches, like exoskeletons and hinged devices that get strapped on – but since they are seldom viable, we will ignore these older mechanical tools and approaches.
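To make the idea of combining multiple camera views concrete, here is a minimal Python sketch (using NumPy, and not any vendor's actual algorithm) of how two calibrated 2D views of the same marker can be turned into one 3D point. The projection matrices P1 and P2 are hypothetical values that a real system would obtain through its calibration procedure.

```python
# A minimal sketch of linear (DLT) triangulation: two calibrated camera views
# of one marker become a single 3D point. Not any vendor's implementation.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Estimate one 3D point from two camera views.

    P1, P2 : 3x4 camera projection matrices (from calibration).
    uv1, uv2 : (u, v) image coordinates of the same marker in each camera.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # convert from homogeneous coordinates

# Hypothetical setup: camera 1 at the lab origin, camera 2 offset 0.2 m along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
point = triangulate(P1, P2, (0.05, 0.025), (-0.05, 0.025))
print(np.round(point, 3))        # -> approximately [0.1, 0.05, 2.0]
```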

[Images: 3D sensor array, high-end wireless infrared camera, inertial sensor, high-speed video camera, low-end infrared camera]

Page 6: Mocap Basics for Science

Tool: Video Cameras
Technology: 1) Record subject from multiple viewpoints with a common reference image; correlate images to generate wireframe geometries. 2) Record subject with reflective markers attached; correlate images to markers.
Limitations: 1) Fine details can get lost (hand movements, wrist rotations, etc.); low accuracy; occlusions. 2) Limited accuracy due to lenses and guessing at marker centroids. 3) Image data requires post-processing to digitize marker data; real-time processing is limited.

Tool: Infra-Red Cameras (high end)
Technology: 1) Record subject with reflective markers attached. 2) Marker centroids calculated very accurately by the cameras. 3) Good for large volumes. 4) Some work in outside lighting. 5) High speed (frame rate) available.
Limitations: 1) Potential marker occlusions. 2) Volume calibration and marker setup time.

Tool: Infra-Red Cameras (low end)
Technology: 1) Record subject with reflective markers attached. 2) Marker centroids can be calculated very accurately by the cameras. 3) Inexpensive. 4) Portable.
Limitations: 1) Potential marker occlusions. 2) Volume calibration and marker setup time. 3) Not good for large volumes. 4) Subject to people bumping into them. 5) Lower frame rates.

Tool: Inertial Sensors
Technology: Self-contained markers attached to tracked segments; often wireless.
Limitations: Signal drift; low accuracy; relative movement tracking not tied to the environment.

Tool: Electromagnetic Sensors
Technology: Markers are active sensors attached to segments whose locations are picked up by a receiver.
Limitations: Sometimes subject to interference; limited range; signal drift.

Tool: Sensor Arrays
Technology: Record subject with tiny active (usually LED) markers attached; factory calibrated.
Limitations: Potential marker occlusions; marker setup time and wires.

Working Backwards Trick

Before we continue discussing technologies and tools for capturing motion, we need to first understand what will eventually be done with the data. This is arguably the most important aspect of motion analysis, and if you are in the market for a motion capture system, this is where the concept of working backwards is helpful. The idea is to identify the desired analysis results first; from those, determine 1) the software needed to produce them, then 2) the type of data needed and its characteristics, and finally 3) the type of hardware that can produce that data. Historically the approach has been to evaluate and buy a collection system first, then learn how to collect and process data, and finally, hope for good results.

Page 7: Mocap Basics for Science

Analysis Software

Oddly enough, there are many more ways to collect data than there are to process it. Industrial applications have their own set of standard products (like Jack from UGS/Siemens). Because this author is in the biomechanics field of rehabilitation, sports, orthopedics, neuroscience, and such, I'll mention some of the products there. Our own product, C-Motion's Visual3D, is clearly the high-end, platform-independent analysis solution for kinematics and kinetics. Competitors with solutions for specific types of analysis, such as AnyBody, exist, as well as more general but limited solutions from Vicon (Nexus/Polygon/Bodybuilder) and other hardware manufacturers. Motion Analysis Corp also sells some older software with their systems that can do limited types of analysis (KinTools, OrthoTrak). Of course, many people feel they can use Matlab to write their own biomechanical analysis software. After a while, they become Visual3D customers. C-Motion can also be contracted to create custom applications using the Visual3D libraries if needed.

Forward dynamic simulation products are now available in the form of OpenSim – with integration to Visual3D supported – improving and growing all the time. Alternatives include the commercial SIMM product from Motion Analysis Corp, and roll-your-own math routines from Adams and SDfast. However, only Visual3D lets you work with all the systems, giving you upgrade flexibility and collaborative abilities. It also provides all the analysis capabilities a researcher could want, has expert PhD-level tech support, and thousands of users world-wide. That's the end of my sales pitch.

From this point we'll assume that the analysis and reporting needs have been thought out, and we can re-focus on determining how to get data into the chosen analysis package of Visual3D (sorry, I couldn't help myself). So, here is what is involved in any analysis…

Motion Analysis Fundamentals

Each human (or animal) movement involves a skeleton and hundreds of unseen muscles rotating, flexing, stretching, and compressing in complex ways. Some movements, like those of the shoulders (scapula), ankles, and knee caps (patella), are controlled by muscles, cartilage, and tendons rather than bone joints, so those movements are more fluid and extremely difficult to measure. Analyzing any movement data in 3D space requires basic information such as:

• The starting, intermediate, and ending positions
• Angles and rotations between parts
• A relative and/or an external frame of reference

For 3D motion analysis, the trick is to convert motion capture data into a set of moving lines and/or geometrical objects from which actual 3D measurements can be made. Not only does the movement itself need to be tracked, but we also need to collect information about the part being studied. Using an elbow joint

Page 8: Mocap Basics for Science

as an example, we must identify the positions of the upper arm and forearm in relation to each other as they move and twist, so we can define the actual joint location and measure the angles between them (as well as rotations, speeds, accelerations, and so forth). But to be accurate, we also need to know geometric properties, like the length and width of each component.

In motion analysis, any moving part – the upper arm and the forearm in our example – is called a segment, and each segment has properties of its own, such as length, mass, size, and its own coordinate system on which to base those measurements. The geometric information is needed so that you can calculate the exact center of the segment, or center line, from which to base measurements. A joint is the point where 2 segments intersect. Measuring joint angles in 3D space and other characteristics related to segments and their movements is essential. Most advanced motion capture systems collect information in ways that let you define custom segments. Otherwise, you may be forced to use a predefined model that may, or may not, be modifiable.

It is also helpful to identify a fixed point in the volume where the movement takes place to use as a reference location. This is called the origin of the laboratory coordinate system – point (0,0,0) in an (X,Y,Z) reference frame. The laboratory coordinate system is useful for measuring angles as projections onto a plane, and for defining environmental characteristics, like where the floor is located. Since motion capture analysis typically views each segment as a variation of a sphere, cylinder, or cone, an easy way to imagine what the system is seeing is to envision an artist's mannequin moving about rather than a real person. Each segment has an associated geometry and geometric properties. A subject in a motion capture trial is usually identified only by their movements, and nothing else.

The measurement of distances, angles, rotations, velocities, accelerations, angular velocities, and angular accelerations in 3D is called kinematics. All of the systems we will discuss provide information for a kinematic analysis. However, to understand the forces, powers, and moments involved in a movement, additional information about mass, gravity, loads, and inertia is needed. This type of data is collected using force plates, instrumented treadmills, load cells, or other devices. The mathematics for this type of processing is called kinetics and inverse dynamics. Mathematical simulations using motion capture data (forward dynamics) can make predictions of movements based on changes to muscles, positions, and forces. There are also hybrid and derivative techniques such as Inverse Kinematics, Global Optimization Modeling, Induced Acceleration, and Induced Velocity, among others, that may be part of a motion analysis protocol and thus should be mentioned.
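As a tiny, hypothetical illustration of one kinematic quantity, the Python sketch below computes the included elbow angle from three marker positions (shoulder, elbow, wrist). A real analysis would use full segment coordinate systems as described above, but the basic idea of turning 3D positions into angles is the same. The marker coordinates are made-up example values.

```python
# Simplified elbow angle from three markers. Real analyses use segment
# coordinate systems; this only shows how positions become an angle.
import numpy as np

def included_angle(proximal, joint, distal):
    """Angle (degrees) at 'joint' between the two adjoining segments."""
    u = np.asarray(proximal) - np.asarray(joint)   # upper-arm direction
    v = np.asarray(distal) - np.asarray(joint)     # forearm direction
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example frame (meters, lab coordinates): a roughly right-angled elbow.
shoulder = [0.00, 0.00, 1.40]
elbow    = [0.00, 0.30, 1.40]
wrist    = [0.00, 0.30, 1.10]
print(round(included_angle(shoulder, elbow, wrist), 1))   # -> 90.0
```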

Page 9: Mocap Basics for Science

Modeling

A model is a set of segment definitions and the rules for specifying how they move. The movement data from any subject must be correlated to a model so that the movement can be analyzed or processed. The model can also include things in addition to the movement data. For example, one extra segment property may be a 3D scanned image of a bone attached to the segment. This is how a motion capture of a subject is displayed by the motion analysis software as a moving skeleton.

The goal of motion capture is to identify the movement of segments – arms, legs, feet, finger bones, the lumbar spinal region, or even individual vertebrae – and the exact physical position and orientation of each segment as it moves. Mathematically, this is represented with transformation matrices. Segments are generally linked together in some way, and each segment has a length, mass, and a varying width depending upon its geometric properties. Each segment also has its very own coordinate system. Independent segments, linked together but independently defined, can move in any direction and thus have 6 degrees of freedom. As segments are linked together, movement is constrained and so are the degrees of freedom. Many models of linked segments allow only 3 degrees of freedom, since their movements are restricted by sharing the same joint as part of their definitions.

There are several predefined models that exist in both the biomechanics and the animation worlds. These models require you to capture movement data in a prescribed manner, and then they will adjust your data to make it fit into the model. For animation applications this is not much of a problem, but a serious loss of real movement information is incurred. Some predefined models, such as the biped or quadruped models used in the animation industry, lack the mathematical ability to handle actual human movements. For this reason, animation formats and models cannot be used for scientific or medical purposes, and will produce very misleading (i.e. incorrect) results if used for sports. If analytical accuracy is needed, you must have the ability to define your own models based on the process you use to collect data and focused on the results you desire. Even some standard biomechanics models will produce poor results if not used exactly the way they were intended.

Now, here is where it gets tricky. There is another type of model, depending upon the motion capture system you are using, and both types are simply called "the model." The model we have covered so far is for analysis and finding meaningful results. The other type of model you may encounter is used only by certain vendors for tracking the markers or segments as they move from frame to frame. This "tracking model" is for the motion capture system to automatically identify markers (or segments) so that each one is not confused with another as movements get complicated and things cross over each other or spin around. The problem with the tracking model is that it is not good at defining segments or doing any sort of movement analysis. It does, however, provide just enough information for an animation system to use. Animation systems perform no analysis functions.

In summary, any time motion capture data is collected it must be mapped to a model in order to make sense and for an analysis to be possible. The quality of the model will determine the accuracy and value of the motion capture data.
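Here is a minimal sketch, using made-up numbers, of the bookkeeping just described: each segment's position and orientation (6 degrees of freedom) stored as a 4x4 transformation matrix, and a joint rotation obtained as the relative transform between two linked segments. It only illustrates the math, not any particular package's model format.

```python
# Segment poses as 4x4 homogeneous transforms; a joint angle falls out of the
# relative transform between two linked segments. Illustrative values only.
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_parent, T_child):
    """Pose of the child segment expressed in the parent segment's coordinates."""
    return np.linalg.inv(T_parent) @ T_child

# Thigh aligned with the lab; shank flexed 20 degrees about the lab X axis.
a = np.radians(20.0)
Rx = np.array([[1, 0, 0],
               [0, np.cos(a), -np.sin(a)],
               [0, np.sin(a),  np.cos(a)]])
thigh = pose(np.eye(3), [0.0, 0.0, 0.9])
shank = pose(Rx,        [0.0, 0.0, 0.5])

T_rel = relative_pose(thigh, shank)
flexion = np.degrees(np.arctan2(T_rel[2, 1], T_rel[1, 1]))
print(round(flexion, 1))   # -> 20.0
```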

Page 10: Mocap Basics for Science

Types of Motion Capture Systems

We will ignore the 2D video camera systems (even extremely high-speed ones) that simply record motion for playback and review purposes. We only mention this 2D technology to eliminate some confusion, because there is software available that uses video overlays to let you draw lines and measure angles. You can see this used on TV sporting events. Dartfish is one popular software package that does this. These systems have their place, and some can simulate a 3D analysis by having one camera process a front view and another a side view. These systems can also get very fancy, and make for good TV and for assessing some sports performance, but they are not very accurate, nor do they provide comprehensive 3-dimensional views of an entire movement. There are two common techniques for capturing 3D movements. One requires markers to identify positions on the subject, and the other is markerless.

Markerless Systems

There are two generations of markerless motion capture systems. Years ago, a way to measure movement was developed that used video cameras to film the movement. You first placed a big piece of graph paper (generally a big plastic chart) on the wall behind the subject to use as a guide for making measurements. To get a 3D result, several cameras would be used, correlating on one point on the graph paper. To define moving segments, a line was drawn (by hand or by a computer) up the center of each part of the body. Using this method, the whole body could be drawn as a stick figure of segments. A later approach mapped segments to a reference mannequin, which later became the standard for animation formats. There were several drawbacks to these approaches. One was that the processing of the video data was generally done by hand and was time consuming. Likewise, accuracy was very poor, and only certain large movements could be analyzed at all. Hands and feet were usually not even attempted.

Vast improvements to this approach have been made over the years, and now it is possible to eliminate the chart on the wall and for software to recognize the edges of a subject's body and thus create geometrical segments, or wireframes, which can be used for measuring. These geometric segments can be used by animation programs. In some systems the segments are fitted to predefined models to make processing faster. None of them are suitable for biomechanics yet. Some issues to be aware of in modern markerless tracking systems are that clothing will distort the geometry of a segment unless it is form-fitted; it can be very difficult to see the rotation of a segment (examine the small movements near your elbow when moving your arm and twisting your wrist); and small segments, like feet, hands, and fingers, are exceedingly difficult to track and measure this way. The accuracy of these systems in building wireframe segments also seems to depend on heavy computational processing, so computing capability is a factor in some systems in order to cut down on the lengthy processing times. This technology is still evolving and shows promise. These systems have shown value in animation and in animal studies, where putting a marker on a subject is not practical, but other restrictions make them non-general solutions.

Page 11: Mocap Basics for Science

Marker Based Systems

Most motion capture systems track some sort of markers placed on the subject. To see and record marker movements, some use video cameras or other special cameras, and others use special sensors. Since the goal is to accurately identify a segment as it moves, we can identify fixed points on one or more body parts and observe those points as they move – instead of the body itself. From careful marker placement we can very accurately derive segment definitions, their locations, and geometry. The movement of just the markers (not the subject) is observed, and a motion capture system calculates their locations and records them to a computer. In some technologies, like inertial sensors, the marker provides the data directly to a computer. The variety of "markers" used in this approach is extensive.

There are two types of marker-based systems, corresponding to the type of markers they use to track movements. The two marker categories are active and passive. In terms of accuracy, it turns out there is actually little difference between them. Active markers provide information about themselves to a system and generally require some sort of power source and communications mechanism to get that information into the motion capture system. Passive markers need no power and are simply reflective dots or spheres that a motion capture system can see.

Marker Type | Active/Passive | System Type | Some Vendors
Reflective Marker Spheres | Passive | IR/Optical Cameras | Vicon, Qualisys, Motion Analysis Corp, NaturalPoint, Innovision Systems, BTS, etc.
Flashing LEDs | Active (wired and wireless) | Sensor Arrays | Northern Digital, Charnwood Dynamics, Phoenix, PhaseSpace, etc.
Inertial Sensors | Active (wired and wireless) | Self-Contained | Animazoo, Sensorize, etc.
Electromagnetic Sensors | Active (wired and wireless) | Electromagnetic Receiver | Polhemus, Northern Digital, etc.

Page 12: Mocap Basics for Science

Passive Marker Systems

These systems consist of cameras and reflective markers. The cameras are generally sensitive to infrared light, and use a ring of LEDs around the camera lens to act like a flashlight and get reflections back from the markers. Software in the camera examines the reflections and determines the exact location of the center of the marker (the centroid) within the motion capture volume. (This is why spheres are used rather than stickers or other marker types.)
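As a toy illustration of the centroid idea (not the firmware any camera actually runs), the sketch below estimates a marker center as the intensity-weighted average of the bright pixels in a small grayscale image patch. Real cameras do this on-board with subpixel refinement; the image data here is made up.

```python
# Intensity-weighted centroid of a bright marker blob in a grayscale patch.
# A toy sketch only; real cameras refine this on-board.
import numpy as np

def centroid(patch, threshold=200):
    """Return (row, col) of the intensity-weighted centroid of bright pixels."""
    weights = patch * (patch >= threshold)       # keep only marker pixels
    rows, cols = np.indices(patch.shape)
    total = weights.sum()
    return (rows * weights).sum() / total, (cols * weights).sum() / total

patch = np.zeros((5, 5))
patch[1:4, 2:5] = 255                            # a small bright blob
print(centroid(patch))                           # -> (2.0, 3.0)
```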

Here are a few of the more well known passive systems...

Vendor | Web Site
Vicon | www.vicon.com
Qualisys | www.qualisys.se
Motion Analysis Corp | www.motionanalysis.com
NaturalPoint | www.naturalpoint.com/optitrack

These systems require a calibration step before you can use them. Generally, a baseline location is established by placing an L-shaped jig with some markers attached to it on the floor. The cameras are then adjusted so they can all see the jig, and this establishes a reference laboratory coordinate system and a lab origin. After that, a wand with a couple of markers on it (in exactly known positions) is waved about in the motion capture area so that the cameras can all focus on the markers, correlate their views, calibrate themselves, and correct for lens distortion. Once calibrated, the motion capture volume is established, and motion capture trials of people, animals, or objects can begin. The calibration step should be performed before every session to make sure you are collecting valid data.

Calibration, Linearization, and Tracking

Unfortunately, with passive marker systems you can't simply turn the system on and start collecting data. Each time you go to capture data with this type of system, you need to define what part of the lab is visible to the cameras. Calibration is the process of determining where all the cameras' viewing areas overlap – i.e. the 3D capture volume. Calibration typically consists of performing a data capture of a specially manufactured wand with 3 or more markers of known diameter and known distance from each other. The wand is spun, waved, and generally twisted about in the whole lab. The cameras all capture this data, and the calibration software then combines each camera's view into a comprehensive 3D volume. The calibration program will also adjust for the curvature of any lens (imagine a fish-eye lens) so that distance calculations are more accurate. This is called linearization. Often the accuracy of

Page 13: Mocap Basics for Science

a 3D motion capture system in determining where a target centroid is located is within a fraction of a millimeter. The picture here shows a set of boxes illustrating what parts of the volume one camera chip sees, with accuracy based upon the calibration and linearization routines of the software. The corners in this one seem to have problems, but even this simple calibration result shows a 0.091 mm accuracy level using a small NaturalPoint OptiTrack camera.

Once the 3D volume is known, we still need to be able to track each marker in the volume – and avoid getting it confused with another marker during a movement. This can be difficult. A common problem is marker flipping. During a rotating movement or spin, the system assumes that markers travel in straight lines, so if two markers are on opposite sides of an object, the system may confuse marker identification as the object moves. This effect can be reduced by placing markers asymmetrically, but it does not go away. Tracking software can be very sophisticated in trying to keep all the markers straight, but manual intervention is often needed. Some tracking software uses a stick model – where you draw lines between markers to build patterns that the software uses to help automatically identify markers in a movement trial. The pattern is called a "template," or sometimes, confusingly, a "model."
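To see why flipping happens, here is a bare-bones sketch (not any vendor's tracker) of frame-to-frame tracking by nearest neighbor. It works well when markers move only a little between frames, and it is exactly the kind of logic that can swap two opposing markers during a fast spin.

```python
# Greedy nearest-neighbor marker tracking between two frames. Illustrates
# the straight-line assumption behind marker flipping; not a real tracker.
import numpy as np

def track_frame(prev_labeled, current_unlabeled):
    """Assign each previously labeled marker to its nearest point in the new frame."""
    remaining = list(range(len(current_unlabeled)))
    labeled = {}
    for label, prev_pos in prev_labeled.items():
        dists = [np.linalg.norm(np.asarray(current_unlabeled[i]) - prev_pos)
                 for i in remaining]
        best = remaining.pop(int(np.argmin(dists)))
        labeled[label] = np.asarray(current_unlabeled[best])
    return labeled

# Two markers on opposite sides of an object that rotates ~180 degrees in one frame:
prev = {"front": np.array([0.1, 0.0, 1.0]), "back": np.array([-0.1, 0.0, 1.0])}
new_frame = [[-0.1, 0.0, 1.0], [0.1, 0.0, 1.0]]   # physical markers have swapped sides
print(track_frame(prev, new_frame))
# The tracker keeps each label at its old location, so the physical markers
# are now mislabeled - i.e. the markers have "flipped."
```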

In any case, understand that for any passive marker system, the calibration step is required before you use the system, and target/marker tracking software is an essential part of the system.

Page 14: Mocap Basics for Science

Active Marker Systems

These systems use self-identifying markers rather than generic reflective markers. The markers themselves are typically tiny LEDs that flash in patterns or have some other type of unique signature. This eliminates the need for the special tracking software that passive systems require. There is generally a sensor array, in the form of a long rectangular box, that picks up the active marker signals and tracks them. These systems save time by not having to calculate centroid locations, and can thus act as very high-speed camera systems. These systems can also slow down as the number of markers used increases, since each marker must either be polled or recognized one at a time. One aspect of active marker systems is that the sensor array for seeing the markers is generally calibrated at the factory. This means they do not need to be calibrated like a passive system in order to define the motion capture volume (i.e. no calibration wands). There are some hybrid systems, however, that use sensors that need to be calibrated.

Electromagnetic and Inertial Sensors

There are other types of active marker systems available, where LEDs are not used.

Electromagnetic systems have small 6 degree of freedom sensor coils (providing position and orientation information) that are placed on the subject. Sensor position information is picked up by a base unit (wired or wireless). These systems are typically less expensive than the LED systems.

The inertial sensor systems consist of accelerometers, and sometimes gyroscopes and magnetometers. The sensors send their position and orientation information back to a base unit, often wirelessly. These systems are also generally less expensive than LED systems, but often have issues with rapid movements, accuracy, and lower frame rates.
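The "signal drift" limitation is easy to demonstrate with synthetic data: double-integrating accelerometer samples turns even a tiny uncorrected bias into position error that grows with time squared. The bias value and sample rate below are arbitrary illustration numbers, not a claim about any particular sensor.

```python
# Why inertial systems drift: double integration of a small, constant
# accelerometer bias. Synthetic data, arbitrary numbers.
import numpy as np

fs = 100.0                       # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 seconds of data
true_accel = np.zeros_like(t)    # the sensor is actually sitting still
bias = 0.01                      # 0.01 m/s^2 of uncorrected sensor bias
measured = true_accel + bias

velocity = np.cumsum(measured) / fs          # integrate acceleration once
position = np.cumsum(velocity) / fs          # integrate again for position

print(round(position[-1], 3))    # ~0.5 m of apparent travel after 10 s at rest
```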

Here are a few of the more well known systems...

Page 15: Mocap Basics for Science

Vendor | Web Site
Northern Digital (active, passive, and electromagnetic) | www.ndigital.com
Charnwood Dynamics (CODA system) | www.charndyn.com
Phoenix Technologies | www.ptiphoenix.com
PhaseSpace (hybrid) | www.phasespace.com
Animazoo (inertial) | www.animazoo.com
FAB (Biosyn) (inertial) | www.biosynsystems.com
Polhemus (electromagnetic) | www.polhemus.com

The Motion Analysis Process

There are four tightly coupled steps involved in any biomechanics study or application. They are: 1) motion capture, 2) mathematical modeling, 3) analysis, and 4) reporting. Each step must be as error-free as possible because each step relies on the previous one, so errors compound. For that reason, effective biomechanics requires an in-depth knowledge of each step. It is essential to identify any assumptions made, or restrictions imposed on any step, by a tool, a manufacturer, a program, or a process. Otherwise, any results will be suspect. Blind reliance on any single step – whether a motion capture system, a model, a process, or an application – can lead to incorrect conclusions or untenable results, and the problem is often never detected. Once movement data has been collected, the role of the motion capture system stops and the analysis software takes precedence. In many situations this aspect of the analysis is overlooked, as motion capture vendors overwhelm their potential customers by focusing on all the special hardware features they have. Fortunately, Visual3D works with almost all systems and thus levels the field, and is even resold by many vendors so that they can legitimately focus on hardware.

Motion Capture – Calibration Trial or Standing Trial

Typically there is a model-defining motion capture trial done prior to capturing movement trials. Models are tied to marker sets (a set of prescribed locations for markers), so it is extremely important to place markers on the subject in a way that enables building the model. You first put all the markers you need to define segments on the subject, as well as the tracking markers that the cameras will see in the movement trials. Then the subject stands with arms outstretched for a one-to-ten-second motion capture. That data is then used to define the model. Visual3D can help speed up this process since it supports virtual markers, digitizing pointers, and functional joint center calculations – all techniques that let you use fewer markers and get through

[Diagram: Motion Capture → Model → Analysis → Report]

Page 16: Mocap Basics for Science

this phase faster. After the calibration trial (also called the standing trial), the markers used only for calibration purposes can optionally be removed. With just the tracking markers that the system can see, you then do all your movement trials. Some systems run faster with fewer markers to track.
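One common way functional joint centers are estimated (shown here as a generic sketch, not necessarily the algorithm Visual3D uses) is a least-squares sphere fit: if a marker on a segment rotates about a roughly fixed joint, its recorded positions lie on a sphere whose center is the joint center. The marker positions below are synthetic.

```python
# Functional joint center via a linear least-squares sphere fit.
# A generic sketch with synthetic data, not a specific product's method.
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere fit. points: (N, 3) marker positions."""
    P = np.asarray(points, dtype=float)
    # |p - c|^2 = r^2 rearranges to a system linear in (c, r^2 - |c|^2).
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                       # the estimated joint center

# Synthetic check: points on a sphere of radius 0.25 m centered at (0.1, 0.2, 0.9).
rng = np.random.default_rng(0)
directions = rng.normal(size=(200, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
points = np.array([0.1, 0.2, 0.9]) + 0.25 * directions
print(np.round(fit_sphere_center(points), 3))    # -> [0.1 0.2 0.9]
```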

Signal and Event Processing

After all the data has been collected, the motion capture data sometimes needs to be filtered or corrected. An analysis package must be able to handle signal processing, such as applying high-pass or low-pass filters, or even custom processing. It should be able to fix marker problems or interpolate missing signals. The software should recognize key events in your movement trials and let you create events based on pattern recognition or other approaches. At a minimum, you should be able to look at your data and validate its integrity.
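As an example of the kind of signal processing meant here, the Python sketch below fills a short marker gap by interpolation, applies a zero-lag low-pass Butterworth filter, and flags a trivial "event" at the peak of the smoothed signal. The 6 Hz cutoff and 100 Hz sample rate are common choices for human movement data, not requirements, and the signal itself is synthetic.

```python
# Gap filling, low-pass filtering, and a trivial event on a synthetic signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # capture rate (Hz)
t = np.arange(0, 2, 1 / fs)
trajectory = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)

# Simulate a short marker dropout and fill it by linear interpolation.
trajectory[50:60] = np.nan
gaps = np.isnan(trajectory)
trajectory[gaps] = np.interp(t[gaps], t[~gaps], trajectory[~gaps])

# Fourth-order, zero-lag (forward-backward) low-pass filter at 6 Hz.
b, a = butter(4, 6.0 / (fs / 2.0), btype="low")
smoothed = filtfilt(b, a, trajectory)

# A trivial "event": the frame where the smoothed signal peaks.
print("peak event at t =", round(t[np.argmax(smoothed)], 2), "s")
```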

Modeling

Creating a model means defining segments, at a minimum. You may be able to use a template, or sample definition, as a starting point. The data from the standing or calibration trial is used to create the model. Predefined models exist that have established marker sets and can serve both tracking and analysis roles. The most common predefined model is the Helen Hayes/Newington/Plug-in Gait model. It has some small variations and goes by several names, but fundamentally it is the same model, and it was developed specifically for gait analysis. There are a few software packages that can process data using this model, but there are limitations to what it can produce, and there are severe restrictions on marker placement that will cause problems if not followed explicitly every time, regardless of subject characteristics or limitations. If you want to create your own analysis models or tracking models, you will likely find that the software provided by most equipment vendors is insufficient, unless animation is your goal. That is one reason why people get Visual3D.

Analysis

Once a model has been defined, a movement trial using the same marker set can be applied to that model. Before a model is applied, the movement trial looks like a bunch of dots moving on the screen. If the model's segment definitions include graphics like bones or mannequin parts, then you should see a figure or skeleton moving after the model is applied to the data. At a minimum, you should be able to see a stick figure of connected segments. The analysis phase depends on the study you are doing. Animators need no analysis, since the data is simply exported to an animation program. A model could be constrained using Inverse Kinematics and exported to a forward dynamics application. A series of joint angles, powers, or moments could be calculated and evaluated. Kinematic studies could be performed, assessments made, and reports created. This is the one part that is unique to each user.

Page 17: Mocap Basics for Science

Choosing a System

This section is simply a set of things to look for in a system. Our advice, as stated earlier, is to work backwards by first determining your reporting and analysis needs, and then selecting the technology that can provide the data to meet those needs. Before selecting a system or technology solution, you need to have a good idea of what the end results should look like. For example: Do you need a set of detailed graphs and/or tables? Do you simply want to see an animation? Do you want a video picture for frame-by-frame viewing? Do you need instant feedback from a movement, or do you need to simply transfer movements from the real world to a virtual or animation world?

We naturally recommend having the modeling and analysis tools first, since Visual3D is a way to remove any dependencies on specific hardware vendors, software, models, or processes. It can let you start with very low-end systems, move up to more accurate or expensive systems, and add components as you need them. It ensures data integrity, regardless of collection systems, and enables collaboration with other labs. If your purpose is to examine sports performance, physical rehabilitation, disability assessments, functional assessments, or research, then you will need some flexibility in defining your models and powerful analysis software. Again, Visual3D is appropriate.

For hardware, you will always have some limitations to work with, so it is best to gather those up front. The most common limitation is usually the budget. Next is the amount of physical space available to collect data. Always ask a vendor to prove their solution in your own facility if possible. Understand the limitations of a capture technology as it applies to you. Optical systems need room for cameras; accelerometers provide relative movements but make it difficult to model 6-degree-of-freedom segments. Some systems do not support analog data input, so you can't use force plates or EMG with them. It is also important to recognize any limitations that the subjects you are analyzing may have. For example, in many animal studies the animals will chew off any markers you attach – regardless of type. Surgical implantation is a known solution. Active subjects may have trouble keeping markers attached or may have severe skin movement, making marker consistency problematic (i.e. marker clusters may be required).

Finally, with the advent of low-cost optical systems, it may now be possible to have 2 systems. One can be used for teaching or loaning out to other groups, while a bigger lab-based system is used for the more serious data collection tasks.

John Kiser
C-Motion, Inc.
©2009, 2010