===Markerless===
Emerging techniques and research in [[computer vision]] are leading to the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at [[Stanford University]], the [[University of Maryland]], [[MIT]], and the [[Max Planck Institute]], do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input, identify human forms, and break them down into constituent parts for tracking. [[ESC entertainment]], a subsidiary of [[Warner Brothers Pictures]] created especially to enable [[virtual cinematography]], used a technique called Universal Capture that utilized a [[multi-camera setup|seven-camera setup]] and tracked the [[optical flow]] of all [[pixel]]s over all the 2-D planes of the cameras for motion, [[gesture]] and [[facial expression]] capture, leading to photorealistic results.

====Traditional systems====
Traditionally, markerless optical motion tracking has been used to track various objects, including airplanes, launch vehicles, missiles and satellites. Many such applications occur outdoors, requiring differing lens and camera configurations, and the high-resolution images of the target can provide more information than just motion data. Images obtained from NASA's long-range tracking system during the fatal launch of the space shuttle ''Challenger'' provided crucial evidence about the cause of the accident. Optical tracking systems are also used to identify known spacecraft and space debris, although, compared to radar, they have the disadvantage that the objects must reflect or emit sufficient light.<ref>{{Cite journal| doi = 10.1007/BF00216781| title = Optical tracking of artificial satellites| year = 1963| last1 = Veis| first1 = G.| journal = Space Science Reviews| volume = 2| issue = 2| pages = 250–296| bibcode = 1963SSRv....2..250V| s2cid = 121533715}}</ref>

An optical tracking system typically consists of three subsystems: the optical imaging system, the mechanical tracking platform and the tracking computer.

The optical imaging system converts the light from the target area into a digital image that the tracking computer can process. Depending on the design of the tracking system, it can vary from something as simple as a standard digital camera to something as specialized as an astronomical telescope on a mountaintop. The specification of the optical imaging system determines the upper limit of the effective range of the tracking system.

The mechanical tracking platform holds the optical imaging system and manipulates it so that it always points at the target being tracked. The dynamics of the platform, combined with those of the imaging system, determine how well the tracker can keep a lock on a target whose speed changes rapidly.

The tracking computer captures the images from the optical imaging system, analyzes them to extract the target position, and controls the mechanical tracking platform to follow the target. This involves several challenges. First, the tracking computer has to capture the images at a relatively high frame rate, which places a bandwidth requirement on the image-capturing hardware. The second challenge is that the image-processing software has to extract the target from its background and calculate its position; several textbook image-processing algorithms are designed for this task, and the problem can be simplified if the system can rely on characteristics common to all the targets it will track. The final problem is controlling the tracking platform to follow the target. This is a typical control-system design task rather than a research challenge, involving modeling the system dynamics and designing [[motion controller|controllers]] for it, although it does become a challenge if the platform the system has to work with is not designed for real-time operation.

The software that runs such systems is also customized for the corresponding hardware components. One example is OpticTracker, which controls computerized telescopes to track moving objects at great distances, such as planes and satellites. Another is SimiShape, which can also be used in a hybrid configuration in combination with markers.
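The capture–extract–control loop described above can be illustrated in a few lines of code. The following is a minimal sketch using [[OpenCV]]; it assumes a target that is much brighter than its background (an example of the simplifying target characteristic mentioned above), and <code>send_pan_tilt()</code> is a hypothetical stand-in for a real platform's motion-control interface, not part of any particular system.

<syntaxhighlight lang="python">
import cv2

K_P = 0.01  # proportional controller gain (assumed value)

def send_pan_tilt(pan_rate, tilt_rate):
    """Hypothetical placeholder for the mechanical platform's control API."""
    pass

cap = cv2.VideoCapture(0)              # the image-capturing hardware
while True:
    ok, frame = cap.read()             # challenge 1: sustain a high frame rate
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Challenge 2: separate the target from the background. A fixed threshold
    # suffices here only because the target is assumed much brighter.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]       # target centroid in the image
        cy = m["m01"] / m["m00"]
        h, w = gray.shape
        # Challenge 3: command the platform so the target stays centred.
        send_pan_tilt(K_P * (cx - w / 2), K_P * (cy - h / 2))
cap.release()
</syntaxhighlight>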
====RGB-D cameras====
RGB-D cameras such as the [[Kinect]] capture both color and depth images. By fusing the two, colored 3D [[voxels]] can be acquired, allowing 3D human motion and the human surface to be captured in real time. Because a single-view camera is used, the captured motions are usually noisy. Machine learning techniques have been proposed to automatically reconstruct such noisy motions into higher-quality ones, using methods such as [[lazy learning]]<ref>{{cite journal |last1=Shum |first1=Hubert P. H. |last2=Ho |first2=Edmond S. L. |last3=Jiang |first3=Yang |last4=Takagi |first4=Shu |title=Real-Time Posture Reconstruction for Microsoft Kinect |journal=IEEE Transactions on Cybernetics |date=2013 |volume=43 |issue=5 |pages=1357–1369 |doi=10.1109/TCYB.2013.2275945 |pmid=23981562 |s2cid=14124193}}</ref> and [[Gaussian]] models.<ref>{{cite journal |last1=Liu |first1=Zhiguang |last2=Zhou |first2=Liuyang |last3=Leung |first3=Howard |last4=Shum |first4=Hubert P. H. |title=Kinect Posture Reconstruction based on a Local Mixture of Gaussian Process Models |journal=IEEE Transactions on Visualization and Computer Graphics |date=2016 |volume=22 |issue=11 |pages=2437–2450 |doi=10.1109/TVCG.2015.2510000 |pmid=26701789 |s2cid=216076607 |url=http://nrl.northumbria.ac.uk/id/eprint/25559/1/07360215.pdf}}</ref> Such methods generate motion accurate enough for demanding applications such as ergonomic assessment.<ref>{{cite journal |last1=Plantard |first1=Pierre |last2=Shum |first2=Hubert P. H. |last3=Pierres |first3=Anne-Sophie Le |last4=Multon |first4=Franck |title=Validation of an Ergonomic Assessment Method using Kinect Data in Real Workplace Conditions |journal=Applied Ergonomics |date=2017 |volume=65 |pages=562–569 |doi=10.1016/j.apergo.2016.10.015 |pmid=27823772 |s2cid=13658487 |doi-access=free}}</ref>
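As a toy illustration of this reconstruction problem (and not of the cited lazy-learning or Gaussian-process methods themselves), the following sketch smooths synthetic noisy joint trajectories with a plain Gaussian temporal filter; the skeleton dimensions and noise level are assumptions chosen only for demonstration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
frames, joints = 100, 20                 # a Kinect-style skeleton stream
t = np.linspace(0, 2 * np.pi, frames)
# Synthetic "clean" motion: one smooth curve per (x, y, z) joint coordinate.
clean = np.stack([np.sin(t + k) for k in range(joints * 3)], axis=1)
noisy = clean + rng.normal(0.0, 0.15, clean.shape)   # per-frame sensor noise

# Reconstruct a smoother motion by filtering each coordinate along time.
reconstructed = gaussian_filter1d(noisy, sigma=2.0, axis=0)

print(f"mean error before: {np.abs(noisy - clean).mean():.3f}")
print(f"mean error after:  {np.abs(reconstructed - clean).mean():.3f}")
</syntaxhighlight>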