Motion capture
==Related techniques==

===Facial motion capture===
{{Main|Facial motion capture}}
Most traditional motion capture hardware vendors provide some type of low-resolution facial capture using anywhere from 32 to 300 markers with either an active or passive marker system. All of these solutions are limited by the time it takes to apply the markers, calibrate their positions and process the data. Ultimately the technology also limits their resolution and raw output quality.

High-fidelity facial motion capture, also known as '''performance capture''', is the next generation of fidelity and is used to record the more complex movements in a human face in order to capture higher degrees of emotion. Facial capture is currently arranging itself into several distinct camps, including traditional motion capture data, blend-shape-based solutions, capture of the actual topology of an actor's face, and proprietary systems.

The two main techniques are stationary systems with an array of cameras capturing the facial expressions from multiple angles, using software such as the stereo mesh solver from OpenCV to create a 3D surface mesh, and systems that also use light arrays to calculate surface normals from the variance in brightness as the light source, camera position or both are changed. These techniques tend to be limited in feature resolution only by the camera resolution, apparent object size and number of cameras. If the user's face occupies 50 percent of the working area of the camera and the camera has megapixel resolution, then sub-millimeter facial motions can be detected by comparing frames. Recent work focuses on increasing the frame rates and performing optical flow so that the motions can be retargeted to other computer-generated faces, rather than just making a 3D mesh of the actor and their expressions.
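The camera-resolution claim above is a simple ratio. A back-of-the-envelope sketch, using assumed numbers not given in the text (a 1000×1000-pixel "megapixel" sensor and a face roughly 0.15 m wide):

```python
# Rough check of the sub-millimeter claim: how much physical distance
# does one pixel cover when a face fills half the frame?
# All numbers here are illustrative assumptions, not from the article.
sensor_px = 1000          # pixels across the frame ("megapixel" sensor)
face_width_m = 0.15       # assumed physical width of the face
fill_fraction = 0.5       # face occupies 50% of the working area

face_px = sensor_px * fill_fraction        # pixels spanning the face
metres_per_px = face_width_m / face_px     # physical size of one pixel
print(f"{metres_per_px * 1000:.2f} mm per pixel")  # 0.30 mm per pixel
```

Since frame-to-frame comparison can localize features to a fraction of a pixel, motions well under a millimeter become detectable at these assumed scales.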
===Radio frequency positioning===
Radio frequency positioning systems are becoming more viable{{Citation needed|date=October 2017}} as higher-frequency radio devices allow greater precision than older technologies such as [[radar]]. The speed of light is 30 centimeters per nanosecond (billionth of a second), so a 10 gigahertz (billion cycles per second) radio frequency signal enables an accuracy of about 3 centimeters. By measuring amplitude to a quarter wavelength, it is possible to improve the resolution down to about 8 mm. To achieve the resolution of optical systems, frequencies of 50 gigahertz or higher are needed, which are almost as dependent on line of sight and as easy to block as optical systems. Multipath and reradiation of the signal are likely to cause additional problems, but these technologies will be ideal for tracking larger volumes with reasonable accuracy, since the required resolution at 100 meter distances is not likely to be as high. Many scientists{{Who|date=October 2017}} believe that radio frequency will never produce the accuracy required for motion capture. Researchers at the Massachusetts Institute of Technology said in 2015 that they had made a system that tracks motion by radio frequency signals.<ref>{{Cite web|url=https://www.nydailynews.com/news/national/mit-creates-device-track-people-walls-article-1.2419781|title=MIT researchers create device that can recognize, track people through walls|last=Alba|first=Alejandro|website=nydailynews.com|date=November 2015 |access-date=2019-12-09}}</ref>

===Non-traditional systems===
An alternative approach was developed in which the actor is given an unlimited walking area through the use of a rotating sphere, similar to a [[hamster ball]], which contains internal sensors recording the angular movements, removing the need for external cameras and other equipment.
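The sphere's recorded angular movement converts to walking distance by arc length, s = rθ. A minimal sketch with assumed values (the text gives no sphere radius):

```python
import math

# Convert the sphere's recorded rotation into distance walked.
# Assumed values, not from the article: a sphere of radius 1.0 m
# and a recorded rotation of 90 degrees about the horizontal axis.
radius_m = 1.0
rotation_rad = math.radians(90)

# Arc length: the contact point between sphere and floor travels
# s = r * theta, which is the distance the actor has walked.
distance_m = radius_m * rotation_rad
print(f"distance walked: {distance_m:.3f} m")  # distance walked: 1.571 m
```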
Even though this technology could potentially lead to much lower costs for motion capture, the basic sphere is only capable of recording a single continuous direction. Additional sensors worn on the person would be needed to record anything more. Another alternative is using a 6DOF (six degrees of freedom) motion platform with an integrated omnidirectional treadmill and high-resolution optical motion capture to achieve the same effect. The captured person can walk in an unlimited area, negotiating different uneven terrains. Applications include medical rehabilitation for balance training, biomechanical research and virtual reality.{{citation needed|date=January 2020}}

===3D pose estimation===
In [[3D pose estimation]], an actor's pose can be reconstructed from an image or [[depth map]].<ref>Ye, Mao, et al. "[http://www-oldurls.inf.ethz.ch/personal/pomarc/pubs/YeICCV11.pdf Accurate 3d pose estimation from a single depth image] {{Webarchive|url=https://web.archive.org/web/20200113213918/http://www-oldurls.inf.ethz.ch/personal/pomarc/pubs/YeICCV11.pdf |date=2020-01-13 }}." 2011 International Conference on Computer Vision. IEEE, 2011.</ref>
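Depth-based pose estimation typically begins by back-projecting each depth pixel into a 3D point with the pinhole camera model, producing the point cloud to which a body model is then fitted. A minimal sketch of that first step, with made-up camera intrinsics (not values from the cited paper):

```python
import numpy as np

# Back-project a depth map into a 3D point cloud -- the raw input that
# depth-based pose estimators fit a body model to.
# The intrinsics below are illustrative assumptions.
fx, fy = 525.0, 525.0              # focal lengths in pixels
cx, cy = 319.5, 239.5              # principal point
depth = np.full((480, 640), 2.0)   # synthetic depth map: everything 2 m away

u, v = np.meshgrid(np.arange(640), np.arange(480))
x = (u - cx) * depth / fx          # pinhole model: X = (u - cx) * Z / fx
y = (v - cy) * depth / fy          # pinhole model: Y = (v - cy) * Z / fy
points = np.stack([x, y, depth], axis=-1)

print(points.shape)                # (480, 640, 3)
```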