Augmented reality
==3D tracking==
{{Main|Positional tracking}}
3D tracking is an integral part of augmented reality, as it allows a headset and controllers to be tracked in the user's environment. Tracking is often camera-based, using cameras mounted on the device. Mobile augmented-reality systems use one or more of the following [[motion capture|motion tracking]] technologies: [[digital camera]]s and/or other [[image sensor|optical sensors]], accelerometers, GPS, gyroscopes, solid-state compasses, and [[radio-frequency identification]] (RFID). These technologies offer varying levels of accuracy and precision. They are implemented in the ARKit [[API]] by [[Apple Inc.|Apple]] and the [[ARCore]] API by [[Google]] to provide tracking on their respective mobile platforms. CMOS camera sensors are widely used for camera-based tracking in AR technology.<ref>{{cite book |last1=Schmalstieg |first1=Dieter |last2=Hollerer |first2=Tobias |title=Augmented Reality: Principles and Practice |date=2016 |publisher=[[Addison-Wesley Professional]] |isbn=978-0-13-315320-0 |pages=209–210 |url=https://books.google.com/books?id=qPU2DAAAQBAJ&pg=PT209}}</ref>

===Camera-based tracking===
[[File:comparison_of_augmented_reality_fiducial_markers.svg|thumb|200px|Comparison of fiducial markers used for 3D tracking in augmented reality]]
Augmented reality systems must realistically integrate virtual imagery into the real world. The software must derive real-world coordinates, independent of the camera, from the camera images. That process is called [[image registration]], and uses different methods of [[computer vision]], mostly related to [[video tracking]].<ref name="recentadvances" /><ref>Maida, James; Bowen, Charles; Montpool, Andrew; Pace, John.
[http://research.jsc.nasa.gov/PDF/SLiSci-14.pdf Dynamic registration correction in augmented-reality systems] {{webarchive|url=https://web.archive.org/web/20130518032710/http://research.jsc.nasa.gov/PDF/SLiSci-14.pdf |date=18 May 2013 }}, ''Space Life Sciences'', NASA.</ref> Many computer vision methods of augmented reality are inherited from [[visual odometry]]. These methods usually consist of two stages. The first stage detects [[interest point detection|interest points]], fiducial markers or [[optical flow]] in the camera images. This step can use [[Feature detection (computer vision)|feature detection]] methods like [[corner detection]], [[blob detection]], [[edge detection]] or [[Thresholding (image processing)|thresholding]], and other [[image processing]] methods.<ref>State, Andrei; Hirota, Gentaro; Chen, David T; Garrett, William; Livingston, Mark. [http://www.cs.princeton.edu/courses/archive/fall01/cs597d/papers/state96.pdf Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking], Department of Computer Science, University of North Carolina at Chapel Hill.</ref><ref>Bajura, Michael; Neumann, Ulrich. [http://graphics.usc.edu/cgit/publications/papers/DynamicRegistrationVRAIS95.pdf Dynamic Registration Correction in Augmented-Reality Systems] [https://web.archive.org/web/20120713224616/https://graphics.usc.edu/cgit/publications/papers/DynamicRegistrationVRAIS95.pdf Archived] 13 July 2012, University of North Carolina, University of Southern California.</ref> The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene; in some of those cases the scene's 3D structure must be calculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions.
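The first of these two stages can be illustrated with a Harris-style corner score. The sketch below is a pure-Python toy on a tiny synthetic image, written for this explanation; production AR pipelines use optimized detectors (e.g. FAST or ORB) on real camera frames, and the threshold and window size here are arbitrary illustrative choices.

```python
def harris_corners(img, k=0.04, thresh=0.6):
    """Return (row, col) pixels whose Harris corner response exceeds thresh.

    img is a 2D list of grayscale intensities. For each candidate pixel,
    the structure tensor M = [[Sxx, Sxy], [Sxy, Syy]] is accumulated from
    central-difference gradients over a 3x3 window, and the response is
    det(M) - k * trace(M)^2: large for corners, small for edges/flat areas.
    """
    h, w = len(img), len(img[0])
    corners = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = sxy = syy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    # Central differences, clamped at the image border.
                    ix = (img[yy][min(xx + 1, w - 1)] - img[yy][max(xx - 1, 0)]) / 2.0
                    iy = (img[min(yy + 1, h - 1)][xx] - img[max(yy - 1, 0)][xx]) / 2.0
                    sxx += ix * ix
                    sxy += ix * iy
                    syy += iy * iy
            r = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
            if r > thresh:
                corners.append((y, x))
    return corners

# Synthetic 6x6 image: a bright square in the lower-right quadrant.
# Only the square's inner corner should score above the threshold;
# its straight edges produce a lower (one-directional) response.
img = [[0] * 6 for _ in range(6)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 1
```

On this image, `harris_corners(img)` reports only the inner corner of the bright square, which is exactly why corner-like interest points (rather than edge points) are preferred for tracking: their position is constrained in both image directions.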
If no information about scene geometry is available, [[structure from motion]] methods like [[bundle adjustment]] are used. Mathematical methods used in the second stage include [[projective geometry|projective]] ([[Epipolar geometry|epipolar]]) geometry, [[Kalman filter|Kalman]] and [[Particle filter|particle]] filters, [[nonlinear optimization]], and [[robust statistics]].{{citation needed|date=February 2017}}

There are two methods of camera-based tracking: marker-based tracking and markerless tracking.<ref>{{Cite web |url=https://codegres.com/augmented-reality/ |title=What is Augmented Reality |last=Hegde |first=Naveen |date=19 March 2023 |website=Codegres |access-date=19 March 2023}}</ref> Marker-based tracking uses fiducial markers, whereas markerless tracking builds a representation of the real world using visual-inertial odometry (VIO) or simultaneous localization and mapping (SLAM). A piece of paper with distinct geometric patterns can serve as a marker; the camera recognizes the pattern by identifying specific points in the drawing. Markerless tracking, also called instant tracking, does not use markers; instead, it uses the sensors in mobile devices to detect the real-world environment, such as the locations of walls and points of intersection.<ref>{{Cite news|url=https://www.marxentlabs.com/what-is-markerless-augmented-reality-dead-reckoning/|title=Markerless Augmented Reality is here.|date=9 May 2014|work=Marxent {{!}} Top Augmented Reality Apps Developer|access-date=23 January 2018|language=en-US}}</ref>
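The Kalman filters mentioned above smooth noisy per-frame measurements into a stable track. As a minimal sketch of the idea, not code from ARKit, ARCore, or any real SDK, here is a 1D constant-velocity Kalman filter in pure Python; an actual AR tracker filters a full 6-degree-of-freedom pose, and the noise parameters below are illustrative assumptions.

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    """Filter noisy 1D position measurements with a constant-velocity model.

    State is [position, velocity]; q is process noise, r is measurement
    noise. Returns the filtered position estimates, one per measurement.
    """
    x = [measurements[0], 0.0]          # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # 2x2 state covariance
    out = []
    for z in measurements:
        # Predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F^T + Q.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]).
        S = P[0][0] + r                  # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]   # Kalman gain
        y = z - x[0]                     # innovation (residual)
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

Feeding in a roughly linear but noisy sequence such as `[0.0, 1.1, 1.9, 3.2, 4.0]` yields estimates that follow the upward trend while damping the jitter, which is the behavior a tracker wants between (and during) camera-based pose fixes.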