Sensor fusion
== Applications ==

One application of sensor fusion is [[GPS/INS]], in which [[Global Positioning System]] and [[inertial navigation system]] data are fused using various methods, e.g. the [[extended Kalman filter]]. This is useful, for example, in determining the attitude of an aircraft using low-cost sensors.<ref>{{cite journal|last=Gross|first=Jason|author2=Yu Gu |author3=Matthew Rhudy |author4=Srikanth Gururajan |author5=Marcello Napolitano |title=Flight Test Evaluation of Sensor Fusion Algorithms for Attitude Estimation|journal=IEEE Transactions on Aerospace and Electronic Systems|date=July 2012|volume=48|issue=3|pages=2128–2139|doi=10.1109/TAES.2012.6237583|bibcode=2012ITAES..48.2128G|s2cid=393165}}</ref> Another example is using a [[data fusion]] approach to determine the traffic state (low traffic, traffic jam, medium flow) from roadside-collected acoustic, image and sensor data.<ref>{{cite conference|author=Joshi, V., Rajamani, N., Takayuki, K., Prathapaneni, N., Subramaniam, L. V.|year=2013|title=Information Fusion Based Learning for Frugal Traffic State Sensing|conference=Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence}}</ref>

In the field of autonomous driving, sensor fusion combines redundant information from complementary sensors to obtain a more accurate and reliable representation of the environment.<ref name=mmp>{{cite journal | last1=Mircea Paul| first1=Muresan| last2=Ion| first2=Giosan| last3=Sergiu| first3=Nedevschi | title=Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation | journal=Sensors | volume=20 | issue=4 | pages=1110| date=2020-02-18| doi=10.3390/s20041110 | pmid=32085608| pmc=7070899| bibcode=2020Senso..20.1110M| doi-access=free}}</ref> Although technically not a dedicated sensor fusion method, modern [[convolutional neural network]]-based methods can simultaneously process many channels of sensor data (such as [[hyperspectral imaging]] with hundreds of bands<ref name=Ran>{{cite journal | last1=Ran | first1=Lingyan | last2=Zhang | first2=Yanning | last3=Wei | first3=Wei | last4=Zhang | first4=Qilin | title=A Hyperspectral Image Classification Framework with Spatial Pixel Pair Features | journal=Sensors | volume=17 | issue=10 | pages=2421 | date=2017-10-23 | doi=10.3390/s17102421 | pmid=29065535 | pmc=5677443 | bibcode=2017Senso..17.2421R | doi-access=free }}</ref>) and fuse the relevant information to produce classification results.
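The core idea behind Kalman-style fusion such as GPS/INS can be illustrated with a minimal sketch: two noisy estimates of the same quantity are combined, each weighted by the inverse of its uncertainty. This toy example is purely illustrative (the `fuse` function and its inputs are hypothetical); a real extended Kalman filter additionally propagates a dynamic state model between measurement updates.

```python
# Illustrative sketch of inverse-variance measurement fusion, the
# update step at the heart of Kalman-style sensor fusion.
# NOTE: hypothetical example, not an actual GPS/INS implementation.

def fuse(x1, var1, x2, var2):
    """Fuse two noisy estimates (x1, x2) of the same quantity.

    Each estimate is weighted by the inverse of its variance, so the
    fused variance is never larger than the smaller input variance.
    """
    k = var1 / (var1 + var2)   # gain: trust x2 more when x1 is uncertain
    x = x1 + k * (x2 - x1)     # fused estimate
    var = (1.0 - k) * var1     # fused (reduced) variance
    return x, var

# Example: a noisy GPS altitude fused with a more precise INS altitude.
x, var = fuse(100.0, 4.0, 102.0, 1.0)
print(x, var)  # fused estimate lies closer to the more certain sensor
```

Note that the fused estimate (101.6) lands nearer the lower-variance measurement, and the fused variance (0.8) is smaller than either input variance, which is why combining complementary sensors improves reliability.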