Computer vision
==Applications==
Applications range from tasks such as industrial [[machine vision]] systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis, which is used in many fields. Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for:

[[File:Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes With Deep Generative Networks.png|thumb|Learning 3D shapes has been a challenging task in computer vision.
Recent advances in [[deep learning]] have enabled researchers to build models that generate and reconstruct 3D shapes from single or multi-view [[depth map]]s or silhouettes seamlessly and efficiently.<ref name="3DVAE" />]]
* Automatic inspection, ''e.g.'', in manufacturing applications;
* Assisting humans in identification tasks, ''e.g.'', a [[Automated species identification|species identification]] system;<ref>{{Cite journal|last1=Wäldchen|first1=Jana|last2=Mäder|first2=Patrick|date=2017-01-07|title=Plant Species Identification Using Computer Vision Techniques: A Systematic Literature Review|journal=Archives of Computational Methods in Engineering|volume=25|issue=2|pages=507–543|doi=10.1007/s11831-016-9206-z|pmid=29962832|pmc=6003396|issn=1134-3060}}</ref>
* Controlling processes, ''e.g.'', an [[industrial robots|industrial robot]];
* [[Activity recognition|Detecting events]], ''e.g.'', for [[artificial intelligence for video surveillance|visual surveillance]] or [[people counter|people counting]], for instance in the [[Presto (restaurant technology platform)|restaurant industry]];
* Interaction, ''e.g.'', as the input to a device for [[computer-human interaction]];
* Monitoring agricultural crops, ''e.g.'', with an open-source [[vision transformer]] model<ref>{{Cite journal |last1=Aghamohammadesmaeilketabforoosh |first1=Kimia |last2=Nikan |first2=Soodeh |last3=Antonini |first3=Giorgio |last4=Pearce |first4=Joshua M. |date=January 2024 |title=Optimizing Strawberry Disease and Quality Detection with Vision Transformers and Attention-Based Convolutional Neural Networks |journal=Foods |volume=13 |issue=12 |pages=1869 |doi=10.3390/foods13121869 |doi-access=free |pmid=38928810 |pmc=11202458 |issn=2304-8158}}</ref> developed to help farmers automatically detect [[List of strawberry diseases|strawberry diseases]] with 98.4% accuracy;<ref>{{Cite web |date=2024-09-13 |title=New AI model developed at Western detects strawberry diseases, takes aim at waste |url=https://london.ctvnews.ca/new-ai-model-developed-at-western-detects-strawberry-diseases-takes-aim-at-waste-1.7035616 |access-date=2024-09-19 |website=London}}</ref>
* Modeling objects or environments, ''e.g.'', medical image analysis or [[Topographic map|topographical]] modeling;
* Navigation, ''e.g.'', by an [[autonomous vehicle]] or [[mobile robot]];
* Organizing information, ''e.g.'', for [[Search engine indexing|indexing]] databases of images and image sequences;
* Tracking surfaces or planes in 3D coordinates to enable [[Augmented reality|augmented reality]] experiences;
* Analyzing the condition of facilities in industry or construction.
* Automatic real-time lip-reading for devices and apps to assist people with disabilities.<ref>{{Cite web |date=2020-06-30 |title=Applications of Computer Vision |url=https://www.geeksforgeeks.org/applications-of-computer-vision/ |access-date=2025-04-27 |website=GeeksforGeeks |language=en-US}}</ref>

In 2024, the leading application areas of computer vision were industry (market size US$5.22 billion),<ref>{{Cite web |title=Global Industrial Machine Vision Market Growth Analysis - Size and Forecast 2024 - 2028 |url=https://www.technavio.com/report/industrial-machine-vision-market-analysis |access-date=2025-05-14 |website=www.technavio.com |language=EN}}</ref> medicine (market size US$2.6 billion),<ref>{{Cite web |last=Laviola |first=Erin |title=What Is Computer Vision and How Is It Being Used in Healthcare? |url=https://healthtechmagazine.net/article/2025/03/what-is-computer-vision-in-healthcare-perfcon |access-date=2025-05-14 |website=HealthTech |language=en}}</ref> and the military (market size US$996.2 million).<ref>{{Cite web |title=Computer Vision - Artificial intelligence in military market outlook |url=https://www.grandviewresearch.com/horizon/statistics/artificial-intelligence-in-military-market/technology/computer-vision/global |access-date=2025-05-14 |website=www.grandviewresearch.com |language=en}}</ref>

=== Medicine ===
[[File:DARPA Visual Media Reasoning Concept Video.ogv|thumb|[[DARPA]]'s Visual Media Reasoning concept video]]
One of the most prominent application fields is [[medical computer vision]], or medical image processing, characterized by the extraction of information from image data to [[Computer-assisted diagnosis|diagnose a patient]].<ref>{{Cite journal |last1=Li |first1=Mengfang |last2=Jiang |first2=Yuanyuan |last3=Zhang |first3=Yanzhou |last4=Zhu |first4=Haisheng |date=2023 |title=Medical image analysis using deep learning algorithms |journal=Frontiers in Public Health |volume=11 |pages=1273253 |doi=10.3389/fpubh.2023.1273253 |doi-access=free |issn=2296-2565
|pmc=10662291 |pmid=38026291 |bibcode=2023FrPH...1173253L}}</ref> Examples include the detection of [[tumour]]s, [[arteriosclerosis]] or other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information, ''e.g.'', about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include the enhancement of images interpreted by humans—[[Ultrasound|ultrasonic images]] or [[Radiography|X-ray images]], for example—to reduce the influence of noise.

=== Machine vision ===
A second application area of computer vision is in industry, sometimes called [[machine vision]], where information is extracted to support a production process. One example is quality control, where details or final products are automatically inspected to find defects. One of the most prevalent fields for such inspection is the [[Wafer (electronics)|wafer]] industry, in which every single wafer is measured and inspected for inaccuracies or defects to prevent a [[Integrated circuit|computer chip]] from coming to market in an unusable state. Another example is measuring the position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in agricultural processes to remove undesirable foodstuff from bulk material, a process called [[optical sorting]].<ref name="Davies-2005" />

=== Military ===
Obvious examples are the detection of enemy soldiers or vehicles and [[missile guidance]]. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area, based on locally acquired image data.
Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.

=== Autonomous vehicles ===
[[File:Mars Science Laboratory, 2011-Present.jpg|thumb|Artist's concept of ''[[Curiosity (rover)|Curiosity]]'', an example of an uncrewed land-based vehicle. The [[stereo camera]] is mounted on top of the rover.|alt=]]
One of the newer application areas is autonomous vehicles, which include [[submersible]]s, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles ([[Unmanned aerial vehicle|UAV]]s). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, ''e.g.'', for knowing where they are, for mapping their environment ([[Simultaneous localization and mapping|SLAM]]), and for detecting obstacles. It can also be used for detecting certain task-specific events, ''e.g.'', a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, cameras and LiDAR sensors in vehicles, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for [[Driverless car|autonomous driving of cars]]. There are ample examples of military autonomous vehicles, ranging from advanced missiles to UAVs for reconnaissance missions or missile guidance. Space exploration already makes use of autonomous vehicles with computer vision, ''e.g.'', [[NASA]]'s ''[[Curiosity (rover)|Curiosity]]'' and [[China National Space Administration|CNSA]]'s ''[[Yutu-2]]'' rover.
=== Tactile feedback ===
[[File:Finger sensor.webp|thumb|upright=1.15|Rubber artificial skin layer with a flexible structure for the shape estimation of micro-undulation surfaces]]
[[File:Silicon Sensor.webp|thumb|upright=1.15|A silicone mold with a camera inside containing many different point markers. When this sensor is pressed against a surface, the silicone deforms and the positions of the point markers shift. A computer can then take this data and determine exactly how the mold is pressed against the surface. This can be used to calibrate robotic hands to make sure they can grasp objects effectively.]]
Materials such as rubber and silicone are used to create sensors for applications such as detecting micro-undulations and calibrating robotic hands. Rubber can be used to create a mold that is placed over a finger; inside this mold are multiple strain gauges. The finger mold and sensors can then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface, and a computer reads the data from the strain gauges to measure whether one or more of the pins are being pushed upward. If a pin is pushed upward, the computer recognizes this as an imperfection in the surface. This sort of technology is useful for obtaining accurate data on imperfections across a very large surface.<ref name=":0">{{Cite journal|last1=Ando|first1=Mitsuhito|last2=Takei|first2=Toshinobu|last3=Mochiyama|first3=Hiromi|date=2020-03-03|title=Rubber artificial skin layer with flexible structure for shape estimation of micro-undulation surfaces|journal=ROBOMECH Journal|volume=7|issue=1|pages=11|doi=10.1186/s40648-020-00159-0|issn=2197-4225|doi-access=free}}</ref> Another variation of this finger mold sensor is a sensor that contains a camera suspended in silicone.
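The pin-detection logic described above can be sketched in a few lines: compare each strain-gauge reading against its flat-surface baseline and flag pins whose deflection exceeds a threshold. This is only an illustrative sketch, not the method of the cited paper; the readings, baseline values, and threshold are all hypothetical.

```python
# Illustrative sketch (not from the cited paper): detecting which pins in a
# rubber pin array are pushed upward, given strain-gauge readings.
# The readings, baseline values, and threshold are hypothetical.

def find_raised_pins(readings, baseline, threshold=0.05):
    """Return indices of pins whose reading exceeds its flat-surface
    baseline by more than `threshold`, indicating a surface imperfection."""
    return [i for i, (r, b) in enumerate(zip(readings, baseline))
            if r - b > threshold]

baseline = [0.10, 0.10, 0.10, 0.10]   # gauge readings on a flat surface
readings = [0.10, 0.21, 0.11, 0.30]   # gauge readings while tracing a surface
print(find_raised_pins(readings, baseline))  # pins 1 and 3 are raised
```

In practice the threshold would be calibrated per gauge to account for sensor noise and drift.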
The silicone forms a dome around the camera, and embedded in the silicone are equally spaced point markers. These cameras can then be placed on devices such as robotic hands to allow the computer to receive highly accurate tactile data.<ref name=":1">{{Cite journal|last1=Choi|first1=Seung-hyun|last2=Tahara|first2=Kenji|date=2020-03-12|title=Dexterous object manipulation by a multi-fingered robotic hand with visual-tactile fingertip sensors|journal=ROBOMECH Journal|volume=7|issue=1|pages=14|doi=10.1186/s40648-020-00162-5|issn=2197-4225|doi-access=free}}</ref>

Other application areas include:
* Support of [[visual effects]] creation for cinema and broadcast, ''e.g.'', [[camera tracking]] (match moving);
* [[Surveillance]];
* [[Driver drowsiness detection]];<ref>{{Cite book |last=Garg |first=Hitendra |title=2020 International Conference on Power Electronics & IoT Applications in Renewable Energy and its Control (PARC) |chapter=Drowsiness Detection of a Driver using Conventional Computer Vision Application |date=2020-02-29 |chapter-url=https://ieeexplore.ieee.org/document/9087013 |pages=50–53 |doi=10.1109/PARC49193.2020.236556 |isbn=978-1-7281-6575-2 |s2cid=218564267 |access-date=2022-11-06 |archive-date=2022-06-27 |archive-url=https://web.archive.org/web/20220627061928/https://ieeexplore.ieee.org/document/9087013/ |url-status=live }}</ref><ref>{{Cite book |last1=Hasan |first1=Fudail |last2=Kashevnik |first2=Alexey |title=2021 29th Conference of Open Innovations Association (FRUCT) |chapter=State-of-the-Art Analysis of Modern Drowsiness Detection Algorithms Based on Computer Vision |date=2021-05-14 |chapter-url=https://ieeexplore.ieee.org/document/9435480 |pages=141–149 |doi=10.23919/FRUCT52173.2021.9435480 |isbn=978-952-69244-5-8 |s2cid=235207036 |access-date=2022-11-06 |archive-date=2022-06-27 |archive-url=https://web.archive.org/web/20220627061552/https://ieeexplore.ieee.org/document/9435480/ |url-status=live }}</ref><ref>{{Cite journal |last1=Balasundaram |first1=A |last2=Ashokkumar |first2=S |last3=Kothandaraman |first3=D |last4=kora |first4=SeenaNaik |last5=Sudarshan |first5=E |last6=Harshaverdhan |first6=A |date=2020-12-01 |title=Computer vision based fatigue detection using facial parameters |journal=IOP Conference Series: Materials Science and Engineering |volume=981 |issue=2 |page=022005 |doi=10.1088/1757-899x/981/2/022005 |bibcode=2020MS&E..981b2005B |s2cid=230639179 |issn=1757-899X |doi-access=free }}</ref>
* Tracking and counting organisms in the biological sciences.<ref name="BruijningVisser2018">{{cite journal|last1=Bruijning|first1=Marjolein|last2=Visser|first2=Marco D.|last3=Hallmann|first3=Caspar A.|last4=Jongejans|first4=Eelke|last5=Golding|first5=Nick|title=trackdem: Automated particle tracking to obtain population counts and size distributions from videos in r |journal=Methods in Ecology and Evolution|volume=9|issue=4|pages=965–973|year=2018|issn=2041-210X|doi=10.1111/2041-210X.12975|bibcode=2018MEcEv...9..965B |doi-access=free|hdl=2066/184075|hdl-access=free}}</ref>
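The marker-displacement idea behind camera-in-silicone tactile sensors, described in the Tactile feedback section, can be sketched as follows: compare marker positions in a reference frame with their positions under contact, and treat large displacements as indicating where the dome is deformed. This is a minimal illustration with hypothetical 2D marker coordinates, not the actual pipeline of the cited sensor.

```python
# Illustrative sketch (not the cited sensor's actual pipeline): estimating
# where a camera-in-silicone tactile sensor is deformed by comparing
# point-marker positions at rest with their positions under contact.
# Marker coordinates are hypothetical.

import math

def marker_displacements(rest, pressed):
    """Per-marker Euclidean displacement between rest and pressed frames."""
    return [math.dist(a, b) for a, b in zip(rest, pressed)]

rest    = [(10.0, 10.0), (20.0, 10.0), (10.0, 20.0), (20.0, 20.0)]
pressed = [(10.0, 10.0), (21.0, 10.0), (10.0, 22.0), (22.0, 22.0)]

disp = marker_displacements(rest, pressed)
# Markers with large displacement indicate where the silicone is deformed,
# i.e., where the sensor presses hardest against the object.
contact = [i for i, d in enumerate(disp) if d > 0.5]
print(contact)  # markers 1, 2, and 3 moved noticeably
```

A real system would first detect the markers in each camera frame (e.g., by blob detection) before computing displacements; here the coordinates are given directly to keep the sketch self-contained.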