==Imaging based automatic inspection and sorting==
The primary uses for machine vision are imaging-based automatic inspection and sorting and robot guidance.<ref name = NASAarticle/><ref name=AssemblyIntro2016/>{{rp|6–10}} In this section the former is abbreviated as "automatic inspection". The overall process includes planning the details of the requirements and project, and then creating a solution.<ref name = WestRoadmap/><ref name = IntegrationVandSJan2009>{{cite journal | author=Dechow, David | title=Integration: Making it Work | journal=Vision & Sensors | date=January 2009 | pages=16–20 | url=http://www.visionsensorsmag.com/Articles/Feature_Article/BNP_GUID_9-5-2006_A_10000000000000496708 | archive-url=https://web.archive.org/web/20200314042314/http://www.visionsensorsmag.com/Articles/Feature_Article/BNP_GUID_9-5-2006_A_10000000000000496708 | url-status=dead | archive-date=2020-03-14 | access-date=2012-05-12 }}</ref> This section describes the technical process that occurs during the operation of the solution.

===Methods and sequence of operation===
The first step in the automatic inspection sequence of operation is [[digital imaging|acquisition of an image]], typically using cameras, lenses, and lighting that has been designed to provide the differentiation required by subsequent processing.<ref name=Handbook427>{{cite book|title=Handbook of Machine Vision | author=Hornberg, Alexander |page=427| date= 2006|publisher=[[Wiley-VCH]]|isbn=978-3-527-40584-8|url=https://books.google.com/books?id=x_1IauK-M2cC&pg=PA427|access-date=2010-11-05}}</ref><ref name = "DemantBook">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | isbn=3-540-66410-6}}{{Page needed|date=May 2012}}</ref> MV [[software]] packages and programs developed in them then employ various [[digital image processing]] techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information.<ref name=Handbook429>{{cite book|title=Handbook of Machine Vision | author=Hornberg, Alexander |page=429| date= 2006|publisher=Wiley-VCH|isbn=978-3-527-40584-8|url=https://books.google.com/books?id=x_1IauK-M2cC&pg=PA429|access-date=2010-11-05}}</ref>
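
A minimal sketch of this acquire–process–decide sequence, using the open-source [[OpenCV]] library, is shown below. The image file name, the smoothing and thresholding choices, and the pass/fail limit are illustrative assumptions, not details taken from the cited sources.

<syntaxhighlight lang="python">
import cv2

# Acquisition: read a previously captured image (file name is a placeholder).
image = cv2.imread("part_0001.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "image file not found"

# Processing: smooth the image, then separate dark regions (e.g. defects)
# from the bright background with an automatically chosen (Otsu) threshold.
blurred = cv2.GaussianBlur(image, (5, 5), 0)
_, defects = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Decision: pass/fail based on how many pixels were classified as defect.
defect_pixels = cv2.countNonZero(defects)
print("PASS" if defect_pixels < 500 else "FAIL")  # 500 is an arbitrary example limit
</syntaxhighlight>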
===Equipment===
The components of an automatic inspection system usually include lighting, a camera or other imager, a processor, software, and output devices.<ref name=AssemblyIntro2016>{{cite web|last1=Cognex|title=Introduction to Machine Vision|url=http://www.assemblymag.com/ext/resources/White_Papers/Sep16/Introduction-to-Machine-Vision.pdf|publisher=Assembly Magazine|access-date=9 February 2017|date=2016}}</ref>{{rp|11–13}}

===Imaging===
The imaging device (e.g. camera) can either be separate from the main image processing unit or combined with it, in which case the combination is generally called a [[smart camera]] or smart sensor.<ref>{{cite book | title = Smart Cameras | editor = Belbachir, Ahmed Nabil| publisher = Springer | date = 2009 | isbn = 978-1-4419-0952-7}}{{page needed|date=December 2012}}</ref><ref name= "VSD201302">{{cite journal| url=http://www.vision-systems.com/articles/print/volume-18/issue-2/departments/leading-edge-views/explore-the-fundamentals-of-machine-vision-part-i.html | title=Explore the Fundamentals of Machine Vision: Part 1| volume=18 | issue=2 | date=February 2013 |author=Dechow, David |journal=Vision Systems Design |pages=14–15| access-date=2013-03-05}}</ref> Inclusion of the full processing function into the same enclosure as the camera is often referred to as embedded processing.<ref name ="PhotonicsSpectra2019">''Critical Considerations for Embedded Vision Design'' by Dave Rice and Amber Thousand ''Photonics Spectra'' magazine published by Laurin Publishing Co. July 2019 issue Pages 60–64</ref> When separated, the connection may be made to specialized intermediate hardware, a custom processing appliance, or a [[frame grabber]] within a computer using either an analog or standardized digital interface ([[Camera Link]], [[CoaXPress]]).<ref name = coaxexpress>{{cite journal| url=http://www.vision-systems.com/articles/2011/05/coaxpress-standard-camera-frame-grabber-support.html | title=CoaXPress standard gets camera, frame grabber support | date= May 31, 2011 |author=Wilson, Andrew |journal=Vision Systems Design |access-date=2012-11-28}}</ref><ref name = VSDCompliantCameras>{{cite journal| url=http://www.vision-systems.com/articles/2012/11/cameras-certified-as-compliant-with-coaxpress-standard.html | title=Cameras certified as compliant with CoaXPress standard | author=Wilson, Dave |journal=Vision Systems Design | date= November 12, 2012 |access-date=2013-03-05}}</ref><ref name = Davies2nd/><ref name = Dinev>{{cite journal |author=Dinev, Petko |title=Digital or Analog? Selecting the Right Camera for an Application Depends on What the Machine Vision System is Trying to Achieve |journal=Vision & Sensors |date=March 2008 |pages=10–14 |url=http://www.visionsensorsmag.com/Articles/Feature_Article/BNP_GUID_9-5-2006_A_10000000000000276728 |archive-url=https://web.archive.org/web/20200314042249/http://www.visionsensorsmag.com/Articles/Feature_Article/BNP_GUID_9-5-2006_A_10000000000000276728 |url-status=dead |archive-date=2020-03-14 |access-date=2012-05-12 }}</ref> MV implementations also use digital cameras capable of direct connections (without a framegrabber) to a computer via [[IEEE 1394|FireWire]], [[USB]] or [[Gigabit Ethernet]] interfaces.<ref name = Dinev/><ref name = VSDInterfaces>{{cite journal | url=http://www.vision-systems.com/articles/print/volume-16/issue-12/features/looking-to-the-future-of-vision.html | title=Product Focus - Looking to the Future of Vision | author=Wilson, Andrew | journal=Vision Systems Design |volume=16| issue=12 | date=December 2011 |access-date=2013-03-05}}</ref>
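
As a small illustration of a directly connected camera (not of any particular product described above), the following sketch grabs a single frame through the operating system's camera driver using OpenCV's VideoCapture; the device index and output file name are assumptions, and industrial GigE Vision or Camera Link cameras typically require the vendor's SDK or a dedicated acquisition driver instead.

<syntaxhighlight lang="python">
import cv2

# Open the first camera the operating system exposes (device index 0 is an assumption).
camera = cv2.VideoCapture(0)
if not camera.isOpened():
    raise RuntimeError("no camera found at index 0")

ok, frame = camera.read()   # acquire one frame
camera.release()

if ok:
    cv2.imwrite("frame_0001.png", frame)  # store the frame for later processing
</syntaxhighlight>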
While conventional (2D visible light) imaging is most commonly used in MV, alternatives include [[Multispectral image|multispectral imaging]], [[hyperspectral imaging]], imaging various infrared bands,<ref name =InfraredVSDApril2011>{{cite journal |author=Wilson, Andrew | title=The Infrared Choice | journal= Vision Systems Design |date= April 2011 |pages=20–23 | url=http://www.vision-systems.com/articles/print/volume-16/issue-4/features/the-infrared-choice.html |volume=16 |issue=4|access-date=2013-03-05}}</ref> line scan imaging, [[3D imaging]] of surfaces and X-ray imaging.<ref name = NASAarticle>{{cite journal|journal= [[NASA Tech Briefs]] |volume= 35 |issue= 6 |date= June 2011 |title=Machine Vision Fundamentals, How to Make Robots See|author=Turek, Fred D. |pages=60–62 |url= http://www.techbriefs.com/privacy-footer-69/10531 | access-date=2011-11-29}}</ref> Key differentiations within MV 2D visible light imaging are monochromatic vs. color, [[frame rate]], resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes.<ref name = WestHSRT>West, Perry ''High Speed, Real-Time Machine Vision'' CyberOptics, pages 1-38</ref>

Though the vast majority of machine vision applications are solved using two-dimensional imaging, machine vision applications utilizing 3D imaging are a growing niche within the industry.<ref name=DN201202>{{cite journal |title=3D Machine Vison Comes into Focus |author=Murray, Charles J |journal=[[Design News]] |date=February 2012 |url=http://www.designnews.com/document.asp?doc_id=237971 |access-date=2012-05-12 |url-status=dead |archive-url=https://web.archive.org/web/20120605095256/http://www.designnews.com/document.asp?doc_id=237971 |archive-date=2012-06-05 }}</ref><ref name=Davies4th410-411>{{cite book|pages=410–411|author=Davies, E.R. | edition=4th | date=2012 | title=Computer and Machine Vision: Theory, Algorithms, Practicalities | publisher=Academic Press| isbn=9780123869081 | url=https://books.google.com/books?id=AhVjXf2yKtkC&pg=PA410 | access-date=2012-05-13}}</ref> The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or imager during the imaging process. A laser is projected onto the surfaces of an object. In machine vision this is accomplished with a scanning motion, either by moving the workpiece, or by moving the camera and laser imaging system.
The line is viewed by a camera from a different angle; the deviation of the line represents shape variations. Lines from multiple scans are assembled into a [[depth map]] or point cloud.<ref name = QualityMagazine/> Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras.<ref name = QualityMagazine>''3-D Imaging: A practical Overview for Machine Vision'' By Fred Turek & Kim Jackson Quality Magazine, March 2014 issue, Volume 53/Number 3 Pages 6-8</ref> Other 3D methods used for machine vision are [[Time-of-flight camera|time of flight]] and grid based.<ref name =QualityMagazine/><ref name =DN201202/> One grid-based method uses a pseudorandom structured light system, as employed by the Microsoft Kinect circa 2012.<ref name = hybrid>http://research.microsoft.com/en-us/people/fengwu/depth-icip-12.pdf HYBRID STRUCTURED LIGHT FOR SCALABLE DEPTH SENSING Yueyi Zhang, Zhiwei Xiong, Feng Wu University of Science and Technology of China, Hefei, China Microsoft Research Asia, Beijing, China</ref><ref name = pseudorandom>R.Morano, C.Ozturk, R.Conn, S.Dubin, S.Zietz, J.Nissano, "Structured light using pseudorandom codes", IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (3)(1998)322–327</ref>
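
The following simplified sketch illustrates the principle behind scanning-based laser triangulation: in each captured image the brightest row of every column is taken as the laser-line position, and the profiles from successive scan positions are stacked into a rough depth map. The helper functions (<code>line_profile</code>, <code>build_depth_map</code>) and the synthetic data are illustrative assumptions; a real system would also apply the camera/laser calibration that converts line deviation into physical height.

<syntaxhighlight lang="python">
import numpy as np

def line_profile(image: np.ndarray) -> np.ndarray:
    """Return, for each column, the row index of the brightest pixel (the laser line)."""
    return np.argmax(image, axis=0)

def build_depth_map(scan_images: list) -> np.ndarray:
    """Stack one profile per scan position into a (num_scans x width) array.

    Values are raw line positions in pixels; a calibrated system would map
    the line's deviation to real-world height."""
    return np.vstack([line_profile(img) for img in scan_images])

# Example with synthetic data: 100 scan positions, 480x640 grayscale images.
scans = [np.random.randint(0, 255, (480, 640), dtype=np.uint8) for _ in range(100)]
depth_map = build_depth_map(scans)   # shape (100, 640)
</syntaxhighlight>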
===Image processing===
After an image is acquired, it is processed.<ref name = Davies2nd>{{cite book| author= Davies, E.R. | title=Machine Vision - Theory Algorithms Practicalities | edition=2nd |publisher= Harcourt & Company | isbn=978-0-12-206092-2 | date=1996}}{{Page needed|date=May 2012}}.</ref> Central processing functions are generally done by a [[CPU]], a [[GPU]], a [[FPGA]] or a combination of these.<ref name = PhotonicsSpectra2019/> Deep learning training and inference impose higher processing performance requirements.<ref name = VSDSept2019>''Finding the optimal hardware for deep learning inference in machine vision'' by Mike Fussell Vision Systems Design magazine September 2019 issue pages 8-9</ref> Multiple stages of processing are generally used in a sequence that produces the desired result. A typical sequence might start with tools such as filters which modify the image, followed by extraction of objects, then extraction of data (e.g. measurements, reading of codes) from those objects, followed by communicating that data, or comparing it against target values to create and communicate "pass/fail" results. Machine vision image processing methods include:
* [[Image stitching|Stitching]]/[[Image registration|Registration]]: Combining of adjacent 2D or 3D images.{{citation needed|date=April 2013}}
* Filtering (e.g. [[Morphological image processing|morphological filtering]])<ref name = "Demant39">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=39 | isbn=3-540-66410-6}}</ref>
* Thresholding: Thresholding starts with setting or determining a gray value that will be useful for the following steps. The value is then used to separate portions of the image, and sometimes to transform each portion of the image to simply black and white based on whether it is below or above that grayscale value.<ref name = "Demant96">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=96 | isbn=3-540-66410-6}}</ref>
* Pixel counting: counts the number of light or dark [[pixel]]s{{citation needed|date=April 2013}}
* [[Segmentation (image processing)|Segmentation]]: Partitioning a [[digital image]] into multiple [[Image segment|segments]] to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.<ref name="computervision">[[Linda Shapiro|Linda G. Shapiro]] and George C. Stockman (2001): "Computer Vision", pp 279–325, New Jersey, Prentice-Hall, {{ISBN|0-13-030796-3}}</ref><ref>Lauren Barghout. Visual Taxometric approach Image Segmentation using Fuzzy-Spatial Taxon Cut Yields Contextually Relevant Regions. Information Processing and Management of Uncertainty in Knowledge-Based Systems. CCIS Springer-Verlag. 2014</ref>
* [[Edge detection]]: finding object edges<ref name = "Demant108">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=108 | isbn=3-540-66410-6}}</ref>
* Color analysis: Identify parts, products and items using color, assess quality from color, and isolate [[Feature (computer vision)|features]] using color.<ref name = NASAarticle/>
* [[blob extraction|Blob detection and extraction]]: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks.<ref name = "Demant95">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=95 | isbn=3-540-66410-6}}</ref>
* [[Artificial neural network|Neural network]] / [[deep learning]] / [[machine learning]] processing: weighted and self-training multi-variable decision making.<ref name ="TurekNeuralNet">{{cite journal | author=Turek, Fred D. |title=Introduction to Neural Net Machine Vision |url= http://www.vision-systems.com/articles/print/volume-12/issue-3/features/introduction-to-neural-net-machine-vision.html |access-date=2013-03-05|journal = Vision Systems Design |date= March 2007 |volume=12|number=3}}</ref> Circa 2019 there was a large expansion of this approach, with deep learning and machine learning significantly expanding machine vision capabilities. The most common result of such processing is classification; examples of classification are object identification, "pass/fail" classification of identified objects, and OCR.<ref name ="TurekNeuralNet"/>
* [[Pattern recognition]] including [[template matching]]: finding, matching, and/or counting specific patterns. This may include location of an object that may be rotated, partially hidden by another object, or varying in size (see the template-matching sketch after this list).<ref name = "Demant111">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=111 | isbn=3-540-66410-6}}</ref>
* [[Barcode]], [[Data Matrix]] and "[[2D barcode]]" reading<ref name = "Demant125">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=125 | isbn=3-540-66410-6}}</ref>
* [[Optical character recognition]]: automated reading of text such as serial numbers<ref name = "Demant132">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=132 | isbn=3-540-66410-6}}</ref>
* [[Metrology|Gauging/Metrology]]: measurement of object dimensions (e.g. in [[pixel]]s, [[inch]]es or [[millimeter]]s)<ref name = "Demant191">{{cite book | author=Demant C.| author2=Streicher-Abel B.| author3=Waszkewitz P.| name-list-style=amp| title=Industrial Image Processing: Visual Quality Control in Manufacturing| publisher=Springer-Verlag | date=1999 | page=191 | isbn=3-540-66410-6}}</ref>
* Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or bar code verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For verification of alphanumeric codes, the OCR'd value is compared to the proper or target value. For inspection for blemishes, the measured size of the blemishes may be compared to the maximums allowed by quality standards.<ref name="Demant125"/>
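
As an illustration of template matching, the sketch below uses OpenCV's matchTemplate with normalized cross-correlation to locate a reference pattern in an image and make a simple present/absent decision. The file names and the 0.8 score cutoff are arbitrary assumptions, and this plain form of matching is not invariant to rotation or scale.

<syntaxhighlight lang="python">
import cv2

# Load the inspection image and the reference pattern (file names are placeholders).
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the image and score each position by normalized correlation.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

# Report whether, and where, the pattern was found (0.8 is an arbitrary cutoff).
if best_score >= 0.8:
    print(f"Pattern found at {best_location} with score {best_score:.2f}")
else:
    print("Pattern not found")
</syntaxhighlight>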
===Outputs===
Common outputs from automatic inspection systems are pass/fail decisions.<ref name = Handbook429/> These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information for robot guidance systems.<ref name = NASAarticle/> Additionally, output types include numerical measurement data, data read from codes and characters, counts and classification of objects, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and [[process control]] signals.<ref name = WestRoadmap >West, Perry ''A Roadmap For Building A Machine Vision System'' Pages 1-35</ref><ref name = DemantBook/> This also includes user interfaces, interfaces for the integration of multi-component systems and automated data interchange.<ref name=Handbook709>{{cite book|title=Handbook of Machine Vision | author=Hornberg, Alexander |page=709| date= 2006|publisher=[[Wiley-VCH]]|isbn=978-3-527-40584-8|url=https://books.google.com/books?id=x_1IauK-M2cC&pg=PA709|access-date=2010-11-05}}</ref>