== Techniques for face recognition ==
[[File:Face detection.jpg|thumb|right|Automatic face detection with [[OpenCV]] ]]
While humans can recognize faces without much effort,<ref>{{Cite book|title=Handbook of Face Recognition|last1=Li|first1=Stan Z.|last2=Jain|first2=Anil K.|publisher=Springer Science & Business Media|year=2005|isbn=9780387405957|pages=1}}</ref> facial recognition is a challenging [[pattern recognition]] problem in [[computing]]. Facial recognition systems attempt to identify a human face, which is three-dimensional and changes in appearance with lighting and facial expression, based on its two-dimensional image. To accomplish this computational task, facial recognition systems perform four steps. First, [[face detection]] is used to segment the face from the image background. In the second step, the segmented face image is aligned to account for face [[pose]], image size and photographic properties, such as [[Illumination (image)|illumination]] and [[grayscale]]. The purpose of the alignment process is to enable the accurate localization of facial features in the third step, facial feature extraction. Features such as the eyes, nose and mouth are pinpointed and measured in the image to represent the face. The [[feature vector]] thus established is then, in the fourth step, matched against a database of faces.<ref>{{Cite book|title=Handbook of Face Recognition|last1=Li|first1=Stan Z.|last2=Jain|first2=Anil K.|publisher=Springer Science & Business Media|year=2005|isbn=9780387405957|pages=2}}</ref>

=== Traditional ===
[[Image:eigenfaces.png|thumb|Some [[eigenface]]s from [[AT&T Labs|AT&T Laboratories]] Cambridge]]
Some face recognition [[algorithms]] identify facial features by extracting landmarks, or features, from an image of the subject's face.
For example, an algorithm may analyze the relative position, size, and shape of the eyes, nose, cheekbones, and jaw.<ref>{{cite web|url=http://www.hrsid.com/company/technology/face-recognition|title=Airport Facial Recognition Passenger Flow Management|work=hrsid.com}}</ref> These features are then used to search for other images with matching features.<ref name="Bonsor2">{{cite web|url=http://computer.howstuffworks.com/facial-recognition.htm|title=How Facial Recognition Systems Work|last=Bonsor|first=K.|access-date=June 2, 2008|date=September 4, 2001}}</ref> Other algorithms [[normalization (image processing)|normalize]] a gallery of face images and then compress the face data, saving only the data in the image that is useful for face recognition. A probe image is then compared with the face data.<ref name="Smith2">{{cite web|url=http://www.biometrics.gov/Documents/FaceRec.pdf|title=Face Recognition|last=Smith|first=Kelly|access-date=June 4, 2008}}</ref> One of the earliest successful systems<ref>R. Brunelli and T. Poggio, "Face Recognition: Features versus Templates", IEEE Trans. on PAMI, 1993, (15)10:1042–1052</ref> is based on template matching techniques<ref>R. Brunelli, ''Template Matching Techniques in Computer Vision: Theory and Practice'', Wiley, {{ISBN|978-0-470-51706-2}}, 2009 ''([http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470517069.html] TM book)''</ref> applied to a set of salient facial features, providing a sort of compressed face representation. Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, a statistical approach that distills an image into values and compares those values with templates to eliminate variances. Some classify these algorithms into two broad categories: holistic and feature-based models.
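The holistic approach can be sketched with a toy eigenface computation on synthetic data: gallery images are flattened to vectors, the principal components of the mean-centred gallery serve as "eigenfaces", and a probe is matched by nearest neighbour in the reduced face space. All data, image sizes, and dimensions below are illustrative, not drawn from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery": 20 synthetic face images of 8x8 pixels, flattened to vectors.
gallery = rng.random((20, 64))

# Eigenfaces: principal components of the mean-centred gallery.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
# SVD yields the principal axes; keep the top 5 as "eigenfaces".
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:5]                        # shape (5, 64)

# Project the gallery and a probe image into the 5-dimensional face space.
gallery_codes = centred @ eigenfaces.T     # shape (20, 5)
probe = gallery[7] + 1e-6 * rng.random(64)   # slightly noisy copy of face 7
probe_code = (probe - mean_face) @ eigenfaces.T

# Holistic matching: nearest neighbour in face space.
distances = np.linalg.norm(gallery_codes - probe_code, axis=1)
best_match = int(np.argmin(distances))
```

A real system would precede this with the detection and alignment steps described above; here the "images" are already assumed to be segmented and aligned.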
Holistic models attempt to recognize the face in its entirety, while feature-based models subdivide the face into components such as the eyes, nose and mouth and analyze each feature as well as its spatial location with respect to other features.<ref>{{Cite book|title=Advances in Biometrics: International Conference, ICB 2006, Hong Kong, China, January 5–7, 2006, Proceedings|last1=Zhang|first1=David|last2=Jain|first2=Anil|publisher=Springer Science & Business Media|year=2006|isbn=9783540311119|location=Berlin|pages=183}}</ref> Popular recognition algorithms include [[principal component analysis]] using [[eigenface]]s, [[linear discriminant analysis]], [[elastic matching|elastic bunch graph matching]] using the Fisherface algorithm, the [[hidden Markov model]], [[multilinear subspace learning]] using [[tensor]] representation, and the neuronally motivated [[dynamic link matching]].{{citation needed|date=September 2020}}<ref>{{Cite journal|title=A Study on the Design and Implementation of Facial Recognition Application System|journal=International Journal of Bio-Science and Bio-Technology}}</ref> Modern facial recognition systems make increasing use of machine learning techniques such as [[deep learning]].<ref>H. Ugail, ''Deep face recognition using full and partial face images'', Elsevier, {{ISBN|978-0-12-822109-9}}, 2022 ''([https://doi.org/10.1016/B978-0-12-822109-9.00015-1] Advanced Methods and Deep Learning in Computer Vision)''</ref>

=== Human identification at a distance (HID) ===
To enable human identification at a distance (HID), low-resolution images of faces are enhanced using [[face hallucination]]. Faces in [[CCTV]] imagery are often very small, but because facial recognition algorithms that identify and plot facial features require high-resolution images, resolution enhancement techniques have been developed to enable facial recognition systems to work with imagery that has been captured in environments with a high [[signal-to-noise ratio]].
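The example-based flavour of such resolution enhancement can be sketched as nearest-neighbour pixel substitution: a low-resolution patch is matched against a training set of low/high-resolution patch pairs, and the paired high-resolution pixels are substituted for it. This is a minimal illustration on synthetic patches; production face hallucination models are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set: pairs of high-res (4x4) and low-res (2x2) patches, flattened.
# In a real system these would come from aligned face images at two resolutions.
n_pairs = 50
high_patches = rng.random((n_pairs, 16))   # 4x4 patches, flattened

def downsample(hp):
    """Simulate the low-res version by 2x2 block averaging of a 4x4 patch."""
    return hp.reshape(2, 2, 2, 2).mean(axis=(1, 3)).ravel()

low_patches = np.array([downsample(h.reshape(4, 4)).ravel()
                        for h in high_patches])

def hallucinate(low_patch):
    """Substitute the high-res patch whose paired low-res patch is nearest."""
    d = np.linalg.norm(low_patches - low_patch, axis=1)
    return high_patches[int(np.argmin(d))]

# Enhance a probe patch drawn from the training distribution.
probe_low = low_patches[3]
restored = hallucinate(probe_low)
```

Real face hallucination operates on many overlapping patches at once and blends the substituted pixels, but the match-then-substitute core is the same.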
Face hallucination algorithms, applied to images before they are submitted to the facial recognition system, use example-based machine learning with pixel substitution or [[nearest neighbour distribution]] indexes that may also incorporate demographic and age-related facial characteristics. Use of face hallucination techniques improves the performance of high-resolution facial recognition algorithms and may be used to overcome the inherent limitations of super-resolution algorithms. Face hallucination techniques are also used to pre-treat imagery where faces are disguised. Here the disguise, such as sunglasses, is removed and the face hallucination algorithm is applied to the image. Such face hallucination algorithms need to be trained on similar face images with and without the disguise. To fill in the area uncovered by removing the disguise, face hallucination algorithms need to correctly map the entire state of the face, which may not be possible due to the momentary facial expression captured in the low-resolution image.<ref>{{Cite book|title=Reliable Face Recognition Methods: System Design, Implementation and Evaluation|author=Harry Wechsler|publisher=Springer Science & Business Media|year=2009|isbn=9780387384641|pages=196}}</ref>

=== 3-dimensional recognition ===
[[File:3D_face.stl|thumb|right|3D model of a human face]]
[[Three-dimensional face recognition]] uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of the face, such as the contour of the eye sockets, nose, and chin.<ref name="Williams2" /> One advantage of 3D face recognition is that, unlike other techniques, it is not affected by changes in lighting. It can also identify a face from a range of viewing angles, including a profile view.<ref name="Williams2" /><ref name="Bonsor2" /> Three-dimensional data points from a face vastly improve the precision of face recognition.
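One reason 3D data improves precision is that inter-landmark distances in 3D are invariant under rigid rotation of the head, whereas their 2D projections are not. A small sketch with hypothetical landmark coordinates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3D facial landmarks (eye corners, nose tip, chin, ...), in mm.
face = rng.random((6, 3)) * 100

# The same face captured from another viewpoint: rotate 40 degrees about z.
theta = np.radians(40)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
face_rotated = face @ rot.T

def distance_signature(landmarks):
    """Matrix of pairwise inter-landmark distances, invariant to rigid rotation."""
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    return np.linalg.norm(diff, axis=2)

# The signatures agree despite the changed viewing angle.
sig_front = distance_signature(face)
sig_turned = distance_signature(face_rotated)
```

A 2D recognizer sees foreshortened distances that change with pose; the 3D signature does not, which is one way 3D capture buys viewpoint robustness.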
Three-dimensional face recognition research is enabled by the development of sophisticated sensors that project structured light onto the face.<ref name="Crawford2">{{cite web|url=http://spie.org/x57306.xml|title=Facial recognition progress report|last=Crawford|first=Mark|publisher=SPIE Newsroom|access-date=October 6, 2011}}</ref> 3D matching techniques are sensitive to expressions, so researchers at [[Technion]] applied tools from [[metric geometry]] to treat expressions as [[isometries]].<ref name="Kimmel2">{{cite web|url=https://www.cs.technion.ac.il/~ron/PAPERS/BroBroKimIJCV05.pdf|title=Three-dimensional face recognition|last=Kimmel|first=Ron|access-date=January 1, 2005}}</ref> A newer method of capturing 3D images of faces uses three tracking cameras pointed at different angles: one at the front of the subject, a second to the side, and a third at an angle. The cameras work together to track a subject's face in real time and to detect and recognize it.<ref>{{cite book|last1=Duhn|first1=S. von|last2=Ko|first2=M. J.|date=September 1, 2007|pages=1–6|doi=10.1109/BCC.2007.4430529|last3=Yin|first3=L.|last4=Hung|first4=T.|last5=Wei|first5=X.|title=2007 Biometrics Symposium |chapter=Three-View Surveillance Video Based Face Modeling for Recogniton |isbn=978-1-4244-1548-9|s2cid=25633949}}</ref>

=== Thermal cameras ===
[[File:Ir girl.png|thumb|right|A [[false color|pseudocolor]] image of two people taken in long-wavelength infrared (body-temperature thermal) light]]
A different way of capturing input data for face recognition is to use [[Thermographic camera|thermal cameras]]. With this procedure the cameras detect only the shape of the head, ignoring accessories such as glasses, hats, or makeup.<ref name=":5" /> Unlike conventional cameras, thermal cameras can capture facial imagery even in low-light and nighttime conditions without using a flash and exposing the position of the camera.<ref name=":6">{{Cite news|url=https://www.azorobotics.com/News.aspx?newsID=9840|title=Army Builds Face Recognition Technology that Works in Low-Light Conditions|date=April 18, 2018|work=AZoRobotics|access-date=August 17, 2018}}</ref> However, databases of thermal images for face recognition are limited.
Efforts to build databases of thermal face images date back to 2004.<ref name=":5">{{cite web|url=http://dl.acm.org/citation.cfm?id=1896300.1896448|title=Thermal Face Recognition in an Operational Scenario|last1=Socolinsky|first1=Diego A.|last2=Selinger|first2=Andrea|series=CVPR'04|date=January 1, 2004|publisher=IEEE Computer Society|pages=1012–1019|via=ACM Digital Library}}</ref> By 2016, several databases existed, including the IIITD-PSE and the Notre Dame thermal face database.<ref>{{Cite book|title=Face Recognition Across the Imaging Spectrum|author=Thirimachos Bourlai|publisher=Springer|year=2016|isbn=9783319285016|pages=142}}</ref> Current thermal face recognition systems are not able to reliably detect a face in a thermal image taken in an outdoor environment.<ref>{{Cite book|title=Face Recognition Across the Imaging Spectrum|author=Thirimachos Bourlai|publisher=Springer|year=2016|isbn=9783319285016|pages=140}}</ref> In 2018, researchers from the [[United States Army Research Laboratory|U.S. Army Research Laboratory (ARL)]] developed a technique that allows facial imagery obtained with a thermal camera to be matched against databases captured with conventional cameras.<ref>{{Cite news|url=https://www.arl.army.mil/www/default.cfm?article=3199|archive-url=https://web.archive.org/web/20180422075809/http://www.arl.army.mil/www/default.cfm?article=3199|url-status=dead|archive-date=April 22, 2018|title=Army develops face recognition technology that works in the dark|date=April 16, 2018|work=Army Research Laboratory|access-date=August 17, 2018}}</ref> Known as a cross-spectrum synthesis method because it bridges facial recognition across two different imaging modalities, this method synthesizes a single image by analyzing multiple facial regions and details.<ref name=":7">{{Cite conference|last1=Riggan|first1=Benjamin|last2=Short|first2=Nathaniel|last3=Hu|first3=Shuowen |title=Thermal to Visible Synthesis of Face Images Using Multiple Regions |date=March 2018 |conference=2018 IEEE Winter Conference on Applications of Computer Vision (WACV) |pages=30–38 |doi=10.1109/WACV.2018.00010|bibcode=2018arXiv180307599R|arxiv=1803.07599|url=https://www.researchgate.net/publication/323932058}}</ref> It consists of a non-linear regression model that maps a specific thermal image into a corresponding visible facial image, and an optimization problem that projects the latent projection back into the image space.<ref name=":6" /> ARL scientists have noted that the approach works by combining global information (i.e. features across the entire face) with local information (i.e. features regarding the eyes, nose, and mouth).<ref>{{Cite news|title=U.S. Army's AI facial recognition works in the dark|last=Cole|first=Sally|date=June 2018|work=Military Embedded Systems|page=8}}</ref> According to performance tests conducted at ARL, the multi-region cross-spectrum synthesis model demonstrated a performance improvement of about 30% over baseline methods and about 5% over state-of-the-art methods.<ref name=":7" />
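The core of the cross-spectrum idea, learning a regression from thermal to visible imagery and then matching in the visible domain, can be sketched with a linear least-squares stand-in for the non-linear, multi-region model described above. Everything below is synthetic and illustrative; it is not the ARL method itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic paired training data: thermal and visible patches of the same faces.
# A hidden linear relation plus noise stands in for the real cross-spectrum map.
n, d = 200, 32
true_map = rng.random((d, d))
thermal_train = rng.random((n, d))
visible_train = thermal_train @ true_map + 0.01 * rng.standard_normal((n, d))

# Fit the thermal-to-visible mapping by least squares (a linear stand-in for
# the non-linear regression model used in cross-spectrum synthesis).
learned_map, *_ = np.linalg.lstsq(thermal_train, visible_train, rcond=None)

# Gallery of visible-light enrolments, each with a known thermal counterpart.
gallery_thermal = rng.random((10, d))
gallery_visible = gallery_thermal @ true_map

# New thermal capture of subject 4: synthesize a visible image from it,
# then match against the visible gallery by nearest neighbour.
thermal_probe = gallery_thermal[4] + 0.001 * rng.standard_normal(d)
synthesized = thermal_probe @ learned_map
match = int(np.argmin(np.linalg.norm(gallery_visible - synthesized, axis=1)))
```

The actual method additionally analyzes multiple facial regions and solves an optimization problem to project back into image space; this sketch keeps only the synthesize-then-match structure.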