Facial recognition system
== History of facial recognition technology ==
Automated facial recognition was pioneered in the 1960s by [[Woody Bledsoe]], [[Helen Chan Wolf]], and Charles Bisson, whose work focused on teaching computers to recognize human faces.<ref name="Nilsson 2009">{{Cite book|url=https://books.google.com/books?id=nUJdAAAAQBAJ&q=helen+chan+wolf+artificial+intelligence&pg=PT141|title=The Quest for Artificial Intelligence|last=Nilsson|first=Nils J.|date=2009-10-30|publisher=Cambridge University Press|isbn=978-1-139-64282-8|language=en}}</ref> Their early facial recognition project was dubbed "man-machine" because a human first needed to establish the coordinates of facial features in a photograph before they could be used by a computer for recognition. Using a [[graphics tablet]], a human would pinpoint the coordinates of facial features, such as the pupil centers, the inside and outside corners of the eyes, and the [[widow's peak]] in the hairline. The coordinates were used to calculate 20 individual distances, including the width of the mouth and of the eyes. A human operator could process about 40 pictures an hour, building a database of these computed distances. A computer would then automatically compare a query photograph's distances against those of each stored photograph and return the closest records as possible matches.<ref name="Nilsson 2009"/> In 1970, [[Takeo Kanade]] publicly demonstrated a face-matching system that located anatomical features such as the chin and calculated the distance ratios between facial features without human intervention. Later tests revealed that the system could not always reliably identify facial features.
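The "man-machine" pipeline described above amounts to nearest-neighbor matching over hand-measured distance vectors. A minimal, purely illustrative sketch (the data and record names are hypothetical, and the historical system used 20 distances rather than four):

```python
# Illustrative sketch of the 1960s "man-machine" approach: a human
# measures distances between facial landmarks; the computer returns
# the stored record whose distance vector is closest to the query.
import math

# Hypothetical database: record name -> vector of inter-feature distances.
database = {
    "record_a": [52.0, 31.5, 18.2, 44.0],
    "record_b": [49.5, 33.0, 20.1, 41.7],
}

def closest_record(query, db):
    """Return the stored record nearest to the query in Euclidean distance."""
    return min(db, key=lambda name: math.dist(query, db[name]))

print(closest_record([50.0, 32.8, 19.9, 42.0], database))  # → record_b
```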
Nonetheless, interest in the subject grew, and in 1977 Kanade published the first detailed book on facial recognition technology.<ref>{{Cite book|title=The History of Information Security: A Comprehensive Handbook|last1=de Leeuw| first1=Karl| last2=Bergstra| first2=Jan| publisher=Elsevier| year=2007| isbn=9780444516084|pages=266}}</ref> In 1993, the [[Defense Advanced Research Projects Agency]] (DARPA) and the [[Army Research Laboratory]] (ARL) established the face recognition technology program [[FERET (facial recognition technology)|FERET]] to develop "automatic face recognition capabilities" that could be employed in a productive real-life environment "to assist security, intelligence, and law enforcement personnel in the performance of their duties." Face recognition systems that had been trialed in research labs were evaluated. The FERET tests found that while the performance of existing automated facial recognition systems varied, a handful of existing methods could viably be used to recognize faces in still images taken in a controlled environment.<ref>{{Cite book|title=Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance|last1=Gates| first1=Kelly| publisher=NYU Press| year=2011| isbn=9780814732090|pages=48–49}}</ref> The FERET tests spawned three US companies that sold automated facial recognition systems. Vision Corporation and Miros Inc were founded in 1994 by researchers who used the results of the FERET tests as a selling point.
[[Viisage Technology]] was established by an [[identification card]] defense contractor in 1996 to commercially exploit the rights to the facial recognition algorithm developed by [[Alex Pentland]] at [[MIT]].<ref>{{Cite book|title=Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance|last1=Gates| first1=Kelly| publisher=NYU Press| year=2011| isbn=9780814732090|pages=49–50}}</ref> Following the 1993 FERET face-recognition vendor test, the [[Department of Motor Vehicles]] (DMV) offices in [[West Virginia]] and [[New Mexico]] became the first DMV offices to use automated facial recognition systems to prevent people from obtaining multiple driving licenses under different names. [[Driver's licenses in the United States]] were at that point a commonly accepted form of [[photo identification]]. DMV offices across the United States were undergoing a technological upgrade and were in the process of establishing databases of digital ID photographs. This enabled DMV offices to deploy the facial recognition systems on the market to compare photographs for new driving licenses against the existing DMV database.<ref>{{Cite book|title=Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance|last1=Gates| first1=Kelly| publisher=NYU Press| year=2011| isbn=9780814732090|pages=52}}</ref> DMV offices became one of the first major markets for automated facial recognition technology and introduced US citizens to facial recognition as a standard method of identification.<ref>{{Cite book|title=Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance|last1=Gates| first1=Kelly| publisher=NYU Press| year=2011| isbn=9780814732090|pages=53}}</ref> The increase of the [[Incarceration in the United States|US prison population]] in the 1990s prompted U.S.
states to establish connected and automated identification systems that incorporated digital [[biometric]] databases; in some instances these included facial recognition. In 1999, [[Minnesota]] incorporated the facial recognition system FaceIT by Visionics into a [[mug shot]] booking system that allowed police, judges and court officers to track criminals across the state.<ref>{{Cite book|title=Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance|last1=Gates| first1=Kelly| publisher=NYU Press| year=2011| isbn=9780814732090|pages=54}}</ref> [[File:Mona Lisa eigenvector grid.png|thumb|In this [[shear mapping]] the red arrow changes direction but the blue arrow does not; the blue arrow is an eigenvector of the mapping.]] [[File:Haar Feature that looks similar to the bridge of the nose is applied onto the face.jpg|thumb|The Viola–Jones algorithm for face detection uses [[Haar-like feature]]s to locate faces in an image. Here a Haar feature that resembles the bridge of the nose is applied to the face.]] Until the 1990s, facial recognition systems were developed primarily using [[photographic portrait]]s of human faces. Research on reliably locating a face within an image that contains other objects gained traction in the early 1990s with [[principal component analysis]] (PCA). The PCA method of face detection is also known as [[Eigenface]] and was developed by Matthew Turk and Alex Pentland.<ref name="auto">{{Cite book|title=Perception and Machine Intelligence: First Indo-Japan Conference, PerMIn 2012, Kolkata, India, January 12–13, 2011, Proceedings|editor1=Malay K. Kundu |editor2=Sushmita Mitra |editor3=Debasis Mazumdar |editor4=Sankar K. Pal| publisher=Springer Science & Business Media| year=2012| isbn=9783642273865|pages=29}}</ref> Turk and Pentland combined the conceptual approach of the [[Karhunen–Loève theorem]] and [[factor analysis]] to develop a [[linear model]].
Eigenfaces are determined based on global and [[orthogonal]] features in human faces. A human face is calculated as a [[weighted]] combination of a number of Eigenfaces. Because only a small number of Eigenfaces is needed to encode the human faces of a given population, Turk and Pentland's PCA face detection method greatly reduced the amount of data that had to be processed to detect a face. In 1994, Pentland defined Eigenface features, including eigen eyes, eigen mouths and eigen noses, to advance the use of PCA in facial recognition. In 1997, the PCA Eigenface method of face recognition<ref>{{Cite book|title=Reliable Face Recognition Methods: System Design, Implementation and Evaluation|first1=Harry |last1=Wechsler| publisher=Springer Science & Business Media| year=2009| isbn=9780387384641|pages=11–12}}</ref> was improved upon using [[linear discriminant analysis]] (LDA) to produce [[Fisherface]]s.<ref>{{Cite book|title=Neural Information Processing: 13th International Conference, ICONIP 2006, Hong Kong, China, October 3–6, 2006, Proceedings, Part II|editor1=Jun Wang |editor2=Laiwan Chan |editor3=DeLiang Wang | publisher=Springer Science & Business Media| year=2012| isbn=9783540464822|pages=198}}</ref> Fisherfaces became the dominant method in feature-based face recognition, while Eigenfaces also continued to be used for face reconstruction.
In these approaches, no global structure of the face linking the facial features or parts is calculated.<ref>{{Cite book|title=Reliable Face Recognition Methods: System Design, Implementation and Evaluation|first1=Harry |last1=Wechsler| publisher=Springer Science & Business Media| year=2009| isbn=9780387384641|pages=12}}</ref> Purely feature-based approaches to facial recognition were overtaken in the late 1990s by the Bochum system, which used [[Gabor filter]]s to record facial features and computed a [[regular grid|grid]] of the face structure to link the features.<ref>{{Cite book|title=Reliable Face Recognition Methods: System Design, Implementation and Evaluation|first1=Harry |last1=Wechsler| publisher=Springer Science & Business Media| year=2009| isbn=9780387384641|pages=12}}</ref> [[Christoph von der Malsburg]] and his research team at the [[University of Bochum]] developed [[Elastic Bunch Graph Matching]] in the mid-1990s to extract a face from an image using skin segmentation.<ref name="auto"/> By 1997, the face detection method developed by Malsburg outperformed most other facial detection systems on the market. The so-called "Bochum system" was sold commercially as ZN-Face to operators of airports and other busy locations. The software was "robust enough to make identifications from less-than-perfect face views.
It can also often see through such impediments to identification as mustaches, beards, changed hairstyles and glasses—even sunglasses".<ref>{{cite web|url=https://www.sciencedaily.com/releases/1997/11/971112070100.htm|title=Mugspot Can Find A Face In The Crowd – Face-Recognition Software Prepares To Go To Work In The Streets|date=November 12, 1997|access-date=November 6, 2007|website=ScienceDaily}}</ref> Real-time face detection in video footage became possible in 2001 with the [[Viola–Jones object detection framework]] for faces.<ref>{{Cite book|title=Perception and Machine Intelligence: First Indo-Japan Conference, PerMIn 2012, Kolkata, India, January 12–13, 2011, Proceedings|editor1=Malay K. Kundu|editor2=Sushmita Mitra|editor3=Debasis Mazumdar|editor4=Sankar K. Pal| publisher=Springer Science & Business Media| year=2012| isbn=9783642273865|pages=29}}</ref> [[Paul Viola]] and [[Michael Jones (scientist)|Michael Jones]] combined [[Haar-like feature]]s with the [[AdaBoost]] learning algorithm to build the first real-time frontal-view face detector.<ref>{{Cite book|title=Handbook of Face Recognition|last1=Li|first1=Stan Z.| last2 = Jain| first2 = Anil K.|publisher=Springer Science & Business Media|year=2005|isbn= 9780387405957|pages=14–15}}</ref> By 2015, the Viola–Jones algorithm had been implemented using small low-power [[detectors]] on [[handheld devices]] and [[embedded systems]].
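What makes Viola–Jones fast enough for real time is the integral image (summed-area table): the sum of any rectangle of pixels takes four lookups, so a Haar-like feature, a difference of adjacent rectangle sums, is evaluated in constant time. A minimal sketch with a tiny hypothetical image:

```python
# Sketch of the integral-image trick behind Viola–Jones face detection.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), size w x h:
    four table lookups, regardless of rectangle size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_haar(ii, x, y, w, h):
    """Edge-type Haar feature: left half minus right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

img = [[1, 1, 5, 5],
       [1, 1, 5, 5]]   # dark left half, bright right half
ii = integral_image(img)
print(two_rect_haar(ii, 0, 0, 4, 2))  # → -16
```

AdaBoost then selects and weights thousands of such features into a cascade of classifiers, so most non-face windows are rejected after only a few feature evaluations.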
Therefore, the Viola–Jones algorithm has not only broadened the practical application of face recognition systems but has also been used to support new features in [[user interfaces]] and [[teleconferencing]].<ref>{{Cite book|title= Face Detection and Recognition: Theory and Practice|last1=Kumar Datta|first1=Asit | last2 = Datta|first2= Madhura| last3 = Kumar Banerjee|first3 =Pradipta| publisher=CRC|year=2015|isbn=9781482226577|pages=123}}</ref>

Ukraine is using the US-based [[Clearview AI]] facial recognition software to identify dead Russian soldiers. Ukraine has conducted 8,600 searches and identified the families of 582 deceased Russian soldiers. The IT volunteer section of the Ukrainian army, which uses the software, subsequently contacts the families of the deceased soldiers to raise awareness of Russian activities in Ukraine. The main goal is to destabilize the Russian government; it can be seen as a form of [[psychological warfare]]. About 340 Ukrainian government officials in five government ministries are using the technology, which is also used to catch spies who might try to enter Ukraine.<ref>{{Cite web |url=https://www.washingtonexaminer.com/news/ukraine-uses-facial-recognition-software-to-identify-dead-russian-soldiers |title= Ukraine uses facial recognition software to identify dead Russian soldiers |last1=Severi |first1=Misty |date=15 April 2022 }}</ref> Clearview AI's facial recognition database is available only to government agencies, which may use the technology only to assist law enforcement investigations or in connection with national security.<ref>{{Cite web |url=https://www.dailynews.com/2022/05/15/facial-recognition-technology-is-a-valuable-tool/ |title= Facial recognition technology is a valuable tool |website= [[Los Angeles Daily News]] |date=15 May 2022 }}</ref> The software was donated to Ukraine by Clearview AI. Russia is thought to be using it to find anti-war activists.
Clearview AI was originally designed for US law enforcement, and using it in war raises new ethical concerns. One London-based surveillance expert, Stephen Hare, is concerned it might make the Ukrainians appear inhuman: "Is it actually working? Or is it making [Russians] say: 'Look at these lawless, cruel Ukrainians, doing this to our boys'?"<ref>{{Cite web |url= https://www.businessinsider.com/ukraine-sending-photos-of-dead-russian-soldiers-home-moms-2022-4 |title= Ukraine is using facial recognition to ID dead Russian soldiers and send photos of corpses home to their moms: report |last1=Italiano |first1=Laura |website= [[Business Insider]] |date=15 April 2022}}</ref>