== Definition and related algorithms ==
Face detection can be regarded as a specific case of [[object-class detection]]. In object-class detection, the task is to find the locations and sizes of all objects in an image that belong to a given class, such as upper torsos, pedestrians, or cars. Face detection answers two questions: are there any human faces in the collected images or video, and where is each face located?

Face-detection algorithms typically focus on the detection of frontal human faces. The task is analogous to image matching, in which the image of a person is compared bit by bit against images stored in a database; any change to the facial features in the database invalidates the matching process.<ref name="a">{{cite journal |last1=Sheu |first1=Jia-Shing |last2=Hsieh |first2=Tsu-Shien |last3=Shou |first3=Ho-Nien |title=Automatic Generation of Facial Expression Using Triangular Geometric Deformation |journal=Journal of Applied Research and Technology |date=1 December 2014 |volume=12 |issue=6 |pages=1115–1130 |doi=10.1016/S1665-6423(14)71671-2 |language=en |issn=2448-6736 |doi-access=free}}</ref>

One reliable face-detection approach is based on the [[genetic algorithm]] and the [[Eigenface|eigen-face]] technique.<ref>{{Cite journal |doi=10.1109/5.628712 |title=Face recognition: Eigenface, elastic matching, and neural nets |journal=Proceedings of the IEEE |volume=85 |issue=9 |pages=1423–1435 |year=1997 |last1=Jun Zhang |last2=Yong Yan |last3=Lades |first3=M.}}</ref> First, possible human eye regions are detected by testing all the valley regions in the gray-level image. The genetic algorithm is then used to generate all the possible face regions, which include the eyebrows, the iris, the nostrils and the mouth corners.<ref name="a"/> Each face candidate is normalized to reduce both the lighting effect, caused by uneven illumination, and the shirring effect, due to head movement. The fitness value of each candidate is measured by its projection onto the eigen-faces. After a number of iterations, the face candidates with a high fitness value are selected for further verification. At this stage, the facial symmetry is measured and the presence of the different facial features is verified for each candidate.{{CN|date=February 2024}}
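The fitness measurement described above can be illustrated with a minimal sketch, in which a candidate's fitness is taken to be how well the candidate is reconstructed from its projection onto a set of eigenfaces. The function name, the exact fitness formula, and the assumption that the eigenfaces and mean face were precomputed (for example by [[principal component analysis]] of a training set) are illustrative, not taken from the cited papers:

<syntaxhighlight lang="python">
import numpy as np

def eigenface_fitness(candidate, mean_face, eigenfaces):
    """Score a normalized face candidate by how well its projection onto
    the eigenfaces reconstructs it.

    candidate  -- 1-D array: the candidate region, flattened to a vector
    mean_face  -- 1-D array: mean of the training faces
    eigenfaces -- 2-D array of shape (k, d): each row a unit-norm eigenface

    Illustrative sketch only; the fitness formula is an assumption.
    """
    centered = candidate - mean_face
    weights = eigenfaces @ centered           # project into "face space"
    reconstruction = eigenfaces.T @ weights   # back-project from face space
    error = np.linalg.norm(centered - reconstruction)
    # Candidates lying close to the face subspace reconstruct well and
    # therefore receive a higher fitness value.
    return 1.0 / (1.0 + error)
</syntaxhighlight>

In a genetic-algorithm loop, each candidate face region would be normalized, scored with such a function, and the highest-fitness candidates carried forward to the symmetry and feature-verification stage.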