===Facial affect detection===
The detection and processing of facial expressions is achieved through various methods such as [[optical flow]], [[hidden Markov model]]s, [[Artificial neural network|neural network]] processing, or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody,<ref name="face-prosody">{{cite conference | url = http://www.image.ece.ntua.gr/php/savepaper.php?id=447 | first1 = G. | last1 = Caridakis | first2 = L. | last2 = Malatesta | first3 = L. | last3 = Kessous | first4 = N. | last4 = Amir | first5 = A. | last5 = Raouzaiou | first6 = K. | last6 = Karpouzis | title = Modeling naturalistic affective states via facial and vocal expressions recognition | conference = International Conference on Multimodal Interfaces (ICMI'06) | location = Banff, Alberta, Canada | date = November 2–4, 2006 }}</ref> facial expressions and hand gestures,<ref name="face-gesture">{{cite book | chapter-url = http://www.image.ece.ntua.gr/php/savepaper.php?id=334 | first1 = T. | last1 = Balomenos | first2 = A. | last2 = Raouzaiou | first3 = S. | last3 = Ioannou | first4 = A. | last4 = Drosopoulos | first5 = K. | last5 = Karpouzis | first6 = S. | last6 = Kollias | chapter = Emotion Analysis in Man-Machine Interaction Systems | editor1-first = Samy | editor1-last = Bengio | editor2-first = Herve | editor2-last = Bourlard | title = Machine Learning for Multimodal Interaction | series = [[Lecture Notes in Computer Science]] | volume = 3361 | year = 2004 | pages = 318–328 | publisher = [[Springer-Verlag]] }}</ref> or facial expressions with speech and text for multimodal data and metadata analysis) to provide a more robust estimate of the subject's emotional state.
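As an illustration of how such fusion can work, the following minimal sketch performs decision-level ("late") fusion: each modality's classifier is assumed to output a probability distribution over the same six emotion labels, and the two distributions are combined by a weighted average. The labels, weights, and example scores here are illustrative assumptions, not values from the cited systems.

<syntaxhighlight lang="python">
import numpy as np

# Emotion labels assumed to be shared by both modality classifiers
# (illustrative; a real system defines its own label set).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse_modalities(face_probs, prosody_probs, face_weight=0.6):
    """Decision-level ("late") fusion: combine per-modality probability
    distributions over the same emotion classes by weighted averaging."""
    face = np.asarray(face_probs, dtype=float)
    prosody = np.asarray(prosody_probs, dtype=float)
    fused = face_weight * face + (1.0 - face_weight) * prosody
    return fused / fused.sum()  # renormalize to a valid distribution

# Hypothetical classifier outputs: the face channel strongly favors
# happiness, while prosody is more ambiguous; fusing the two keeps the
# estimate robust when either channel alone is noisy.
face = [0.05, 0.05, 0.05, 0.70, 0.05, 0.10]
prosody = [0.10, 0.05, 0.10, 0.40, 0.15, 0.20]
fused = fuse_modalities(face, prosody)
print(EMOTIONS[int(np.argmax(fused))])  # -> happiness
</syntaxhighlight>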
====Facial expression databases====
{{Main|Facial expression databases}}
Creating an emotion database is a difficult and time-consuming task, but it is an essential step in building a system that recognizes human emotions. Most of the publicly available emotion databases include posed facial expressions only. In posed expression databases, the participants are asked to display different basic emotional expressions, while in spontaneous expression databases the expressions are natural. Spontaneous emotion elicitation requires significant effort in the selection of proper stimuli, which can lead to a rich display of the intended emotions. In addition, the process involves manual tagging of emotions by trained individuals, which makes the databases highly reliable. Since the perception of expressions and their intensity is subjective, annotation by experts is essential for validation.

Researchers work with three types of databases: databases of peak expression images only, databases of image sequences portraying an emotion from neutral to its peak, and video clips with emotional annotations. Many facial expression databases have been created and made public for expression recognition purposes. Two of the widely used databases are CK+ and JAFFE.

====Emotion classification====
{{Main|Emotion classification}}
Through cross-cultural research on the Fore people of Papua New Guinea at the end of the 1960s, [[Paul Ekman]] proposed the idea that facial expressions of emotion are not culturally determined but universal. Thus, he suggested that they are biological in origin and can be safely and correctly categorized.<ref name="Ekman, P. 1969"/> In 1972, he officially put forth six basic emotions:<ref>{{cite conference | last = Ekman | first = Paul | author-link = Paul Ekman | year = 1972 | title = Universals and Cultural Differences in Facial Expression of Emotion | editor-first = J. | editor-last = Cole | conference = Nebraska Symposium on Motivation | location = Lincoln, Nebraska | publisher = University of Nebraska Press | pages = 207–283 }}</ref>
* [[Anger]]
* [[Disgust]]
* [[Fear]]
* [[Happiness]]
* [[Sadness]]
* [[Surprise (emotion)|Surprise]]

However, in the 1990s Ekman expanded his list of basic emotions to include a range of positive and [[negative emotion]]s, not all of which are encoded in facial muscles.<ref>{{Cite book|last=Ekman |first=Paul |author-link=Paul Ekman |year=1999 |url=http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf |contribution=Basic Emotions |editor1-first=T |editor1-last=Dalgleish |editor2-first=M |editor2-last=Power |title=Handbook of Cognition and Emotion |place=Sussex, UK |publisher=John Wiley & Sons |url-status=dead |archive-url=https://web.archive.org/web/20101228085345/http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf |archive-date=2010-12-28 }}.</ref> The newly included emotions are:
# [[Amusement]]
# [[Contempt]]
# [[Contentment]]
# [[Embarrassment]]
# [[Anticipation (emotion)|Excitement]]
# [[Guilt (emotion)|Guilt]]
# [[Pride|Pride in achievement]]
# [[Relief (emotion)|Relief]]
# [[Contentment|Satisfaction]]
# [[Pleasure|Sensory pleasure]]
# [[Shame]]

====Facial Action Coding System====
{{Main|Facial Action Coding System}}
Psychologists have conceived a system to formally categorize the physical expression of emotions on faces. The central concept of the [[Facial Action Coding System]] (FACS), created by Paul Ekman and Wallace V. Friesen in 1978 based on earlier work by [[Carl-Herman Hjortsjö]],<ref>[http://face-and-emotion.com/dataface/facs/description.jsp "Facial Action Coding System (FACS) and the FACS Manual"] {{webarchive |url=https://web.archive.org/web/20131019130324/http://face-and-emotion.com/dataface/facs/description.jsp |date=October 19, 2013 }}. A Human Face. Retrieved 21 March 2011.</ref> is the action unit (AU): a contraction or relaxation of one or more muscles. Psychologists have proposed the following classification of six basic emotions according to their action units ("+" here means "and"):

{| class="wikitable sortable"
|-
! Emotion !! Action units
|-
| Happiness || 6+12
|-
| Sadness || 1+4+15
|-
| Surprise || 1+2+5B+26
|-
| Fear || 1+2+4+5+20+26
|-
| Anger || 4+5+7+23
|-
| Disgust || 9+15+16
|-
| Contempt || R12A+R14A
|}
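The table above can be read as a small rule base: an emotion is suggested when all of the action units in its combination are detected. The following minimal sketch expresses this as a subset test over AU codes encoded as strings, which preserves intensity and side markers such as "5B" and "R12A". The function name and the upstream AU detector feeding it are illustrative assumptions, not a standard FACS API.

<syntaxhighlight lang="python">
# Rule base transcribed directly from the FACS table above.
FACS_RULES = {
    "Happiness": {"6", "12"},
    "Sadness":   {"1", "4", "15"},
    "Surprise":  {"1", "2", "5B", "26"},
    "Fear":      {"1", "2", "4", "5", "20", "26"},
    "Anger":     {"4", "5", "7", "23"},
    "Disgust":   {"9", "15", "16"},
    "Contempt":  {"R12A", "R14A"},
}

def match_emotions(detected_aus):
    """Return every emotion whose full AU combination is present.

    As the text notes, AU combinations do not map 1:1 onto emotions,
    so several rules (or none) may fire for a single face.
    """
    aus = set(detected_aus)
    return [emotion for emotion, rule in FACS_RULES.items()
            if rule <= aus]  # subset test: all required AUs detected

print(match_emotions({"6", "12", "25"}))  # -> ['Happiness']
print(match_emotions({"1", "4", "15"}))   # -> ['Sadness']
</syntaxhighlight>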
====Challenges in facial detection====
As with every computational practice, in affect detection by facial processing some obstacles must be surpassed in order to fully unlock the hidden potential of the algorithm or method employed. In the early days of almost every kind of AI-based detection (speech recognition, face recognition, affect recognition), the accuracy of modeling and tracking was an issue. As hardware evolves, more data are collected, and new discoveries are made and new practices introduced, this lack of accuracy fades, leaving behind noise issues. However, methods for noise removal exist, including neighborhood averaging, [[Gaussian blur|linear Gaussian smoothing]], and median filtering,<ref>{{cite web|url=http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT5/node3.html|title=Spatial domain methods}}</ref> as well as newer methods such as the bacterial foraging optimization algorithm<ref>Clever Algorithms. [http://www.cleveralgorithms.com/nature-inspired/swarm/bfoa.html "Bacterial Foraging Optimization Algorithm – Swarm Algorithms – Clever Algorithms"] {{Webarchive|url=https://web.archive.org/web/20190612144816/http://www.cleveralgorithms.com/nature-inspired/swarm/bfoa.html |date=2019-06-12 }}. Clever Algorithms. Retrieved 21 March 2011.</ref><ref>[http://www.softcomputing.net/bfoa-chapter.pdf "Soft Computing"]. Soft Computing. Retrieved 18 March 2011.</ref> (a minimal median-filtering sketch appears after the list below).

Other challenges include:
* The fact that posed expressions, as used by most subjects of the various studies, are not natural, and therefore algorithms trained on them may not apply to natural expressions.
* The lack of rotational movement freedom. Affect detection works very well with frontal use, but upon rotating the head more than 20 degrees, "there've been problems".<ref>Williams, Mark. [http://www.technologyreview.com/Infotech/18796/?a=f "Better Face-Recognition Software – Technology Review"] {{Webarchive|url=https://web.archive.org/web/20110608023222/http://www.technologyreview.com/Infotech/18796/?a=f |date=2011-06-08 }}. Technology Review: The Authority on the Future of Technology. Retrieved 21 March 2011.</ref>
* Facial expressions do not always correspond to an underlying emotion that matches them (e.g. they can be posed or faked, or a person can feel emotions but maintain a "poker face").
* FACS did not include dynamics, while dynamics can help disambiguate (e.g. smiles of genuine happiness tend to have different dynamics from "try to look happy" smiles).
* The FACS combinations do not correspond in a 1:1 way with the emotions that psychologists originally proposed (note that this lack of a 1:1 mapping also occurs in speech recognition with homophones and homonyms and many other sources of ambiguity, and may be mitigated by bringing in other channels of information).
* Accuracy of recognition is improved by adding context; however, adding context and other modalities increases computational cost and complexity.
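As a concrete illustration of one of the noise-removal methods named above, the following minimal sketch median-filters a grayscale image using NumPy only. The window size, reflect padding, and test image are illustrative assumptions rather than settings from any cited work.

<syntaxhighlight lang="python">
import numpy as np

def median_filter(image, k=3):
    """Median-filter a 2-D grayscale image with a k x k window.

    Each output pixel is the median of its neighborhood, which removes
    impulse ("salt-and-pepper") noise while preserving edges better
    than simple averaging.
    """
    pad = k // 2
    padded = np.pad(image, pad, mode="reflect")  # reflect edges outward
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Example: a single "salt" pixel on a flat patch is removed entirely.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255  # impulse noise
print(median_filter(img)[2, 2])  # -> 100
</syntaxhighlight>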