A facial expression database is a collection of images or video clips with facial expressions of a range of emotions.
Well-annotated (emotion-tagged) media content of facial behavior is essential for training, testing, and validating algorithms in the development of expression recognition systems. Emotion annotation can take the form of discrete emotion labels or values on a continuous scale. Most databases are based on the basic emotions theory (by Paul Ekman), which assumes the existence of six discrete basic emotions: anger, fear, disgust, surprise, joy, and sadness. However, some databases tag emotions on a continuous arousal–valence scale.
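As a minimal sketch of the two annotation schemes described above, a single annotated sample might carry either a discrete emotion label or continuous arousal–valence values. The record type and field names below are illustrative assumptions, not taken from any specific database:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical annotation record; field names are illustrative only.
@dataclass
class FaceSample:
    image_path: str
    emotion_label: Optional[str] = None   # discrete: e.g. one of the six basic emotions
    valence: Optional[float] = None       # continuous: pleasantness, e.g. in [-1, 1]
    arousal: Optional[float] = None       # continuous: activation, e.g. in [-1, 1]

# Discrete annotation (basic emotions theory)
posed = FaceSample("subj01/frame_042.png", emotion_label="surprise")

# Continuous annotation (arousal-valence scale)
spontaneous = FaceSample("subj02/frame_107.png", valence=0.6, arousal=0.8)
```

A database annotated with discrete labels fills only `emotion_label`, while a continuously annotated one fills `valence` and `arousal`; some databases provide both.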
In posed expression databases, participants are asked to display different basic emotional expressions, whereas in spontaneous expression databases, the expressions are natural. Spontaneous expressions differ markedly from posed ones in intensity, configuration, and duration. Moreover, synthesis of some action units (AUs) is barely achievable without actually undergoing the associated emotional state. Consequently, posed expressions are in most cases exaggerated, while spontaneous ones are subtle and differ in appearance.
Many publicly available databases are categorized here.[1][2] Some details of these facial expression databases follow.
Each entry lists, where available: the facial expressions covered, number of subjects, number of images/videos, gray/color, resolution and frame rate, ground truth, and type.

Database: FERG-3D-DB (Facial Expression Research Group 3D Database) for stylized characters[3]
Facial expression: angry, disgust, fear, joy, neutral, sad, surprise
Number of subjects: 4
Number of images/videos: 39,574 annotated examples
Gray/color: color
Ground truth: emotion labels
Type: frontal pose

Database: Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)[4]
Facial expression: Speech: calm, happy, sad, angry, fearful, surprise, disgust, and neutral. Song: calm, happy, sad, angry, fearful, and neutral. Each expression is produced at two levels of emotional intensity.
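RAVDESS files use a fixed naming convention in which seven two-digit, hyphen-separated fields encode modality, vocal channel, emotion, intensity, statement, repetition, and actor. As a sketch, a minimal parser for the emotion field might look like this (the code-to-label mapping below follows the published naming scheme, but should be verified against the database's own documentation):

```python
# Mapping from RAVDESS emotion codes to labels, per the published naming scheme.
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}

def ravdess_emotion(filename: str) -> str:
    """Return the emotion label encoded in a RAVDESS-style filename."""
    stem = filename.rsplit(".", 1)[0]        # drop the file extension
    fields = stem.split("-")                 # seven two-digit identifier fields
    if len(fields) != 7:
        raise ValueError(f"expected 7 fields, got {len(fields)}: {filename}")
    return EMOTIONS[fields[2]]               # third field encodes the emotion

print(ravdess_emotion("03-01-06-01-02-01-12.wav"))  # fearful
```

Encoding the annotation in the filename lets the dataset be filtered by emotion, intensity, or actor without loading any media or a separate label file.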
[3] Aneja, D., et al., "Learning to Generate 3D Stylized Character Expressions from Humans," in Proc. IEEE Winter Conference on Applications of Computer Vision (WACV), 2018.
[4] Livingstone, S. R. and Russo, F. A., "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English," 2018, doi:10.1371/journal.pone.0196391.
[5] Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I., "The Extended Cohn-Kanade Dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression," in Proc. 3rd IEEE Workshop on CVPR for Human Communicative Behavior Analysis, 2010.
[6] Lyons, M., Kamachi, M., and Gyoba, J., "The Japanese Female Facial Expression (JAFFE) Database," 1998, doi:10.5281/zenodo.3451524.
[7] Valstar, M. and Pantic, M., "Induced disgust, happiness and surprise: an addition to the MMI facial expression database," in Proc. Int. Conf. Language Resources and Evaluation, 2010.
[8] Sneddon, I., McRorie, M., McKeown, G., and Hanratty, J., "The Belfast induced natural emotion database," IEEE Trans. Affective Computing, vol. 3, no. 1, pp. 32–41, 2012.
[9] Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P., and Cohn, J., "DISFA: A Spontaneous Facial Action Intensity Database," IEEE Trans. Affective Computing, vol. 4, no. 2, pp. 151–160, 2013.
[10] Aifanti, N., Papachristou, C., and Delopoulos, A., "The MUG Facial Expression Database," in Proc. 11th Int. Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Desenzano, Italy, April 12–14, 2010.
[11] Happy, S. L., Patnaik, P., Routray, A., and Guha, R., "The Indian Spontaneous Expression Database for Emotion Recognition," IEEE Trans. Affective Computing, 2016, doi:10.1109/TAFFC.2015.2498174.
[12] Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H. J., Hawk, S. T., and van Knippenberg, A., "Presentation and validation of the Radboud Faces Database," Cognition & Emotion, vol. 24, no. 8, pp. 1377–1388, 2010, doi:10.1080/02699930903485076.