Researchers have developed new software capable of spotting intricate details of emotions that remain hidden to the human eye, according to a study published in Frontiers in Psychology. The software maps key features of a human face and measures the intensities of multiple facial expressions simultaneously.
A team of researchers from the University of Bristol and Manchester Metropolitan University worked with participants of Bristol’s Children of the 90s study to assess whether computational methods could capture authentic human emotions. For this study, the researchers used videos captured by headcams worn by babies during interactions with their parents.
The researchers found that machine learning techniques can be used to predict human emotions from parents’ facial expressions. “Using computational methods to detect facial expressions from video data can be very accurate when the videos are of high quality and represent optimal conditions – for instance, when videos are recorded in rooms with good lighting, when participants are sat face-on with the camera, and when glasses or long hair are kept from blocking the face,” said Romana Burgess, from the University of Bristol. However, “we were intrigued by their performance in the chaotic, real-world settings of family homes. The software detected a face in around 25% of the videos taken in real-world conditions, reflecting the difficulty in evaluating faces in these kinds of dynamic interactions.”
The team used data from the Children of the 90s health study. Parents were invited to attend a clinic at the University of Bristol when their babies were six months old. At the clinic, parents received two wearable headcams to use at home during interactions with their babies. Both parents and babies wore the headcams during interactions such as feeding and play.
With the resulting videos, the researchers used ‘automated facial coding’ software to analyse the parents’ facial expressions, and also had human coders analyse the facial expressions in the same videos. They then quantified how often the software was able to detect a face and checked how often the humans and the software agreed on facial expressions. The last step used machine learning to predict human emotions and judgments based on the computers’ decisions, as sketched below.
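To make those analysis steps concrete, here is a minimal sketch in Python. It is not the authors’ actual pipeline: the synthetic data and all variable names are hypothetical, and it simply illustrates quantifying face detection, measuring agreement between human coders and the software, and training a simple classifier to predict human-coded labels from the software’s expression intensities.

# Illustrative sketch only (hypothetical data and names), not the study's actual analysis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_score

# Hypothetical per-frame data: software intensities for several expressions,
# plus labels assigned by the software and by human coders.
rng = np.random.default_rng(0)
n_frames = 500
intensities = rng.random((n_frames, 4))          # e.g. joy, sadness, surprise, neutral
software_labels = intensities.argmax(axis=1)     # software's most intense expression
human_labels = software_labels.copy()
disagree = rng.random(n_frames) < 0.2            # pretend humans disagree on ~20% of frames
human_labels[disagree] = rng.integers(0, 4, disagree.sum())

# Step 1: how often did the software detect a face at all? (mask is simulated here)
face_detected = rng.random(n_frames) < 0.25
print(f"Face detected in {face_detected.mean():.0%} of frames")

# Step 2: agreement between the software and human coders on detected frames
kappa = cohen_kappa_score(human_labels[face_detected], software_labels[face_detected])
print(f"Software-human agreement (Cohen's kappa): {kappa:.2f}")

# Step 3: machine learning to predict the human-coded label from the
# software's expression intensities
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, intensities[face_detected], human_labels[face_detected], cv=5)
print(f"Cross-validated accuracy predicting human labels: {scores.mean():.2f}")

In the real study, the inputs would be the per-frame outputs of the facial coding software and the human coders’ annotations rather than simulated values.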
“Deploying automated facial analysis in the parents’ home environment could change how we detect early signs of mood or mental health disorders, such as postnatal depression,” said Romana. “For instance, we might expect parents with depression to show more sad expressions and fewer happy facial expressions.”
“These conditions could be better understood through subtle nuances in parents’ facial expressions, providing early intervention opportunities that were once unimaginable. For example, most parents will try to ‘mask’ their own distress and appear ‘ok’ to those around them. Instead, more subtle combinations can be picked up by the software, including expressions that are a mix of sadness and joy or that change quickly,” added Professor Rebecca Pearson from Manchester Metropolitan University.
In the future, the team wants to explore using the same procedure as a tool to understand mood and mental health disorders. The team believes this could help pioneer a new era of health monitoring, bringing innovative science directly into the home. “Our research used wearable headcams to capture genuine, unscripted emotions in everyday parent-infant interactions. Together with cutting-edge computational techniques, this means we can uncover hidden details previously undetectable by the human eye, changing how we understand parents’ real emotions during interactions with their babies,” concluded Romana.
Burgess R, Culpin I, Costantini I, Bould H, Nabney I, Pearson RM. Quantifying the efficacy of an automated facial coding software using videos of parents. Front Psychol. 2023 Jul 31;14:1223806. doi: 10.3389/fpsyg.2023.1223806