In his blog post a few weeks ago, Alejandro Riera talked about characterizing stress based on EEG. That post presented the generation of EEG features from ratios and differences at particular frequency bands. Today I would like to comment on the second part of the story, which goes from such features to the classification of activities according to their stress level. This can have applications like the one recently presented. The general framework was introduced by Anton Albajes-Eizagirre in his post on machine learning applied to affective computing, so here I will focus on the experimental results we have obtained with a particular dataset using data fusion. The work is described in detail in our paper Electro-Physiological Data Fusion for Stress Detection.
The multimodal setting for stress characterization
The purpose of the system is to fuse the data of separate information channels, namely EEG and EMG. The goal is to distinguish stress-related tasks (e.g. mathematical computation, a fake blood extraction) from tasks not related to stress (e.g. relaxing, reading). The idea is that, by combining these two electrophysiological modalities, we can determine whether a subject is suffering from stress. We extract different features from each modality and then apply fusion operators in order to characterize the stress level of the task. We take into account the EEG channels reported in the literature to be most related to valence and arousal, namely those over the frontal area, and therefore select the pairs F3-F4 and F7-F8. From these channels we compute the alpha asymmetry of each pair and the alpha-beta ratio of each channel, which gives us 6 EEG-based features. In addition, we take into account the EMG energy of the zygomatic and corrugator facial muscles; facial EMG is known to be a good means of monitoring facial expressions and therefore of characterizing emotions. This gives us a system based on 8 features, delivered once per second over an analysis window of 2 seconds.
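To make the feature pipeline concrete, here is a minimal sketch of the per-second extraction of the 8 features. It is not the implementation from the paper: the sampling rate, the band limits for alpha and beta, the log-difference form of the alpha asymmetry, and the sum-of-squares EMG energy are all assumptions chosen for illustration.

```python
# Sketch of the windowed feature extraction: 2-second windows, one
# 8-dimensional feature vector per second. All constants are assumed.
import numpy as np
from scipy.signal import welch

FS = 256                       # assumed sampling rate (Hz)
WIN = 2 * FS                   # 2-second analysis window
STEP = FS                      # one feature vector per second
ALPHA, BETA = (8.0, 12.0), (12.0, 30.0)   # assumed band limits (Hz)

def band_power(x, band, fs=FS):
    """Average Welch PSD of the signal x inside the given frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=len(x))
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def features(eeg, emg):
    """eeg: dict of 1-D arrays keyed by channel (F3, F4, F7, F8);
    emg: dict keyed by muscle (zygomatic, corrugator).
    Returns the 8-feature vector for one window."""
    alpha = {ch: band_power(x, ALPHA) for ch, x in eeg.items()}
    beta = {ch: band_power(x, BETA) for ch, x in eeg.items()}
    feats = [
        # Alpha asymmetry of each frontal pair (log-difference form assumed).
        np.log(alpha["F4"]) - np.log(alpha["F3"]),
        np.log(alpha["F8"]) - np.log(alpha["F7"]),
    ]
    # Alpha-beta ratio of each of the four channels: 2 + 4 = 6 EEG features.
    feats += [alpha[ch] / beta[ch] for ch in ("F3", "F4", "F7", "F8")]
    # EMG energy of the two facial muscles: 8 features in total.
    feats += [np.sum(x ** 2) for x in emg.values()]
    return np.asarray(feats)

def sliding_features(eeg, emg):
    """Yield one feature vector per second over the whole recording."""
    n = min(len(x) for x in eeg.values())
    for start in range(0, n - WIN + 1, STEP):
        w = slice(start, start + WIN)
        yield features({ch: x[w] for ch, x in eeg.items()},
                       {m: x[w] for m, x in emg.items()})
```

The resulting per-second vectors are what the fusion operators and the classifier then consume; the fusion step itself is described in the paper.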