
From the Lab to the Market: The Experience Lab

The path from lab research to the market is a tough one. Nearly all research funding calls ask you to report on your exploitation plans, so you need to figure out how the outcomes of your research could reach the market. The truth is that it is not always possible to materialize your findings into an actual product or service.

Here at Starlab we try hard to walk down this path successfully as many times as possible. We always say that our main mission is to transform science into technologies with a real positive impact on society. Take as an example Enobio and StarStim, two products that are helping both the health and research communities today.

Inspired by this motto, we recently decided to collect all the outcomes we obtained working on different EU-funded research projects related to the user's affective state. As a result, we happily put on the market The Experience Lab: a neurophysiological sensing platform and data analysis services for the user's affective state.

The starting point of this sensing platform may be found in the projects INTERSTRESS and BEAMING. In the former, we investigated whether stress markers could be found in EEG signals. The latter brought affective computing and immersive technologies together to set up a whole telepresence experience.

Those projects, especially BEAMING, brought the opportunity to work with physiological sensors and signals other than EEG, which, as you might know, we have investigated for many years in many different use cases and scenarios (such as sleep studies, brain-computer interfaces, response to brain stimulation, biometrics, or biomarkers for the early detection of neurodegenerative diseases like Parkinson's disease).

Thanks to the integration effort made in those projects, not only from the hardware perspective (synchronising different wireless sensors) but also from the point of view of combining the analyses of the different physiological signals, we ended up with an easy-to-use neurophysiological sensing platform along with a software back-end that can characterize the user's affective state.


The sensing platform is based on Enobio but integrates other physiological sensors. So far, we have successfully integrated several sensors to read brain activity (using Enobio, of course), heart activity (using either Enobio or a Shimmer sensor), galvanic skin response (through a Shimmer sensor), and breathing rhythm (using a chest belt with an embedded piezoelectric element). The platform is also ready to record facial expressions with an HD webcam.
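To give a feel for how such a session might be set up in software, here is a minimal configuration sketch. The sensor identifiers and the configuration object are hypothetical, introduced only for illustration; they are not part of the Enobio or Shimmer APIs.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sensor identifiers (illustrative only).
EEG_ENOBIO = "eeg_enobio"
ECG_SHIMMER = "ecg_shimmer"
GSR_SHIMMER = "gsr_shimmer"
BREATHING_BELT = "breathing_piezo_belt"
FACE_WEBCAM = "hd_webcam"

@dataclass
class SessionConfig:
    """Sensor selection for a single Experience Lab recording session."""
    sensors: List[str] = field(default_factory=list)
    record_face_video: bool = False

# Example: a museum-style setup with EEG, heart activity and GSR, no face video.
museum_session = SessionConfig(
    sensors=[EEG_ENOBIO, ECG_SHIMMER, GSR_SHIMMER],
    record_face_video=False,
)
```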

The whole system is fully customizable, since the sensing platform can be configured to use as many sensors as needed depending on the type of experiment and the needs of the client. For instance, only the EEG sensor was used in the context of a simulated shopping campaign by a company studying what consumers wanted and how they shopped. In another experiment, conducted in a museum, the sensing platform was equipped with sensors for reading brain activity, heart activity and galvanic skin response. A GoPro camera was added to the sensing platform on that occasion. Its purpose was not to record facial expressions but to record the user's perspective, so that the user's affective state could be synchronized with the artwork in sight (more details of this experiment can be found here). The whole system, with all the sensors listed above fully integrated, is currently in action at a company whose products are evaluated by its professional testers. They now evaluate their products not only with the classical questionnaires but also taking into account the physiological analysis provided by The Experience Lab.

The analysis service provided by The Experience Lab is based on a back-end processing chain that takes the raw data recorded by the sensing platform and processes it to extract different features correlated with the user's state. That processing chain, like the sensing platform, can be fully adapted to the type of signals present at recording time and the type of analysis and reports the client is interested in.
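As a rough illustration of how such a configurable chain could work, the sketch below routes each recorded signal to its own toy feature extractor and only runs the extractors matching the signals actually present in a session. The function names and features are placeholders for the idea, not the ones used in the actual back-end.

```python
import numpy as np

def gsr_features(signal, fs):
    """Toy arousal-related descriptors of a galvanic skin response trace."""
    return {"gsr_mean": float(np.mean(signal)), "gsr_std": float(np.std(signal))}

def breathing_features(signal, fs):
    """Toy breathing-rhythm descriptor from a chest-belt piezo signal."""
    crossings = np.sum(np.diff(np.sign(signal - np.mean(signal))) > 0)
    return {"breaths_per_min": float(crossings) * 60.0 * fs / len(signal)}

# Only the extractors for signals present in the recording are executed.
EXTRACTORS = {"gsr": gsr_features, "breathing": breathing_features}

def process_session(recording, fs):
    """recording maps signal name -> raw samples; returns one feature dict."""
    features = {}
    for name, data in recording.items():
        if name in EXTRACTORS:
            features.update(EXTRACTORS[name](np.asarray(data, dtype=float), fs))
    return features
```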

The user-state characterization provided by the analysis service is based on a valence-arousal approach. The back-end processing chain takes each of the signals recorded by the sensing platform individually and extracts different features correlated either with the user's valence (the positive/negative emotion domain) or with arousal (the high/low emotion-intensity domain). The features extracted from the galvanic skin response, for instance, correlate with the user's arousal. The ones extracted from heart activity can also be correlated with the emotion intensity. In the case of brain activity, we extract EEG features that correlate either with valence or with arousal. Detected facial expressions can be categorized into pre-defined emotions, which can in turn be mapped to specific levels of arousal and valence. You can check the emotional granularity entry on Wikipedia to better understand this mapping of valence and arousal to different emotions and user states.
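As a toy illustration of the last step, a mapping like the one below could place each detected facial-expression label at an approximate position in the valence-arousal plane. The labels and coordinates are placeholders chosen for readability, not the values used by The Experience Lab.

```python
# Illustrative expression -> (valence, arousal) mapping, values in [-1, 1].
EXPRESSION_TO_VA = {
    "happy":   ( 0.8,  0.5),
    "sad":     (-0.6, -0.4),
    "angry":   (-0.7,  0.7),
    "relaxed": ( 0.5, -0.5),
    "neutral": ( 0.0,  0.0),
}

def expression_to_valence_arousal(label):
    """Map a detected facial expression to a (valence, arousal) pair."""
    return EXPRESSION_TO_VA.get(label, (0.0, 0.0))
```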

The extracted features mentioned above are combined using fusion algorithms to obtain the final user-state representation the client is interested in (after stimulus presentation, as an average over different periods of time, grand averages over subjects or stimulus subsets, etc.). The final stage of the back-end processing chain is the classification of the user-state characterizations by applying machine learning techniques. These techniques can provide information, for instance, on how different stimuli can be classified according to the users' physiological responses to them. Based on that, an automatic tool could be built to determine how well a user's response to a certain new stimulus can be classified under different predefined labels. Let me put this in other words through the following example.
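A very simple form of fusion is a weighted average of the per-signal valence and arousal estimates, as in the sketch below. The weights and estimates are arbitrary placeholders; the actual fusion algorithms are more elaborate.

```python
def fuse_estimates(estimates, weights):
    """estimates: signal name -> (valence, arousal); weights: signal name -> weight."""
    total = sum(weights[s] for s in estimates)
    valence = sum(weights[s] * va[0] for s, va in estimates.items()) / total
    arousal = sum(weights[s] * va[1] for s, va in estimates.items()) / total
    return valence, arousal

# Placeholder per-signal estimates and weights, purely for illustration.
state = fuse_estimates(
    {"eeg": (0.4, 0.2), "gsr": (0.0, 0.6), "heart": (0.1, 0.5)},
    {"eeg": 0.5, "gsr": 0.3, "heart": 0.2},
)
```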

Let's consider an auditory sensory experiment where pleasant, unpleasant, relaxing and invigorating audio stimuli are presented. The machine learning algorithms are trained with the physiological emotional features captured while those stimuli are played. Then a new stimulus, different from the previous ones and yet to be characterized, is presented. By analysing the users' responses to this new stimulus, it is possible to provide the level of membership in each of the categories (pleasant, unpleasant, relaxing, invigorating), so it can be checked whether the new audio stimulus provokes the effect it was meant to.
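The membership levels in this example could come from any probabilistic classifier. The sketch below uses scikit-learn's LogisticRegression on random placeholder features just to show the train-then-predict flow; in practice the inputs would be the fused physiological features described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))  # placeholder fused feature vectors, one per trial
y = rng.choice(["pleasant", "unpleasant", "relaxing", "invigorating"], size=80)

clf = LogisticRegression(max_iter=1000).fit(X, y)

x_new = rng.normal(size=(1, 6))  # response to the new, uncharacterized stimulus
membership = dict(zip(clf.classes_, clf.predict_proba(x_new)[0]))
print(membership)  # membership level for each of the four categories
```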

As you can see, The Experience Lab benefits from all our previous experience with wireless sensors, third-party hardware integration, the analysis of EEG and other physiological signals, as well as data fusion and machine learning algorithms. Almost all of these concepts first worked in the lab under research projects; now they are on the market through Neurokai, a division of Starlab Neuroscience created a few months ago to release The Experience Lab.

Neurokai also offers other solutions based on user-performance and biomarker research to provide state-of-the-art neuroscience data services, but let's leave those interesting applications for other posts.