
The Future of Brain Computer Interfaces: are we there yet?

A lot has been written on Brain Computer Interfaces and how they are poised to transform the lives of those who have lost the ability to communicate or to move (Allison 2011; Racine et al. 2010). People are finally ready to get out of the lab and build systems that can be used in the real world. The focus has moved to user-centred design and application development, and really it’s just a matter of ironing out the last few wrinkles. Except, well, except that they don’t work.

Not in the sense we expect, at least; not in the sense that your wireless keyboard works, or your phone works, or anything you buy just works, by which I mean at least most of the time.

There has been truly amazing progress in Brain Computer Interface research in the last 25 years, but as far as we know, and we have looked into it (FutureBNCI Roadmap), there are at most dozens of people in the world with a Brain Computer Interface at home, and those are all essentially custom systems that require technical support and/or on-site assistance beyond what you or I would consider reasonable for a remote control.

So what happened? How did we end up with this mismatch between promise and reality?

If you speak to EEG-based Brain Computer Interface researchers (EEG being the current front-runner in terms of cost and usability), you will find that they typically fall into three camps:

  1. Young engineers who view BCI as an interesting instance of Human-Computer Interaction (HCI) and as a platform to play with new HCI paradigms. They tend to assume that EEG and the associated features used for Brain Computer Interfaces are both well understood and robust.
  2. Experienced BCI researchers who have painstakingly tweaked and tested their hardware and algorithms to squeeze the last bit (pun intended) of performance from their applications. They are very aware of the limitations of EEG as a signal, of our understanding of EEG itself, of the features used for Brain Computer Interfaces and how universal those features really are, and finally, perhaps most importantly for consumers, of the usability and robustness of the applications we can build. These researchers are, for example, looking beyond their field to context-aware intelligent systems that can work together with the user to address some of these limitations. They are also very concerned with user-centred design and the trap of building prototypes with healthy users that are ultimately aimed at patients in need of assistive technologies.
  3. The third group are those who, having seen these limitations, recognise the need for more fundamental research on the signals themselves, the sensors used to detect them, and the techniques used to extract and classify signal features.

For the most part the BCI community are very much aware of where we are right now and how far we are from a truly useful Brain Computer Interface. Some have chosen to attack the problem at the application level by building smart systems that do much of the work for the user, some continue to refine systems that work well for certain use cases, such as the P300, while others continue to explore the signals, the signal processing techniques and alternative sensing solutions (see Brain-Computer Interfaces: Principles and Practice for a comprehensive and well-balanced summary of the state of the art).
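To make that concrete, the sketch below shows the shape of a typical EEG classification pipeline: epochs of multi-channel EEG are reduced to band-power features and fed to a linear classifier. It is an illustrative toy, not any particular group’s system; the synthetic data, the 8–12 Hz band, the channel count and the choice of linear discriminant analysis are all assumptions made for the example.

```python
# Illustrative sketch of a minimal EEG classification pipeline of the kind
# refined in motor-imagery BCIs: log band-power features plus a linear classifier.
# All signals here are synthetic; band edges, channel count and classifier are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                            # sampling rate in Hz (assumed)
n_trials, n_channels, n_samples = 100, 8, fs * 2    # 2-second epochs

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                    # two imagined-movement classes

# Inject a weak class-dependent 10 Hz (mu-band) component so there is something to learn.
t = np.arange(n_samples) / fs
X_raw[y == 1, 0] += 0.5 * np.sin(2 * np.pi * 10 * t)

def log_bandpower(epochs, fs, band=(8, 12)):
    """Log power in a frequency band, per trial and channel (a classic BCI feature)."""
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(spectrum[..., mask].mean(axis=-1))

X = log_bandpower(X_raw, fs)                        # shape: (n_trials, n_channels)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print("cross-validated accuracy: %.2f" % scores.mean())
```

On clean synthetic data like this the classifier looks impressive; the hard part, and the point of the paragraph above, is that real scalp EEG is noisy, non-stationary and different for every user and every session.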

The mismatch then seems to come from overly optimistic journalism and a sudden wealth of low-cost commercial Brain Computer Interface applications. The term “mind-reading” has been thrown around far too often, and whenever possible the example of controlling a wheelchair with your thoughts is rolled out without putting it in context. As in, yes, you can control a wheelchair with a Brain Computer Interface, but probably not with the low-cost, easy-to-use, comfortable-to-wear system you saw on WIRED and, in my opinion, not without giving significant control to the wheelchair itself (Allison et al. 2012). Don’t get me wrong, I think the advent of low-cost Brain Computer Interface technology can only be good for the field and will certainly improve hardware and drive down costs in general, but we should be clear on what can be done with these systems and manage our and everyone else’s expectations.
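Giving control to the wheelchair itself is usually called shared control, and the idea is simple enough to sketch: the machine’s own sensors can veto or adjust a decoded command rather than executing it blindly. The snippet below is a conceptual toy under assumed thresholds and a made-up veto rule, not any published controller.

```python
# A conceptual sketch of shared control: the wheelchair's sensors veto or adjust the
# user's (possibly misclassified) BCI command. Thresholds and the veto rule are
# assumptions chosen for illustration only.
from dataclasses import dataclass

@dataclass
class SharedController:
    min_clearance_m: float = 0.5   # stop if an obstacle is closer than this (assumed)
    min_confidence: float = 0.7    # ignore decodes below this confidence (assumed)

    def arbitrate(self, bci_command: str, bci_confidence: float, front_clearance_m: float) -> str:
        # Ignore low-confidence decodes rather than acting on noise.
        if bci_confidence < self.min_confidence:
            return "hold"
        # The machine overrides an unsafe forward command.
        if bci_command == "forward" and front_clearance_m < self.min_clearance_m:
            return "stop"
        return bci_command

controller = SharedController()
print(controller.arbitrate("forward", bci_confidence=0.9, front_clearance_m=0.3))  # -> stop
print(controller.arbitrate("left", bci_confidence=0.9, front_clearance_m=0.3))     # -> left
```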

So, to answer the question: the future of Brain Computer Interfaces, are we there yet? Definitely not, but a lot of real progress has been made, and I think there has been a subtle reset in the community in terms of where we are and what remains to be done, along with some very grounded journalism (How Far Away is Mind-Machine Integration?).

One of our conclusions in the FutureBNCI roadmap was not to accept the current state of the art in wearable EEG as a starting point. There is a lot more to do in understanding EEG and how it might be recorded, work that will directly impact the usability of real-world BCI, before we even consider which features to use.

It’s also good to keep in mind that some of the most interesting applications of Brain Computer Interfaces fall outside this classical idea of assistive technology. Our own work includes biometrics, emotion recognition and neuromodulation, and many other groups, such as the Berlin BCI group, are working on novel applications (Brain-Computer Interfaces for Non-Medical Applications: How to Move Forward). Some of these applications are, as it happens, well suited to low-cost wireless hardware, so, as is often the case, maybe the future will not be what we expected.

photo credit: Gilderic Photography via photopin cc
