These remarks were delivered to the OECD Working Group on Emerging Technologies Minding Neurotechnology Workshop, Shanghai, China, September 6, 2018
Graeme Moffat, Chief Scientist & VP Regulatory Affairs, Muse (Interaxon); Senior Fellow, Munk School of Global Affairs & Public Policy (University of Toronto)
I would like to thank the OECD for inviting representatives of the neurotechnology industry here to speak. I’m Graeme Moffat. I’m the Chief Scientist with Muse, as well as a Senior Fellow with the Munk School of Global Affairs & Public Policy at the University of Toronto.
Muse is a success story of Canadian and Chinese collaboration on neurotechnology, between our partners in Xiamen and our head office in Toronto. This is a story of global integration in neurotechnology, with Canadian and Chinese engineering serving the world’s two largest markets, the USA and the EU. I am encouraged by the presence of representatives of industry at this conference; too often, the people building and innovating on neurotechnology read reports of workshops attended only by academic philosophers and, occasionally, academic neuroscientists. The disconnect between the practice of neurotechnology and the hypothetical ethical concerns raised in academic workshops on neuroethics is wide, and the gap is filled with misconceptions at best – and science fiction at worst.
At Muse, we make neurotechnology for consumers and for brain health. Muse itself is the most widely adopted consumer neurotechnology in the world by about an order of magnitude, if not more. What is it? Muse is a biosignal system that incorporates electroencephalography and other sensors in a portable and wearable form factor. We believe that we became the biggest by focusing on delivering real and persistent value to people who use our technology. Our users, we believe, should encounter a human-centered technology experience, and it should empower them to lead happier and healthier lives. Our belief is that neurotechnology should be designed and built from the user outward, to solve a real problem for a real person, in a way that person can understand.
We built Muse by accident. Our founding story originates in the laboratory of the father of wearable computing, Steve Mann, where our founders were trying to make an active brain-computer interface (a BCI) to control a mouse cursor in an augmented reality heads-up display called EyeTap, ten years before Google Glass would launch. We failed, of course, as nearly everyone who’s tried has failed to build active BCIs from EEG.
In the process of failing, though, one of our founders made a discovery: that in learning to push the EEG signal around, one had to achieve mastery of one’s own thoughts. And after asking a few people (including some Buddhists), he discovered that he had accidentally learned some of the essential skills of meditation. In attempting to use neurotechnology to control machines in the external world, we had instead turned the spotlight inward toward the user. This really worked, and so we created what may have been the first mindful neurotechnology.
We will soon reach a milestone: sometime in the not too distant future, a consumer neurotechnology will sell millions of units. Not long after, a neurotechnology will sell millions of units per year, and so on. This is NOT fiction. This is where we’re going right now.
With this in mind, there are several important questions we must ask of ourselves and our field. Chief among them: what problems are we really aiming to solve? What, as the deep learning people say, is the reward function of our technology? And perhaps most important: what are we doing with neural plasticity, especially as it relates to manipulating the human reward system? That last one – what are we doing with the human reward system – is especially critical given multiple converging crises brought about by technology.
The technologies that many people interact with most frequently, the smartphone and social media, have without any doubt been optimized to elicit reward seeking behaviour. As practitioners in fields adjacent to psychology and neuroscience, we know fairly well both the mechanisms and consequences of this manipulation. We know how to use it to change the behaviour of animals, and we know now what happens when technology companies use it to change the behaviour of humans: a crisis of mental health so severe that it may have affected the happiness and life satisfaction of an entire generation. Jaron Lanier calls the big tech companies “behaviour modification empires” for good reason: someone with inside knowledge once said that a big US tech company had “a couple thousand people who thought deeply about human psychology.” How sure are we that the technologies we’re building are more than just digital Skinner boxes?
It is perhaps fitting that we’ve gathered here in Shanghai to address the challenge of a policy for technology of the brain and mind. After all, nearly 180 years ago, this city had a defining role in what are now called the Opium Wars, in which new technologies of trade and shipping potentiated an unprecedented and devastating outbreak of addiction so severe that it crippled the Chinese economy and left a legacy so terrible we still feel its effects today. We now face mental health challenges brought about unintentionally by technology on multiple fronts.
Opium is still with us, amplified by technology into shockingly potent forms, causing addiction and social problems worse than ever in some parts of the world. The basic human desire for social connection is exploited by social media platforms optimized to hijack our reward systems and keep us checking our notifications, even as I speak. (It’s okay… go ahead and check your phone.) The problem is so acute that companies like Google and Apple have reluctantly incorporated elements of digital detoxification into their mobile operating systems themselves. It is in this context, as we build out neurotechnology at scale, that we must ask ourselves: are we respecting human autonomy, or are we attempting to use our special access to the human brain for some less altruistic and noble purpose? It’s almost too easy, when designing technology, to fall into the trap of exploiting the neuroplasticity of the reward system.
How should we treat human autonomy and privacy? At Muse, we apply the principles of privacy by design. From the ground up, we design for privacy and data protection, and we communicate that directly and visibly to all of our users. We want our users to know that they are in control of their technology and of their brain data.
As an industry and as a field we must be keenly aware: end users will judge the value and security of neurotechnology not by its bulk or its best. Consumers and the general public will judge neurotechnology by its worst exemplars of privacy breaches and failures. It will take only a few irresponsible actors to drive the whole field into the ditch. This is perhaps our gravest risk – it may not take much to sour public opinion against neurotechnology and hold back all of its potential benefits.
We need the trust of neurotechnology end users, because we need people around the world to trust us with their brain data. Why? Because with extremely large datasets arising from consumer and home neurotechnology, we will soon know things we didn’t before. Some of the things we think we know about the brain and about neuroscience will turn out to have been artifacts of very small sample sizes or limited measurement tools. At Muse we’re already seeing this – when you can ask a brain health question of a few hundred thousand people, or of a set of a few thousand people repeatedly over weeks, months, or years, the stability of measures is an entirely different thing. What is state and what is trait takes on a different meaning when you can measure a single individual over 1000 consecutive days. The value of these insights for the betterment of brain health for everyone, and especially for those who share their data, is potentially profound.
Another question for us as a field: are we doing the right work? Every month, hundreds of papers are published on new engineering approaches to non-invasive BCI. Many of the students working on these problems work on non-standard datasets, and many more work only on algorithms and data, never having tested neurotechnology with humans themselves – ignoring behaviour, user experience, and so much more. It is entirely possible these days to earn a Master’s degree in neurotechnology at most universities without ever testing with an animal or a human, taking descriptions of data and behaviour on faith. Can this really be a reliable way to build technology for people?
Are too many academic philosophers and their students worrying about ethics for science-fiction neurotechnology while many practical concerns are overlooked? Ethicists seem to enjoy working most on hypothetical risks arising from technologies that don’t yet (and may never) exist. Meanwhile, so many technologies with huge potential benefits remain inaccessible to those who might benefit most, and so many ethically questionable practices that are already widespread, like neuromarketing, go almost completely ignored.
Are our finite resources being wisely used, when we could and perhaps should be thinking about how to apply neurotechnology to the world’s real and huge problems like mental health? Is neurotechnology being driven by the areas of greatest need and opportunity, or those of esoteric curiosity?
We must ask ourselves, as an industry and as a field: are we telling the truth? Attend any technology conference, any hackathon, anywhere on the internet, and you’ll find a story about how a non-invasive BCI helped a disabled person move again, helped assess an athlete during activity, or helped someone speak with their mind. This tugs at the heartstrings – and it does so precisely because it appeals to our emotional rather than our rational minds. But this is not reality. Active, non-invasive “thought-controlled” brain-computer interfaces may in fact never work well enough to be widely adopted – they may only ever be useful to people with locked-in syndrome, for example.
It is so very tempting, especially where neurotechnology collides with marketing, to use technology parlour tricks and exaggerate for attention and hype, but there are grave risks from “neurostorytelling” and “neurodeterminism.” We already know from psychology research that explanations which include appeals to the brain are believed more than the same explanations without them, regardless of whether they’re true.
Sparse EEG systems, for example, simply cannot deliver enough degrees of freedom to control robotic arms reliably. And to get even two degrees of freedom in a high-density system requires weeks of training for a single individual. Yesterday, I saw the future of active brain computer interfaces for control: using wrist-worn electromyography, CTRL-Labs in New York can sort individual motor units in the forearm, and users can access seven degrees of freedom almost instantaneously and without training. What does head-worn EEG or fNIRS add in a world in which EMG works so much better for 99% of users? Should we even be working on head-based active BCI at the scale we now see, if the solution is right in front of us and it’s on the arm?
Low-density EEG simply cannot be used in high motion environments. What are the consequences of telling a story that can’t be backed up by the technology? As a field, if we stretch the truth, are we pushing the field forward or holding it back? These are critical questions, because it may take only a few irresponsible statements, repeated regularly, to undermine all of the good work of an entire field.
We may be tempted to worry about the science-fiction implications of neurotechnology – neuroslavery and the like. Hype about the deployment of wearable brain sensing for industrial monitoring in China set the technology press ablaze with controversy over what is little more than an industrial safety application already used in the West. It’s important to remember, too, that there may never be a non-invasive, head-worn neurotechnology that works better than the alternatives. Remember: for widespread adoption of “brain-controlled technology,” you don’t have to be better than the state of the art (which is pretty terrible). You need to be better than two hands with opposable thumbs, or better than AI voice recognition. Those are the real comparators.
For insights about human behaviour, do we need to see inside the head, or is the measurement of behaviour and human psychology enough?
What should we be working toward, as a field, in applied neurotechnology? I’m not talking about neuroscience, which should be largely unconstrained in its pursuit of understanding the brain. I’m talking about the potential for tremendous leaps in mental health based on technologies that already mostly work: Are we doing enough to adapt these neurotechnologies to solve the world’s multiple mental health crises? As Professor Mu-ming Poo said: we cannot wait for new neuromodulation therapies.
Are we teaching neuroethics and critical perspectives to enough of our students and graduate students? There’s a natural tendency for students to focus on hypothetical questions of technical possibility or on transhumanist philosophy. I’ll leave you with this: are we giving students the right tools to work in the growing neurotechnology industry in a practical, honest, and informed way?
I hope that I’ve given you an idea of where we stand at Muse and of what we see as the most important problems that neurotechnology should address. Our belief is that this field should focus on solving real problems for people, on improving mental health and increasing individual autonomy, and that it should do so in a manner supported by robust evidence.
I wish to thank the OECD, Tongji University, and the organizers of the Minding Neurotechnology workshop for including us in this important discussion.