The Future of Mind-Reading
Almost everyone has an internal monologue — that little voice inside your head which puts thoughts into words, but words that only you can “hear.” And it’s probably better that no one else can hear these often fleeting thoughts, as we tend to self-censor before those ideas become words that fly out of our mouths or pens. Or, in other words: We can’t read each other’s minds — and that’s probably a good thing.
But that may change. Somewhat.
When we think in words, our bodies get ready to speak. At times — particularly when we’re reading, though hardly limited to that — this shows up as something called “subvocalization.” This mental prep plays out in our throats; as the Guardian notes, “inner speech is accompanied by tiny muscular movements in the larynx.” Those muscle movements are the first steps toward turning some of our thoughts into words, and importantly, they happen even if those thoughts are never carried further. If you can hear the words in your head, someone else could, in theory, detect those very same words in your throat.
This isn’t a new discovery; we’ve known about this for a bit more than a century. Practically, though, there isn’t a lot we can do with these teeny-tiny movements. They’re hardly visible to the naked eye, and monitoring them requires all sorts of sensors and doodads placed on your throat. Further, the tiny tremors created in one’s larynx by subvocalization aren’t complete sounds; it would take a lot of data to map these movements to comprehensible words or thoughts.
At least for now. Just ask NASA.
In 2004, the American space agency issued a press release describing its efforts to turn tiny throat movements into recognizable words. First, the agency created “small, button-sized sensors” to be placed on the necks and under the chins of a willing group of participants. (Here’s a picture.) Then, the test subjects were asked to say, to themselves and only in their minds, a handful of words: “stop,” “go,” “left,” “right,” “alpha” and “omega,” and the numbers “zero” through “nine.” The NASA software recorded the throat movements as those words were thought, creating a database against which it could track future movements. It worked; per the press release, “initial word recognition results were an average of 92 percent accurate.”
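To make the approach concrete: NASA’s actual signal processing isn’t spelled out in the press release, so the following is only an illustrative sketch of the general idea it describes — record a “template” signal for each word, then classify a new muscle-signal reading by finding the closest stored template. The signal values below are invented for the example.

```python
def distance(a, b):
    """Sum of squared differences between two equal-length signal readings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(signal, templates):
    """Return the word whose recorded template is nearest to `signal`."""
    return min(templates, key=lambda word: distance(signal, templates[word]))

# Hypothetical, hand-made "muscle signal" templates for two of the test words.
templates = {
    "stop": [0.9, 0.1, 0.0, 0.2],
    "go":   [0.1, 0.8, 0.7, 0.1],
}

# A new reading that closely resembles the stored "stop" template.
print(classify([0.85, 0.15, 0.05, 0.25], templates))  # prints "stop"
```

Real systems would extract features from noisy, time-varying signals rather than compare raw samples, but the record-then-match structure is the same.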
The translations are, for the most part, basic, but that is something that further trials should be able to improve upon — you just need to spend more time mapping more and more sounds. The larger leap, for now, is whether we can gather that information from a distance, without having to put sensors on the throats of those whose subvocalizations we aim to detect. NASA, per the same press release, was “testing new, ‘noncontact’ sensors that can read muscle signals even through a layer of clothing.” To date, those haven’t been successful — and that may be for the better.
Here’s actor Bryan Cranston, explaining why he, too, decided to go quiet one day a week:

[McDonald] was starting to feel the strain on her vocal cords, and her ear, nose and throat doctor said, “I recommend strongly, in fact I’m telling you, to shut down on your one day off. Don’t talk at all.” And so she incorporated Mondays as her silent day. And I thought, as a pre-emptive strike, I’m going to do the same.
Instead, one day a week, Cranston carried “little notepads and a whiteboard” with him, writing notes instead of speaking. (And yes, one of the notepads had a pre-written explanation as to why he was doing this.)
From the Archives: Alone in the Ocean: What if no one else could hear you speak? That’s the fate of one unique whale, which speaks at a frequency well above the normal range of its species.