When Mind Melds With Machine, Who’s in Control?

The last time I saw my friend James was at the townie bar near our old high school. He had been working in roofing for a few years, no longer a rail-thin teenager with lank hippie hair. I had just gotten back from a stint with the Peace Corps in Turkmenistan. We reminisced about the summer after our freshman year, when we were inseparable—adventuring in the creek that sliced through the woods, debating the merits of Batman versus the Crow, watching every movie in my father’s bootlegged VHS collection. I had no idea what I wanted to do next. His future, on the other hand, was decided: He had recently joined the Navy and was starting boot camp the following week. He wanted to serve in Afghanistan.

James Raffetto trained for the next three years as a special-operations medic. He got married and, shortly after, was deployed to southern Afghanistan. About four months into his first tour, just after he had treated a local woman’s sick daughter, he stepped on an improvised explosive device—an ingenious contraption triggered by a balsa-wood pressure plate, invisible to bomb detectors. He recalls finding himself face down, unable to right himself, screaming “No!”

His platoon mates asked him what to do. James directed them to tourniquet his limbs, inject him with morphine, and tell his wife, Emily, how much he loved her. He woke up a week later in a hospital in Maryland, missing both legs, his left arm, and three fingers on his right hand.

I was on the other side of the country by that point, working toward a PhD in neuroscience. We messaged a few times. He expressed how hard it was for him to accept help after years of fierce competence.

James’ injury prompted me to attend a symposium on the emerging field of brain-computer interfaces—devices designed to read a person’s neural activity and use it to drive a robotic prosthetic, speech synthesizer, or computer cursor. At one point, a member of a neuroscience lab at Brown University showed a video involving a paralyzed, nonverbal patient named Cathy Hutchinson. The researchers had fitted her with a system called BrainGate, which consists of a tiny electrode array implanted into the motor cortex, a plug perched jauntily atop the head, a shoebox-sized signal amplifier, and a computer running software that can decode the patient’s neural signals.

In the video, Hutchinson attempts to use a robotic arm to pick up a bottle of coffee with a straw in it. After a few moments of intense concentration, her face hard as a fist, she grasps the bottle. Haltingly, she brings it to her mouth and takes a sip from the straw. Her face softens, then breaks into a joyful smile. Her eyes radiate accomplishment. The researchers applaud.

I wanted to applaud with them. Neuroscience is a field starved of concrete therapeutics. Few neurological drugs work much better than placebo, and when they do, researchers don’t understand why. Even Tylenol is a mystery. New techniques and procedures can have striking effects without clear mechanisms; the protocols get worked out by trial and error. So the promise of tangibly improving the lives of people with motor disorders and physical disabilities was intoxicating. I imagined James playing video games, doing repairs around his house, unlimited in his career options, cradling his future children with both arms.

But Hutchinson’s feat, I learned at the symposium, had required her to accept great risks. The hole in her skull made her vulnerable to infection. And the electrode array—a square of metal with a hundred hair’s-width needles sticking out one side—would inevitably cause tissue damage. Implanting one of these devices in brain matter is like mounting a painting on Jell-O. With each wobble, there’s a chance that the electrodes will tear up cells and connections, or drift and lose contact with their original neurons. Hutchinson might spend months training particular cells to operate the robot arm, only for those cells to end up dead or out of range. And eventually her body’s defenses would shut the experiment down: Over time, scar tissue forms around the electrodes, isolating them from the neighboring neurons and rendering them useless.

Why would someone take this big a gamble on a short-term gain? Maybe because a loss of bodily agency is one of the most brutal experiences a person can have—not only physically but also psychologically. Brains exist to perceive the world, make predictions about it, exert control over it. Your brain has an electrochemical reward system that makes it chase the feeling that control gives it. When there is cause but no effect—when your body can’t do what your brain wants—your mind loses a fundamental source of satisfaction and purpose. This is the despair that fueled James’ tortured “No!” Even very small perturbations can do it: Just think how infuriating it is to use a laggy computer mouse.

James didn’t end up needing a risky brain implant to rebuild his life. He has a family, good work, a loving community. He credits his wife for being the key to his recovery, not the fancy computerized prosthetic legs he spent months learning to use, then abandoned because they were too ungainly. He even forgoes a motorized wheelchair for a manual model that’s harder to maneuver but doesn’t break down. He’s wary of implanted medical devices, which he likens to temperamental Bluetooth gadgets. “Taking those kinds of problems and adding them into my body is terrifying,” he tells me. Instead, he celebrates his body’s natural adaptiveness: For example, he learned to use a bone spur that grew out of his healing femur for balance and stability. “It’s never going to be an advantage, but it doesn’t have to be a disadvantage,” he says.

Since James’ injury, I have come to believe that his reservations are very wise. His experience set me on a path that ventured deep into the world of brain-computer interfaces—and straight into an ethical morass.

At first, I believed that the main problem with brain-computer interfaces was technical. Wasn’t there some less invasive, less damaging way to improve the lives of people like James and Hutchinson? After the symposium, my classmate Aaron Koralek and I excitedly discussed how such a device might work. Aaron had recently developed one of the first brain-computer interfaces for rodents. I had been using a defanged virus to deliver a gene for fluorescence into brain cells, which makes them glow brightly when they’re electrically active. We decided to merge methods. Rather than tapping into the brain, we’d make it shine for us.

We trained about a dozen mice on Aaron’s interface, which worked like a neural joystick. Using small sets of brain cells, they could control the pitch of a sound we played them, dialing it up or down until it reached a target frequency. Each time they succeeded, they’d get a reward. Aaron and I observed the process live under a microscope, which picked up faint flashes of light from the fluorescent cells. It was like watching a lightning storm from space. We were stunned at how quickly the mice learned the task, how precisely they were able to control the interface. If they could use this technology to manipulate a sound, why couldn’t a human use something similar to guide a robotic arm?
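
For the technically curious, here is a minimal sketch of how a closed-loop interface like Aaron’s can work. It assumes the common design in which the difference in firing rates between two small ensembles of cells is mapped onto pitch; the cell counts, gain, and target frequency below are illustrative placeholders, not the parameters of our actual experiment.

```python
import numpy as np

# A minimal sketch of a closed-loop "neural joystick": two small ensembles
# of cells are monitored, and the difference in their firing rates nudges
# a tone's pitch up or down until it reaches a rewarded target frequency.
# All numbers are illustrative, not the parameters of the actual study.

rng = np.random.default_rng(0)

TARGET_HZ = 4000.0     # reward when the tone reaches this frequency
START_HZ = 1000.0
TOLERANCE_HZ = 100.0   # how close counts as "reaching" the target
GAIN = 50.0            # Hz of pitch change per unit of rate difference

def ensemble_rates(pitch, target):
    """Stand-in for the measured firing rates of the two ensembles.

    Crudely models a trained animal: the "up" ensemble fires more when
    the tone is below target, the "down" ensemble when it is above.
    """
    bias = 3 if pitch < target else -3
    up = rng.poisson(5 + max(bias, 0))
    down = rng.poisson(5 + max(-bias, 0))
    return up, down

pitch = START_HZ
for step in range(200):
    up, down = ensemble_rates(pitch, TARGET_HZ)
    pitch += GAIN * (up - down)          # the rate difference drives pitch
    pitch = float(np.clip(pitch, 250.0, 16_000.0))
    if abs(pitch - TARGET_HZ) < TOLERANCE_HZ:
        print(f"reward at step {step}: pitch {pitch:.0f} Hz")
        break
else:
    print("no reward this trial")
```

The point of the loop is that the animal, not the experimenter, closes it: the mapping from rates to pitch stays fixed, and the brain learns to produce whatever patterns of activity steer the tone toward the reward.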

Though Aaron and I had done away with electrodes, our technique was still quite invasive. In order to see the neurons glow, we had installed glass windows in the mice’s skulls, fixed there with dental cement. But it was a proof of concept. Researchers are already developing ways of peering through the skull with ultrasound or infrared waves. In the future, instead of undergoing a risky surgery, people could don sleek headgear studded with wireless sensors. The technical problems with brain-computer interfaces, I felt sure, would eventually fall away.

But something else nagged at me. After months of training mice in a pitch-black basement with the grandiose hope that you will one day restore agency to your friend and maybe lots of other people, you get to wondering what agency is. What happens in the space between volition and action? As Ludwig Wittgenstein put it in his Philosophical Investigations, “When ‘I raise my arm’ my arm goes up. And the problem arises: What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?”

Experiments on the brain have indicated that Wittgenstein was on to something: If you disrupt activity in a certain area, a subject moving their arm will suddenly feel as though an alien entity is doing it for them; if you disrupt a different area, the person might feel as though they desperately willed their arm to move but couldn’t affect it.

Scientists have a smattering of these descriptive studies of agency, but they’re far from having a causal understanding of it. The fact that they know so little should make a brain-computer interface’s job impossible. How can it distinguish an imagined action from an intended one? What’s the neural signature of a snarky thought versus a comment spoken aloud? How can a machine be expected to conjure the variable missing from Wittgenstein’s equation, to make a raised arm out of patterns of neural activity?

That activity is far from neat. The notion that there are spots in the brain that perform discrete mental functions from start to finish (the “love area,” the “fear nucleus”) is the result of bad pop science. In truth, the brain is a highly trafficked communications network, and the computer must learn to interpret the signals as best it can. It does this in much the same way that other machines figure out how to auto-complete your emails and texts—by crunching lots of historical data and using it to guide future behavior.
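
Concretely, the workhorse version of this approach is a regression fit on calibration data: record neural activity while the user attempts known movements, then fit a map from firing rates to movement and apply it to new activity. Here is a minimal sketch, assuming a linear decoder trained by ordinary least squares on simulated data; real systems use more sophisticated machinery, such as Kalman filters or recurrent networks, and nothing below is the code of any actual device.

```python
import numpy as np

# A minimal sketch of "autocomplete for movement": fit a linear map from
# neural firing rates to intended 2-D cursor velocity using historical
# calibration data, then use that map to decode new activity. The data
# are simulated; the 96-channel array size and noise level are
# illustrative assumptions.

rng = np.random.default_rng(1)
n_neurons, n_samples = 96, 2000

# "Historical" calibration data: rates recorded while the user attempts
# known movements. (Simulated here as a hidden linear map plus noise.)
true_map = rng.normal(size=(n_neurons, 2))            # unknown to the decoder
intended_velocity = rng.normal(size=(n_samples, 2))   # what the user meant
firing_rates = intended_velocity @ true_map.T \
    + rng.normal(scale=2.0, size=(n_samples, n_neurons))

# Training: least-squares fit of rates -> velocity.
weights, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Decoding a new moment of activity.
new_intent = np.array([[1.0, -0.5]])                  # the user's next "wish"
new_rates = new_intent @ true_map.T \
    + rng.normal(scale=2.0, size=(1, n_neurons))
decoded = new_rates @ weights

print("intended:", new_intent[0])
print("decoded: ", decoded[0])   # close, but never exactly the intention
```

Note what the decoder never sees: the intention itself. It fits only a statistical echo of it.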

If you’ve used automatic text completion, you know that it can subtly elide the boundary between intended self and machine-predicted self: Sometimes you choose words that aren’t quite yours. Similarly, something may get lost in translation in a brain-computer interface. Our neural decoding techniques work well enough to maneuver a straw between a person’s lips. But as the tasks get more complicated, how can we be sure a particular action is exactly what the user intended to elicit? Researchers have no meaningful way of eavesdropping on the conversation between mind and machine. If the answer to Wittgenstein’s question passes between them, the scientists never hear it. They don’t know what the algorithms are learning. They only know that the Tylenol basically works.

Around the same time that Aaron and I published our mouse study, Phil Kennedy, an Irish-born neuroscientist living in the US, flew down to Central America to undergo brain surgery. Kennedy had been a well-known figure in the field since the late 1990s, when he became the first researcher to implant a paralyzed person with electrodes that could be used to control a computer cursor. But he didn’t think the applications should end there. He believed that every brain would eventually have a computer interface, and that this would shape the course of civilization.

In Belize, beyond the reach of the US Food and Drug Administration, Kennedy sought to acquire a brain implant of his own to conduct research on himself. The experiment was a dangerous flop. Although Kennedy recorded plenty of neural activity, persistent infections forced him to remove the electrodes after three months. Yet his faith in the technology didn’t seem to suffer. “Your brain will be infinitely more powerful than the brains we have now,” he told a WIRED reporter in 2016, two years after his ordeal. “This is how we’re evolving.”

By 2017, that idea had seized Silicon Valley. Elon Musk announced a new company called Neuralink, which was developing a way to “knit” low-impact electrodes into the brain and transmit the signals wirelessly. Musk’s short-term goal was to treat illness and disability. But eventually, he said, Neuralink would augment everyone’s agency. It would allow people to upgrade their intelligence and abilities by mind-melding with machines—which, in turn, would help humans survive the inevitable death match with the genocidal AIs of the future. The other big entrant that year, Facebook, had more modest designs: The company planned to build a noninvasive headset that could decode thought at a rate of 100 words per minute. (One potential application: Posting something on Facebook.)

I eyed the new class of Tylenol-pushers, doubtful that they understood the quagmire they were stepping into. Once you answer the practical questions about brain-computer interfaces, the philosophical ones begin to multiply.

Say someone was strangled with a pair of robotic arms and the main suspect claims that his brain-computer interface is to blame. Maybe his implant was on the fritz; maybe his algorithm made a bad call, mistaking an intrusive thought for a willed intention or allowing anxiety to trigger an act of self-defense. If you don’t know the neural signature of agency—only that, somehow, volition becomes action—how do you prove him guilty or not guilty? And if it turns out that his brain did intend to kill, was it the machine’s responsibility to stop him?

These aren’t hypothetical questions for a distant future. We’re wrestling with them today. How do we assign responsibility when self-driving cars hit pedestrians, or when passenger planes crash on autopilot? In the Air France 447 and Boeing 737 Max crashes, the autonomous systems got confused by faulty sensor information and the pilots couldn’t recover from the malfunction. This belies the promise, touted by many corporations, that keeping humans in the loop will prevent things from spiraling out of control. It may, in fact, just be a legal sleight of hand to pin liability on an entity that courts are already equipped to hold responsible. A key difference, however, is that a brain interface is part of the body, which makes responsibility harder to demarcate.

There are also, of course, major privacy and security questions with brain interfaces. Because many signals are globally available throughout the brain, a recording device could pick up information about your sensory experience, your perceptual processes, your conscious cognition, your emotional states. Ads could be targeted not to your clicks but to your thoughts and feelings. These signals could even be used for surveillance. Ten years ago, members of Jack Gallant’s lab at UC Berkeley were able to hazily reconstruct visual scenes from the brain activity of people watching video clips. The technique has gotten better with time. If, one day in the far future, someone tapped into your wireless neural receiver, imagine what they could see and hear. Certainly a lot more than if they hacked your webcam or smart speaker. Through our own eyes and ears, we might become the unwitting operatives of a distributed panopticon.

Direct brain-to-brain communication is just as ethically fraught. It’s a beautiful, utopian impulse—the sense that if only we could fully see what’s inside one another, contentions would cease. Should it prove technically possible, however, the question of privacy becomes all the more salient. In the same way that social media companies must grapple with content moderation, brain devices would need to filter inter-brain communication for harmful, hateful, or violent thoughts. There might even be patterns of problematic neural activity that can be passed between people like computer viruses. Epileptic seizures, for example, can be learned by the brain in a process known as “kindling.” Like arsonists setting fire to a city, malicious actors might seek to inject such maladaptive brain activity in a bid to harm other users.

The history of technology, the history of humankind, is one of relentlessly extended agency—exerting control over materials, plants, animals, and perhaps, one day, minds. The invention of computers has transmuted that agency to a programmable realm, wherein a hand can control a mouse that is by turns a digital paintbrush, a text cursor, or a drone’s gun sight. While I’m still hopeful about what brain-machine interfaces will be able to do for people with impaired motor function, we should acknowledge where good intentions might be obscuring a potential ethical catastrophe. We’ve got to reckon with the implications of agency and privacy as they pertain to AI today, before such systems are interfaced with our bodies and minds. We’re being promised new avenues of human control, when it is precisely control we’d be ceding in what could be the largest deprivatization of thought since the invention of language.

