Making Robotic Prosthetics We Can Control With Our Minds | Science Not Fiction

![dococ][1]In [_Spider-Man 2_][2]–which I know isn’t canon, but work with me here–[Dr. Octopus][3] can only do his research thanks to some spectacular artificial arms: Each of his four bonus arms is heat resistant, incredibly precise, and has a brain of its own, so they can work independently. The arms join in a knapsack-sized device that connects directly to his spinal cord, so Dr. Octopus can send orders to the arms with his thoughts. Here in the real world, we have trouble linking robotic limbs directly to nerves, because our bodies reject metal attachments to them. So Doc Ock really achieved something there, setting aside the later problems with the arms’ AI (surely an easily fixed bug).

Now a crew of scientists at Southern Methodist University [is working on its own technique][4] for creating two-way communication between an artificial limb and a user’s brain. It relies on non-metallic polymers and, at its core, on the same principle as the whispering galleries found in St. Paul’s Cathedral in London, or in certain parts of Grand Central Station in New York. Indeed, they call it a “whispering gallery mode.”

A person standing in the gallery outside the Oyster Bar in Grand Central could turn and face the wall, and speak a sentence. None of the rushing commuters and tourists walking through the gallery would hear a thing, but a person standing on the other side of the gallery, in exactly the right spot, would hear the first person’s voice as if s/he were standing right there.

The [effect works][5] because the shape of the gallery allows the sound wave to bounce up and around the wall with hardly any loss of energy. It requires a gallery of just the right shape, with the listeners standing the right distance apart, and it occurs only along a narrow band along the gallery’s surface, called the “mode.” Hence the term “[whispering gallery mode][6]” to describe this technology.

Now imagine that instead of a great stone edifice covered in soot and filled with people, the gallery is a tiny polymer sphere. When the sphere is slightly deformed by an electric charge, light enters. When the sphere snaps back, light gets trapped in the sphere, zipping around and around, with hardly any loss of signal strength, until the sphere is deformed again.
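For the physically curious, the standard resonance condition for an optical whispering gallery mode (not spelled out in the article, but well established) says that light stays trapped when a whole number of wavelengths fits around the sphere’s rim:

$$m \lambda = 2 \pi R \, n_{\text{eff}}, \qquad m = 1, 2, 3, \ldots$$

Here $R$ is the sphere’s radius, $\lambda$ the light’s wavelength, and $n_{\text{eff}}$ the effective refractive index of the polymer. Deforming the sphere changes $R$, which shifts which wavelengths resonate, and that is what lets an electrical pulse gate the light in and out.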

A group of such spheres forms [the crucial interface][7] between a robotic limb and the nerves that communicate with it. The end of the nerve is surrounded by a cuff, which terminates in these tiny balls. When the brain sends a signal down the nerve, the electric pulse triggers the cuff, which then deforms the sphere, sending a beam of light to the prosthesis, which then moves.


The signal works the other way, too. The robotic arm can trigger the cuff or the nerve itself with infrared light, allowing for true two-way communication between the robotic limb and the brain.
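As a purely illustrative sketch, the two-way signal path described above can be modeled as a pair of threshold triggers. Every name and number here is invented for illustration; nothing below comes from the SMU device itself.

```python
# Toy model of the two-way optical nerve interface described above:
# a nerve impulse deforms a micro-sphere in the cuff, releasing a light
# pulse that drives the prosthesis; the prosthesis replies with infrared
# light that stimulates the nerve. All thresholds are made up.

def nerve_to_prosthesis(voltage_mv: float, threshold_mv: float = 40.0) -> bool:
    """An impulse above threshold deforms the sphere, sending a light
    pulse that commands the limb to move."""
    sphere_deformed = voltage_mv >= threshold_mv
    return sphere_deformed  # True -> light pulse sent -> limb moves

def prosthesis_to_nerve(ir_intensity: float, threshold: float = 0.5) -> bool:
    """Infrared light from the limb stimulates the nerve, carrying
    sensory feedback back toward the brain."""
    return ir_intensity >= threshold  # True -> nerve fires

# One full round trip: brain commands the limb, limb reports back.
moved = nerve_to_prosthesis(55.0)
felt = prosthesis_to_nerve(0.8) if moved else False
print(moved, felt)  # True True
```

The point of the toy is only the shape of the loop: each direction is a one-way trigger, and together they close the feedback circuit between brain and limb.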

Researchers at Southern Methodist University have already built a device that pretty much achieves all of these functions, but it’s far too large, at several hundred microns across. They’ve received a $5.6 million grant from DARPA, and they’re hoping to build a robotic limb using this technology for a dog or a cat within the next two years. Just think: Doc Ock could have a little friend!



[1]:×300.jpg (dococ)
[2]:
[3]:
[4]:
[5]:
[6]:
[7]:
[8]:×226.jpg (neurophotonic)
[9]:
[10]:
[11]:
[12]:
[13]:
[14]:

