Here’s a look at the work at the intersection of brain-computer interface, robotics and AI taking place at Johns Hopkins Applied Physics Lab.
When there’s dessert involved, most people cut a sweet treat and eat it without thinking too much about what they’re doing. But when you take a minute to consider, there’s a lot involved.
First there’s picking up the utensils. Then your brain decides how big of a piece it wants to cut, and goes about cutting it. After picking up the piece, you have to aim it toward your mouth. Finally, it gets there. This is usually when the first thought kicks in, something to the effect of: mmmm.
This is the process captured in a recent video of Robert “Buz” Chmielewski feeding himself a piece of sponge cake at Johns Hopkins Applied Physics Laboratory (APL) in Laurel. But instead of Chmielewski’s own hands and arms, robotic arms are performing the work.
Yet Chmielewski was still controlling the actions, just with his mind.
Being able to use the two prosthetic limbs to perform an everyday task is a mark of progress two years into a research study conducted by APL and the Baltimore-based Johns Hopkins School of Medicine. And it was the first time Chmielewski had attempted this particular task.
“The fact that he was able to do something like that on the first shot was remarkable in and of itself,” said Francesco Tenore, an APL neuroscientist who is principal investigator on the project.
By combining a brain-computer interface, AI and robotics with APL’s work in prosthetics, the project is helping to show the way for technology to restore function and autonomy to patients who have lost the use of their limbs.
It’s also one of the latest efforts out of Maryland that builds on work which started at the Defense Advanced Research Projects Agency (DARPA) in 2005 and was carried forward at the University of Pittsburgh. At APL, the Smart Prosthetics project offers a look at advances at the intersection of our understanding of how the brain works, and how that understanding can be used to operate machines.
At APL, the idea is that putting this technology to work can help with everyday tasks, aka “activities of daily living.” It presents a real-world application.
“It’s always tasks of increasing complexity,” said APL’s Tenore. “Start simple and then progress your way toward what nobody has seen before.”
For Hopkins, the path started two years ago. Chmielewski, who is quadriplegic, underwent surgery in which six electrode arrays were implanted in both hemispheres of his brain. They’re just millimeters in size, about 4 x 4 x 1.5 millimeters, to be exact. With these devices in place, researchers can capture the neural activity taking place in a particular part of the brain. After recording what happens when Chmielewski thinks about a certain task, the scientists can then map between those patterns and an activity, say, how an arm is moving.
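That mapping step can be pictured, in very simplified form, as fitting a decoder from recorded neural activity to an intended movement. The sketch below assumes the activity is summarized as per-channel firing rates and uses a generic ridge-regression decoder on synthetic data; the array sizes, channel count and decoding method are illustrative assumptions, not the team’s actual approach.

```python
# A minimal sketch of decoding neural activity into a movement command.
# Everything here (96 channels, ridge regression, synthetic data) is a
# hypothetical stand-in, not APL's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates from 96 electrode channels,
# recorded over 1,000 time bins while the participant thinks about moving an arm.
firing_rates = rng.poisson(lam=5.0, size=(1000, 96)).astype(float)

# The intended movement for each time bin: x, y, z hand velocity (synthetic here).
intended_velocity = rng.normal(size=(1000, 3))

# Fit a simple linear decoder: neural pattern -> movement.
decoder = Ridge(alpha=1.0)
decoder.fit(firing_rates, intended_velocity)

# At run time, a new window of neural activity is decoded into a velocity
# command that could be passed along to the robotic limb's controller.
new_window = rng.poisson(lam=5.0, size=(1, 96)).astype(float)
velocity_command = decoder.predict(new_window)[0]
print("decoded velocity command (x, y, z):", velocity_command)
```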
“The robot knows how to do most of it, but the key is, for each of those steps of the task, what should the person be doing?”
– David Handelman, Johns Hopkins Applied Physics Laboratory
The system also allows for the perception of how it feels to pick up and touch the items.

Those patterns are key as scientists then enable artificial intelligence and robotic devices to take the signals from Chmielewski’s brain and translate them into movements of the robotic limbs.
The idea is that the robot can do a lot of the work. But when it comes to making the key decisions, like what to eat, how big of a bite to cut and where to cut, it is the person’s thoughts that are in control.
“We want to give the person the option to customize what the machine is doing,” said David Handelman, a senior roboticist at APL. “The robot knows how to do most of it, but the key is, for each of those steps of the task, what should the person be doing? We want the person to be doing the most important thing.”
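In rough terms, that division of labor can be sketched as a loop in which the robot automates each step of the task while the person’s decoded decisions supply the parameters that matter. The step names and decision fields below are hypothetical stand-ins for illustration, not APL’s actual software.

```python
# A minimal sketch of the shared-control idea: the robot knows how to execute
# each step; the person's decisions customize what it does. All names here are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PersonDecision:
    target_food: str      # what to eat
    bite_size_cm: float   # how big a piece to cut
    cut_location: str     # where on the item to cut

def robot_execute(step: str, decision: PersonDecision) -> None:
    # Placeholder for the autonomous motion planning and control the robot handles.
    print(f"[robot] {step} (food={decision.target_food}, "
          f"bite={decision.bite_size_cm} cm, at {decision.cut_location})")

def feed_one_bite(decision: PersonDecision) -> None:
    # The robot performs each step; the person's decision parameterizes it.
    for step in ["pick up utensils", "cut piece", "spear piece", "bring to mouth"]:
        robot_execute(step, decision)

feed_one_bite(PersonDecision(target_food="sponge cake", bite_size_cm=2.0, cut_location="near edge"))
```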
While the milestone has a big wow factor, it also offers a window into how advanced technology is developed. Advances are made step by step.
Pitt researchers have demonstrated tasks that could be performed by both arms, but with signals from one hemisphere. In the case of Chmielewski, it was significant that he was controlling the technology with both hemispheres. They built up to the task, training to the point where he could cut the food and eat on the first try. Going forward, they’ll add tasks that require more dexterity. Tying shoelaces is a particular focus. And they’ll work to add more sensory feedback.
As with lots of advanced technology, this started with government funding. That led to work aimed at a specific community, one where the technology could make a significant difference in people’s lives. But as they move forward, the teams working on this have an eye toward how it might apply in society more widely, as well.
Having similar setups for other day-to-day activities would be a “game changer” for people who lack function in their arms, Tenore points out: “You would use the best of both worlds — the brain to direct what you want to do and how you want to do it, and the machine to help you get there.”
As more applications of these technologies are built out, Handelman sees more robots and humans working side by side.
“This is a very tight coupling between humans and machines, and so it’s a fascinating exploration of how we might best collaborate in the future,” he said.
Along with Tenore and Handelman, team members behind the advance shown in the video included Andrew Badger, Matthew Fifer and Luke Osborn from APL. Tessy Thomas, Robert Nickl, Nathan Crone, Gabriela Cantarero and Pablo Celnik contributed from the School of Medicine.