Wheelchairs can’t yet be summoned via voice commands or connect with other wheelchairs to issue warnings about what’s ahead. Dr. Konstantinos Sirlantzis wants to change that.
Electric wheelchairs offer independence to those with mobility issues, but there are still limitations. They can’t yet be summoned via voice commands or connect with other wheelchairs to warn each other about obstacles ahead. Or can they?
Dr. Konstantinos Sirlantzis, Senior Lecturer in Intelligent Systems at the University of Kent, envisions a smart wheelchair future enabled by robotic plug-ins and add-ons. We met with him during a recent trip to the UK, where he gave us a sneak peek at his current project: Assistive Devices for empowering Disabled People through robotic Technologies (ADAPT).
The project’s overarching goal is to improve people’s independence and quality of life. It’s funded by the European Regional Development Fund (ERDF) and the University of Kent is partnering with 16 other institutions, including National Health Service hospitals.
While his PhD students zoomed around the lab in robotic-assisted wheelchairs, we sat down with Dr. Sirlantzis to find out more. Here are edited and condensed excerpts from our conversation.
PCMag: You’ve been at the University of Kent for 25 years now and are considered an expert in artificial intelligence techniques, neural networks, genetic algorithms, and other biologically inspired computing paradigms. How did you become interested in assistive robotics tech?
Dr. Konstantinos Sirlantzis: I was a systems analyst at the National Bank of Greece, then I did my Masters in statistics—dealing with the training algorithms of artificial neural networks—before doing my PhD in the same field. I started looking into artificial intelligence and, subsequently, robotic applications of AI when I began discussing these areas with my colleagues in the engineering school and with clinical engineers at the local hospital. I could see a way that the technologies we were interested in developing could be applied to existing electric wheelchair design.
So you’re not re-engineering from the ground up with a robotic system? This is entirely modular?
Yes, because we want to make sure it fits current requirements and works with equipment that is already approved and used by the current healthcare system.
Otherwise, like a lot of new technology that looks great but doesn’t comply with regulatory requirements, it gets stuck on the shelf.
Some of your robotic modular add-ons use iris detection so the user can move their wheelchair just by blinking?
We have a combination of technologies, including tracking the movement of a person’s head, iris, or nose to control the wheelchair, again depending on the user’s abilities, which change over time as their condition progresses.
How is AI used within the robotic design?
We are integrating AI and sensing technology to achieve both collaborative control of the wheelchair and assessment of the physiological and emotional state of the user. Using AI, the wheelchair learns the user’s preferences and driving style, as well as his or her current physiology (heart rate, etc.). This will enable us to vary the wheelchair’s autonomy depending on the user’s state (e.g., provide more assistance when a user shows fatigue), as well as report data to health agencies, carers, and so on. The platform is cloud-connected.
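The shared-control idea Dr. Sirlantzis describes, giving the chair more authority as the user tires, can be sketched in a few lines of Python. Everything here (the function name, the 0-to-1 fatigue score, the linear blend) is an illustrative assumption, not the ADAPT implementation:

```python
def blend_command(user_cmd, auto_cmd, fatigue):
    """Shared control sketch: weight the autonomous suggestion more
    heavily as the estimated fatigue level rises.

    user_cmd, auto_cmd: (linear, angular) velocity commands.
    fatigue: 0.0 (alert) to 1.0 (exhausted), clamped to that range.
    """
    fatigue = min(max(fatigue, 0.0), 1.0)
    return tuple(u * (1 - fatigue) + a * fatigue
                 for u, a in zip(user_cmd, auto_cmd))
```

An alert user’s joystick input passes through untouched; as the fatigue estimate climbs, the output shifts toward the chair’s own suggested command.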
Will the promise of always-on 5G connectivity change your project?
Yes, our concept of the robotic-connected wheelchair is as a 5G-generation device that can detect events and constantly scan the environment, geotagging issues with the corresponding images or video and uploading them to the cloud. Then, if there’s an obstruction on the path, another wheelchair user approaching that same physical point will be made aware of the situation.
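One way to picture this chair-to-chair warning is a cloud store of geotagged reports that each chair checks against its planned route. The sketch below is a stdlib-only Python illustration with hypothetical names (`ObstacleReport`, `reports_near`); the real system’s data model is not described in the interview:

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class ObstacleReport:
    """A geotagged obstacle event, as one wheelchair might upload it."""
    lat: float
    lon: float
    description: str
    timestamp: float = field(default_factory=time.time)

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres (equirectangular
    approximation, accurate enough over pavement-scale distances)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return 6_371_000 * math.hypot(dlat, dlon)

def reports_near(route, reports, radius_m=20.0):
    """Return reports lying within radius_m of any waypoint on
    another chair's planned route."""
    return [r for r in reports
            if any(distance_m(r.lat, r.lon, lat, lon) <= radius_m
                   for lat, lon in route)]
```

A second chair whose route passes within the alert radius of a report would then be warned and could re-plan before reaching the obstruction.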
The wheelchair will be smart enough to take in real-time obstruction data and re-route?
Exactly. We consider our work to be “socially robotic assistive technology.” It helps users to participate more fully in their world through the use of technology.
There are many issues with first-generation tech, though.
That’s true, which is why we’re not re-engineering the wheelchair from the ground up. We are careful to ensure that the wheelchair will always work. We cannot have a problem, as with newer technologies, where the system just stops working and the user is stuck there. That would be very frustrating for them and very wrong.
How much smart automation have you built in?
We had requests from many users to build an app so they could “call” the wheelchair to their bed when they woke up, rather than waiting for their carer to arrive. Using Wi-Fi or Bluetooth on the chair, an app on the user’s mobile phone, and the same ultrasound- and vision-based parking-assist technology used in semi-automated cars, the wheelchair drives itself safely through the clutter of the environment to reach the user.
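The “call the chair to the bedside” feature boils down to planning a collision-free path through a cluttered room. A minimal sketch, assuming the room is reduced to an occupancy grid from the chair’s sensors (the grid, `plan_path`, and breadth-first search are illustrative choices, not the project’s stated algorithm):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns a shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk parents back to reconstruct path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

On a real chair the grid cells would come from the ultrasound and vision sensors, and the path would be replanned as the sensors pick up new clutter.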
Does the wheelchair create a 3D map of the location?
Eventually yes, we will have intelligent autonomous 3D mapping, but not at the moment. The electronic circuit boards we’re using are still not that cheap, so what we’re doing, apart from the ultrasound, is utilizing the cheapest kind of Lidar as a two-dimensional mapping device.
Computer vision has been one of my fields of work for many years. As you know, it requires very heavy computational processing. But we want to embed this capability on the device itself, rather than need a computer to do so.
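A 2D Lidar of the kind he mentions returns a fan of range readings at known angles; turning that scan into map points is a simple polar-to-Cartesian conversion. The function below is a generic sketch (names and parameters are assumptions, loosely modelled on how 2D laser scans are commonly represented), not ADAPT’s code:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, max_range=8.0):
    """Convert a 2D lidar scan (a list of range readings taken at evenly
    spaced angles) into (x, y) points in the sensor frame, dropping
    zero or out-of-range returns."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r <= max_range:
            theta = angle_min + i * angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Accumulating these points as the chair moves, with each scan transformed by the chair’s estimated pose, is the essence of the two-dimensional mapping he describes.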
Let’s talk about the software.
All the software is written here in the lab, mainly [in] C or C++ plus a little Python. We are using the Robot Operating System (ROS), creating nodes with our development partners so we can share code throughout the project to ensure interoperability.
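The appeal of ROS here is its node-and-topic model: one partner’s sensing node publishes messages on a named topic, and any other partner’s node subscribed to that topic reacts, without the two sharing code internals. As a feel for the pattern, here is a tiny stdlib-only Python analogue (the `Bus` class and the `/adapt/obstacle` topic name are invented for illustration; real ROS nodes would use its own publisher/subscriber API):

```python
from collections import defaultdict

class Bus:
    """A toy stand-in for ROS topic plumbing: nodes publish messages to
    named topics, and every callback subscribed to a topic is invoked."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for callback in self._subs[topic]:
            callback(msg)

# One partner's sensing node publishes; another partner's control
# node, written independently, reacts to the shared topic.
bus = Bus()
log = []
bus.subscribe("/adapt/obstacle", lambda msg: log.append(f"slowing: {msg}"))
bus.publish("/adapt/obstacle", "object 0.8m ahead")
```

Because every module agrees only on topic names and message formats, partners can swap implementations (C++ for the heavy processing, Python for prototyping) and interoperability still holds.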
How many people are participating in your trials at the moment?
We have recruited 10 people with disabilities through a local hospital’s neurorehabilitation unit, which caters to people with brain injury and damage from strokes.
Are you using any University of Kent students who are wheelchair users in your trials?
In order to stay within the strict ethics boundaries of clinical trials, we have to go through the appropriate ethics approval bodies. So, in the majority of cases, trials recruitment goes through our health services and hospital partners. This is crucial so we don’t have obstacles later when our technology gets to market.
Finally, what’s next for you here at the lab?
We are exhibiting and presenting our work at NAIDEX 2019 [on March 26 and 27]. This is the most established professional and public event dedicated to independent living for people with a disability or impairment.
We also have a new European project, MOTION, which was just funded. It aims to develop an autonomous exoskeleton for children with cerebral palsy to help them with their exercises. We also plan to integrate smart clothing with sensors and haptics so these children know when they’re in the right posture, which helps improve their condition.
By S.C. Stuart