Advancing Robots’ Autonomy and Physical Performance for Interaction and Rehabilitation

Scientific Coordinator: Prof. Alessandro Di Nuovo, Sheffield Hallam University (UK)

Project Summary

Next-generation personal robots will possess cognitive and motor intelligence that enhances human interaction. However, current AI and robotics paradigms require substantially more resources than humans do to achieve comparable performance and reasoning abilities. The EU-funded PRIMI project integrates research from neurophysiology, psychology and machine intelligence to formulate models of higher-order cognitive capabilities. The aim is to usher in a new era of autonomous technologies that exhibit enhanced precision, speed and safety in real-time learning and adaptation. PRIMI envisions a transformative impact on AI and robotics through the development of socially interactive autonomous robots. To validate this vision, prototypes of neuromorphic humanoid robots will undergo clinical pilot studies focusing on stroke rehabilitation.

Why PRIMI?

PRIMI research will boost the uptake of personal social robots, an area with huge potential to improve people's lives that is not yet widely adopted. Current platforms lack the autonomy required to adapt to the behaviours of different individuals and to the physical conditions of real-world environments while interacting with humans. In practice, it is not enough to perform an interaction task correctly; it must be performed in a way that is believable and acceptable to humans. The user's experience of personalised and adaptive interaction with the robot is key to its large-scale adoption.

These open-ended interactive robots cannot be created with the dominant machine learning paradigm and traditional computing and sensing. A paradigm shift is needed: current AI models lack coherent integration with a multimodal sensory system to support self-determined learning via autonomous interaction with the environment, and they are characterised by an unnatural discretisation of time imposed by mainstream processing and sensing architectures. Furthermore, machine learning requires a large number of examples to train models that are suitable only for well-defined, narrow tasks. In contrast, humans can learn from less data by making predictions based on mental imagery. Indeed, neurophysiology and developmental psychology increasingly highlight the embodied nature of human intelligence, which is shaped by the body and the experiences acquired through it, such as manipulations, gestures and movements. Thanks to this close link, multimodal higher-order cognitive abilities like mental imagery can be implemented to improve the physical performance of robots, simulating what humans do with mental training techniques for the acquisition and refinement of motor skills.


PRIMI’s ambition is to induce a paradigm shift in AI and robotics with biologically plausible innovations that realise efficient integration between a robot’s mind and body, enabling higher-order cognitive abilities that give robots awareness of:

    • self (motor imagery),
    • the environment (mental representation),
    • humans (Theory of Mind).

PRIMI’s research will lay the foundation for future personal robotics services provided by socially aware, high-performance, neuromorphic humanoid robots.

Our Innovations

To go beyond what is currently possible, the overarching objective of PRIMI is to combine strategic research and development in neurophysiology, psychology, sport science, machine intelligence, neuromorphic engineering, cognitive mechatronics and humanoid robotics to produce new biologically plausible engineering design principles for autonomous interactive robots with a more tightly coupled mind (neuromorphic AI) and body (humanoid robot).

PRIMI research will address the multifaceted aspects of human intelligence via large-scale brain models and their efficient embodiment in robots with multimodal interaction abilities. It will co-design and develop innovative solutions that more closely couple energy-efficient large-scale neuromorphic computing, based on the most recent SpiNNaker2 ASIC developed within the EU Flagship Human Brain Project; multimodal sensing, with event-driven cameras for vision and artificial skin for tactile sensing, the latter produced by IIT; and a spike-based cognitive architecture with higher-order capabilities: mental imagery, abstract reasoning and Theory of Mind (ToM).
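For readers unfamiliar with spike-based computation, the sketch below shows a minimal leaky integrate-and-fire (LIF) neuron, the basic unit from which spiking architectures like those deployed on SpiNNaker hardware are typically built. This is an illustrative toy model, not PRIMI project code; the function name and all parameter values are assumptions chosen for clarity.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch
# of spike-based computation. Not PRIMI project code; parameters are
# arbitrary illustrative values.

def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate one LIF neuron over discrete time steps.

    input_current: sequence of input values, one per time step.
    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0            # membrane potential
    spike_times = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in          # leak, then integrate the input
        if v >= v_thresh:            # threshold crossed: emit a spike
            spike_times.append(t)
            v = v_reset              # reset after spiking
    return spike_times

# A constant input drives the neuron to spike at regular intervals.
print(lif_simulate([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Information is carried by the timing of discrete spikes rather than by continuous activations, which is what makes such models a natural match for event-driven sensors and energy-efficient neuromorphic chips.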

As a proof of principle, the PRIMI cognitive architecture and neuromorphic hardware will be embedded into two prototypes of neuromorphic humanoid robots with higher-order cognition. They are based on two significantly different platforms produced by the project partners: the neuromorphic iCub by IIT, a research platform designed for developmental robotics that integrates neuromorphic vision, tactile and auditory sensing, and a new biped humanoid platform by PAL. The use of two advanced platforms with distinct characteristics will allow the definition of a modular, flexible solution that can be extended to other advanced humanoids.

The advanced abilities of these neuromorphic robots will be demonstrated in increasingly challenging validation studies in relevant scenarios, showing that they can reason, behave and interact in a human-like fashion. The robots will learn new motor actions by observing human teachers; mentally represent the physical and social environment to recall experiences and simulate new, more complex actions; build their knowledge by grounding abstract concepts in physical experiences; and infer human intentions with ToM.

This research programme will culminate in an impact case study in a clinical environment: two clinical pilots in which the robots will act as tutors for the physical rehabilitation of patients. Clinicians will give instructions for mental practice and teach the motor actions that the robot should replicate to solicit the patient's response. The robot will monitor the patient and autonomously adapt the interaction to their level. This final study will close the multidisciplinary loop: the robot will prompt the patient's rehabilitation via the same principles, action observation and motor mental simulation, that were used for its own learning, while also applying ToM to understand the patient's status and adapt the interaction accordingly.
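The monitor-and-adapt loop described above can be pictured as a simple closed-loop controller: the robot observes the patient's recent success rate and raises or lowers the exercise difficulty accordingly. The sketch below is a hypothetical illustration; the thresholds, the 1–10 difficulty scale and the function name are assumptions, not the project's clinical protocol.

```python
# Illustrative sketch of an adaptive rehabilitation-tutoring loop.
# The robot adjusts exercise difficulty from the patient's success rate.
# Thresholds and the 1-10 scale are hypothetical, not PRIMI's protocol.

def adapt_difficulty(difficulty, success_rate,
                     raise_above=0.8, lower_below=0.4, step=1):
    """Return the next difficulty level (1..10) given recent performance."""
    if success_rate >= raise_above:      # patient copes well: harder task
        difficulty += step
    elif success_rate <= lower_below:    # patient struggles: easier task
        difficulty -= step
    return max(1, min(10, difficulty))   # clamp to the allowed range

# Example session: performance varies across exercise blocks.
level = 5
for rate in [0.9, 0.85, 0.3, 0.5, 0.95]:
    level = adapt_difficulty(level, rate)
print(level)  # → 7
```

In PRIMI the adaptation signal would come from richer sources, such as ToM-based inference of the patient's intentions and state, rather than a single scalar success rate, but the closed-loop structure is the same.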


