The 3rd Workshop on

NeuroDesign in Human-Robot Interaction

The making of engaging HRI technology your brain can’t resist.

Full-day HYBRID workshop as part of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2025)

August 25, 2025

09:00 - 17:00 (CET)


Our Mission

Bring Human-Robot Interaction Research
to the Real World with
Brain-Centered Experience

Statement

Problem

  • Current robots interacting with humans do not understand personalized intents, attention, specific needs, or emotions well enough to serve people appropriately in different contexts.
  • The design of interactive robot behaviors does not match human intuition, user expectations, or social norms, so people have a hard time interpreting the robots' objectives.
  • Humans and robots lack the mutual understanding needed to perform coordinated, co-adaptive joint actions in close-contact, close-proximity, or tele-operated settings.
  • Human-machine interfaces are not ergonomically designed, and the AI algorithms or embodied intelligence in the mechanisms used in applications lack the cognitive smoothness people need. As a result, humans and machines cannot communicate naturally and effortlessly.
  • Academic HRI research has difficulty entering consumer markets and making an immediate practical impact.

Solution

  • Reinvent human-robot interaction by working with scientists, entrepreneurs, and end-users to define, prove, innovate, and scale need-driven, value-based HRI innovations.
  • Place brain-centered experience at the center of design to compensate, augment, and assist people, to keep our societies in harmony with nature, and to improve quality of life everywhere.
  • Investigate the characteristics of human behavioral psychology when experimenting with users. Building on that, explore and implement the fundamental neuroscience principles of sensing, cognition, planning/decision making, and motor control to tune HRI interaction dynamics and design bio-compatible human-robot interaction devices, algorithms, theories, and strategies.
  • Mix multi-disciplinary domain expertise, diverse mindsets, and market-driven research funding to shorten the time-to-market of traditional lab research across the entire innovation process.

Approach

  • Optimize Cognitive Load of HRI
  • Emotion-Aware Interactions
  • Intuitive Human-Machine Channels
  • AI-Enhanced Brain-Centered Experience
  • Design the Innovation Pipeline through the Lens of Entrepreneurship

Invited Speakers

Jeff Krichmar

Professor
Department of Cognitive Sciences and Computer Science
University of California, Irvine (UCI)
Explore more

Alessandra Sciutti

Tenure Track Researcher, Head
COgNiTive Architecture for Collaborative Technologies
Italian Institute of Technology (IIT)
Explore more

Kyle Yoshida

Assistant Professor
Mechanical and Aerospace Engineering
University of California, Los Angeles (UCLA)
Explore more

Kai Arulkumaran

Research Team Lead
Reinforcement Learning Team
Araya Inc.
Explore more

Kinsey Herrin

Senior Research Scientist
School of Mechanical Engineering
Georgia Institute of Technology (Georgia Tech)
Explore more

Raphaelle Roy

Professor, PI
Conception and Control of Aeronautical and Spatial Vehicles
ISAE-SUPAERO, Université de Toulouse
Explore more

Alexis Block

Assistant Professor
Electrical, Computer and Systems Engineering
Case Western Reserve University
Explore more

Matteo Bianchi

Associate Professor
Information Engineering, Research Centre “E. Piaggio”
University of Pisa (UniPi)
Explore more

Tharun Iyer

Electrical Engineer
OpenBCI
Explore more

Tharun Iyer

Neuroscience Technical Representative
Wearable Sensing
Explore more

All Stakeholders

Involved for Discussion

Event Schedule

08:30

Set Up

Welcoming and greeting all participants
09:00

Opening Remarks

Organizers
09:15

Human-Inspired Cognition for Embodied Robotic Communication

Alessandra Sciutti, Italian Institute of Technology, Italy
09:45

Electrophysiology-based characterization and adaptation of human-robot interaction

Raphaëlle N. Roy, Université de Toulouse, France
10:15

Human-inspired computational and embodied intelligence to enrich HRI

Matteo Bianchi, University of Pisa, Italy
10:45

Coffee Break (Poster and NeuroDesign Showcase Demo)

11:00

Smartphones as Portable Haptics Labs: Design, Development, and Application

Kyle Yoshida, University of California - Los Angeles, USA
11:30

Panel Discussion

Matteo Bianchi, Kyle Yoshida, Kai Arulkumaran, Luka Peternel, Cesco Willemse (more to come)
12:30

Lunch Break (Poster and NeuroDesign Showcase Demo)

13:45

Scaling Brain-Robot Interfaces to Multiagent Systems

Kai Arulkumaran, Araya Inc., Japan
14:15

From Lab to Life: Lessons Learned toward Real-World Robotic Exoskeletons and Prostheses through Patient-Centered Experimentation

Kinsey Herrin, Georgia Institute of Technology, USA
14:45

   

Alexis Block, Case Western Reserve University, USA
15:15

Coffee Break (Poster and NeuroDesign Showcase Demo)

15:30

Industry Spotlight Talks & Demo

OpenBCI & Wearable Sensing
16:30

Human Neurorobot Interaction – Towards Brain-Inspired HRI

Jeff Krichmar, University of California, Irvine, USA
17:00

Award Ceremony and Closing Remarks

All participants

Industry Spotlight

NeuroDesign EXPO & Competition  (Online)

NeuroDesign in HRI Student Showcase Competition

[Meetings] [CFP] (3rd Call) Call for Submission: NeuroDesign EXPO in HRI Student Showcase Competition at ICRA 2024 (USD $1800 in cash)

------------------------------------------------------------


===============================Call for Submission===============================

NeuroDesign EXPO in HRI Student Showcase Competition (USD 1,800 for your participation)
2nd Workshop on NeuroDesign in Human-Robot Interaction: The making of engaging HRI technology your brain can’t resist

IEEE International Conference on Robotics and Automation (ICRA 2024)
Yokohama, Japan
May 17, 2024
https://neurodesign-in-hri.webflow.io/


Dear Colleagues,

Are you working on a human-interactive robot project that already has prototypes or some initial research findings? Perhaps you're pondering how to evolve these into more human-centric, real-world applications with a seamless, intuitive "brain"-centered experience, aiming to connect robots more deeply with our bodies, minds, and souls.

Worry not! We're excited to announce the NeuroDesign EXPO in HRI Showcase Competition at ICRA 2024, in conjunction with the NeuroDesign in HRI workshop (https://neurodesign-in-hri.webflow.io/). This event will feature a panel of world-renowned experts from diverse fields such as Human-Robot Interaction (HRI), Artificial Intelligence (AI), Cognitive Neuroscience, Social and Behavioral Psychology, Art & Design, RoboEthics, and the Startup community. These professionals will offer live, integrated-perspective feedback and recommendations to help refine your projects into more impactful research and commercial products.
We invite students at the BSc, MSc, and PhD levels to submit their projects. Submissions can fall into, but are not limited to, the following categories:

Affective Computing
Social and Service Robot
Industrial Collaborative Robot
Wearable Robot/ Device
Brain-Machine/Computer Interface
Haptics and Tele-operation
Soft Robotics
VR/AR & Metaverse
Cyborg and Bionic System
Healthcare Robotics
Exoskeleton and Rehabilitation Robot
LLM and Foundation Model for Human Robot Interaction
Brain Dynamics and Psychology for Cognitive and Physical HRI (cHRI/pHRI)
Human-Drone/AutoVehicle Interaction
Assistive Technology
Intelligence Augmentation and Human 2.0 Technologies
Supernumerary Limbs
Biometric Information Processing
Pervasive-Ubiquitous/Spatial Computing
Smart Home/Internet of Things (IoTs)
Edge-Fog-Cloud Computing
Speech/Gesture Recognition or Image/Audio Processing
Big Data, AI & Machine Learning for Multimodal Interaction
Smart Tutoring/Chatbot System
RoboEthics
RoboFashion, Clothing and Robot Skin
Diversity, Equity & Inclusion (DEI) for HRI Technologies
 
 
Participation Procedure:
Our selection process is on a rolling basis, and we aim to choose 10 projects for the final on-stage pitch presentation. We especially encourage those who have already submitted their work as posters or papers to ICRA 2024 or have publications elsewhere to participate in our event. This competition offers a fantastic opportunity to increase the visibility of your research globally.

Submission Options:

Submissions can be made in one of three formats:
1. A concise 100-word abstract and a 1-2 minute video, offering a brief yet engaging overview.
2. A 100-word abstract accompanied by 5 detailed slides for a short but thorough presentation.
3. A 2-page extended abstract (Please follow IEEE ICRA format:
https://ras.papercept.net/conferences/support/word.php), for a more in-depth submission.
*4. Participants are also welcome to submit using any combination of the above formats.


Final Round and Presentation:

The selected 10 projects will each have a 5-minute pitch presentation on stage during the final round. Alternatively, you may submit a polished, pre-recorded 5-minute video presentation, which we will play on stage.

Exhibition and Virtual Participation:

All submitted projects will receive a dedicated booth for poster and prototype demonstrations.
The event is designed to be "Hybrid" to ensure that everyone has the opportunity to participate, regardless of their ability to travel to Japan.

Awards:

We are thrilled to present two distinguished award categories at the competition. The "Best Innovation in HRI NeuroDesign Award" will go to 3 outstanding projects that exemplify groundbreaking innovation in Human-Robot Interaction NeuroDesign; its First Prize is USD 1000 and its 2nd Prize is USD 800 to support further prototyping. Additionally, the "Most Popular Project in HRI NeuroDesign Award" will be given to 2 projects that capture the hearts of our workshop attendees and audience, determined through a popular vote; its First Prize is a full fee waiver for a submission to Frontiers in Robotics and AI. Winners in both categories will receive certificates acknowledging their achievements.

Timeline:

- Submission Deadline: Your entries must be submitted by May 1, 2024. Please note that our selection process is rolling, so early submissions are encouraged.
- Announcement of Final Project Teams: The teams selected for the final round will be announced on May 3, 2024.
- Competition Date: The competition will take place on May 17, 2024, where finalists will present their projects to the panel and attendees.

Submission Website:
https://forms.gle/fQMxJtkXb8JEU2WR6

If you have any questions, please don't hesitate to reach out. We're looking forward to seeing you at ICRA 2024!

Best regards,

Organizing Committee
2nd Workshop on NeuroDesign in Human-Robot Interaction: The making of engaging HRI technology your brain can’t resist
https://neurodesign-in-hri.webflow.io/

Register or submit a contribution!
Pre-Register
Free

In recent years, there has been a rapid rise of robotics innovation around the globe. This has been largely driven by labor shortages in dirty, dull, dangerous, and repetitive or tiring jobs, such as manufacturing, agriculture, the food industry, infrastructure construction, and autonomous vehicles, where robots can provide faster, more precise, safer, and more reliable task performance, working long hours without breaks, compared to their human counterparts. The worldwide pandemic pushed demand even further for robots to replace frontline healthcare workers, nurses, and physicians, avoiding body contact and mitigating dangerous virus infection and transmission. All of these examples contributed to vast innovation in robot automation, which excels when the robot can work alone without human intervention, separated from our living environment, with little worry about harming people nearby. However, when robots come into our homes and hospitals, or into environments that require tight human-robot interaction (HRI), safety issues and uncertain human factors make the presumed technical assumptions falter, and the development processes and business models fail. Representative examples can be found in the recent shutdowns of several well-known startup companies, including Rethink Robotics, Jibo, and Anki, all of which were developing forefront human-robot interaction solutions.

We surmise that the innovation and commercialization of HRI products present unique challenges that are typically not encountered in other industries. With the constantly increasing demand for HRI technologies to compensate, augment, and empower human capabilities, we need to seriously address the fundamental flaws in how HRI technologies are developed and translate traditional "ivory tower" lab research into real-world applications and consumer products more fluently. In our competition, we intend to find the best projects that exemplify the NeuroDesign innovation principles used to identify, invent, and implement HRI technologies, providing practical guidance for quickly bringing human-robot interaction lab research to bear on real-world problems.

🧠 What Is NeuroDesign in HRI?

NeuroDesign in Human-Robot Interaction (HRI) is an emerging interdisciplinary approach that brings together principles from neuroscience, cognitive and behavioral psychology, robotics, AI, and human-centered interaction design to create human-robot systems that are judged not just on performance metrics, but on experiences that are deeply intuitive, ergonomic, emotionally resonant, and cognitively aligned with the human brain.

It emphasizes designing at every level of the system: from the physical form factor of the robot to its internal software, AI and control logic; from multi-modal sensing strategies (e.g., EEG, EMG, IMU, voice, vision, etc.) to interaction flows and feedback mechanisms—every element is co-designed to create seamless, brain-centered experiences. Whether it’s a robot adjusting its behavior based on your mental fatigue, a soft exosuit synchronizing with your intention and muscle activity, or a socially assistive robot responding to your emotional state through haptics and voice, NeuroDesign focuses on creating interactions that feel natural to the brain and body—smooth, engaging, and human-centered.

At its core, NeuroDesign covers both cognitive human-robot interaction (cHRI) and physical human-robot interaction (pHRI). It supports four fundamental modes of brain-body-robot interaction, each representing a bidirectional loop between human and machine:

🔺 (1) Human Brain ⟷ Robot Brain (cHRI)
User intent and cognitive states are decoded through neural signals (e.g., EEG), attention tracking, or other neural-sensing modalities to guide robotic decision-making and adaptive control (e.g., shared autonomy). Conversely, the robot’s “brain” can communicate back to the human brain using neuromodulation or non-contact feedback cues such as visual patterns, or auditory signals—enabling a direct, closed-loop interface between cognitive states and machine intelligence.

🔺 (2) Human Brain ⟷ Robot Body (cHRI)
Thought-driven interfaces (e.g., BCIs) translate cognitive states into physical action, enabling control of robotic limbs, exosuits, or tele-operated robots via imagined movement, motor imagery, or mental workload. The robot body, in turn, conveys its intention or status back to the user through expressive gestures, non-contact body movement, visual indicators, speech, or sound—supporting fluid two-way cognitive communication.

🔺 (3) Robot Brain ⟷ Human Body (cHRI)
Intelligent robotic systems adaptively shape the user's experience by delivering context-aware feedback—via haptics, visual cues, auditory signals, or neuromodulation (e.g., brain or muscle stimulation)—based on the user’s physiological or affective state. At the same time, users can interact with the robot using body language, facial expressions, voice commands, and other non-contact modalities, allowing mutual understanding without the need for physical touch.

🔺 (4) Human Body ⟷ Robot Body (pHRI)
Here, physical interaction becomes central, as muscle activity, joint movement, and biomechanical cues (sensed via EMG, IMUs, or force sensors) drive collaborative behavior. This includes co-manipulation, shared locomotion, and synchronized motion between the human body and robotic components—enabled through wearable robots, soft exosuit, Co-Bots, and/or force-based feedback systems.

These interaction loops are made possible through multi-modal sensing and actuation, combining EEG, EMG, IMU, skin conductance, eye tracking, and more. Achieving seamless integration requires careful co-design of software, hardware, AI, and physical interfaces to ensure that robotic systems align with the user's natural cognitive and physical processes.
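
To make the closed-loop idea above concrete, here is a minimal, hypothetical Python sketch of one adaptation cycle: two made-up signal readers stand in for an EEG-based workload index and an EMG-based effort index, and their fused value modulates a robot's commanded speed. The function names, weights, and bounds are illustrative assumptions, not a prescribed implementation.

# Illustrative sketch only: a minimal closed-loop adaptation cycle in the
# spirit of NeuroDesign. All signal sources and the adaptation rule are
# hypothetical placeholders.
import random
import time

def read_workload_estimate() -> float:
    """Stand-in for an EEG-based mental-workload index in [0, 1]."""
    return random.random()

def read_muscle_effort() -> float:
    """Stand-in for an EMG-based physical-effort index in [0, 1]."""
    return random.random()

def adapt_robot_speed(current_speed: float, strain: float,
                      min_speed: float = 0.2, max_speed: float = 1.0) -> float:
    """Slow the robot down when the fused human 'strain' is high,
    speed it up when the human has spare capacity."""
    target = max_speed - strain * (max_speed - min_speed)
    # Smooth the change so the robot never jumps abruptly.
    return 0.8 * current_speed + 0.2 * target

speed = 0.5
for _ in range(5):                       # one short demo loop
    strain = 0.6 * read_workload_estimate() + 0.4 * read_muscle_effort()
    speed = adapt_robot_speed(speed, strain)
    print(f"strain={strain:.2f} -> commanded speed={speed:.2f}")
    time.sleep(0.1)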

Ultimately, NeuroDesign invites us to imagine and build technologies that are not only usable but embrace the richness of human cognition and embodiment. It informs us to design robots and interactive systems that understand us, respond to us, and evolve with us—in ways that are emotionally meaningful, neurologically intuitive, and functionally empowering.

==================================================

📝💡 Suggested Submission Topics

We welcome submissions across a wide spectrum of human-robot interaction, including (but not limited to):

 * Affective & Social Robotics
 * Brain-Machine / Brain-Computer Interfaces (BCI)
 * Wearable & Assistive Robots
 * Exoskeletons & Rehabilitation Systems
 * Human-AI Co-adaptation
 * Embodied AI / Large Language Models for HRI
 * VR/AR, XR, & Metaverse-based Interaction
 * Haptics, Teleoperation & Sensory Feedback
 * Cognitive & Physical Human-Robot Interaction (cHRI/pHRI)
 * Neuroergonomics & Human Factors
 * Soft Robotics & Bionic Systems
 * Smart Environments & Pervasive Computing (IoT, Smart Homes)
 * Multimodal Interfaces: Gesture, Speech, Emotion Recognition
 * RoboEthics, Inclusion & DEI in HRI
 * RoboFashion, Adaptive Wearables & Robot Skins
 * Supernumerary Limbs, Human 2.0, & Intelligence Augmentation
 * Anything that Connects Minds, Bodies, and Robots

==================================================

📥 Submission

Submission is very simple! 😊 Simply click the icon below to express your intent to participate, so that we can reserve your spot for a 5-minute pitch presentation. After that, you may submit any one, or any combination, of the submission options listed below, along with your materials.

🔺 (1) Video Abstract: A 100-word summary + 1–2 minute video overview
           (A concise 100-word abstract and a 1-2 minute video, offering a brief yet engaging overview of your project.)

🔺 (2) Slide Deck: A 100-word summary + 5 informative slides
           (A 100-word abstract accompanied by 5 detailed slides for a short but thorough presentation.)

🔺 (3) Extended Abstract: A 2-page write-up (IEEE RAS format)
           (for a more in-depth submission)
           IEEE Template: https://ras.papercept.net/conferences/support/word.php
==================================================

🎤 Finalists & Presentation Format

Our selection process is on a “rolling basis”, and we aim to choose 10 projects for the final on-stage pitch presentation. We especially encourage those who have already submitted their work as posters or papers to Neuroscience 2025 or have publications elsewhere to participate in our event. This competition offers a fantastic opportunity to increase the visibility of your research globally.

 * 10 projects will be selected for on-stage 5-minute pitch presentations.

 * Finalists may join in person, via Zoom, or submit a pre-recorded presentation.

 * All submissions (accepted or not) will be offered poster/demo space at our venue.

 * We will print and exhibit posters for virtual participants.

==================================================

🖼️💻 Exhibition and Virtual Participation

All submitted projects will receive a dedicated booth for poster and/or prototype demonstrations. For participants unable to travel to Neuroscience 2025, we will display your posters "virtually" on our website.

The event is designed to be "Hybrid" to ensure that everyone has the opportunity to participate, regardless of their ability to travel to Neuroscience (@ San Diego, CA, USA). A Zoom link will be provided for your virtual participation.

==================================================

🏆 Awards & Recognition

We are excited to present two prestigious award categories in this year’s competition. The "Best Innovation in HRI NeuroDesign Award" will recognize 3 outstanding projects demonstrating groundbreaking advances in Human-Robot Interaction NeuroDesign. The First Prize winner will receive €500, and the 2nd Prize winner will receive €250 to support further prototyping.

In addition, the "Most Popular Project in HRI NeuroDesign Award" will honor 2 projects that win the hearts of our workshop audience, as determined by a popular vote. The First Prize in this category includes a full waiver for submission to Frontiers in Robotics and AI.

All remaining participants will have the chance to win gifts through a lucky draw, and every finalist will receive awards and official certificates in recognition of their achievements.

🔺 (1) 🧠 Best Innovation in HRI NeuroDesign Award
          3 top projects selected by our expert panel based on originality, NeuroDesign alignment, and real-world potential.
         
          Prizes:
         
           🥇 1st Prize: €500

           🥈 2nd Prize: €250

           🏅 3rd Prize: €125

🔺 (2) ❤️ Most Popular Project in NeuroDesign Award
           2 projects voted by workshop attendees and online audience for charm, creativity, and engagement.
         
          Prizes:
         
            🏆 1st Prize: Full submission fee waiver to Frontiers in Robotics and AI (APC waiver)

            🥈 2nd Prize: €125

🔺 (3) 🎁 Participation Gifts
            The remaining participants will receive the following gifts in a post-event drawing.

               x1    DIY Neuroscience Kit – Pro
               x1    Raspberry Pi 5 (16GB RAM)
               x1    Raspberry Pi AI HAT+ (13 TOPS)
               x1    uMyo wearable EMG sensor
               x1    Souvenir from Eindhoven

All finalists will receive official certificates and feature opportunities on our website and post-workshop publications.

==================================================

📅 Key Dates

🔺 * Submission Deadline: November 10, 2025

🔺 * Competition Day & Time: November 19, 2025
@ 9:00–10:30 AM (Pacific Time) | 12:00–1:30 PM (Eastern Time) | 6:00–7:30 PM (Central European Time) | 10:30 PM–12:00 AM (Indian Standard Time)

==================================================

🔗 Show your intent of participation: https://forms.gle/TeywZB55NchRXeWn6

👉 Material submission form: https://forms.gle/fQMxJtkXb8JEU2WR6

For any questions, feel free to reach out via our website submission form, or email us at neurodesign.hri@gmail.com.

Let’s shape the future of human-robot interaction—by design, for the brain.



Best regards,

The NeuroDesign in Human-Robot Interaction Student EXPO & Competition
https://www.neurodesign-hri.ws/2025

Team 1: Gaze Based Attention Monitoring Assessment System
Augustine Wisely Bezalel (Sri Sivasubramaniya Nadar College of Engineering)

This project uses real-time gaze tracking to monitor and analyse visual attention. Technologies such as WebGazer.js make it possible to use a standard webcam, keeping the system cost-effective. The solution targets education, healthcare, and HCI. During calibration, participants fixate on points to produce baseline gaze data, ensuring individuality and accuracy; they then fixate on specific images following instructions. Kalman filtering is integrated to minimize noise and stabilize the gaze data. Heat maps and visualizations of gaze intensity are generated to help reveal attention trends. This project is a step toward better understanding and enhancing human attention.

Intro Slides | Poster
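
The Kalman filtering step mentioned in the Team 1 abstract can be illustrated with a minimal scalar filter that smooths noisy webcam gaze coordinates. This is only a sketch under assumed noise parameters; the team's actual WebGazer.js-based pipeline is not shown here.

# Minimal sketch of how a scalar Kalman filter can stabilize noisy gaze
# coordinates. The noise parameters and gaze samples are made up.
class ScalarKalman:
    def __init__(self, process_var=1e-3, meas_var=5e-2):
        self.x = None          # filtered estimate
        self.p = 1.0           # estimate variance
        self.q = process_var   # how much true gaze may drift per step
        self.r = meas_var      # webcam measurement noise

    def update(self, z: float) -> float:
        if self.x is None:
            self.x = z
            return self.x
        self.p += self.q                     # predict
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (z - self.x)           # correct with new measurement
        self.p *= (1.0 - k)
        return self.x

kx, ky = ScalarKalman(), ScalarKalman()
noisy_gaze = [(0.52, 0.31), (0.55, 0.29), (0.48, 0.33), (0.51, 0.30)]
smoothed = [(kx.update(x), ky.update(y)) for x, y in noisy_gaze]
print(smoothed)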

Team 2: NeuroAgent: A Brain-Aware AI Agent Framework for Adaptive Human-Robot Interaction
Aditya Kumar (Indian Institute of Information Technology, Surat)

Traditional Human-Robot Interaction (HRI) systems often lack cognitive smoothness—the seamless alignment between human perceptual states and robotic behavior. This paper introduces NeuroAgent, a brain-aware AI framework designed to perceive, interpret, and adapt to a human partner’s cognitive and emotional state in real time. The architecture fuses multimodal neurophysiological and behavioral signals through three synergistic layers: (1) a Multimodal Sensing Layer that integrates electroencephalography (EEG) for attention, workload, and emotion analysis, electrodermal activity (EDA) for arousal, and behavioral cues such as facial micro-expressions and voice prosody; (2) an AI Cognition Layer, powered by a multi-agent reasoning system built upon Large Language Models (LLMs), where cognitive, behavioral, and conversational agents collaborate using a novel Cognitive Retrieval-Augmented Generation (RAG) pipeline to ground AI reasoning in the user’s live neuro-physiological context; and (3) an Embodied Output Layer, where decisions manifest through physical (ROS-based) or virtual (Unity-based) embodiments, enabling adaptive gestures, modulated speech tone, and dynamic task complexity adjustment.

Intro Slides
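
The three-layer structure described in the Team 2 abstract can be sketched, purely illustratively, as a minimal pipeline in which a sensing layer produces a cognitive-state estimate, a cognition layer chooses adaptation parameters, and an embodied output layer renders them. All class and field names below are invented for this sketch, not part of NeuroAgent itself.

# Hypothetical three-layer pipeline in the spirit of the NeuroAgent abstract.
from dataclasses import dataclass

@dataclass
class CognitiveState:          # produced by the (hypothetical) sensing layer
    attention: float           # 0..1, e.g. from EEG band power
    arousal: float             # 0..1, e.g. from electrodermal activity
    sentiment: float           # -1..1, e.g. from voice prosody or face cues

def cognition_layer(state: CognitiveState) -> dict:
    """Stand-in for the reasoning layer: pick adaptation parameters from the
    estimated state."""
    return {
        "speech_rate": 0.8 if state.arousal > 0.7 else 1.0,
        "task_difficulty": "easier" if state.attention < 0.4 else "same",
        "tone": "reassuring" if state.sentiment < 0 else "neutral",
    }

def embodied_output_layer(decision: dict) -> None:
    """Stand-in for the ROS/Unity embodiment: just report the adaptation."""
    print(f"adapting behavior -> {decision}")

embodied_output_layer(cognition_layer(CognitiveState(0.3, 0.8, -0.2)))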

Team 3: Toward Falsifiable Grammars for NeuroDesign in HRI
Vinay Kumar Verma (Indraprastha Institute of Information Technology)

Current HRI and NeuroDesign frameworks lack a formal, verifiable basis for interaction success, detecting failures only post hoc. We introduce a minimal, falsifiable grammar to model neuro-physical coordination using primitives (signal, request, response) and temporal operators (sequence, repair). By binding these operators to measurable neuro-temporal "budgets," such as cognitive latency and sensorimotor synchrony, the grammar specifies when collaboration succeeds, fails, or must repair itself. This transforms HRI from descriptive analysis to a priori verification, enabling auditable, adaptive, and certifiable brain-body interfaces that align formal methods with neuro-cognitive fluidity.

Extended Abstract | Poster
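
As a toy illustration of Team 3's idea of binding interaction primitives to measurable neuro-temporal "budgets", the following sketch checks whether a sequence of (primitive, latency) events stays within per-primitive time limits; the primitive names and budget values are invented for the example.

# Toy illustration of budget-bound interaction primitives; values invented.
BUDGETS_MS = {"signal": 300, "request": 500, "response": 800}

def check_sequence(events):
    """events: list of (primitive, latency_ms). A sequence 'succeeds' only if
    every primitive stays within its budget; otherwise a 'repair' is required
    at the first violation."""
    for i, (name, latency) in enumerate(events):
        if latency > BUDGETS_MS.get(name, float("inf")):
            return f"repair needed at step {i} ({name}: {latency} ms)"
    return "success"

print(check_sequence([("signal", 120), ("request", 430), ("response", 640)]))
print(check_sequence([("signal", 120), ("request", 900), ("response", 640)]))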

Team 4: VR Controlled Robot Arm: Bridging Virtual Reality and Real-World Robotics
Sushim Saini (Indian Institute of Technology Roorkee)

We developed a low-cost VR-controlled robotic arm that creates a real-time digital twin between the virtual and physical worlds. Movements made through the VR headset and controllers are mirrored instantly by the physical robotic arm, enabling intuitive and immersive human–robot interaction. The design focuses on accessibility, adaptability, and embodied control, using a 3D-printed structure and servo-based actuation. By blending VR and robotics, the system bridges human intention with robotic motion, demonstrating applications in assistive technology, remote operation, and education. Future development aims to integrate cognitive and sensory feedback for more natural, brain-aligned interaction.

Intro Slides | Short Video | Poster

Team 5: The Goldilocks Problem: The Smartphone Based Search for 'Just Right' Control in Human-Machine Systems
Tanisha Majumdar (Indian Institute of Technology Delhi (IIT-D))

We present HAMOT (Hand Motion Object Tracking), a smartphone-based sensorimotor testbed prototype with MATLAB GUI that employs information-theoretic benchmarking to optimize human-machine control paradigms. This platform maps wrist flexion/extension to cursor movements, systematically comparing position versus velocity control under perturbations mirroring adaptive device challenges: variable gain, noise, latency, and nonlinearity. Performance metrics reveal granular behavioral profiles of users - specifically, error correction strategies, cognitive flexibility, sensorimotor resilience - that accurately map users' cognitive states and motor intent for usage in human-robot collaboration context. HAMOT's accessible architecture enables transition from controlled laboratory experiments to interactive out-of-lab perception studies and real-world deployment in assistive robotics.

Intro Slides | Poster
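
The position-versus-velocity comparison in the Team 5 abstract can be sketched as two simple mappings from wrist angle to cursor motion, with the listed perturbations (gain, noise, latency) applied to the input. All parameter values below are placeholders, not those used in HAMOT.

# Sketch of two control mappings under toy perturbations; values invented.
import collections
import random

_delay_buffer = collections.deque()

def perturb(angle, noise_sd=0.5, latency_steps=3):
    """Add sensor noise and a fixed transmission delay."""
    _delay_buffer.append(angle + random.gauss(0.0, noise_sd))
    return _delay_buffer.popleft() if len(_delay_buffer) > latency_steps else 0.0

def position_control(wrist_angle, gain=1.0):
    """Cursor position is proportional to wrist angle."""
    return gain * wrist_angle

def velocity_control(cursor_pos, wrist_angle, gain=0.05):
    """Wrist angle sets cursor velocity; position integrates over time."""
    return cursor_pos + gain * wrist_angle

cursor = 0.0
for t in range(5):
    angle = 10.0 * (t % 2)                 # toy wrist flexion/extension trace
    delayed = perturb(angle)
    cursor = velocity_control(cursor, delayed)
    print(f"t={t}  pos-mode={position_control(delayed):6.2f}  vel-mode={cursor:6.2f}")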

Team 6: Neuroprosthetic Embodiment: From Neural Flux to Self Beyond
Hozan Mohammadi (National Brain Mapping Laboratory (NBML))

Human-machine integration remains a challenge due to the inability of robotic systems to intuitively align with individual intent, agency, and embodiment. This work proposes a neuroprosthetic and avatar-based feedback framework that translates stochastic neural and sensorimotor signals into coherent, hybrid selfhood experiences. By integrating neuroprosthetic control, multimodal sensory feedback, and immersive avatar representation, we investigate the emergence of adaptive, participatory embodiment in real-time human-robot interactions through the lens of NeuroDesign principles, optimizing cognitive load, emotion-aware interaction, and intuitive brain-centered control. This approach positions brain-centric experience as a driver of HRI innovation, leveraging closed-loop feedback to enhance agency, ownership, and presence, while laying a foundation for bio-compatible, market-ready neuroadaptive technologies.

Extended Abstract

Team 7: Combining SOFT and RIGID exoskeletons: adding pronosupination to AGREE
Noemi Sacco (Politecnico di Milano)

My thesis project encompasses the manufacturing and control design of a soft pneumatic exosuit for forearm pronation/supination, developed at the Soft NeuroBionics Lab of Scuola Superiore Sant’Anna in Pisa (R. Ferroni et al., 2025). The ultimate goal of the work is to integrate this module into an existing rigid upper-limb exoskeleton called AGREE (S. Dalla Gasperina et al., 2023), developed at Politecnico di Milano.

Intro Slides

Team 8: Deep Learning-Based EEG Classification for Virtual Reality Mobile Robot Control in Dementia Patients
Youcef YAHI (University of Rome II – Tor Vergata)

This study presents a deep learning–based brain–computer interface (BCI) system designed to enable patients suffering from dementia to control a virtual reality (VR) mobile robot through electroencephalogram (EEG) signals. The proposed framework employs a hybrid Convolutional Neural Network combined with Bidirectional Long Short-Term Memory (CNN-BiLSTM) architecture to classify motor imagery EEG patterns associated with four directional commands: forward, backward, left, and right. EEG data were preprocessed to remove artifacts and segmented into epochs corresponding to each intended movement. The CNN layers effectively extracted spatial–spectral features from the EEG signals, while the BiLSTM layers captured the temporal dependencies critical for robust decoding of neural activity. The trained model achieved an average classification accuracy of 72.47%, demonstrating reliable translation of cognitive intentions into robot control commands within the VR environment. These results highlight the potential of hybrid deep learning architectures for restoring mobility and interaction capabilities in patients with cognitive impairments such as dementia, offering a promising pathway toward immersive neuro-rehabilitation and assistive robotic systems.

Extended Abstract
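
A schematic PyTorch version of the CNN-BiLSTM idea in the Team 8 abstract is sketched below for the four directional classes; channel counts, kernel sizes, and epoch length are assumptions made for illustration rather than the authors' actual architecture.

# Schematic CNN-BiLSTM motor-imagery classifier (placeholder dimensions).
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_eeg_channels=8, n_classes=4, hidden=64):
        super().__init__()
        # CNN: spatial-spectral feature extraction over the raw EEG epoch
        self.cnn = nn.Sequential(
            nn.Conv1d(n_eeg_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # BiLSTM: temporal dependencies across the pooled feature sequence
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, channels, samples)
        feats = self.cnn(x)                  # (batch, 32, samples // 4)
        feats = feats.transpose(1, 2)        # (batch, time, 32) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])         # classify from the last timestep

model = CNNBiLSTM()
dummy_epoch = torch.randn(2, 8, 512)         # 2 epochs, 8 channels, 512 samples
print(model(dummy_epoch).shape)              # torch.Size([2, 4])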

Team 9: The Neuroevolution of Collaborative Decision-Making in Robotic Assistants
Joao Gaspar Oliveira Cunha (University of Minho)

How can robots learn to collaborate naturally with humans? This project explores how principles of Neuroevolution and Dynamic Field Theory can give rise to adaptive and interpretable control in robotic assistants. By evolving the neural architectures that govern perception, memory, and decision-making, our system enables robots to learn when to assist and when to act independently in shared tasks. In a human–robot packaging scenario, evolved controllers exhibit emergent collaboration, complementarity, and self-organization, without manual tuning. This work proposes a new generation of evolved neuroadaptive robotic partners capable of fluid, human-like collaboration shaped by the principles of the brain.

Short Video | Poster

Team 10: Toward a Gaze-Independent Brain-Computer Interface Using the Code-Modulated Visual Evoked Potentials
Radovan Vodila (Donders Institute, Radboud University)

Our project works toward developing a gaze-independent brain–computer interface (BCI) that enables communication without eye movement, which is crucial for paralyzed and locked-in users who have lost control over their eye movements. We employ noise-tagging of symbols and decode covert visuospatial attention from electroencephalography (EEG). By extending visual speller BCIs beyond gaze dependency, this work aims to create inclusive and effortless communication systems at the forefront of brain-based assistive technology, restoring interaction and connection for those who have lost all conventional means of expression.

Extended Abstract

Team 11: Neuromorphic BCI for HRI: Leveraging Spiking Neural Networks, Reservoir Computing, and Memristive Hardware for Biologically Plausible and Low-Power Interaction
Denis Yamunaque (Universidad Complutense de Madrid)

Our project introduces a neuromorphic Brain-Computer Interface (BCI) based on reservoir computing (RC) and Spiking Neural Networks (SNN). This design emphasizes biological realism, leveraging the principles of RCs and SNNs. The key innovation lies in its potential for physical implementation using memristors, which significantly enhances energy efficiency and adaptability in the physical readout layer. This approach bridges the gap between biologically plausible neural processing and efficient hardware, in a search for advanced, low-power BCI applications in human-robot interaction.

Intro Slides

Team 12: Real-Time Assistive BCI: A Quantum-Optimized Neurointerface for Motor Intent Decoding
Mustafa Arif (Northern Collegiate Institute)

Motor impairments affect over 1.3 billion people worldwide, yet most brain-computer interfaces (BCIs) remain invasive or prohibitively expensive. This project presents CBWAE (Correlation-Based Weighted Average Ensemble), a novel AI-driven, non-invasive neurointerface that decodes motor intent in real time with 95% accuracy using an $80 custom EEG headset.
Unlike existing BCIs, CBWAE dynamically assigns models per motor class and optimizes feature-model pairs via quantum-inspired search (Grover’s & QAOA), achieving instant adaptability across users. The system enables seamless control of assistive devices—prosthetics, exoskeletons, or wheelchairs—bridging neuroscience, AI, and accessibility for a more inclusive human–machine future.

Extended Abstract | Poster
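
One way to read the "correlation-based weighted average ensemble" in the Team 12 abstract is sketched below with NumPy: each model's class probabilities are weighted by how strongly its validation predictions correlate with the true labels. The weighting rule and toy data are assumptions for illustration, not the CBWAE implementation itself.

# Toy correlation-weighted ensemble; data and weighting scheme invented.
import numpy as np

def correlation_weights(val_preds, val_labels):
    """val_preds: (n_models, n_samples) predicted class indices. Weight each
    model by the (clipped) correlation of its predictions with the labels."""
    weights = []
    for preds in val_preds:
        r = np.corrcoef(preds, val_labels)[0, 1]
        weights.append(max(r, 0.0))          # ignore anti-correlated models
    w = np.asarray(weights)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))

def ensemble_predict(model_probs, weights):
    """model_probs: (n_models, n_classes) per-model probabilities for one
    trial; return the weighted-average decision and the fused probabilities."""
    fused = np.tensordot(weights, model_probs, axes=1)
    return int(np.argmax(fused)), fused

w = correlation_weights(np.array([[0, 1, 1, 0], [1, 1, 0, 0]]),
                        np.array([0, 1, 1, 0]))
print(ensemble_predict(np.array([[0.7, 0.3], [0.4, 0.6]]), w))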

Team 13: Autonomous Apple Harvesting with Immersive Human-Robot Interaction
Chris Ninatanta (Washington State University)

The apple industry is critical to Washington State's economy; however, labor shortages, reliance on costly seasonal workers, and narrow harvesting windows threaten its sustainability. Autonomous apple-harvesting robots could provide relief to farmers; however, recent prototypes typically capture less than 60% of available fruit. Dense foliage, clustered apples, and obstructing branches reduce performance. To address this, I propose combining human and artificial intelligence (AI) through virtual reality (VR), enabling immersive human-robot interaction. Human movements and decisions collected in orchards during harvest and within VR environments will train the AI, improving autonomy and raising harvesting efficiency above 80%.

Intro Slides

Team 14: Pre-Hospital EEG Integration for Stroke Triage
Hailey Fox (University of Southern California San Diego)

40% of strokes are Large Vessel Occlusion (LVO), which require treatment only available at specific hospitals. Every 30-minute delay before initiating treatment lowers patient outcomes by 7%. Current ambulance protocol rushes patients to the nearest hospital. Taking patients to the incorrect hospital produces a median delay of 109 minutes. Cognovate Labs is developing a pre-hospital EEG device to provide real-time cognitive insights that cannot be identified through conventional stroke assessments. Our device enables an informed decision to take patients directly to the appropriate care center, thereby increasing the number of patients who achieve functional independence after recovery.

Extended Abstract | Intro Slides | Short Video | Poster

Team 15: Bring Your Own Labels: Graph-Based Self-Supervised Learning for EEG with Implications for Brain-Centered HRI
Udesh Habaraduwa (Tilburg University)

"Bring Your Own Labels" explores how graph neural networks and self-supervised learning can unlock hidden patterns in EEG data for psychiatric screening. By building a reproducible, scalable preprocessing pipeline and testing novel graph-based models, this work advances the search for reliable biomarkers in mental health. Results show that graph-based approaches outperform conventional baselines, highlighting their potential to transform human-centered neurotechnology and adaptive human-robot interaction.

Extended Abstract | Poster

Team 1: Observational Error Related Negativity for Trust Evaluation in Human Swarm Interaction
Joseph P. Distefano (University of Buffalo)

This groundbreaking study marks the inaugural exploration of Observation Error-Related Negativity (oERN) as a pivotal indicator of human trust in human-swarm teaming, while also examining the impact of individual differences between experts and novices. In this Institutional Review Board (IRB)-approved experiment, human operators' physiological information is recorded while they take a supervisory control role, interacting with multiple swarms of robotic agents that are either compliant or non-compliant. The analysis of event-related potentials during non-compliant actions revealed distinct oERN and error positivity (Pe) components localized in the frontal cortex.

Extended Abstract | Short Video | Poster

Team 2: Novel Intuitive BCI Paradigms for Decoding Manipulation Intention - Imagined Interaction with Robots
Matthys Du Toit (University of Bath)

Human-robot interfaces lack intuitive designs, especially BCIs relying on single-body part activation for motor imagery. This research proposes a novel approach: decoding manipulation intent directly from imagined interaction with robotic arms. EEG signals were recorded from 10 subjects performing motor execution, visual perception, motor imagery, and imagery during perception while interacting with a 6-DoF robotic arm. State-of-the-art classification models achieved average accuracies of 89% (motor execution), 94.9% (visual perception), 73.2% (motor imagery), and highest motor imagery classification of 83.2%, demonstrating feasibility of decoding manipulation intent from imagined interaction. The research invites more intuitive BCI designs through improved human-robot interface paradigms.

Short Video | Poster

Team 3: Multimodal Emotion Recognition for Human-Robot Interaction
Farshad Safavi (University of Maryland, Baltimore County)

Our project is a multimodal emotion recognition system that enhances human-robot interaction by controlling a robotic arm through detected emotions, using facial expressions and EEG signals. Our prototype adjusts the robotic arm's speed based on the emotions it detects. Two experiments demonstrate our approach: one shows the arm's response to facial cues, speeding up when happiness is detected and slowing down for negative emotions such as anger. The other video illustrates control via EEG, adjusting speed based on the user's relaxation level. Our goal is to integrate emotion recognition into robotic applications, developing emotionally aware robots.

Intro Slides | Short Video | Poster
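
The behavior described in the Team 3 abstract (the arm speeding up for happy expressions and slowing for negative ones) can be sketched as a simple valence-to-speed mapping; the fusion weights and speed bounds below are illustrative assumptions, not the team's implementation.

# Minimal valence-to-speed mapping sketch; weights and bounds invented.
def fuse_valence(face_valence: float, eeg_valence: float,
                 w_face: float = 0.6, w_eeg: float = 0.4) -> float:
    """Both inputs in [-1, 1]: negative = unhappy/tense, positive = happy/relaxed."""
    return w_face * face_valence + w_eeg * eeg_valence

def arm_speed(valence: float, base: float = 0.5,
              min_speed: float = 0.1, max_speed: float = 1.0) -> float:
    """Speed up for positive valence, slow down for negative valence."""
    speed = base * (1.0 + 0.8 * valence)
    return max(min_speed, min(max_speed, speed))

print(arm_speed(fuse_valence(0.7, 0.4)))    # happy face, relaxed EEG -> faster
print(arm_speed(fuse_valence(-0.8, -0.2)))  # angry face -> slower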

Team 4: Learning Hand Gestures using Synergies in a Humanoid Robot
Parthan Olikkal (University of Maryland, Baltimore County)

Hand gestures, integral to human communication, hold potential for optimizing human-robot collaboration. Researchers have explored replicating human hand control through synergies. This work proposes a novel method: extracting kinematic synergies from hand gestures via a single RGB camera. Real-time gestures are captured through MediaPipe and converted to joint velocities. Applying dimensionality reduction yields kinematic synergies, which can be used to reconstruct gestures. Applied to the humanoid robot Mitra, results demonstrate efficient gesture control with minimal synergies. This approach surpasses contemporary methods, offering promise for near-natural human-robot collaboration. Its implications extend to robotics and prosthetics, enhancing interaction and functionality.

Intro Slides | Short Video | Poster
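
A minimal NumPy sketch of extracting kinematic synergies from joint velocities via principal components, in the spirit of the Team 4 abstract, is given below; the data are random stand-ins and the number of synergies is an arbitrary choice.

# PCA-style synergy extraction sketch; data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
joint_velocities = rng.standard_normal((200, 21))   # 200 frames x 21 joint DoFs

# Center the data and take the leading principal components as "synergies".
X = joint_velocities - joint_velocities.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
n_synergies = 3
synergies = vt[:n_synergies]                         # (3, 21) basis patterns
activations = X @ synergies.T                        # per-frame synergy weights

# A gesture frame can then be reconstructed from only a few synergies.
reconstructed = activations @ synergies + joint_velocities.mean(axis=0)
explained = (s[:n_synergies] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by {n_synergies} synergies: {explained:.2f}")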

Team 5: A Wearable, Multi-Channel, Parameter-Adjustable Functional Electrical Stimulation System for Controlling Individual Finger Movements
Zeyu Cai (University of Bath)

As the survival rate of patients with stroke and spinal cord injuries rises, movement dysfunction after treatment has become a growing concern. Among these impairments, hand dysfunction seriously impairs patients' quality of life and ability to care for themselves. Recently, many studies have demonstrated the effectiveness of functional electrical stimulation (FES) in the rehabilitation of upper-limb motor function, and FES has proven more effective than traditional treatment methods. However, existing FES studies for the hand place the electrodes on the forearm, which does not allow full control of the individual movements of single fingers. In this study, an electrode glove was designed to place the electrodes on the hand itself, making such control possible. Furthermore, existing FES systems are large, whereas the FES system developed in this study is lightweight and can be made wearable. In summary, this study aimed to develop a novel, wearable functional electrical stimulation system for the hand that can adjust the stimulation parameters and, with an electrode glove, control the individual movements of single fingers, providing a personalized rehabilitation approach.

Extended Abstract | Intro Slides | Poster

Team 6: Enhancing MI-BCI Training with Human-Robot Interaction Through Competitive Music-Based Games
Alessio Palatella (University of Padova)

Motor Imagery Brain-Machine Interfaces (MI-BMIs) interpret users' motor imagination to control devices, bypassing traditional output channels like muscles. However, achieving proficiency with MI-BMIs demands significant time and effort, especially for novices. To address this, we propose a novel MI-BMI training method using Human-Robot Interaction via rhythmic, music-based video games and NAO robots. Our experimental setup involves a rhythm game connected to a real NAO robot via a BMI. EEG signals are processed using a CNN-based decoder. Despite data limitations, our approach demonstrates promising control capabilities, highlighting the potential of combining MI-BMIs with robotics for intuitive human-robot interaction and enhanced user experience.

Extended Abstract | Intro Slides | Short Video | Poster

Team 7: Enhancing Synergy - The Transformative Power of AR Feedback in Human-Robot Collaboration
Akhil Ajikumar (Northeastern University)

In this paper, we introduce a novel Augmented Reality (AR) system to enable intuitive and effective communication between humans and collaborative robots in a shared workspace. Using multimodal interaction data such as gaze, speech, and hand gestures, captured through a head-mounted AR device, we explore the system's impact on task efficiency, communication clarity, and user trust in the robot. We validated the system in an experiment based on a gearbox assembly task, which showed a significant user preference for the gaze and speech modalities and revealed notable improvements in task completion time, reduced errors, and increased trust. These findings show the potential of AR systems to enhance human-robot teamwork by providing immersive, real-time feedback and intuitive communication interfaces.

Extended Abstract | Poster

Team 8: EEG Movement Detection for Robotic Arm Control
Daniele Lozzi (University of L'Aquila)

This research introduces a novel approach to constructing an online BCI dedicated to the classification of motor execution, which importantly considers both active movements and essential resting phases to determine when a person is inactive. It then explores the Deep Learning architecture best suited to motor execution classification of EEG signals. This architecture will be useful for controlling an external robotic arm for people with severe motor disabilities.

Extended Abstract | Poster

Team 9: EEG and HRV Based Emotion Estimation Robot for Elderly Interaction
Yuri Nakagawa (Shibaura Institute of Technology)

The increasing demand for emotional care robots in nursing homes aims to enhance the Quality of Life of the elderly by estimating their emotions and providing mental support. Because of the limited physical condition of elderly individuals, traditional methods of emotion estimation pose challenges; thus, we explore physiological signals as a viable alternative. This study introduces an innovative emotion estimation method based on Electroencephalogram and Heart Rate Variability, implemented in a care robot. We detail an experiment in which this robot interacted with three elderly individuals in a nursing home setting. The observed physiological changes during these interactions suggest that the elderly participants experienced positive emotions.

Intro Slides | Poster

Moments from Past Events

Organizers

Dr. Ker-Jiun Wang

ECE & Bioengineering
University of Pittsburgh

Dr. Zhi-Hong Mao

ECE & Bioengineering
University of Pittsburgh

Dr. Midori Sugaya

CSE
Shibaura Institute of Technology

Dr. Maryam Alimardani

Computer Science
Vrije Universiteit Amsterdam

Dr. Ramana Vinjamuri

CSEE
University of Maryland, Baltimore County

Dr. Jun Ueda

Mechanical Engineering
Georgia Institute of Technology

Local Arrangement Chairs

Ethel Pruss

Doctoral Student
Clinical Psychology
VU Amsterdam

Ruoshi Zheng

Doctoral Student
Computer Science
VU Amsterdam

HRI and Neuroscience at Scale

Innovation is hard. A core innovation is rooted in a scientific discovery that still requires technical de-risking and profitable, sustainable solutions that meet target needs. NeuroDesign in HRI, centered on a better "brain-centered experience," provides the sticky glue connecting technologies and end-users, changing people's behavior to accept new technology and revealing the real use cases where it applies. Through hosting this workshop, we hope HRI researchers and neuroscientists can bring their groundbreaking research to the real world with more impact. The world needs science at scale.
Keep up with the latest news
Email address
Join our email list
Come participate online in our creative workshop,
with inspiring talks, thought-provoking discussions,
lots of fun interactions, and networking.

August 25, 2025
09:00-17:00

IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2025)