
Ravi Pandya

PhD Candidate

Carnegie Mellon University

Since fall 2020, I have been a PhD student in the Robotics Institute at Carnegie Mellon University, advised by Prof. Changliu Liu and Prof. Andrea Bajcsy. I am grateful to be funded by the NSF Graduate Research Fellowship. My PhD research focuses on enabling robots to interact safely and efficiently with humans while accounting for the influence robots have on people's actions and intentions.

Previously, I was a data scientist at the Global AI Accelerator (GAIA) within Ericsson.

As an undergrad at UC Berkeley, I primarily worked with Prof. Anca Dragan, but I also had the privilege of working in Prof. Ruzena Bajcsy's and Prof. Ron Fearing's labs.

Please see my Google Scholar for an up-to-date list of publications.

Interests

  • Safe Control
  • Human-Robot Interaction

Education

  • PhD in Robotics, 2020 - Present

    Carnegie Mellon University

  • BS in Electrical Engineering and Computer Science, 2015 - 2019

    UC Berkeley

News

[July 2024] I attended the American Control Conference (ACC) 2024 in Toronto to present our work on multimodal safe control for HRI!

[May 2024] I attended ICRA 2024 in Yokohama to present our work on multi-agent strategy explanations and on model-based conditional behavior prediction!

[May 2024] I gave a talk on some of my recent work at the CMU Learning and Control Seminar.

[Apr 2024] I passed my PhD Thesis Proposal!

[Jan 2024] Our work on multi-agent strategy explanations was accepted to ICRA 2024!

Publications

Robots that Learn to Safely Influence via Prediction-Informed Reach-Avoid Dynamic Games

We formulate a prediction-informed robust dynamic game to allow a robot to safely influence a human partner. We instantiate our method, called SLIDE (Safely Leveraging Influence in Dynamic Environments), in a simulated human-robot collaborative task. We find that SLIDE consistently enables the robot to leverage the influence it has on the human when it is safe to do so, ultimately allowing the robot to be less conservative while still ensuring a high safety rate.

Robust Safe Control with Multi-Modal Uncertainty

We introduce a least-conservative robust safe controller, within the family of energy-function-based safe control methods, for dynamical systems with both additive and multiplicative multimodal uncertainty. We test our method on a simulated Segway robot and find it is less conservative than existing unimodal robust control methods.

Multimodal Safe Control for Human-Robot Interaction

We derive a least-conservative robust safe controller for dynamical systems with additive multimodal uncertainty (additive meaning the uncertainty enters the dynamics independently of the control input). We test our controller on a simulated human-robot system where the robot is uncertain of the human's goal, and find this approach is safer than existing maximum-likelihood-based unimodal robust controllers.

Multi-Agent Strategy Explanations for Human-Robot Collaboration

We introduce a novel method for generating explanations of collaborative strategies for humans and robots in tasks with multiple Nash equilibria. We generate a visual state-based explanation of what each agent should do in an upcoming collaboration. Ultimately, we find that our explanations help real participants better explore the full space of strategies and collaborate with autonomous partners more quickly.

Towards Proactive Safe Human-Robot Collaborations via Data-Efficient Conditional Behavior Prediction

We formulate a novel modification to standard Bayesian human intention prediction that accounts for the influence the robot's own actions will have on the person. Using this conditional behavior prediction model, the robot can proactively influence a human collaborator toward efficient actions for the task. In a user study, we find that participants tend to prefer collaborating with this algorithm over baselines.

Safe and Efficient Exploration of Human Models During Human-Robot Interaction

We study the problem of adapting a robot's dynamics model of a human collaborator online while staying safe; we test controllers with different risk preferences and measure how they are affected by the presence of safe control. Ultimately, we find that a risk-seeking controller can learn a good model, but it activates the safety controller more often than other methods.

Nonverbal Robot Feedback for Human Teachers

We study the problem of enabling a robot learner to give nonverbal feedback to a human teacher. We focus on using gaze as a predictor of the human teacher's next action and find in simulation that this approach leads to faster and more accurate task learning. In both online and in-person user studies, we find that this nonverbal feedback helps real human teachers form a better mental model of the robot learner and improves the robot's learning performance.

Human-AI Learning Performance in Multi-Armed Bandits

We study how an AI agent can assist a human by suggesting options in a multi-armed bandit problem when both agents are learning the arms' rewards from scratch. In a user study, we find that people exhibit two main modes of selecting arms, distinguishable by the entropy of their arm-selection frequencies over time, and that an assistant whose entropy profile matches a participant's own is most helpful to them.

Learning Image-Conditioned Dynamics Models for Under-actuated Legged Millirobots

We enable a small underactuated robot to learn to walk on different terrains from a small amount of real-world data by training a neural network dynamics model and running MPC over it to track trajectories. Importantly, the dynamics model is conditioned on images of the environment, allowing the robot to learn different gaits for different terrains with a single model.

Learning Human Ergonomic Preferences for Handovers

We focus on learning a person's ergonomic preferences for object handovers, since each person has individual comfort preferences and constraints. We compare an active learning approach for estimating a human's ergonomic cost function against passive and random baselines, and find that while active learning estimates the cost function quickly, it incurs a higher ergonomic cost during learning.

Talks

Towards Influence-Aware Safe Human-Robot Interaction

PhD Thesis Proposal

Safely Influencing Humans in Human-Robot Interaction

PhD Speaking Qualifier

Nonverbal Robot Feedback for Human Teachers