The IRIS lab focuses on three research directions: (1) human-autonomy alignment, (2) contact-rich dexterous manipulation, and (3) fundamental methods in robotics. Below are selected recent publications in each research area. Please visit the Publications page for a full list.


Human-autonomy alignment
We develop certifiable, efficient, and empowering methods to enable robots to align their autonomy with human users through various natural interactions.

  • Robot learning from general human interactions
  • Planning and control for human-robot systems

Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos
Harsh Mahesheka, Zhixian Xie, Zhaoran Wang, and Wanxin Jin
arXiv preprint, 2024

Safe MPC Alignment with Human Directional Feedback
Zhixian Xie, Wenlong Zhang, Yi Ren, Zhaoran Wang, George J. Pappas, and Wanxin Jin
Submitted to IEEE Transactions on Robotics (T-RO), 2024

Learning from Human Directional Corrections
Wanxin Jin, Todd D Murphey, Zehui Lu, and Shaoshuai Mou
IEEE Transactions on Robotics (T-RO), 2023

Learning from Sparse Demonstrations
Wanxin Jin, Todd D Murphey, Dana Kulic, Neta Ezer, and Shaoshuai Mou
IEEE Transactions on Robotics (T-RO), 2023

Inverse Optimal Control from Incomplete Trajectory Observations
Wanxin Jin, Dana Kulic, Shaoshuai Mou, and Sandra Hirche
International Journal of Robotics Research (IJRR), 40:848–865, 2021

Inverse Optimal Control for Multiphase Cost Functions
Wanxin Jin, Dana Kulic, Jonathan Lin, Shaoshuai Mou, and Sandra Hirche
IEEE Transactions on Robotics (T-RO), 35(6):1387–1398, 2019


Contact-rich dexterous manipulation
We develop efficient physics-based representations and models, along with planning and control methods, to enable robots to gain dexterity by frequently making and breaking contact with objects.

  • Learning, planning, and control for contact-rich manipulation
  • Computer vision and learnable geometry for dexterous manipulation

ContactSDF: Signed Distance Functions as Multi-Contact Models for Dexterous Manipulation
Wen Yang and Wanxin Jin
Submitted to IEEE Robotics and Automation Letters (RA-L), 2024

Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation
Wanxin Jin
arXiv preprint, 2024

Task-Driven Hybrid Model Reduction for Dexterous Manipulation
Wanxin Jin and Michael Posa
IEEE Transactions on Robotics (T-RO), 2024

Adaptive Contact-Implicit Model Predictive Control with Online Residual Learning
Wei-Cheng Huang, Alp Aydinoglu, Wanxin Jin, and Michael Posa
IEEE International Conference on Robotics and Automation (ICRA), 2024

Adaptive Barrier Smoothing for First-Order Policy Gradient with Contact Dynamics
Shenao Zhang, Wanxin Jin, and Zhaoran Wang
International Conference on Machine Learning (ICML), 2023

Learning Linear Complementarity Systems
Wanxin Jin, Alp Aydinoglu, Mathew Halm, and Michael Posa
Learning for Dynamics and Control (L4DC), 2022


Fundamental methods in robotics
We develop fundamental algorithms for efficient, safe, and robust robot intelligence, by harnessing the complementary benefits of model-based and data-driven approaches.

  • Optimal control, motion planning, reinforcement learning
  • Differentiable optimization, inverse optimization
  • Hybrid system learning and control

Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
Wanxin Jin, Zhaoran Wang, Zhuoran Yang, and Shaoshuai Mou
Advances in Neural Information Processing Systems (NeurIPS), 2020

Safe Pontryagin Differentiable Programming
Wanxin Jin, Shaoshuai Mou, and George J. Pappas
Advances in Neural Information Processing Systems (NeurIPS), 2021

Robust Safe Learning and Control in Unknown Environments: An Uncertainty-Aware Control Barrier Function Approach
Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche
IEEE Robotics and Automation Letters (RA-L), 2023

Enforcing Hard Constraints with Soft Barriers: Safe-driven Reinforcement Learning in Unknown Stochastic Environments
Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, and Qi Zhu
International Conference on Machine Learning (ICML), 2023

A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
Kun Cao, Xinhang Xu, Wanxin Jin, Karl H. Johansson, and Lihua Xie
Submitted to IEEE Transactions on Robotics (T-RO), 2024