ComFree-Sim: A GPU-Parallelized Analytical Contact Physics Engine for Scalable Contact-Rich Robotics Simulation and Control

1Arizona State University    2University of Illinois, Urbana-Champaign


Overview: ComFree-Sim is a GPU-parallelized analytical physics engine powered by complementarity-free contact dynamics modeling. In dense contact scenarios, it delivers 2-3x higher throughput than MJWarp while maintaining comparable and tunable physical fidelity. This low-latency performance significantly improves the success rates of high-frequency simulation-based control for contact-rich tasks such as dexterous manipulation and agile locomotion. Furthermore, its analytical approach to contact physics bypasses the complexity of traditional iterative solvers, offering a framework that is significantly simpler, more lightweight, and easier to tune and integrate into downstream tasks.
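To illustrate why an analytical (solver-free) contact model parallelizes so well, here is a minimal sketch of a closed-form, penalty-style normal contact force evaluated over a batch of environments. This is an illustrative stand-in only, not ComFree-Sim's actual complementarity-free formulation; all function and parameter names are placeholders.

```python
import numpy as np

def analytical_contact_forces(gaps, normal_vels,
                              stiffness=1e4, damping=50.0, beta=1e-3):
    """Closed-form normal contact forces for a batch of contacts.

    Illustrative penalty model only; the real complementarity-free model
    is defined in the ComFree-Sim paper.

    gaps:        (num_envs, num_contacts) signed distances (negative = penetrating)
    normal_vels: (num_envs, num_contacts) normal relative velocities
    """
    # Smooth approximation of max(0, -gap): softplus keeps forces differentiable.
    penetration = beta * np.log1p(np.exp(-gaps / beta))
    # Spring-damper force, clamped to be non-adhesive (>= 0).
    force = stiffness * penetration - damping * normal_vels
    return np.maximum(force, 0.0)

# One closed-form evaluation per env/contact: no iterative LCP/QP solve,
# so the whole batch maps onto a single vectorized GPU kernel in practice.
forces = analytical_contact_forces(
    gaps=np.array([[0.01, -0.002], [-0.005, 0.03]]),
    normal_vels=np.zeros((2, 2)),
)
print(forces.shape)  # (2, 2)
```

The key property is that every contact force is an explicit function of the current state, so the per-step cost is fixed and uniform across environments, unlike an iterative solver whose iteration count varies with contact complexity.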


How does ComFree-Sim work?




Throughput scaling graph


ComFree-Sim for Real-Time Dexterous Manipulation

In object reorientation tasks, the LEAP Hand rotates the cube to the desired pose. We implement MPPI with ComFree-Sim and achieve real-time control at ~35-72 Hz. We deploy the system on four objects.
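The MPPI loop above amounts to sampling many perturbed action sequences, scoring each with a batched simulator rollout, and taking a reward-weighted average. A minimal sketch follows; `rollout_fn` stands in for a batched ComFree-Sim rollout and its name and signature are assumptions, not the project's API.

```python
import numpy as np

def mppi_step(state, rollout_fn, horizon=20, num_samples=256,
              action_dim=16, sigma=0.2, temperature=0.05, rng=None):
    """One MPPI update: sample action-sequence perturbations, score them
    with batched rollouts, and return the cost-weighted mean plan.

    rollout_fn(state, actions) -> (num_samples,) costs is a placeholder
    for a batched simulator rollout.
    """
    rng = rng or np.random.default_rng(0)
    # (num_samples, horizon, action_dim) Gaussian perturbations.
    noise = sigma * rng.standard_normal((num_samples, horizon, action_dim))
    costs = rollout_fn(state, noise)                 # (num_samples,)
    # Softmax weighting: low-cost samples dominate the update.
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return np.einsum("s,sth->th", w, noise)          # weighted mean plan

# Toy cost: distance of the summed action sequence from a target value.
def toy_rollout(state, actions):
    return np.linalg.norm(actions.sum(axis=1) - 1.0, axis=-1)

plan = mppi_step(np.zeros(3), toy_rollout)
print(plan.shape)  # (20, 16)
```

The controller's achievable rate is set almost entirely by how fast `rollout_fn` evaluates the sampled batch, which is where the engine's rollout throughput translates directly into control frequency.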





ComFree-Sim for Dual Arm Manipulation

Dual arm manipulation tasks require synchronized control of two robotic arms to perform complex object manipulation and assembly operations. ComFree-Sim enables real-time simulation and control of dual arm systems in the MuJoCo rollout environment with high contact complexity.


Dual Arm Lifting

Dual Arm Rotating


ComFree-Sim for Dynamics-Aware Retargeting for Agile Locomotion

We evaluate ComFree-Sim on dynamics-aware motion retargeting for locomotion, using MuJoCo-CPU as the rollout environment. The results demonstrate that ComFree-Sim achieves reference-tracking performance comparable to the native MuJoCo backend with a manageable sim-to-sim gap, while substantially reducing total optimization time.





Acknowledgements

We thank the MuJoCo Warp (MJWarp) team at Google DeepMind and NVIDIA for making the code publicly available. We also thank Vamsi Sai Abhijit Tadepalli of the IRIS Lab for maintaining the vision-tracking module used in our real-world in-hand manipulation experiments.