DexGrasp-1M: Dexterous Multi-finger Grasp Generation Through Differentiable Simulation
1University of Toronto & Vector Institute, 2Nvidia, 3Samsung
IEEE International Conference on Robotics and Automation (ICRA) 2023
Abstract. Multi-finger grasping relies on high-quality training data, which is hard to obtain: human data is hard to transfer, and synthetic data relies on simplifying assumptions that reduce grasp quality. By making grasp simulation differentiable, and contact dynamics amenable to gradient-based optimization, we accelerate the search for high-quality grasps without any such simplifying assumptions. We present DexGrasp-1M: a large-scale dataset for multi-finger robotic grasping synthesized with Fast-Grasp'D, a novel differentiable grasping simulator. DexGrasp-1M contains one million training examples for three robotic hands (three-, four-, and five-fingered), each with multimodal visual inputs (RGB + depth + segmentation, available in mono and stereo). Grasp synthesis with Fast-Grasp'D is 10x faster than GraspIt! [1] and 20x faster than the Grasp'D differentiable simulator [2]. Our evaluations show that these grasps are more stable and contact-rich than GraspIt! grasps, regardless of the distance threshold used for contact generation. We validate the usefulness of our data by retraining an existing vision-based grasping pipeline on DexGrasp-1M and showing a dramatic increase in model performance: predicted grasps have 30% more contact, a 33% higher epsilon metric, and 35% lower simulated displacement.
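The core idea behind gradient-based grasp synthesis is to define a differentiable contact energy and descend its gradient with respect to the hand configuration. The following toy sketch illustrates this with free-floating fingertip points and a spherical object; the energy, learning rate, and object are illustrative assumptions, not the paper's actual simulator or hand model:

```python
import math

def grad_step(points, radius, lr=0.1):
    """One gradient-descent step on the toy contact energy
    E = sum_i (||p_i|| - radius)^2, which pulls each fingertip
    point onto the surface of a sphere centered at the origin."""
    new_pts = []
    for (x, y, z) in points:
        d = math.sqrt(x * x + y * y + z * z)
        # analytic gradient of (d - radius)^2 w.r.t. the point,
        # guarded against division by zero at the origin
        coeff = 2.0 * (d - radius) / max(d, 1e-9)
        new_pts.append((x - lr * coeff * x,
                        y - lr * coeff * y,
                        z - lr * coeff * z))
    return new_pts

# three fingertips starting off the surface of a unit sphere
tips = [(2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 0.5)]
for _ in range(200):
    tips = grad_step(tips, radius=1.0)
# after optimization, every fingertip lies on the object surface
```

In a real differentiable simulator the energy would also account for penetration, friction-cone constraints, and hand kinematics, with gradients obtained by automatic differentiation rather than by hand; this sketch only conveys why differentiability turns grasp search into smooth local optimization.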
[1] A. T. Miller and P. K. Allen, "GraspIt! A versatile simulator for robotic grasping," IEEE Robotics & Automation Magazine, 2004.
[2] D. Turpin, L. Wang, E. Heiden, Y.-C. Chen, M. Macklin, S. Tsogkas, S. Dickinson, and A. Garg, "Grasp'D: Differentiable contact-rich grasp synthesis for multi-fingered hands," arXiv preprint arXiv:2208.12250, 2022.
- Code and Dataset (coming soon)
- Paper (coming soon)
- Video walkthrough