
Spotlight: Galbot Builds a Large-Scale Dexterous Hand Dataset for Humanoid Robots Using NVIDIA Isaac Sim

Robotic dexterous grasping is a critical area of research and development, aimed at enabling robots to interact with and manipulate objects as flexibly as humans. By allowing robots to handle complex tasks that require fine motor skills, dexterous grasping can significantly enhance productivity and efficiency.

The first step toward human-like dexterous object manipulation is robotic dexterous grasping. However, validating dexterous grasping data has been a major challenge, as human annotation is impractical at the scale of millions of grasps. Without large-scale datasets, robotic dexterous grasping has remained underexplored compared to object grasping with parallel grippers.

Using NVIDIA Isaac Sim, a reference application for robotics simulation, robotics company Galbot successfully addressed this challenge. They validated a vast number of grasps to develop DexGraspNet, a comprehensive simulated dataset for dexterous robotic grasps that can be applied to any dexterous robotic hand.
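To make the validation step concrete, the following is a minimal sketch of simulation-based grasp stability checking, the kind of test a physics simulator enables at scale. The `sim` interface (`reset`, `apply_external_force`, `run`, `object_displacement`), the force magnitude, and the displacement threshold are illustrative assumptions, not the Isaac Sim API or Galbot's exact validation protocol.

```python
def is_grasp_stable(sim, hand_pose, object_mesh,
                    force_newtons=5.0, max_displacement=0.02):
    # Perturb the grasped object along the six axis directions and verify
    # that it stays within a small displacement tolerance.
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for direction in directions:
        sim.reset(hand_pose=hand_pose, object_mesh=object_mesh)
        sim.apply_external_force(direction=direction, magnitude=force_newtons)
        sim.run(seconds=1.0)
        # The grasp fails this test if the object slips beyond the tolerance.
        if sim.object_displacement() > max_displacement:
            return False
    return True
```

Running a check like this in parallel across millions of candidate grasps is what makes simulation-based validation practical where human annotation is not.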

Figure 1. Diverse grasps on objects from DexGraspNet, a dataset built by robotics company Galbot

DexGraspNet contains 1.32 million ShadowHand grasps on 5,355 objects—two orders of magnitude larger than the previous Deep Differentiable Grasp dataset. DexGraspNet covers more than 133 object categories and contains more than 200 diverse grasps for each object instance, making it a more complete sample for research.

Video 1. Learn more about DexGraspNet, a large-scale robotic dexterous grasp dataset for general objects based on simulation

Balancing data variety and quantity in robotic dexterous hand grasping

Galbot leveraged a highly accelerated optimizer that can efficiently and robustly synthesize stable and diverse grasps at scale, finding grasp poses that satisfy force-closure conditions and achieve high graspness scores. The dataset includes many types of grasps that are not possible with other popular tools such as GraspIt!
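As a rough illustration of this optimization-based approach, the sketch below performs gradient descent over hand pose parameters to minimize a simplified grasp energy. The `hand_model` and `object_sdf` callables, the energy terms, and the weights are hypothetical stand-ins for illustration; in particular, the force-closure term is a crude surrogate and not Galbot's actual formulation.

```python
import torch

def grasp_energy(contact_normals, object_sdf_values):
    # Crude force-closure surrogate: contact normals of a stable grasp should
    # roughly cancel out, so penalize the norm of their sum.
    fc_term = torch.linalg.norm(contact_normals.sum(dim=0))

    # Keep fingertips on the object surface: signed distances should be near 0.
    contact_term = object_sdf_values.abs().sum()

    # Penalize penetration (negative signed distance means inside the object).
    penetration_term = torch.relu(-object_sdf_values).sum()

    return fc_term + 10.0 * contact_term + 100.0 * penetration_term


def synthesize_grasp(hand_model, object_sdf, steps=2000, lr=1e-2):
    # hand_model(pose) -> (contact_points, contact_normals) is assumed to be
    # differentiable; pose packs the wrist transform and joint angles.
    pose = torch.randn(hand_model.dof, requires_grad=True)
    optimizer = torch.optim.Adam([pose], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        contact_points, contact_normals = hand_model(pose)
        sdf_values = object_sdf(contact_points)
        energy = grasp_energy(contact_normals, sdf_values)
        energy.backward()
        optimizer.step()

    return pose.detach()
```

Starting many such optimizations from random initial poses is one way to obtain the diversity of grasps the dataset emphasizes; the synthesized poses can then be filtered with the simulation-based stability check described earlier.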

Through cross-dataset experiments, the Galbot team demonstrated that training several algorithms for dexterous grasp synthesis on DexGraspNet significantly outperformed training on the previous dataset.

Building generalized dexterous hand grasping skills

The Galbot research team proposed UniDexGrasp++, a novel, object-agnostic approach for learning generalized dexterous grasping policies from realistic point cloud observations and proprioceptive information in a tabletop setting.
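As a rough illustration of this observation setup, the sketch below shows a policy network that fuses a point cloud with proprioceptive input. The PointNet-style encoder, layer sizes, and input/output dimensions are illustrative assumptions, not Galbot's actual architecture.

```python
import torch
import torch.nn as nn

class PointCloudGraspPolicy(nn.Module):
    def __init__(self, proprio_dim=24, action_dim=28):
        super().__init__()
        # Shared per-point MLP followed by max pooling (PointNet-style).
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        # Fuse the pooled geometry feature with the robot's joint state.
        self.head = nn.Sequential(
            nn.Linear(256 + proprio_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, points, proprio):
        # points: (B, N, 3) point cloud; proprio: (B, proprio_dim) joint state.
        per_point = self.point_encoder(points)      # (B, N, 256)
        global_feat = per_point.max(dim=1).values   # (B, 256)
        return self.head(torch.cat([global_feat, proprio], dim=-1))
```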

To address the challenge of learning vision-based policies across thousands of object instances, the team used Geometry-Aware Curriculum Learning (GeoCurriculum) and Geometry-Aware Iterative Generalist-Specialist Learning (GiGSL), which leverage the geometric features of the task to significantly improve generalizability.
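The sketch below outlines the generalist-specialist iteration at a high level, under simplifying assumptions. The `cluster_by_geometry`, `train_rl`, and `distill` callables are hypothetical placeholders for the components described above, and the iteration and cluster counts are illustrative.

```python
def gigsl(objects, generalist, cluster_by_geometry, train_rl, distill,
          num_iterations=3, num_clusters=20):
    for _ in range(num_iterations):
        # Group objects by a geometry feature (e.g., a point cloud embedding)
        # so each specialist only handles geometrically similar objects.
        clusters = cluster_by_geometry(objects, num_clusters)

        # Fine-tune one specialist per cluster with RL, warm-started from the
        # current generalist so earlier progress is reused.
        specialists = [
            train_rl(policy=generalist.clone(), objects=cluster)
            for cluster in clusters
        ]

        # Distill all specialists back into a single vision-based generalist
        # by imitating their behavior across the full object set.
        generalist = distill(teachers=specialists, student=generalist,
                             objects=objects)
    return generalist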

Figure 2. Galbot uses Isaac Sim to verify the dexterous grasping policy learned through Geometry-Aware Curriculum Learning at scale

Using these techniques in Isaac Sim, the Galbot team’s final iteration successfully demonstrated generalized dexterous grasping of more than 3,000 object instances with random object poses in a tabletop setting. The success rates were 85.4% on the training set and 78.2% on the test sets, outperforming the state-of-the-art baseline UniDexGrasp by 11.7% and 11.3%, respectively.

The research paper, UniDexGrasp++: Improving Dexterous Grasping Policy Learning via Geometry-aware Curriculum and Iterative Generalist-Specialist Learning, was awarded Best Paper Finalist at the 2023 International Conference on Computer Vision.

Figure 3. Galbot used Isaac Sim to train policies on more than 3,000 training objects with random poses

Scaling dexterous hand grasping models

Galbot’s most recent work is DexGraspNet 2.0, which features dexterous grasping in cluttered scenes; the team has demonstrated zero-shot sim-to-real transfer with a 90.70% real-world dexterous grasping success rate. DexGraspNet 2.0 has been evaluated on real robots, using a LEAP hand mounted on a UR-5 robot arm for dexterous grasp experiments and a Franka Panda arm for gripper tasks. The DexGraspNet 2.0 project will be showcased at the 2024 Conference on Robot Learning (CoRL).

The Galbot team also built a simulation test environment for dexterous hand grasping models with Isaac Sim and NVIDIA Isaac Lab, an open-source modular framework for robot learning designed to simplify how robots adapt to new skills. With this simulation environment, developers can significantly accelerate the exploration and scaling of dexterous hand grasping models, and more quickly bring generalized dexterous hand grasping skills to real-world use cases.
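A test environment of this kind typically measures success rates over many objects with randomized poses. The following is a framework-agnostic sketch of such an evaluation loop; the `env` interface (`reset`/`step`) and the `grasp_success` info field are hypothetical placeholders, not the Isaac Lab API.

```python
def evaluate_success_rate(policy, env, objects, episodes_per_object=10):
    successes, total = 0, 0
    for obj in objects:
        for _ in range(episodes_per_object):
            # Spawn the object at a random pose on the tabletop.
            obs = env.reset(object_id=obj, randomize_pose=True)
            done, info = False, {}
            while not done:
                # The policy consumes point cloud and proprioception observations.
                action = policy(obs)
                obs, done, info = env.step(action)
            successes += int(info.get("grasp_success", False))
            total += 1
    return successes / total
```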

Summary

Using NVIDIA Isaac Sim, Galbot developed DexGraspNet, a comprehensive dataset for humanoid robots, which includes 1.32 million ShadowHand grasps on 5,355 objects across more than 133 categories, providing a vast and diverse range of grasps. The dataset has proven effective for training algorithms in dexterous grasp synthesis, significantly outperforming previous datasets in cross-dataset experiments. This work enables robots to better handle complex tasks that require fine motor skills, enhancing productivity and efficiency.

To see the DexGraspNet code, visit the PKU-EPIC/DexGraspNet GitHub repo. The dataset the team used is also available for reference. To learn more, see DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General Objects Based on Simulation.

Discover the latest in robot learning and simulation in the livestream on November 13, and don’t miss the NVIDIA Isaac Lab Office Hours for hands-on support and insights.
