Khaberni - NVIDIA researchers unveiled a new robotic learning system called "DoorMan", which enables humanoid robots to open doors more efficiently than human operators.
The system was tested on the "Unitree G1" humanoid robot, which is priced at $16,000. It uses only the robot's built-in RGB cameras and relies entirely on reinforcement learning trained in simulation.
The system allows the robot to open various types of real doors faster and more reliably than humans remotely controlling it, according to a report by "Interesting Engineering", a website specializing in technology and science news.
In real-world tests, the robot completed the task up to 31% faster than experienced human operators and achieved a higher overall success rate.
According to the researchers, this represents a significant advance in "mobile manipulation", a category of tasks that requires the robot to walk, perceive, coordinate its limbs, and handle objects all at the same time.
The "DoorMan" system relies on a new training approach: a vision-only reinforcement learning policy trained entirely in NVIDIA's "Isaac Lab" simulator and then deployed directly in the real world without further adaptation. The robot operates solely on raw RGB camera footage while performing the task.
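The report doesn't describe the policy's internal architecture, but a vision-only policy of this kind maps raw camera frames directly to joint commands, with no depth or motion-capture input. The snippet below is a simplified, hypothetical sketch of that idea in PyTorch; the network layout, image resolution, and number of joints are illustrative assumptions, not the actual DoorMan design.

```python
# Hypothetical sketch of a vision-only policy: raw RGB frames in, joint commands out.
# Network sizes, image resolution, and the joint count are illustrative assumptions,
# not NVIDIA's actual DoorMan architecture.
import torch
import torch.nn as nn

class VisionOnlyPolicy(nn.Module):
    def __init__(self, num_joints: int = 23):
        super().__init__()
        # Small convolutional encoder over the onboard RGB camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Map visual features to normalized target joint positions for the humanoid.
        self.head = nn.Sequential(
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, num_joints), nn.Tanh(),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: a batch of camera frames, shape (B, 3, H, W), values in [0, 1]
        return self.head(self.encoder(rgb))

policy = VisionOnlyPolicy()
frame = torch.rand(1, 3, 120, 160)   # one simulated camera frame
action = policy(frame)               # normalized joint targets in [-1, 1]
print(action.shape)                  # torch.Size([1, 23])
```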
By contrast, most current humanoid robot systems require specialized inputs, such as depth sensors or motion-tracking markers, which can be unreliable or impractical in unpredictable environments.
In the "DoorMan" study, the robot's performance was compared directly with that of human operators who controlled the same G1 unit through a virtual reality teleoperation interface.
The researchers reported that human operators often struggled to judge the resistance of the door's hinge through the virtual reality interface, sometimes pulling the door too hard or losing their balance.
In contrast, the autonomous system was trained across millions of simulated doors with different hinge stiffness, damping, and geometries, which enabled it to develop a more balanced and adaptable interaction style.
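The exact randomization scheme isn't given in the report, but domain randomization of this kind typically means sampling a fresh set of door properties for every simulated training episode. The following is a simplified, hypothetical sketch in Python; the parameter names and ranges are illustrative assumptions, not the values actually used in Isaac Lab.

```python
# Hypothetical sketch of domain randomization over door properties: each training
# episode draws a door with different hinge stiffness, damping, and geometry.
# Parameter names and ranges are illustrative assumptions, not DoorMan's actual values.
import random
from dataclasses import dataclass

@dataclass
class DoorParams:
    hinge_stiffness: float   # N*m/rad, resistance of the spring hinge
    hinge_damping: float     # N*m*s/rad, how strongly hinge motion is damped
    door_width: float        # meters
    door_mass: float         # kilograms
    opens_outward: bool      # push vs. pull

def sample_door() -> DoorParams:
    """Draw a random door configuration for one simulated training episode."""
    return DoorParams(
        hinge_stiffness=random.uniform(0.0, 10.0),
        hinge_damping=random.uniform(0.1, 5.0),
        door_width=random.uniform(0.7, 1.0),
        door_mass=random.uniform(15.0, 45.0),
        opens_outward=random.random() < 0.5,
    )

# Training over many such randomized doors pushes the policy to cope with
# the variation it will meet in the real world.
for episode in range(3):
    print(sample_door())
```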
In simple terms, this randomized training let the robot learn in simulation how to handle a wide variety of doors better than humans operating it remotely.