Teaching Robots with an Apple Vision Pro and Synthetic Data – NVIDIA GR00T Mimic with
Author: FS Studio
Uploaded: 2025-08-29
Views: 835
Description:
In this video, @LycheeAI walks you through how to generate synthetic motion data for imitation learning using @NVIDIA's GR00T Mimic inside Isaac Lab, all demonstrated on the Apple Vision Pro for an immersive, hands-on view.
We start with the basics: what imitation learning is and how it allows robots to learn from human demonstrations instead of trial-and-error. Lychee then shows how a few simple demonstrations of a robot stacking task can be expanded into thousands of high-quality trajectories using interpolation, randomization, and noise.
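As a rough illustration of that expansion step, the sketch below turns one demonstration trajectory into many variants using time interpolation, randomized start offsets, and per-step noise. This is a conceptual NumPy sketch only; the function name, parameters, and toy data are assumptions, not the GR00T Mimic API.

```python
import numpy as np

def augment_demo(demo, n_variants=1000, n_steps=120, noise_std=0.01, seed=0):
    """Expand one demonstration (T x D joint positions) into many variants.
    Conceptual sketch of interpolation + randomization + noise,
    NOT the actual GR00T Mimic implementation."""
    rng = np.random.default_rng(seed)
    t_old = np.linspace(0.0, 1.0, len(demo))
    t_new = np.linspace(0.0, 1.0, n_steps)
    # Resample each joint dimension onto a new time grid (interpolation).
    base = np.stack(
        [np.interp(t_new, t_old, demo[:, d]) for d in range(demo.shape[1])],
        axis=1,
    )
    variants = []
    for _ in range(n_variants):
        offset = rng.uniform(-0.02, 0.02, size=demo.shape[1])  # randomized start pose
        noise = rng.normal(0.0, noise_std, size=base.shape)    # per-step jitter
        variants.append(base + offset + noise)
    return np.stack(variants)  # shape: (n_variants, n_steps, D)

demo = np.linspace([0.0, 0.0], [1.0, 0.5], 40)  # toy 40-step, 2-joint demo
dataset = augment_demo(demo)
print(dataset.shape)  # (1000, 120, 2)
```

In practice the real pipeline works on full robot states and task annotations, but the principle is the same: a handful of demos becomes thousands of slightly varied trajectories.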
Because all of this happens in simulation, we can parallelize environments and massively speed up dataset creation compared to the real world. After filtering for successful rollouts, the dataset is ready for training imitation learning models.
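The filtering step can be sketched as a simple success check over batched rollouts. The distance-to-goal criterion below is a hypothetical stand-in for a task-specific check (e.g. "cube stacked"); it is not part of Isaac Lab's API.

```python
import numpy as np

def filter_successful(rollouts, goal, tol=0.05):
    """Keep only rollouts whose final state lands within `tol` of `goal`.
    `rollouts` has shape (N, T, D); hypothetical success criterion."""
    final_states = rollouts[:, -1, :]                      # final state per rollout
    success = np.linalg.norm(final_states - goal, axis=1) < tol
    return rollouts[success]

# Toy example: 3 rollouts, 5 steps, 2-D state; one ends far from the goal.
rollouts = np.zeros((3, 5, 2))
rollouts[0, -1] = [1.0, 0.0]  # failed rollout
kept = filter_successful(rollouts, goal=np.array([0.0, 0.0]))
print(len(kept))  # 2
```

Because the environments run in parallel in simulation, this filter can be applied to thousands of rollouts at once before the dataset is handed to training.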
On top of motion data, Lychee also shows how to capture visual datasets (RGB, depth, segmentation) and augment them with COSMOS to bridge the sim-to-real gap. Finally, he explains how this workflow ties into world foundation models like NVIDIA GR00T N1.5, which learn from multimodal data (vision, language, state) to produce robust robot motions.
This is a complete step-by-step workflow—from a handful of demos to a massive synthetic dataset, accelerating robotics AI training at scale, now enhanced through the Apple Vision Pro experience.
#Robotics #AI #ImitationLearning #SyntheticData #NVIDIA #IsaacLab #GR00T #Simulation #AppleVisionPro #SpatialComputing #DigitalTwins #MachineLearning #RobotLearning #Sim2Real