Video Friday: Digit Learns to Dance—Virtually Overnight
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
ICRA 2026: 1–5 June 2026, VIENNA
RSS 2026: 13–17 July 2026, SYDNEY
Summer School on Multi-Robot Systems: 29 July–4 August 2026, PRAGUE
Enjoy today’s videos!
Getting Digit to dance takes more than putting on some fancy shoes: our AI team can teach Digit new whole-body control capabilities overnight. Using raw motion data from mocap, animation, and teleoperation, Digit gains new skills through sim-to-real reinforcement learning.
[ Agility ]
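For readers curious what "sim-to-real reinforcement training" means in practice, here is a loose, toy illustration (not Agility's actual pipeline): a one-parameter tracking policy for a single simulated joint is optimized under randomized dynamics, a common domain-randomization trick so the learned behavior transfers to real hardware. The `simulate` and `train` functions and all constants are hypothetical, and random search stands in for a real RL algorithm.

```python
import random

def simulate(policy_gain, ref, mass, dt=0.05):
    """Roll out a toy 1-D joint tracking a reference trajectory.
    Returns the total tracking reward (negative squared error)."""
    pos, vel, reward = 0.0, 0.0, 0.0
    for target in ref:
        torque = policy_gain * (target - pos)  # one-parameter "policy"
        vel += (torque / mass) * dt            # randomized dynamics enter here
        pos += vel * dt
        reward -= (target - pos) ** 2
    return reward

def train(ref, iters=200, seed=0):
    """Random-search stand-in for RL: score perturbed gains under
    randomized link masses (domain randomization) and keep improvements,
    so the result works across the whole mass range, not one simulator."""
    rng = random.Random(seed)

    def score(g):
        # Average reward over several randomized masses.
        return sum(simulate(g, ref, rng.uniform(0.8, 1.2)) for _ in range(5)) / 5

    gain = 1.0
    best = score(gain)
    for _ in range(iters):
        cand = gain + rng.gauss(0, 0.5)
        s = score(cand)
        if s > best:
            gain, best = cand, s
    return gain

reference = [0.5] * 40  # step reference, standing in for mocap-derived motion
gain = train(reference)
```

The trained gain tracks the reference better than doing nothing on a nominal (mass = 1.0) plant, which is the basic promise of domain-randomized training: a policy tuned across a distribution of simulated dynamics still performs on any one plausible instance.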
We’ve created GEN-1, our latest milestone in scaling robot learning. We believe it to be the first general-purpose AI model that crosses a new performance threshold: mastery of simple physical tasks. It improves average success rates to 99% on tasks where previous models achieve 64%, completes tasks roughly 3x faster than the state of the art, and requires only 1 hour of robot data for each of these results. GEN-1 unlocks commercial viability across a broad range of applications—and while it cannot solve all tasks today, it is a significant step towards our mission of creating generalist intelligence for the physical world.
[ Generalist ]
Unitree has open-sourced the UnifoLM-WBT-Dataset, a high-quality real-world humanoid whole-body teleoperation (WBT) dataset for open environments. Publicly available since March 5, 2026, the dataset will continue to receive frequent rolling updates. It aims to become the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity.
[ Hugging Face ]
Autonomous mobile robots operating in human-shared indoor environments often require paths that reflect human spatial intentions, such as avoiding interference with pedestrian flow or maintaining comfortable clearance. This paper presents MRReP, a Mixed Reality-based interface that enables users to draw a Hand-drawn Reference Path (HRP) directly on the physical floor using hand gestures.
[ MRReP ]
Thanks, Masato!
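A hand-drawn reference path arrives as a dense, unevenly spaced stroke, so a navigation stack typically resamples it into uniform waypoints before following it. This is not the MRReP authors' algorithm, just a minimal arc-length resampling sketch; the `resample_path` name and the spacing values are hypothetical.

```python
import math

def resample_path(stroke, spacing):
    """Convert a raw hand-drawn stroke (list of (x, y) floor points)
    into evenly spaced waypoints by walking the polyline arc length."""
    out = [stroke[0]]
    carried = 0.0  # arc length accumulated toward the next waypoint
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        # Emit a waypoint every time the accumulated distance hits `spacing`.
        while carried + seg >= spacing:
            t = (spacing - carried) / seg
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= spacing - carried
            carried = 0.0
            out.append((x0, y0))
        carried += seg
    return out
```

For example, a straight 1-meter stroke resampled at 0.25 m yields five waypoints including both endpoints, and corners in an L-shaped stroke are preserved because the walk follows each segment in turn.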
Even momentary eye contact between strangers plays a pivotal role in fostering human connection, promoting happiness, and enhancing belonging. Through autonomous navigation and adaptive mirror control, Mirrorbot facilitates serendipitous, non-verbal interactions by dynamically transitioning reflections from self-focused to mutual recognition, sparking eye contact, shared awareness, and playful engagement.
[ ARL ] via [ Cornell University ]
Experience PAL Robotics’ new teleoperation system for TIAGo Pro, the AI-ready mobile manipulator designed for advanced research. This real-time VR teleoperation setup allows precise control of TIAGo Pro’s dual arms in Cartesian space, ideal for remote manipulation, AI data collection, and robot learning.
[ PAL Robotics ]
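VR teleoperation setups like this commonly use a "clutch": while the operator holds a button, controller motion is streamed as Cartesian deltas to the end-effector target; releasing it lets the operator reposition their hand without moving the robot. This is a generic sketch of that pattern, not PAL Robotics' implementation; the `ClutchedTeleop` class and its parameters are hypothetical.

```python
class ClutchedTeleop:
    """Map VR controller motion to a Cartesian end-effector setpoint.

    While the clutch is held, controller position deltas (optionally
    scaled) are added to the setpoint. Releasing the clutch freezes
    the robot, and re-engaging never causes a jump because the new
    controller pose becomes the anchor."""

    def __init__(self, ee_start, scale=1.0):
        self.ee = list(ee_start)  # current end-effector target (x, y, z)
        self.scale = scale        # motion scaling, e.g. 0.5 for fine work
        self.anchor = None        # controller pose when clutch engaged

    def update(self, controller_pos, clutch_held):
        if not clutch_held:
            self.anchor = None            # forget pose; robot stays put
            return tuple(self.ee)
        if self.anchor is None:
            self.anchor = controller_pos  # engage: anchor, no motion yet
        delta = [self.scale * (c - a)
                 for c, a in zip(controller_pos, self.anchor)]
        self.anchor = controller_pos
        self.ee = [e + d for e, d in zip(self.ee, delta)]
        return tuple(self.ee)
```

The clutch is what makes long reaches comfortable: the operator "ratchets" the arm across the workspace in several small, clutched strokes instead of one large sweep.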
Utter brilliance from Robust AI. No notes.
[ Robust AI ]
Come along with our Senior Test Engineer, Nick L., as he takes us on a tour of the Home Test Labs inside the iRobot HQ.
[ iRobot ]
By automating the final “magic 5%” of production—the precise trimming of swim goggles’ silicone gaskets based on individual face scans—UR cobots allow THEMAGIC5 to deliver affordable, custom-fit goggles, enabling the company to scale from a Kickstarter sensation to selling over 400,000 goggles worldwide.
[ Universal Robots ]
Sanctuary AI has once again demonstrated its industry-leading approach to training dexterous manipulation policies for its advanced hydraulic hands. In this video, their proprietary hydraulic hand autonomously manipulates a lettered cube, continuously reorienting it to match a specified goal (displayed in the bottom-left corner of the video).
[ Sanctuary AI ]
China’s Yuxing 3-06 commercial experimental satellite, the first of its kind to be equipped with a flexible robotic arm, has recently completed an in-orbit refueling test and verification of key technologies. The test paves the way for Yuxing 3-06, dubbed a “space refueling station,” to refuel other satellites in orbit, manage space debris, and provide other in-orbit services.
[ Sanyuan Aerospace ] via [ Space News ]
This is a demonstration of natural walking, whole-body teleoperation, and motion tracking with our custom-built humanoid robot. The control policies are trained using large-scale parallel reinforcement learning (RL). By deploying robust policies learned in a physics simulator onto the real hardware, we achieve dynamic and stable whole-body motions.
[ Tokyo Robotics ]
Faced with aging railway infrastructure, a shrinking workforce, and rising construction costs, West Japan Railway asked construction innovator Serendix to replace an old wooden building at its Hatsushima railway station using its 3D printing technology. An ABB robot enabled the company to assemble the new building in a single night, ready for the first train service the next day.
[ ABB ]
Humanoid, SAP, and Martur Fompak team up to test humanoid robots in automotive manufacturing logistics. This joint proof of concept explores how robots can streamline operations, improve efficiency, and shape the future of smart factories.
[ Humanoid ]
This MIT Robotics Seminar is from Dario Floreano at EPFL, on “Avian Inspired Drones.”
[ MIT ]
This MIT Robotics Seminar is from Ken Goldberg at UC Berkeley, on “Good Old-Fashioned Engineering Can Close the 100,000-Year ‘Data Gap’ in Robotics.”
[ MIT ]