Good News | Dynamic Dance of Tech and Art: University Team's Robots Debut on the Trending List

Publisher: 汤靖玲    Published: 2025-11-07    Views: 12

    Recently, a humanoid robot dance video titled “Robot’s Paradise” has taken the video platform Bilibili by storm, quickly climbing onto the site’s trending list and amassing over 3 million views. The performance has become a crossover hit, blending technology and pop culture.

    The video showcases a motion control algorithm co-developed by Dr. Long Xiaoxiao, a tenure-track associate professor at our institute, together with Prof. Cao Xun and Assoc. Prof. Shen Qiu from the School of Electronics. The dance, characterized by complex rhythms and intricate leg movements, places extreme demands on the robot’s joint flexibility, motion control accuracy, and overall balance system. The team’s successful demonstration not only highlights our institute’s leading expertise in embodied intelligence, particularly in dynamic balance and precision motion control, but also vividly illustrates the boundless possibilities of embodied AI to the broader public.

    Amid rapid advancements in robotics, humanoid robots remain one of the most challenging and captivating research domains. The full-body motion control algorithm developed by Long Xiaoxiao, Cao Xun, and Shen Qiu empowers humanoid robots to perform more natural and efficient movements. Their approach integrates optical motion capture, kinematic remapping, and reinforcement learning.

    First, human movements are captured using optical motion capture technology, which tracks joint dynamics with high precision and records every subtle motion detail. This method avoids the drift errors common in traditional inertial systems while capturing high-dimensional data in real time.
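    As a rough illustration of this capture step, the sketch below computes a single joint angle from three optical markers. The marker names, positions, and the simple geometric formula are assumptions made for the example, not details of the team's pipeline.

```python
# Illustrative sketch only: deriving a joint angle from optical motion-capture
# markers. Marker names and values are hypothetical, not the team's real data.
import numpy as np

def joint_angle(parent: np.ndarray, joint: np.ndarray, child: np.ndarray) -> float:
    """Angle (radians) at `joint` between the joint->parent and joint->child segments."""
    u = parent - joint
    v = child - joint
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# One captured frame: 3D marker positions in metres (hypothetical values).
frame = {
    "shoulder": np.array([0.00, 0.00, 1.40]),
    "elbow":    np.array([0.05, 0.00, 1.10]),
    "wrist":    np.array([0.30, 0.00, 1.00]),
}

elbow_flexion = joint_angle(frame["shoulder"], frame["elbow"], frame["wrist"])
print(f"elbow flexion: {np.degrees(elbow_flexion):.1f} deg")
```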

    Next, a kinematic remapping algorithm translates the captured human motions into executable joint movements for the robot. This involves inverse kinematics computation and morphological matching to ensure the robot accurately replicates even the finest nuances of human motion. As a result, the robot performs not only simple imitations but also fluid and dynamic spatial movements.
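    The remapping idea can be sketched in miniature with a planar two-link arm: the human end-effector target is first rescaled to the robot's proportions (a crude form of morphological matching), then an analytic inverse-kinematics solution recovers the joint angles. The link lengths, scaling rule, and function names below are illustrative assumptions, not the team's actual method.

```python
# Minimal remapping sketch, assuming a planar 2-link arm with made-up dimensions.
import numpy as np

def remap_target(human_xy: np.ndarray, human_arm_len: float, robot_arm_len: float) -> np.ndarray:
    """Morphological matching: scale the human end-effector target to robot proportions."""
    return human_xy * (robot_arm_len / human_arm_len)

def two_link_ik(target: np.ndarray, l1: float, l2: float) -> tuple[float, float]:
    """Analytic inverse kinematics for a planar 2-link chain (elbow-down solution)."""
    x, y = target
    d2 = x * x + y * y
    cos_q2 = np.clip((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2), -1.0, 1.0)
    q2 = np.arccos(cos_q2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return float(q1), float(q2)

# Human wrist target (metres, shoulder frame) remapped onto a shorter robot arm.
human_target = np.array([0.45, 0.20])
robot_target = remap_target(human_target, human_arm_len=0.60, robot_arm_len=0.50)
q1, q2 = two_link_ik(robot_target, l1=0.25, l2=0.25)
print(f"shoulder: {np.degrees(q1):.1f} deg, elbow: {np.degrees(q2):.1f} deg")
```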

    However, relying solely on static mapping is insufficient for highly intelligent robotic systems. To enable the robot to adapt to complex environments and autonomously refine its motions, the team introduced reinforcement learning. Within this framework, the robot continuously adjusts its actions based on a predefined reward mechanism. Through iterative training and trial-and-error, it gradually improves movement precision and smoothness, ultimately developing a whole-body coordinated control system capable of flexible and natural motion in challenging scenarios.
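    A toy version of such a reward mechanism is sketched below: the robot earns reward for tracking reference joint positions and is penalized for jerky motion and loss of balance. The specific terms, weights, and state fields are assumptions made for illustration, not the team's actual reward design.

```python
# Illustrative reward shaping for imitation-style reinforcement learning.
# All weights and example values are hypothetical.
import numpy as np

def imitation_reward(robot_q: np.ndarray,
                     ref_q: np.ndarray,
                     robot_qd: np.ndarray,
                     base_tilt: float) -> float:
    """Reward = how closely the joints track the reference pose,
    minus penalties for jerky motion and for tilting the torso."""
    tracking   = np.exp(-2.0 * np.sum((robot_q - ref_q) ** 2))  # joint-position tracking
    smoothness = -0.01 * np.sum(robot_qd ** 2)                  # discourage jerky motion
    balance    = -0.5 * abs(base_tilt)                          # keep the torso upright
    return float(tracking + smoothness + balance)

# One control step with hypothetical values for a 6-joint leg pair.
robot_q  = np.array([0.10, -0.25, 0.40, 0.12, -0.22, 0.38])
ref_q    = np.array([0.12, -0.24, 0.42, 0.10, -0.20, 0.40])
robot_qd = np.zeros(6)
print(f"reward: {imitation_reward(robot_q, ref_q, robot_qd, base_tilt=0.05):.3f}")
```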

    This integrated approach, combining optical motion capture, kinematic remapping, and reinforcement learning, aligns with leading research such as General Motion Retargeting (GMR) and BeyondMimic. It marks a shift from mere “imitation” toward “self-learning” in robotics, paving the way for real-world applications in human-robot interaction, entertainment, healthcare, and beyond.


