Stanford's new research simplifies training humanoid robots using human body and hand poses, revolutionizing data collection for robot learning.
OpenVLA, an open-source Vision-Language-Action model, demonstrates improved robotic control and performance, highlighting the benefits of open, collaborative contributions across industry and academia.
Harvard and DeepMind's study of brain activity in a virtual rodent provides insights into brain-controlled motion, with potential implications for brain-machine interfaces and robotics.