Hi, and thank you for the amazing work on RoboBrain 2.0!
I noticed in your videos that you were able to run RoboBrain on a Unitree G1 humanoid robot, performing tasks like locomotion, grasping and object interaction. I'm currently working with a G1 as well, and I would love to replicate or build on your results.
Would it be possible to clarify the following:
- How did you extract the model's output and translate it into actual robot commands?
- What action format did you use for the G1: joint-level actions, end-effector poses, or high-level commands?
- Is any of the code for this robot-side integration available?

From what I have read, I assume RoboBrain outputs symbolic actions or task plans.
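To make the question concrete, below is a rough sketch of the kind of robot-side glue I currently imagine, assuming the model emits a short textual plan of symbolic steps. Everything here is my own guess, not anything from your codebase: the plan format, the `execute_plan` parser, and the `walk_to` / `grasp` / `place` primitives are all hypothetical stand-ins for whatever controller interface you actually used (Unitree SDK, ROS 2, a learned policy, etc.).

```python
import re
from typing import Callable

# Hypothetical robot-side primitives; in a real setup these would call
# into the G1 controller stack rather than print.
def walk_to(target: str) -> None:
    print(f"[locomotion] walking to {target}")

def grasp(obj: str) -> None:
    print(f"[manipulation] grasping {obj}")

def place(obj: str) -> None:
    print(f"[manipulation] placing {obj}")

PRIMITIVES: dict[str, Callable[[str], None]] = {
    "walk_to": walk_to,
    "grasp": grasp,
    "place": place,
}

def execute_plan(plan_text: str) -> None:
    """Parse lines like 'walk_to(table)' and dispatch them to primitives."""
    for line in plan_text.strip().splitlines():
        match = re.match(r"\s*(\w+)\((.*?)\)", line)
        if not match:
            continue  # skip anything that is not a recognizable step
        action, arg = match.group(1), match.group(2)
        if action in PRIMITIVES:
            PRIMITIVES[action](arg)
        else:
            print(f"[warn] unknown action: {action}")

# Example of the plan format I *assume* the model might produce:
execute_plan("""
walk_to(table)
grasp(red_cup)
walk_to(shelf)
place(red_cup)
""")
```

If your actual pipeline looks different from this (for example, continuous end-effector or joint targets instead of discrete symbolic steps), I'd be very interested in how you bridged that gap.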
Any guidance or examples would be greatly appreciated.
Thanks again for the great research and open-source release!