This repository was archived by the owner on Nov 12, 2025. It is now read-only.

Commit 451c363: Update README.md (#44). Parent: 5973e72.

1 file changed: README.md (3 additions, 0 deletions)
@@ -47,6 +47,9 @@
 </a>
 </p>

+> The project will be moved to the team's main repository for centralized maintenance and updates.
+> 👉 https://github.com/SHAILAB-IPEC/EO1
+
 ## Interleaved Vision-Text-Action Pretraining for General Robot Control

 We introduce the **EO-1** model, an open-source unified embodied foundation model comprising 3B parameters, trained on the carefully curated interleaved embodied dataset EO-Data1.5M, web multimodal data, and robot control data (AgiBotWorld, Open X-Embodiment, RoboMIND, SO100-Community, etc.). The **EO-1** model adopts a single unified decoder-only transformer that integrates discrete auto-regressive decoding with continuous flow-matching denoising for multimodal embodied reasoning and robot control, enabling seamless perception, planning, reasoning, and acting in a single model. This work highlights the following features:
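The flow-matching half of that design can be illustrated with a minimal sketch. This is not the EO-1 implementation: `flow_matching_denoise`, `target_action`, and the closed-form velocity are all illustrative stand-ins for what the model's learned action head would predict. The sketch only shows the inference-time idea — start from Gaussian noise and integrate a velocity field from t = 0 to t = 1 to denoise toward a continuous action.

```python
import numpy as np

def flow_matching_denoise(x0, velocity_fn, steps=10):
    """Integrate dx/dt = v(x, t) from t = 0 to t = 1 with Euler steps."""
    x = x0.copy()
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x

rng = np.random.default_rng(0)
target_action = np.array([0.5, -0.2, 0.1])  # e.g. one 3-DoF action sample
x0 = rng.standard_normal(3)                 # Gaussian noise starting point

# Straight-line velocity used for illustration only; in the model this
# would be a network prediction conditioned on the multimodal context.
velocity = lambda x, t: target_action - x0

result = flow_matching_denoise(x0, velocity, steps=10)
print(np.allclose(result, target_action))  # True: Euler integrates a constant field exactly
```

In a trained system the velocity network replaces the closed-form lambda, and the same Euler loop (or a higher-order ODE solver) turns noise into an action chunk.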
