Commit cd15b68

Merge remote-tracking branch 'upstream/main'

2 parents 4575d1a + e0096fe

39 files changed: +1694 −237 lines changed

.github/workflows/release.yml

Lines changed: 45 additions & 7 deletions
```diff
@@ -19,6 +19,11 @@ on:
     tags:
       - 'v*.*.*' # Trigger on tags like v0.1.0, v1.0.0

+# Sets up the environment variables
+env:
+  UV_VERSION: "0.8.0"
+  PYTHON_VERSION: "3.10"
+
 jobs:
   # This job builds the Python package and publishes it to PyPI
   build-and-publish:
```

```diff
@@ -50,6 +55,7 @@ jobs:
           VERSION_NUMBER=${VERSION#v}
           echo "tag_version=$VERSION_NUMBER" >> $GITHUB_OUTPUT
       - name: Check if version matches pyproject.toml
+        if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref, '-')
         # zizmor: ignore[template-injection]
         run: |
           TAG_VERSION=${{ steps.extract_info.outputs.tag_version }}
```
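
The new `if:` guard skips the version-match check for pre-release tags (those containing a hyphen). The surrounding context also shows how the workflow derives the version string from the git tag with bash parameter expansion; a minimal standalone sketch:

```bash
#!/usr/bin/env bash
# Dry run of the tag-to-version conversion the workflow performs:
VERSION="v0.2.0-rc1"           # a tag matched by the 'v*.*.*' trigger
VERSION_NUMBER="${VERSION#v}"  # '#v' strips the shortest leading match of 'v'
echo "$VERSION_NUMBER"         # prints: 0.2.0-rc1
```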

```diff
@@ -86,13 +92,29 @@ jobs:
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         # zizmor: ignore[template-injection]
-        run: gh release create ${{ github.ref_name }} --release-name "Release ${{ github.ref_name }}" --generate-notes ./dist/*
+        run: |
+          gh release create ${{ github.ref_name }} \
+            --title "Release ${{ github.ref_name }}" \
+            --generate-notes \
+            --draft=$([[ "${{ github.ref_name }}" == *-* ]] && echo true || echo false) \
+            --prerelease=$([[ "${{ github.ref_name }}" == *-* ]] && echo true || echo false) \
+            ./dist/*
+
+      - name: Publish to TestPyPI for pre-releases
+        # True for tags like 'v0.2.0-rc1'
+        if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '-')
+        uses: pypa/gh-action-pypi-publish@v1.12.4 # zizmor: ignore[unpinned-uses, use-trusted-publishing]
+        with:
+          repository-url: https://test.pypi.org/legacy/
+          verbose: true
+          print-hash: true

       - name: Publish to PyPI
-        if: startsWith(github.ref, 'refs/tags/v')
+        if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref, '-')
         uses: pypa/gh-action-pypi-publish@v1.12.4 # zizmor: ignore[unpinned-uses, use-trusted-publishing]
         with:
-          password: ${{ secrets.PYPI_API_TOKEN }}
+          verbose: true
+          print-hash: true

   # This job runs end-to-end tests on the release
   test-release:
```
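
The rewritten `gh release create` step computes `--draft` and `--prerelease` inline from the tag name, and the same hyphen test routes pre-release tags to TestPyPI and stable tags to PyPI. A standalone sketch of that classification (tag values are illustrative):

```bash
#!/usr/bin/env bash
# The workflow embeds this test in a $(...) substitution; shown here in isolation.
for tag in v1.0.0 v0.2.0-rc1; do
  flag=$([[ "$tag" == *-* ]] && echo true || echo false)
  echo "$tag -> draft=$flag prerelease=$flag"
done
# Output:
#   v1.0.0 -> draft=false prerelease=false
#   v0.2.0-rc1 -> draft=true prerelease=true
```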

```diff
@@ -119,15 +141,31 @@ jobs:
           enable-cache: true
           version: ${{ env.UV_VERSION }}
           python-version: ${{ env.PYTHON_VERSION }}
+      - name: Create uv virtual environment
+        run: uv venv
       - name: Install lerobot release
-        run: uv run pip install lerobot==${{ needs.build-and-publish.outputs.version }} # zizmor: ignore[template-injection]
-
+        # zizmor: ignore[template-injection]
+        run: |
+          VERSION="${{ needs.build-and-publish.outputs.version }}"
+          if [[ "$VERSION" == *-* ]]; then
+            BASE_VERSION="${VERSION%%-*}"
+            echo "Installing pre-release version $BASE_VERSION from TestPyPI..."
+            uv pip install \
+              --index-url https://test.pypi.org/simple/ \
+              --extra-index-url https://pypi.org/simple \
+              --index-strategy unsafe-best-match \
+              "lerobot[all]==$BASE_VERSION"
+          else
+            echo "Installing release version $VERSION from PyPI..."
+            uv pip install "lerobot[all]==$VERSION"
+          fi
       - name: Check lerobot version
-        run: uv run lerobot --version
+        run: uv run python -c "import lerobot; print(lerobot.__version__)"

       - name: Run end-to-end tests
         run: uv run make test-end-to-end


-# TODO(Steven): Publish draft/pre-release and to test pypi
+# TODO(Steven): Publish draft/pre-release and to test pypi weekly
+# TODO(Steven): Separate build and publish job
 # TODO(Steven): Tag documentation with the same version as the package
```
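
The pre-release branch of the install step can be reproduced on a dev machine; a sketch assuming a pre-release has already landed on TestPyPI (the version string here is illustrative):

```bash
#!/usr/bin/env bash
# Mirrors the workflow's TestPyPI path outside of CI.
VERSION="0.2.0-rc1"
BASE_VERSION="${VERSION%%-*}"  # '%%-*' drops everything from the first '-': 0.2.0
uv venv
uv pip install \
  --index-url https://test.pypi.org/simple/ \
  --extra-index-url https://pypi.org/simple \
  --index-strategy unsafe-best-match \
  "lerobot[all]==$BASE_VERSION"
uv run python -c "import lerobot; print(lerobot.__version__)"
```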

README.md

Lines changed: 33 additions & 168 deletions
Large diffs are not rendered by default.

docs-requirements.txt

Lines changed: 3 additions & 0 deletions

```diff
@@ -0,0 +1,3 @@
+# docs-requirements.txt
+hf-doc-builder @ git+https://github.com/huggingface/doc-builder.git@main
+watchdog>=6.0.0
```

docs/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -20,7 +20,7 @@ To generate the documentation, you first have to build it. Several packages are
 you can install them with the following command, at the root of the code repository:

 ```bash
-pip install -e ".[docs]"
+pip install -e . -r docs-requirements.txt
 ```

 You will also need `nodejs`. Please refer to their [installation page](https://nodejs.org/en/download)
````
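
Putting the new requirements file to work, a possible local docs loop; the `doc-builder preview` invocation is an assumption based on hf-doc-builder's usual CLI, not something this diff specifies:

```bash
# Install lerobot in editable mode plus the docs toolchain, then preview locally.
pip install -e . -r docs-requirements.txt
# Assumed doc-builder usage: package name, then path to the doc sources.
doc-builder preview lerobot docs/source/
```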

docs/source/il_robots.mdx

Lines changed: 5 additions & 5 deletions

````diff
@@ -294,7 +294,7 @@ dataset.push_to_hub()

 #### Dataset upload

-Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded on your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test) that you can obtain by running:
+Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded on your Hugging Face page (e.g. `https://huggingface.co/datasets/${HF_USER}/so101_test`) that you can obtain by running:

 ```bash
 echo https://huggingface.co/datasets/${HF_USER}/so101_test
@@ -428,7 +428,7 @@ Your robot should replicate movements similar to those you recorded. For example

 ## Train a policy

-To train a policy to control your robot, use the [`python -m lerobot.scripts.train`](../src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
+To train a policy to control your robot, use the [`python -m lerobot.scripts.train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:

 ```bash
 python -m lerobot.scripts.train \
@@ -444,7 +444,7 @@ python -m lerobot.scripts.train \
 Let's explain the command:

 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
 3. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
 4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.

@@ -462,9 +462,9 @@ If you do not want to push your model to the hub after training use `--policy.pu

 Additionally you can provide extra `tags` or specify a `license` for your model or make the model repo `private` by adding this: `--policy.private=true --policy.tags=\[ppo,rl\] --policy.license=mit`

-#### Train using Collab
+#### Train using Google Colab

-If your local computer doesn't have a powerful GPU you could utilize Google Collab to train your model by following the [ACT training notebook](./notebooks#training-act).
+If your local computer doesn't have a powerful GPU you could utilize Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).

 #### Upload policy checkpoints
````
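
Reassembling the flags that the numbered list in this hunk names, a sketch of a complete training invocation (options elided by the diff context are omitted here):

```bash
python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/so101_test \
  --policy.type=act \
  --policy.device=cuda \
  --wandb.enable=true
```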

docs/source/il_sim.mdx

Lines changed: 2 additions & 2 deletions

````diff
@@ -96,7 +96,7 @@ If you uploaded your dataset to the hub you can [visualize your dataset online](

 ## Train a policy

-To train a policy to control your robot, use the [`python -m lerobot.scripts.train`](../src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
+To train a policy to control your robot, use the [`python -m lerobot.scripts.train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:

 ```bash
 python -m lerobot.scripts.train \
@@ -111,7 +111,7 @@ python -m lerobot.scripts.train \
 Let's explain the command:

 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/il_gym`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
 3. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
 4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
````

docs/source/policy_act_README.md

Lines changed: 14 additions & 0 deletions

````diff
@@ -0,0 +1,14 @@
+## Paper
+
+https://tonyzhaozh.github.io/aloha
+
+## Citation
+
+```bibtex
+@article{zhao2023learning,
+  title={Learning fine-grained bimanual manipulation with low-cost hardware},
+  author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
+  journal={arXiv preprint arXiv:2304.13705},
+  year={2023}
+}
+```
````

docs/source/policy_diffusion_README.md

Lines changed: 14 additions & 0 deletions

````diff
@@ -0,0 +1,14 @@
+## Paper
+
+https://diffusion-policy.cs.columbia.edu
+
+## Citation
+
+```bibtex
+@article{chi2024diffusionpolicy,
+  author={Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
+  title={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
+  journal={The International Journal of Robotics Research},
+  year={2024}
+}
+```
````

docs/source/policy_smolvla_README.md

Lines changed: 14 additions & 0 deletions

````diff
@@ -0,0 +1,14 @@
+## Paper
+
+https://arxiv.org/abs/2506.01844
+
+## Citation
+
+```bibtex
+@article{shukor2025smolvla,
+  title={SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics},
+  author={Shukor, Mustafa and Aubakirova, Dana and Capuano, Francesco and Kooijmans, Pepijn and Palma, Steven and Zouitine, Adil and Aractingi, Michel and Pascal, Caroline and Russi, Martino and Marafioti, Andres and Alibert, Simon and Cord, Matthieu and Wolf, Thomas and Cadene, Remi},
+  journal={arXiv preprint arXiv:2506.01844},
+  year={2025}
+}
+```
````

docs/source/policy_tdmpc_README.md

Lines changed: 14 additions & 0 deletions

````diff
@@ -0,0 +1,14 @@
+## Paper
+
+https://www.nicklashansen.com/td-mpc/
+
+## Citation
+
+```bibtex
+@inproceedings{Hansen2022tdmpc,
+  title={Temporal Difference Learning for Model Predictive Control},
+  author={Nicklas Hansen and Xiaolong Wang and Hao Su},
+  booktitle={ICML},
+  year={2022}
+}
+```
````
