Commit fe2b53f: Add MaxText Llama 3.1 70B training with GCS recipe (1 parent af2a7cd; 5 files changed, +285 / -0)
# Instructions for training Llama3.1-70B-MaxText on TPU Trillium (v6e-256) with Google Cloud Storage (GCS)

## GCS Bucket setup

1. Create one bucket to hold the training dataset for dataloading and one bucket to write checkpoints. To create regional buckets with hierarchical namespace (HNS) enabled, use the following commands:
```
# Set variables
export DATASET_BUCKET="dataloading-bucket-name"
export CHECKPOINT_BUCKET="checkpoint-bucket-name"
export DATASET_STORAGE_NAME="dataset-bucket"
export CHECKPOINT_STORAGE_NAME="checkpoint-bucket"
export REGION="us-central1"

# Create dataset bucket
gcloud storage buckets create gs://${DATASET_BUCKET} --location=${REGION} --default-storage-class=Standard --enable-hierarchical-namespace --uniform-bucket-level-access

# Create checkpoint bucket
gcloud storage buckets create gs://${CHECKPOINT_BUCKET} --location=${REGION} --default-storage-class=Standard --enable-hierarchical-namespace --uniform-bucket-level-access
```
Replace the following values:
- `DATASET_BUCKET`: the name of your Cloud Storage bucket containing the training dataset. Do not include the `gs://` prefix.
- `CHECKPOINT_BUCKET`: the name of your Cloud Storage bucket where checkpoints will be written. Do not include the `gs://` prefix.
- `DATASET_STORAGE_NAME`: the name of the XPK storage resource for the dataset bucket.
- `CHECKPOINT_STORAGE_NAME`: the name of the XPK storage resource for the checkpoint bucket.
- `REGION`: the region where your cluster is located ([available locations](https://cloud.google.com/storage/docs/locations#location-r)).

2. Follow these [instructions](https://github.com/AI-Hypercomputer/maxtext/blob/b93beba652db6b3f4e6c82dc48a83b03229f5d3a/getting_started/Data_Input_Pipeline.md#tfds-pipeline) to download the AllenAI C4 dataset, which is used in this recipe.
Then follow these [instructions](https://github.com/google/array_record/tree/main/beam) to convert the dataset into ArrayRecord format.
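The grain workload later in this recipe reads shards matching a fixed glob pattern through the GCSFuse mount, so the converted ArrayRecord files need to end up under `array-record/c4/en/3.0.1/` in the dataset bucket. A quick local sanity check that your shard names will be picked up (the shard names below are hypothetical examples of ArrayRecord output, not real files):

```python
import fnmatch

# Pattern taken from the grain workload's grain_train_files setting below;
# the shard names are hypothetical examples, not actual converter output.
pattern = "/tmp/dataset/array-record/c4/en/3.0.1/c4-train.array_record*"
shards = [
    "/tmp/dataset/array-record/c4/en/3.0.1/c4-train.array_record-00000-of-00256",
    "/tmp/dataset/array-record/c4/en/3.0.1/c4-train.array_record-00001-of-00256",
]
assert all(fnmatch.fnmatch(s, pattern) for s in shards)
print("shard names match the grain_train_files pattern")
```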
## XPK setup

1. Follow this [link](https://github.com/AI-Hypercomputer/tpu-recipes/blob/main/training/trillium/XPK_README.md) to create your GKE cluster with XPK.
2. GCSFuse lets you mount and access Cloud Storage buckets as local file systems, so applications can read and write objects in your bucket using standard file-system semantics. Attaching XPK storage adds a PersistentVolume (PV) and PersistentVolumeClaim (PVC) to the cluster (see https://github.com/AI-Hypercomputer/xpk?tab=readme-ov-file#storage). Use the commands below to create XPK storage resources for both the dataset and checkpoint buckets so they can be mounted into the MaxText workload via GCSFuse:
```
export RECIPE_REPO="path-to-this-recipe-repo"  # Update

cd ~/xpk

python3 xpk.py storage attach $DATASET_STORAGE_NAME type=gcsfuse project=$PROJECT cluster=$CLUSTER zone=$ZONE mountpoint=/tmp/dataset readonly=false bucket=$DATASET_BUCKET size=64 automount=false manifest=$RECIPE_REPO/tpu-recipes/training/trillium/Llama3.1-70B-MaxText-with-Storage/dataset_pvc.yaml

python3 xpk.py storage attach $CHECKPOINT_STORAGE_NAME type=gcsfuse project=$PROJECT cluster=$CLUSTER zone=$ZONE mountpoint=/tmp/ckpt readonly=false bucket=$CHECKPOINT_BUCKET size=64 automount=false manifest=$RECIPE_REPO/tpu-recipes/training/trillium/Llama3.1-70B-MaxText-with-Storage/checkpoint_pvc.yaml
```
Use the separate manifest files `dataset_pvc.yaml` and `checkpoint_pvc.yaml` from this repo for the dataset and checkpoint buckets, respectively.
Be sure to update `volumeHandle` in the YAMLs with your bucket names. Creating the buckets and XPK storage is a one-time setup.
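Once both storage resources are attached, each bucket surfaces inside the workload at the `mountpoint` path given above. A minimal sketch of that mapping, handy for double-checking the paths passed to MaxText later in this recipe (the helper function and the placeholder bucket names are illustrative, not part of XPK):

```python
# Hypothetical helper mirroring the two GCSFuse mounts created above;
# bucket names are the placeholders used throughout this recipe.
MOUNTS = {
    "dataloading-bucket-name": "/tmp/dataset",
    "checkpoint-bucket-name": "/tmp/ckpt",
}

def mounted_path(gcs_uri: str) -> str:
    """Translate a gs:// object URI to the path the workload sees via GCSFuse."""
    bucket, _, obj = gcs_uri.removeprefix("gs://").partition("/")
    return f"{MOUNTS[bucket]}/{obj}"

print(mounted_path("gs://dataloading-bucket-name/array-record/c4/en/3.0.1/c4-train.array_record-00000"))
# /tmp/dataset/array-record/c4/en/3.0.1/c4-train.array_record-00000
```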

## Prep for MaxText

### Install MaxText and Build Docker Image
Follow this [link](https://github.com/AI-Hypercomputer/tpu-recipes/blob/main/training/trillium/MAXTEXT_README.md) to install MaxText and build the Docker image.

In step 2, use the jax-stable-stack image containing JAX 0.5.2:
```
BASE_IMAGE=us-docker.pkg.dev/cloud-tpu-images/jax-stable-stack/tpu:jax0.5.2-rev1
bash docker_build_dependency_image.sh DEVICE=tpu MODE=stable_stack BASEIMAGE=${BASE_IMAGE}
```

## Run MaxText Llama3.1-70B workloads on GKE

### Starting workload

From the MaxText root directory, start your Llama3.1-70B workload.

Run MaxText Llama 3.1 70B with synthetic data and no checkpointing:
```
python3 benchmarks/benchmark_runner.py xpk \
    project=$PROJECT \
    zone=$ZONE \
    device_type=v6e-256 \
    num_slices=1 \
    cluster_name=$CLUSTER \
    base_output_directory=$OUTPUT_DIR \
    model_name="llama3_1_70b_8192_synthetic" \
    num_steps=100 \
    base_docker_image=maxtext_base_image
```

Run MaxText Llama 3.1 70B with checkpointing and loading real data from GCS:
```
python3 benchmarks/benchmark_runner.py xpk \
    project=$PROJECT \
    zone=$ZONE \
    device_type=v6e-256 \
    num_slices=1 \
    cluster_name=${CLUSTER} \
    base_output_directory=/tmp/ckpt \
    model_name="llama3_1_70b_8192_rd_ckpt_grain" \
    num_steps=100 \
    base_docker_image=maxtext_base_image \
    xpk_storage=$DATASET_STORAGE_NAME xpk_storage=$CHECKPOINT_STORAGE_NAME
```

If you would like to run on multiple slices of v6e-256, modify the `num_slices` flag.
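With `num_steps=100` and the `checkpoint_period: 20` set in the grain workload's tuning parameters, the run writes checkpoints to the checkpoint bucket (via the `/tmp/ckpt` mount) every 20 steps. A small bookkeeping sketch of the expected cadence; whether step 0 is also saved, and the on-disk directory layout, are up to MaxText/Orbax defaults:

```python
# Sketch of the checkpoint cadence implied by num_steps=100 and
# checkpoint_period=20; bookkeeping only, not the actual Orbax layout.
def checkpoint_steps(num_steps: int, period: int) -> list[int]:
    return [step for step in range(num_steps) if step % period == 0]

print(checkpoint_steps(100, 20))  # [0, 20, 40, 60, 80]
```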

### Workload Details

For reference, here are the `llama3_1_70b_8192_synthetic` and `llama3_1_70b_8192_rd_ckpt_grain` workload details:

```
MaxTextModel(
    model_name="llama3_1-70b-8192",
    model_type="llama3.1-70b",
    tuning_params={
        "per_device_batch_size": 4,
        "ici_fsdp_parallelism": -1,
        "remat_policy": "custom",
        "decoder_layer_input": "offload",
        "query_proj": "offload",
        "key_proj": "offload",
        "value_proj": "offload",
        "max_target_length": 8192,
        "attention": "flash",
        "use_iota_embed": True,
        "dataset_path": "gs://max-datasets-rogue",
        "dataset_type": "synthetic",
        "enable_checkpointing": False,
        "sa_block_q": 2048,
        "sa_block_kv": 2048,
        "sa_block_kv_compute": 2048,
        "sa_block_q_dkv": 2048,
        "sa_block_kv_dkv": 2048,
        "sa_block_kv_dkv_compute": 2048,
        "sa_block_q_dq": 2048,
        "sa_block_kv_dq": 2048,
        "sa_use_fused_bwd_kernel": True,
        "profiler": "xplane",
        "skip_first_n_steps_for_profiler": 10,
        "profiler_steps": 5,
    },
    xla_flags=(
        xla_flags_library.DENSE_VMEM_LIMIT_FLAG
        + xla_flags_library.LAYOUT_FOR_ALL_REDUCE_SCATTER
        + xla_flags_library.DATA_PARALLEL_OVERLAP
        + xla_flags_library.CF_FOR_ALL_GATHER
        + xla_flags_library.HOST_OFFLOAD_FLAGS
    ),
)


MaxTextModel(
    model_name="llama3_1_70b_8192_rd_ckpt_grain",
    model_type="llama3.1-70b",
    tuning_params={
        "per_device_batch_size": 2,
        "ici_fsdp_parallelism": -1,
        "remat_policy": "custom",
        "decoder_layer_input": "offload",
        "query_proj": "offload",
        "key_proj": "offload",
        "value_proj": "offload",
        "max_target_length": 8192,
        "attention": "flash",
        "use_iota_embed": True,
        "dataset_path": "/tmp/dataset",
        "dataset_type": "grain",
        "grain_train_files": "/tmp/dataset/array-record/c4/en/3.0.1/c4-train.array_record*",
        "grain_worker_count": 24,
        "enable_checkpointing": True,
        "async_checkpointing": True,
        "checkpoint_period": 20,
        "sa_block_q": 2048,
        "sa_block_kv": 2048,
        "sa_block_kv_compute": 2048,
        "sa_block_q_dkv": 2048,
        "sa_block_kv_dkv": 2048,
        "sa_block_kv_dkv_compute": 2048,
        "sa_block_q_dq": 2048,
        "sa_block_kv_dq": 2048,
        "sa_use_fused_bwd_kernel": True,
    },
    xla_flags=(
        xla_flags_library.DENSE_VMEM_LIMIT_FLAG
        + xla_flags_library.LAYOUT_FOR_ALL_REDUCE_SCATTER
        + xla_flags_library.DATA_PARALLEL_OVERLAP
        + xla_flags_library.CF_FOR_ALL_GATHER
        + xla_flags_library.HOST_OFFLOAD_FLAGS
        + xla_flags_library.ENABLE_SPARSECORE_OFFLOADING_FOR_ALL_REDUCE
        + " --xla_tpu_iova_dma_chunk_size_bytes=104857"
    ),
)
```

This equivalent workload code can be found in the [maxtext_trillium_model_configs.py](https://github.com/AI-Hypercomputer/maxtext/blob/1e4d513ad70dd4074d975a9f7936295008d4b900/benchmarks/maxtext_trillium_model_configs.py#L1103-L1146) file within the MaxText repository.
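As a sanity check on the tuning parameters above, here is a small back-of-the-envelope sketch, assuming a single v6e-256 slice (256 chips); these are derived numbers, not measured throughput:

```python
# Derived from the tuning_params above; assumes one v6e-256 slice
# (256 chips). Illustrative bookkeeping only, not measured performance.
CHIPS = 256
SEQ_LEN = 8192  # max_target_length

def tokens_per_step(per_device_batch_size: int) -> int:
    # global batch (sequences per step) = per-device batch size * chips
    return per_device_batch_size * CHIPS * SEQ_LEN

print(tokens_per_step(4))  # synthetic workload: 8388608 tokens/step
print(tokens_per_step(2))  # grain + checkpointing workload: 4194304 tokens/step

# The synthetic workload's xplane profiler skips the first 10 steps and
# then captures 5, i.e. steps 10 through 14 of the 100-step run.
profiled_steps = list(range(10, 10 + 5))
print(profiled_steps)  # [10, 11, 12, 13, 14]
```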

`checkpoint_pvc.yaml`:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: checkpoint-bucket-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 64Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gcsfuse-sc # dummy storage class
  claimRef:
    namespace: default
    name: checkpoint-bucket-pvc
  mountOptions:
    - metadata-cache:ttl-secs:-1
    - metadata-cache:negative-ttl-secs:0
    - metadata-cache:stat-cache-max-size-mb:-1
    - metadata-cache:type-cache-max-size-mb:-1
    - file-cache:enable-parallel-downloads:false
    - file-system:kernel-list-cache-ttl-secs:0
    - write:enable-streaming-writes:true
    - file-system:precondition-errors:false
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: checkpoint-bucket-name # Update with your checkpoint bucket name
    volumeAttributes:
      gcsfuseMetadataPrefetchOnMount: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: checkpoint-bucket-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 64Gi
  volumeName: checkpoint-bucket-pv
  storageClassName: gcsfuse-sc # dummy storage class
```

`dataset_pvc.yaml`:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dataset-bucket-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 64Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gcsfuse-sc # dummy storage class
  claimRef:
    namespace: default
    name: dataset-bucket-pvc
  mountOptions:
    - metadata-cache:ttl-secs:-1
    - metadata-cache:stat-cache-max-size-mb:-1
    - metadata-cache:type-cache-max-size-mb:-1
    - file-cache:enable-parallel-downloads:false
    - file-system:kernel-list-cache-ttl-secs:-1
    - write:enable-streaming-writes:true
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: dataloading-bucket-name # Update with your bucket name
    volumeAttributes:
      gcsfuseMetadataPrefetchOnMount: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dataset-bucket-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 64Gi
  volumeName: dataset-bucket-pv
  storageClassName: gcsfuse-sc # dummy storage class
```