docs: add LoRA fine-tuning tutorial#3601

Draft
chiajunglien wants to merge 2 commits into AI-Hypercomputer:jackyf/feat/lora-nnx from CIeNET-International:emma/lora-tutorial-final

Conversation

@chiajunglien

Description

Start with a short description of what the PR does and how this is a change from
the past.

The rest of the description includes relevant details and context, examples:

  • why is this change being made,
  • the problem being solved and any relevant context,
  • why this is a good solution,
  • some information about the specific implementation,
  • shortcomings of the solution and possible future improvements.

If the change fixes a bug or a GitHub issue, please include a link, e.g.:
FIXES: b/123456
FIXES: #123456

Notice 1: Once all tests pass, the "pull ready" label will automatically be assigned.
This label is used for administrative purposes. Please do not add it manually.

Notice 2: For external contributions, our settings currently require an approval from a MaxText maintainer to trigger CI tests.

Tests

Please describe how you tested this change, and include any instructions and/or
commands to reproduce.

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@@ -0,0 +1,128 @@
<!--
Copyright 2023–2025 Google LLC
Collaborator:

please update to 2026


If you want to resume training from a previous run or further fine-tune an existing LoRA adapter, you can specify the LoRA checkpoint path.
- **load_parameters_path**: Points to the frozen base model weights (the original model).
- **lora_restore_path**: Points to the previous LoRA adapter weights you wish to load.
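
As a sketch, these two settings can be exported before launching; the bucket and run names below are hypothetical placeholders, not paths from this PR:

```shell
# Hypothetical GCS paths for illustration only.
# Frozen base model weights (the original model):
export MAXTEXT_CKPT_PATH=gs://my-bucket/base-model/0/items
# Previous LoRA adapter weights to resume from:
export LORA_RESTORE_PATH=gs://my-bucket/lora-run-1/checkpoints/100/items
```

These would then be passed to the trainer as `load_parameters_path="${MAXTEXT_CKPT_PATH}"` and `lora_restore_path="${LORA_RESTORE_PATH}"`.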
Collaborator:

please mention the usage of the hf_lora_to_maxtext script

scan_layers=True
```

Your fine-tuned model checkpoints will be saved here: `$BASE_OUTPUT_DIRECTORY/$RUN_NAME/checkpoints`.
\ No newline at end of file
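
As a quick sanity check after a run, the checkpoint prefix can be assembled and inspected; the values below are hypothetical and `gsutil` is assumed to be installed for the listing step:

```shell
# Hypothetical run settings for illustration:
BASE_OUTPUT_DIRECTORY=gs://my-bucket/lora-runs
RUN_NAME=lora-demo
CKPT_DIR="${BASE_OUTPUT_DIRECTORY}/${RUN_NAME}/checkpoints"
echo "${CKPT_DIR}"
# prints: gs://my-bucket/lora-runs/lora-demo/checkpoints
# To inspect the saved steps, gsutil can list the prefix:
#   gsutil ls "${CKPT_DIR}/"
```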
Collaborator:

Also document the usage of the maxtext_lora_to_hf script


```sh
# -- Model configuration --
export MODEL_NAME=<MaxText Model> # e.g., 'llama3.1-8b-Instruct'
Collaborator:

could we use PRE_TRAINED_MODEL to align with sft.md?

If you already have a MaxText-compatible model checkpoint, simply set the following environment variable and move on to the next section.

```sh
export MAXTEXT_CKPT_PATH=<gcs path for MaxText checkpoint> # e.g., gs://my-bucket/my-model-checkpoint/0/items
Collaborator:

could we use PRE_TRAINED_MODEL_CKPT_PATH to align with sft.md?

Refer to the steps in [Hugging Face to MaxText](https://maxtext.readthedocs.io/en/maxtext-v0.2.1/guides/checkpointing_solutions/convert_checkpoint.html#hugging-face-to-maxtext) to convert a Hugging Face checkpoint to MaxText. Make sure you have the correct checkpoint files converted and saved. As with Option 1, you can set the following environment variable and move on.

```sh
export MAXTEXT_CKPT_PATH=<gcs path for MaxText checkpoint> # e.g., gs://my-bucket/my-model-checkpoint/0/items
Collaborator:

also naming alignment

run_name="${RUN_NAME}" \
base_output_directory="${BASE_OUTPUT_DIRECTORY}" \
model_name="${MODEL_NAME}" \
load_parameters_path="${MAXTEXT_CKPT_PATH}" \
Collaborator:

also naming alignment


### Option 2: Converting a Hugging Face checkpoint

Refer the steps in [Hugging Face to MaxText](https://maxtext.readthedocs.io/en/maxtext-v0.2.1/guides/checkpointing_solutions/convert_checkpoint.html#hugging-face-to-maxtext) to convert a hugging face checkpoint to MaxText. Make sure you have correct checkpoint files converted and saved. Similar as Option 1, you can set the following environment and move on.
Collaborator:

Refer "to" the steps

Comment on lines +100 to +126
```sh
python3 -m maxtext.trainers.post_train.sft.train_sft \
maxtext/configs/post_train/sft.yml \
run_name="${RUN_NAME}" \
base_output_directory="${BASE_OUTPUT_DIRECTORY}" \
model_name="${MODEL_NAME}" \
load_parameters_path="${MAXTEXT_CKPT_PATH}" \
lora_restore_path="${LORA_RESTORE_PATH}" \
hf_access_token="${HF_TOKEN}" \
hf_path="${DATASET_NAME}" \
train_split="${TRAIN_SPLIT}" \
hf_data_dir="${HF_DATA_DIR}" \
train_data_columns="${TRAIN_DATA_COLUMNS}" \
steps="${STEPS}" \
per_device_batch_size="${PER_DEVICE_BATCH_SIZE}" \
max_target_length="${MAX_TARGET_LENGTH}" \
learning_rate="${LEARNING_RATE}" \
weight_dtype="${WEIGHT_DTYPE}" \
dtype="${DTYPE}" \
profiler=xplane \
enable_nnx=True \
pure_nnx_decoder=True \
enable_lora=True \
lora_rank="${LORA_RANK}" \
lora_alpha="${LORA_ALPHA}" \
scan_layers=True
```
Collaborator:

Safety Syntax: Use the ${VAR?} syntax in the training command to ensure users don't run the script with missing configuration.
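
The reviewer's suggestion can be sketched with plain POSIX parameter expansion; `RUN_NAME` here is just an illustrative variable, not a change from the PR:

```shell
# ${VAR?message} aborts the command with the message when VAR is
# unset, instead of silently expanding to an empty string.
unset RUN_NAME
# This guard would stop a training launch before it starts:
#   run_name="${RUN_NAME?export RUN_NAME before launching training}"
# With the variable set, the expansion passes the value through:
RUN_NAME=lora-demo
echo "run_name=${RUN_NAME?export RUN_NAME before launching training}"
# prints: run_name=lora-demo
```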


```sh
python3 -m maxtext.trainers.post_train.sft.train_sft \
maxtext/configs/post_train/sft.yml \
Collaborator:

please remove maxtext/configs/post_train/sft.yml; maxtext can detect the correct path



2 participants