@Baidu-AIAK
Problem Description

Currently in the codebase, even when the low-precision optimizer is enabled, optimizer states are still saved in FP32 format when storing checkpoints. This change saves the optimizer states in low precision instead of FP32 whenever the low-precision optimizer is enabled, reducing disk space usage with no impact on accuracy.

Our Solution

We use a dedicated function to pick the save dtype for each optimizer state, according to the state dtypes specified in the config:

def _get_dtype_by_key(self, key):
    """Return the dtype in which optimizer state `key` is saved in the checkpoint."""
    if key == "param":
        # Master weights are always saved in FP32.
        return torch.float32
    elif key == "exp_avg":
        # Adam first moment: dtype chosen via --exp-avg-dtype.
        return self.config.exp_avg_dtype
    elif key == "exp_avg_sq":
        # Adam second moment: dtype chosen via --exp-avg-sq-dtype.
        return self.config.exp_avg_sq_dtype
    else:
        raise ValueError(f"Invalid key: {key}")
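As a sketch of how this mapping is used on the save path (the function and dict names below are illustrative, not the actual code in this PR), each state tensor is cast to its configured dtype before being written into the checkpoint state dict:

```python
import torch

# Illustrative dtype config, mirroring --exp-avg-dtype bf16 / --exp-avg-sq-dtype bf16.
DTYPE_BY_KEY = {
    "param": torch.float32,        # master weights stay FP32
    "exp_avg": torch.bfloat16,     # Adam first moment
    "exp_avg_sq": torch.bfloat16,  # Adam second moment
}

def cast_state_for_saving(state):
    """Cast each optimizer state tensor to its configured checkpoint dtype."""
    out = {}
    for key, tensor in state.items():
        if key not in DTYPE_BY_KEY:
            raise ValueError(f"Invalid key: {key}")
        out[key] = tensor.to(DTYPE_BY_KEY[key])
    return out

state = {
    "param": torch.zeros(4),
    "exp_avg": torch.zeros(4),
    "exp_avg_sq": torch.zeros(4),
}
saved = cast_state_for_saving(state)
```

With bf16 moments, the two moment tensors occupy half the bytes they would in FP32.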

At the same time, to remain compatible with TransformerEngine, we check the concrete dtype when loading the optimizer states and perform the corresponding dtype conversion in the function _set_main_param_and_optimizer_states:

if k == "param":
    # With --store-param-remainders under bf16 training, master weights are
    # represented as 16-bit remainders rather than full FP32 values.
    if self.config.store_param_remainders and self.config.bf16:
        v = v.to(torch.int16)
    self.optimizer.set_scaled_state(sharded_model_param, "master_param", v)
else:
    # Moments saved in low precision are upcast to FP32 before being handed
    # back to the optimizer.
    if v.dtype != torch.float32:
        v = v.to(torch.float32)
    self.optimizer.set_scaled_state(sharded_model_param, k, v)
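The load path can be sketched the same way (again with illustrative names): any moment tensor found in a lower precision is upcast to FP32 before being handed to the optimizer, so the optimizer's FP32 code paths keep working regardless of the checkpoint dtype:

```python
import torch

def upcast_state_for_loading(state):
    """Upcast non-FP32 moment tensors (e.g. bf16 exp_avg/exp_avg_sq) to FP32."""
    out = {}
    for key, tensor in state.items():
        if key != "param" and tensor.dtype != torch.float32:
            tensor = tensor.to(torch.float32)
        out[key] = tensor
    return out

# A state dict as it might come out of a bf16-moment checkpoint.
loaded = {
    "param": torch.zeros(4, dtype=torch.float32),
    "exp_avg": torch.zeros(4, dtype=torch.bfloat16),
    "exp_avg_sq": torch.zeros(4, dtype=torch.bfloat16),
}
restored = upcast_state_for_loading(loaded)
```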

Experiment

All the experiments below start from random initialization and run for 20 iterations. A checkpoint is saved at the 10th iteration using the corresponding optimizer precision. We then load the checkpoint and resume training from the 11th iteration to verify correctness.
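The resume check described above can be sketched as follows (purely illustrative; the fake loss curve stands in for the real training runs):

```python
def losses_match(baseline, resumed, start_iter=11, tol=0.0):
    """Compare the loss curve of an uninterrupted run against a run resumed
    at `start_iter`, iteration by iteration."""
    for i in range(start_iter - 1, len(baseline)):
        if abs(baseline[i] - resumed[i - (start_iter - 1)]) > tol:
            return False
    return True

baseline = [1.0 - 0.01 * i for i in range(20)]  # fake 20-iteration loss curve
resumed = baseline[10:]                         # iterations 11..20 after resume
print(losses_match(baseline, resumed))  # True
```

With tol=0.0 this demands a bit-exact resume, which is what "no impact on accuracy" means here.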

Parameter settings:

--use-precision-aware-optimizer
--exp-avg-dtype bf16
--exp-avg-sq-dtype bf16
bf16-opt-from-bf16-ckpt: Saving and loading bf16 optimizer states has no impact on accuracy.

fp32-opt-from-fp32-ckpt: Saving and loading fp32 optimizer states has no impact on accuracy.

For the dsv2lite model, after saving the optimizer states in BF16, the disk usage is reduced from 222 GB to 164 GB, a reduction of approximately 27%.
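As a rough sanity check on that number (a sketch assuming Adam keeps three per-parameter states, with master weights remaining FP32 in the checkpoint):

```python
# Bytes per parameter for each Adam state in the checkpoint.
FP32_BYTES = 4
BF16_BYTES = 2

# Before: param, exp_avg, exp_avg_sq all saved in FP32.
before = 3 * FP32_BYTES
# After: the two moments saved in bf16, master weights still FP32.
after = FP32_BYTES + 2 * BF16_BYTES

optimizer_saving = 1 - after / before
print(f"optimizer-state reduction: {optimizer_saving:.0%}")  # 33%

# Observed whole-checkpoint reduction for dsv2lite (222 GB -> 164 GB); lower
# than 33% because the checkpoint also contains non-optimizer content.
observed = 1 - 164 / 222
print(f"checkpoint reduction: {observed:.0%}")  # 26%
```

So the reported ~27% is in the expected range given the 33% upper bound for the optimizer states alone.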

Baidu-AIAK requested review from a team as code owners, December 31, 2025 08:29
copy-pr-bot commented Dec 31, 2025:

This pull request requires additional validation before any workflows can run on NVIDIA's runners.


github-actions bot requested a review from Phlip79, December 31, 2025 08:29
BestJuly added the "dev branch" label (Dev branch related issues and development), Jan 2, 2026
yaox12 added the "Expert Review" label (PR ready for expert review), Jan 4, 2026