refactor: split train and val dataset in response dataset #1649
base: main
Conversation
terrykong left a comment:
some initial thoughts
since it's a big PR, @ashors1 could you help as a second reviewer?
```yaml
output_key: generated_solution
split: train_1M
seed: 42
split_validation_size: 0.05
```
i kind of feel we shouldn't split on the fly; it makes reproduction potentially problematic. i think it's better that each dataset is static at the time of running
I think it's reproducible since it'll use the seed when calling `train_test_split`. Actually we also used this before, we just didn't expose the `split_validation_size` param.
```python
split_ds = original_ds.train_test_split(test_size=test_size, seed=seed)
```
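For illustration, a minimal sketch (assuming a Hugging Face `datasets.Dataset`, which the `train_test_split` call suggests) showing that a fixed seed makes the split fully deterministic across runs:

```python
from datasets import Dataset

# toy dataset standing in for the real training data
original_ds = Dataset.from_dict({"prompt": [f"q{i}" for i in range(100)]})

# same seed -> identical partitions on every call
split_a = original_ds.train_test_split(test_size=0.05, seed=42)
split_b = original_ds.train_test_split(test_size=0.05, seed=42)
assert split_a["train"]["prompt"] == split_b["train"]["prompt"]
assert split_a["test"]["prompt"] == split_b["test"]["prompt"]
```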
btw for the seed, which one do you think is better?
- remove `seed` from the data config and pass it through `load_response_dataset` using `config["grpo"]["seed"]`.
- keep `seed` in the data config and inherit it from `${grpo.seed}` (sketched below).
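If the second option were chosen, the config could look like this (a sketch assuming OmegaConf-style interpolation, which the `${grpo.seed}` syntax suggests; keys mirror the snippet above):

```yaml
grpo:
  seed: 42

data:
  train:
    split: train_1M
    split_validation_size: 0.05
    # inherit the run-level seed so the split stays reproducible
    seed: ${grpo.seed}
```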
```python
prompt_file: NotRequired[str | None]
system_prompt_file: NotRequired[str | None]
```
i see now that there are two. should we just remove this outer one to avoid dealing with surprising precedence issues if someone forgets to set one?
let's discuss it here: #1649 (comment).
and even if we have a default like I said in that conversation, we still need to keep it for now, because this PR only refactors the response dataset; the preference dataset will still need to use it.
```python
assert hasattr(data, "processor"), "Dataset must have a processor attribute"
task_data_processors[task_name] = (task_spec, data.processor)
# setup train dataset
update_single_dataset_config(data_config["train"], data_config)
```
wdyt about just expecting users to populate the train config? then we don't have dup keys
I think we should have a default value, especially when we support multiple datasets in the next PR; otherwise people need to write the same things for every dataset and the data config gets a bit redundant.
And I'm wondering whether it's better to provide a `default` block alongside `train` and `validation`; it seems more direct than just putting the defaults outside. wdyt?
```yaml
# now
data:
  train:
    # this dataset will override prompt_key and use the default values for other vars
    - data_path: /path/to/local/train_dataset_1.jsonl
      prompt_key: question
    # this dataset will use all the default values
    - data_path: /path/to/local/train_dataset_2.jsonl
  validation:
    - data_path: /path/to/local/val_dataset.jsonl
  # will use below vars as default values if dataset doesn't specify it
  dataset_name: BinaryPreferenceDataset
  prompt_key: prompt
  chosen_key: chosen
  rejected_key: rejected
  prompt_file: null
  system_prompt_file: null
  env_name: math
```
```yaml
# add `default`
data:
  train:
    # this dataset will override prompt_key and use the default values for other vars
    - data_path: /path/to/local/train_dataset_1.jsonl
      prompt_key: question
    # this dataset will use all the default values
    - data_path: /path/to/local/train_dataset_2.jsonl
  validation:
    - data_path: /path/to/local/val_dataset.jsonl
  default:
    # will use below vars as default values if dataset doesn't specify it
    dataset_name: BinaryPreferenceDataset
    prompt_key: prompt
    chosen_key: chosen
    rejected_key: rejected
    prompt_file: null
    system_prompt_file: null
    env_name: math
```
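For context, here is a minimal sketch (a hypothetical implementation, not the PR's actual code) of how `update_single_dataset_config` could fill missing per-dataset keys from such a `default` block:

```python
def update_single_dataset_config(dataset_configs: list[dict], data_config: dict) -> None:
    """Fill missing keys of each dataset entry from the shared defaults.

    Sketch only: per-dataset values win; the `default` block just fills gaps.
    """
    defaults = data_config.get("default", {})
    for dataset_config in dataset_configs:
        for key, value in defaults.items():
            dataset_config.setdefault(key, value)


data_config = {
    "train": [
        {"data_path": "train_dataset_1.jsonl", "prompt_key": "question"},
        {"data_path": "train_dataset_2.jsonl"},
    ],
    "default": {"prompt_key": "prompt", "env_name": "math"},
}
update_single_dataset_config(data_config["train"], data_config)
# train_dataset_1 keeps prompt_key="question"; train_dataset_2 gets "prompt";
# both entries now have env_name="math"
```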
```python
    "tulu3_sft_mixture",
]:
    base_dataset.set_processor()
```
do you think we need to keep this? it kind of seems like we could do without it
some datasets are associated with a processor (e.g., helpsteer3), so we need to keep it for now.
I think we won't need this eventually; as designed, I'll make the processor associated with the algorithm instead of the dataset in a later PR.
track it here #1658.
| """Loads response dataset.""" | ||
| dataset_name = data_config["dataset_name"] | ||
|
|
||
| # TODO @yukih: remove duplicated dataset_name (openmathinstruct2, clevr_cogent) |
what was this comment referring to? not sure i follow from the changes you made
```python
self.task_name = "oasst"

# load from huggingface
filename = hf_hub_download(
```
this looks a lot cleaner :)
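For reference, a minimal sketch of the pattern being praised: download a single data file from the Hugging Face Hub instead of cloning the whole dataset repo. The repo id and filename below are illustrative placeholders, not necessarily the values used in the PR:

```python
from huggingface_hub import hf_hub_download

# fetch one file from a dataset repo on the Hub (cached locally)
filename = hf_hub_download(
    repo_id="OpenAssistant/oasst1",  # placeholder repo id
    filename="2023-04-12_oasst_ready.trees.jsonl.gz",  # placeholder file
    repo_type="dataset",
)
```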
```python
[
    ("clevr-cogent", format_clevr_cogent_dataset),
    ("geometry3k", format_geometry3k_dataset),
    # ("refcoco", format_refcoco_dataset), # this needs download 13.5G image
```
@terrykong shall we enable this?
Signed-off-by: Yuki Huang <[email protected]>
Signed-off-by: Rayen <[email protected]>
Related issue: #1050
Refactor the datasets under `nemo_rl/data/datasets/response_datasets/` into a similar format, and remove the duplicated dataset names `clevr_cogent` and `openmathinstruct2`.

New Param

Add a new param `split_validation_size` to handle the case where one dataset is used for both training and validation (e.g., `OpenMathInstruct-2` in `examples/configs/grpo_math_1B.yaml`):
- If `data.train.split_validation_size > 0` and `data.validation` is None, part of the training dataset will be used as the validation dataset.
- If `data.train.split_validation_size > 0` and `data.validation` is not None, both the held-out part of the training dataset and the provided validation dataset will be used for validation.

Usage
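A sketch of how this could be configured, using illustrative values drawn from the snippets above (the exact layout in `grpo_math_1B.yaml` may differ):

```yaml
data:
  train:
    dataset_name: openmathinstruct2
    split: train_1M
    seed: 42
    # hold out 5% of the training data for validation
    split_validation_size: 0.05
  # validation: null    -> the 5% held-out slice is the validation set
  # validation: [ ... ] -> held-out slice + provided datasets are both used
```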
Test Result
Summary by CodeRabbit
Release Notes
New Features
- `train` and `validation` blocks in data settings

Documentation
Bug Fixes & Improvements