Hi @yunlong10 🤗
Niels here from the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers as yours got featured: https://huggingface.co/papers/2511.17490.
The paper page lets people discuss your paper and find its artifacts (your models, datasets, or a demo, for instance). You can also claim the paper as yours, which will show it on your public HF profile, and add GitHub and project page URLs.
I saw from your GitHub repository that the code, model (Video-R4-7B), and datasets (Video-R4-CoT-17k, Video-R4-RL-30k) are "coming soon", and you even have placeholder links for Hugging Face! That's fantastic.
It'd be great to make the Video-R4-7B model checkpoint and the Video-R4-CoT-17k and Video-R4-RL-30k datasets available on the 🤗 hub once they are released, to improve their discoverability/visibility.
We can add tags so that people find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.
Uploading models
See here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, we could leverage the PyTorchModelHubMixin class, which adds from_pretrained and push_to_hub to any custom nn.Module. Alternatively, one can use the hf_hub_download one-liner to download a checkpoint from the hub.
We encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page. Given Video-R4-7B is a video reasoning LMM (video+text input, text output), its pipeline tag would likely be video-text-to-text.
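To make the mixin concrete, here's a minimal sketch. The wrapper class and repo id (`your-org/Video-R4-7B`) are placeholders, since the actual Video-R4 architecture isn't released yet; the point is just that inheriting from `PyTorchModelHubMixin` gives any `nn.Module` the hub save/load/push methods for free:

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class VideoR4Wrapper(nn.Module, PyTorchModelHubMixin):
    """Hypothetical stand-in for the Video-R4-7B model class."""

    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.proj(x)


model = VideoR4Wrapper()

# The mixin adds these for free:
# model.push_to_hub("your-org/Video-R4-7B")            # upload once ready
# model = VideoR4Wrapper.from_pretrained("your-org/Video-R4-7B")  # download
```

Config args passed to `__init__` are serialized automatically, so `from_pretrained` can re-instantiate the model with the same hyperparameters.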
Uploading datasets
Would be awesome to make the datasets available on the 🤗 hub, so that people can do:
```python
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")
```

See here for a guide: https://huggingface.co/docs/datasets/loading.
Besides that, there's the dataset viewer which allows people to quickly explore the first few rows of the data in the browser. For Video-R4-CoT-17k and Video-R4-RL-30k, as they are for text-rich video reasoning, their task categories would likely be video-text-to-text.
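For reference, the task category is just a line in the YAML metadata block at the top of the dataset card (README.md). A minimal sketch for one of the datasets might look like this (license and language are placeholders you'd fill in):

```yaml
---
task_categories:
  - video-text-to-text
language:
  - en
license: apache-2.0   # placeholder, use your actual license
pretty_name: Video-R4-CoT-17k
---
```

With that in place, the dataset shows up when people filter by task on https://huggingface.co/datasets.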
Let me know if you're interested/need any help regarding this once your artifacts are ready for release!
Cheers,
Niels
ML Engineer @ HF 🤗