Commit 3df154c: add arxiv link

1 parent 10cc3d5

File tree

1 file changed: +15 additions, −3 deletions


index.html

Lines changed: 15 additions & 3 deletions
@@ -512,7 +512,7 @@ <h1 class="title is-1 publication-title is-bold">
   <div class="publication-links">
   <!-- PDF Link. -->
   <span class="link-block">
-  <a href="https://arxiv.org/abs/2511.xxxxx" class="external-link button is-normal is-rounded is-dark">
+  <a href="https://arxiv.org/abs/2511.17490" class="external-link button is-normal is-rounded is-dark">
   <span class="icon">
   <i class="ai ai-arxiv"></i>
   </span>
@@ -1250,7 +1250,19 @@ <h1 class="title is-1 mathvista_other">
   <div class="column is-four-fifths">
   <div class="content has-text-justified" style="max-width: 100%; margin: 0 auto;">
   <p style="text-align: center;">
-  This work was supported by Sony Group Corporation. We would like to thank Sayaka Nakamura and Jerry Jun Yokono for their insightful discussion.
+  This work was supported by Sony Group Corporation. We would like to thank Sayaka Nakamura and Jerry Jun Yokono for their insightful discussion. We also thank the authors of the following projects for their contributions: <a href="https://github.com/bytedance/VTVQA" target="_blank" style="margin: 0 0px;">M4-ViteVQA</a>,
+  <a href="https://github.com/nttmdlab-nlp/SlideVQA" target="_blank" style="margin: 0 0px;">SlideVQA</a>,
+  <a href="https://github.com/rubenpt91/MP-DocVQA-Framework" target="_blank" style="margin: 0 0px;">MP-DocVQA</a>,
+  <a href="https://github.com/huggingface/open-r1" target="_blank" style="margin: 0 0px;">Open-R1</a>,
+  <a href="https://github.com/OpenRLHF/OpenRLHF" target="_blank" style="margin: 0 0px;">OpenRLHF</a>,
+  <a href="https://github.com/ray-project/ray" target="_blank" style="margin: 0 0px;">Ray</a>,
+  <a href="https://huggingface.co/collections/Qwen/qwen25-vl" target="_blank" style="margin: 0 0px;">Qwen2.5-VL</a>,
+  <a href="https://github.com/tulerfeng/Video-R1" target="_blank" style="margin: 0 0px;">Video-R1</a>,
+  <a href="https://github.com/TIGER-AI-Lab/Pixel-Reasoner" target="_blank" style="margin: 0 0px;">Pixel-Reasoner</a>,
+  <a href="https://github.com/deepseek-ai/DeepSeek-R1" target="_blank" style="margin: 0 0px;">DeepSeek-R1</a>,
+  <a href="https://huggingface.co/datasets/OpenGVLab/MVBench" target="_blank" style="margin: 0 0px;">MVBench</a>,
+  <a href="https://github.com/MME-Benchmarks/Video-MME" target="_blank" style="margin: 0 0px;">Video-MME</a>,
+  <a href="https://videommmu.github.io/" target="_blank" style="margin: 0 0px;">Video-MMMU</a>
   </p>
   </div>
   </div>
@@ -1282,7 +1294,7 @@ <h1 class="title is-1 mathvista_other" id="citation">Citation</h1>
   @article{tang2025mmperspective,
     title={Video-R4: Reinforcing Text-Rich Video Reasoning with Visual Rumination},
     author={Tang, Yolo Yunlong and Shimada, Daiki and Hua, Hang and Huang, Chao and Bi, Jing and Feris, Rogerio and Xu, Chenliang},
-    journal={arXiv preprint arXiv:2511.xxxxx},
+    journal={arXiv preprint arXiv:2511.17490},
     year={2025}
   }
   </code></pre>
