RL is very sensitive to numerics; last time torch compile caused some runs to crash, and now it's vLLM v1.

August 12, 11:23
Moving from vLLM v0 to v1 made our async RL training crash! Read how we fixed it.
We recently migrated from vLLM v0 to v1 as part of a larger refactor of prime-rl to make it easier to use, more performant, and naturally async. We confirmed correct training dynamics on many smaller-scale runs, but hit a wall when trying to reproduce a larger-scale run that had completed without problems before the refactor. Specifically, training DeepSeek-R1-Distill-Qwen-1.5B on single-turn math problems from our INTELLECT-2 math dataset at 8k context with a two-step off-policy delay would crash fatally roughly 400 steps into training.
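To make the setup concrete, here is a minimal sketch of what a two-step off-policy delay means in an async pipeline: inference runs ahead of the trainer, so the rollouts consumed at optimizer step `s` were generated with weights that are (in steady state) two updates old. This is an illustrative simulation, not prime-rl code; the function name and queue model are assumptions.

```python
from collections import deque

def simulate_off_policy_lag(num_steps: int, delay: int = 2) -> list[int]:
    """Simulate the policy-version lag of a `delay`-step off-policy pipeline.

    Each queue entry records which weight version generated that rollout
    batch. The inference side prefills `delay` batches with the initial
    weights, then produces one new batch per trainer step using the
    pre-update weights, so the steady-state lag settles at `delay`.
    """
    # Prefill: inference runs ahead with the initial weights (version 0).
    queue = deque(0 for _ in range(delay))
    lags = []
    for step in range(num_steps):
        batch_version = queue.popleft()   # oldest pending rollout batch
        lags.append(step - batch_version) # how off-policy this batch is
        queue.append(step)                # next batch uses pre-update weights
    return lags

# First steps warm up, then the lag holds at the configured delay.
print(simulate_off_policy_lag(6))  # → [0, 1, 2, 2, 2, 2]
```

The warm-up steps have a smaller lag because the prefilled batches came from the initial weights; after that, every batch the trainer sees is exactly two policy versions stale.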
