# Building Chinese Reward Models from Scratch: A Practical Guide to CheemsBench and CheemsPreference

## Why Do We Need Dedicated Chinese Reward Models?

In the development of large language models (LLMs), reward models (RMs) act as "value referees" that align AI outputs with human preferences. However, current research faces two critical challenges:

- **Language Bias**: 90% of existing studies focus on English, leaving Chinese applications underserved.
- **Data Reliability**: Synthetic datasets dominate current approaches, failing to capture authentic human preferences.

The Cheems project, a collaboration between the Institute of Software (Chinese Academy of Sciences) and Xiaohongshu, introduces the first comprehensive framework for …