MoReBench: Evaluating Procedural and Pluralistic Moral Reasoning in Language Models, More than Outcomes

Yu Ying Chiu*1,2, Michael S. Lee*3,
Rachel Calcott4, Brandon Handoko3, Paul de Font-Reaulx5, Paula Rodriguez3, Chen Bo Calvin Zhang3, Ziwen Han†3, Udari Madhushani Sehwag3, Yash Maurya,
Christina Knight3, Harry Lloyd6, Florence Bacus4,
Mantas Mazeika7, Bing Liu3, Yejin Choi8, Mitchell Gordon9, Sydney Levine2,4,9
1 University of Washington 2 New York University 3 Scale AI 4 Harvard University 5 University of Michigan
6 UNC Chapel Hill 7 Center for AI Safety 8 Stanford University 9 MIT
* Indicates Equal Contribution   † Work done while at Scale AI
Figure: Pipeline of MoReBench

Abstract

As AI systems progress, we rely on them more to make decisions with us and for us. To ensure that such decisions are aligned with human values, it is imperative to understand not only what decisions they make but also how they come to those decisions. Reasoning language models, which provide both final responses and (partially transparent) intermediate thinking traces, present a timely opportunity to study AI procedural reasoning. Unlike math and code problems, which often have objectively correct answers, moral dilemmas are an excellent testbed for process-focused evaluation because they allow for multiple defensible conclusions. To this end, we present MoReBench: 1,000 moral scenarios, each paired with a set of rubric criteria that experts consider essential to include (or avoid) when reasoning about the scenario. MoReBench contains over 23 thousand criteria, spanning identifying moral considerations, weighing trade-offs, and giving actionable recommendations, and covers cases where AI advises humans on moral decisions as well as cases where it makes moral decisions autonomously. Separately, we curate MoReBench-Theory: 150 examples that test whether AI can reason under five major frameworks in normative ethics. Our results show that scaling laws and existing benchmarks on math, code, and scientific reasoning tasks fail to predict models' abilities to perform moral reasoning. Models also show partiality towards specific moral frameworks (e.g., Benthamite Act Utilitarianism and Kantian Deontology), which may be a side effect of popular training paradigms. Together, these benchmarks advance process-focused reasoning evaluation towards safer and more transparent AI.
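For intuition, the rubric-based evaluation can be pictured roughly as follows. This is a minimal hypothetical sketch in Python, not the authors' pipeline: the Criterion/Scenario field names, the judge grader, and the equal weighting of criteria are illustrative assumptions rather than the paper's actual schema or scoring code.

# Illustrative sketch only: field names, the `judge` callable, and equal-weight
# scoring are assumptions for exposition, not MoReBench's actual pipeline.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    description: str   # e.g., "weighs honesty against loyalty to a friend"
    polarity: str      # "include" (should appear in the reasoning) or "avoid"

@dataclass
class Scenario:
    dilemma: str               # the moral scenario posed to the model
    criteria: list[Criterion]  # expert-written rubric items for this scenario

def score_trace(trace: str, scenario: Scenario,
                judge: Callable[[str, str], bool]) -> float:
    """Score one model reasoning trace against a scenario's rubric.

    `judge` is any checker (e.g., an LLM grader) that returns True when the
    criterion's description is satisfied by the trace.
    """
    points = 0
    for c in scenario.criteria:
        present = judge(trace, c.description)
        # "include" criteria earn credit when present; "avoid" criteria when absent.
        if (c.polarity == "include") == present:
            points += 1
    return points / len(scenario.criteria)

In practice the judge would itself likely be a language model prompted with the criterion and the reasoning trace, and criteria could carry per-item weights; the equal-weight average above is only for illustration.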


Dataset Description


Interactive Model Comparison

Compare how different AI models reason through moral dilemmas


Leaderboard

coming soon...

BibTeX

@misc{chiu2025morebenchevaluatingproceduralpluralistic,
  title={MoReBench: Evaluating Procedural and Pluralistic Moral Reasoning in Language Models, More than Outcomes},
  author={Yu Ying Chiu and Michael S. Lee and Rachel Calcott and Brandon Handoko and Paul de Font-Reaulx and Paula Rodriguez and Chen Bo Calvin Zhang and Ziwen Han and Udari Madhushani Sehwag and Yash Maurya and Christina Q Knight and Harry R. Lloyd and Florence Bacus and Mantas Mazeika and Bing Liu and Yejin Choi and Mitchell L Gordon and Sydney Levine},
  year={2025},
  eprint={2510.16380},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.16380},
}