Preprint
Article

This version is not peer-reviewed.

MetaThink: Empowering Large Reasoning Models with Adaptive Self-Correction at Inference Time

Submitted:

09 March 2026

Posted:

10 March 2026


Abstract
Large Reasoning Models (LRMs) face a fundamental challenge in balancing efficient "fast thinking" with accurate "slow thinking," often struggling to adaptively trigger deeper reasoning without incurring significant computational overhead. This paper introduces \( \textit{MetaThink (MT)} \), a novel inference-time adaptive refinement framework designed to imbue LRMs with conditional self-correction capabilities, without requiring any additional training. \( \textit{MetaThink} \) operates by an initial "fast thinking" phase, followed by a lightweight self-monitoring mechanism that assesses confidence through uncertainty markers. When low confidence or potential errors are detected, a refinement token triggers a targeted "slow thinking" phase, guided by domain-specific prompts. This allows the model to introspectively review and correct its reasoning, culminating in a more accurate final answer. Our comprehensive evaluation across diverse and challenging benchmarks—spanning mathematical reasoning, code generation, and scientific problem-solving tasks—demonstrates that \( \textit{MetaThink} \) consistently achieves substantial and robust improvements in Pass@1 accuracy. Crucially, these gains are realized while maintaining competitive or even improved inference efficiency, outperforming existing inference-time baselines. Our findings underscore that \( \textit{MetaThink} \) offers an effective, training-free approach to enhance the reliability and accuracy of LRMs in complex reasoning tasks by striking a superior balance between performance and efficiency.
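The abstract describes a three-stage loop: a fast-thinking pass, a lightweight confidence check based on uncertainty markers, and a conditional slow-thinking refinement pass. The sketch below illustrates that control flow under stated assumptions; the marker list, the `<refine>` token, the domain prompts, and the `generate` callable are all hypothetical stand-ins, not the authors' actual implementation.

```python
# Hypothetical sketch of the MetaThink inference loop from the abstract.
# All names (UNCERTAINTY_MARKERS, REFINE_PROMPTS, the <refine> token, and
# the generate callable) are illustrative assumptions, not the paper's code.

UNCERTAINTY_MARKERS = ("maybe", "not sure", "i think", "possibly", "unclear")

REFINE_PROMPTS = {
    "math": "Re-derive each step and verify the arithmetic before answering.",
    "code": "Trace the program on a small input and fix any bug you find.",
    "science": "Check each claim against the governing principles, then answer.",
}

def is_low_confidence(reasoning: str) -> bool:
    """Self-monitoring: flag a fast answer that contains uncertainty markers."""
    text = reasoning.lower()
    return any(marker in text for marker in UNCERTAINTY_MARKERS)

def metathink(generate, question: str, domain: str = "math") -> str:
    """Fast pass first; trigger a refinement pass only when confidence is low."""
    fast = generate(question)                      # "fast thinking" phase
    if not is_low_confidence(fast):
        return fast                                # accept the cheap answer
    # "Slow thinking": refinement token plus a domain-specific prompt.
    refine_prompt = (f"{question}\n<refine> Your draft was: {fast}\n"
                     f"{REFINE_PROMPTS[domain]}")
    return generate(refine_prompt)                 # corrected final answer

# Toy generator standing in for an LRM call.
def toy_model(prompt: str) -> str:
    if "<refine>" in prompt:
        return "After re-deriving: 2 + 2 = 4."
    return "I think the answer is maybe 5."

print(metathink(toy_model, "What is 2 + 2?"))
```

Because the refinement pass fires only on low-confidence drafts, confident fast answers incur no extra model call, which is how the framework keeps inference cost competitive.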
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated