Preprint
Article

This version is not peer-reviewed.

HFR-Prompt: Hierarchical Feedback Reasoning Prompting for Enhanced Large Language Model Comment Feedback Prediction

Submitted: 16 March 2026

Posted: 17 March 2026


Abstract
The accurate prediction of feedback from user comments is essential yet challenging, as traditional Natural Language Processing methods and existing Large Language Model prompting strategies struggle to capture nuanced comment semantics. We propose the Hierarchical Feedback Reasoning Prompting (HFR-Prompt) framework to address this. HFR-Prompt guides Large Language Models through a multi-stage, logically progressive analysis comprising Initial Tendency Assessment, Fine-grained Feedback Type Identification, and Result Integration and Explanation Generation. Each successive stage builds on the contextual understanding established by the previous one. Extensive experiments on a substantial dataset demonstrate that HFR-Prompt significantly outperforms strong LLM baselines and standard prompting techniques in accuracy, Macro-F1 score, and explanation consistency. Although it introduces computational overhead, HFR-Prompt sets a new standard for interpretable and accurate comment feedback prediction, validating the efficacy of structured, hierarchical reasoning in complex LLM applications.
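The three-stage pipeline described in the abstract can be sketched as a chain of prompts in which each stage conditions on the previous stage's output. The prompt wording, the label sets, and the `llm` callable interface below are illustrative assumptions, not the authors' exact implementation:

```python
def stage1_tendency(llm, comment):
    # Stage 1: Initial Tendency Assessment (coarse polarity).
    return llm(f"Step 1 - Overall tendency (positive/negative/neutral) of: {comment}")

def stage2_feedback_type(llm, comment, tendency):
    # Stage 2: Fine-grained Feedback Type Identification, conditioned on Stage 1.
    return llm(f"Step 2 - Given tendency '{tendency}', identify the feedback type "
               f"(e.g. complaint, suggestion, praise, question) in: {comment}")

def stage3_integrate(llm, comment, tendency, feedback_type):
    # Stage 3: Result Integration and Explanation Generation.
    return llm(f"Step 3 - Given tendency '{tendency}' and type '{feedback_type}', "
               f"explain the final feedback prediction for: {comment}")

def hfr_prompt(llm, comment):
    # Run the hierarchical chain: each stage sees the earlier results.
    tendency = stage1_tendency(llm, comment)
    feedback_type = stage2_feedback_type(llm, comment, tendency)
    explanation = stage3_integrate(llm, comment, tendency, feedback_type)
    return {"tendency": tendency,
            "feedback_type": feedback_type,
            "explanation": explanation}

# Deterministic stand-in for an LLM call, so the pipeline can be
# exercised without API access (purely for demonstration).
def toy_llm(prompt):
    if prompt.startswith("Step 1"):
        return "negative"
    if prompt.startswith("Step 2"):
        return "complaint"
    return "The comment is a negative complaint about app stability."

result = hfr_prompt(toy_llm, "The app keeps freezing on startup.")
print(result["tendency"], "/", result["feedback_type"])
```

In a real deployment, `llm` would wrap a call to an actual model, and the three-call structure is the source of the computational overhead the abstract mentions relative to a single-prompt baseline.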
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.