Accurately predicting feedback from user comments is essential yet challenging, as the nuanced semantics involved are difficult for traditional Natural Language Processing methods and existing Large Language Model (LLM) prompting approaches to capture. To address this, we propose the Hierarchical Feedback Reasoning Prompting (HFR-Prompt) framework. HFR-Prompt guides LLMs through a multi-stage, logically progressive analysis comprising Initial Tendency Assessment, Fine-grained Feedback Type Identification, and Result Integration and Explanation Generation, where each successive stage builds on the contextual understanding established by the previous one. Extensive experiments on a substantial dataset demonstrate that HFR-Prompt significantly outperforms strong LLM baselines and standard prompting techniques in accuracy, Macro-F1 score, and, crucially, explanation consistency. Although it introduces additional computational overhead, HFR-Prompt sets a new standard for interpretable and accurate comment feedback prediction, validating the efficacy of structured, hierarchical reasoning in complex LLM applications.
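As a rough illustration of the hierarchical structure described above, a minimal sketch might chain the three stages so that each stage's output is passed into the next prompt. The prompt wording, the label sets, and the generic `llm` text-in/text-out callable are assumptions made for illustration; they are not the paper's exact prompts or model interface.

```python
from typing import Callable

def hfr_prompt(comment: str, llm: Callable[[str], str]) -> dict:
    """Sketch of a three-stage hierarchical feedback-reasoning pipeline.

    `llm` is any text-in/text-out completion function (an assumption;
    the abstract does not prescribe a specific model interface).
    """
    # Stage 1: Initial Tendency Assessment — coarse polarity of the comment.
    tendency = llm(
        "Assess the overall feedback tendency (positive / negative / neutral) "
        f"of the following user comment:\n{comment}"
    )

    # Stage 2: Fine-grained Feedback Type Identification, conditioned on Stage 1.
    feedback_type = llm(
        f"The comment's overall tendency is: {tendency}\n"
        "Identify the fine-grained feedback type (e.g. feature request, bug report, "
        f"praise, complaint) expressed in:\n{comment}"
    )

    # Stage 3: Result Integration and Explanation Generation — combine the prior
    # stages into a final prediction plus a natural-language justification.
    explanation = llm(
        f"Given tendency '{tendency}' and feedback type '{feedback_type}', "
        "state the final feedback prediction for the comment below and explain "
        f"the reasoning in one short paragraph:\n{comment}"
    )

    return {
        "tendency": tendency,
        "feedback_type": feedback_type,
        "final_answer_with_explanation": explanation,
    }
```

Each call forwards the previous stage's output, mirroring the framework's requirement that later stages build on the contextual understanding established earlier.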