Large language models (LLMs) are advancing intelligent writing systems from local text continuation and language polishing toward long-form structured text generation. However, directly generating full-length academic paper drafts remains challenging due to unclear research objectives, unstable discourse structures, insufficient long-text coherence, and the absence of explicit quality control mechanisms. To address these challenges, we propose MetricDraft, a metric-driven framework for academic paper draft generation. The framework organizes drafting as a closed-loop pipeline comprising research ideation clarification, structural anchoring, section-by-section generation, quality assessment, and feedback-driven revision. Its key components are adversarial research ideation clarification, staged structural anchoring, the PRISM structured metric system, progressive context injection with section-type-aware guided generation (PCI+STAGG), and a metric-feedback-driven generation–evaluation co-optimization mechanism. Experiments show that MetricDraft achieves statistically significant improvements in composite quality scores over one-shot generation, summary-based context passing, and context-accumulation-only baselines. Furthermore, PRISM scores exhibit moderate-to-high positive correlations with expert ratings, providing preliminary evidence that PRISM can serve as an auxiliary reference for draft quality diagnosis and iterative revision. This work reframes academic writing as an adjustable, assessable, and iteratively optimizable long-form structured text generation problem, offering methodological insights for human–AI collaborative writing and the design of intelligent text generation systems.
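To make the closed-loop structure concrete, the sketch below outlines a generate–evaluate–revise loop of the kind the abstract describes. It is purely illustrative: the callables `draft_section`, `score_section`, and `revise_section`, the scalar quality threshold, and the bounded revision rounds are assumptions for exposition, not MetricDraft's actual interfaces or PRISM's actual scoring.

```python
"""Illustrative sketch of a metric-driven draft-evaluate-revise loop.

All names here (draft_section, score_section, revise_section, threshold)
are hypothetical stand-ins, not MetricDraft's real API.
"""

from typing import Callable


def generate_draft(
    sections: list[str],
    draft_section: Callable[[str, str], str],
    score_section: Callable[[str], float],
    revise_section: Callable[[str, float], str],
    threshold: float = 0.8,
    max_rounds: int = 3,
) -> dict[str, str]:
    """Draft each section in order, revising until its score passes the threshold."""
    context = ""  # progressively accumulated context from finished sections (cf. PCI)
    draft: dict[str, str] = {}
    for name in sections:
        text = draft_section(name, context)  # section-type-aware generation (cf. STAGG)
        for _ in range(max_rounds):
            score = score_section(text)  # structured metric evaluation (cf. PRISM)
            if score >= threshold:
                break
            text = revise_section(text, score)  # metric-feedback-driven revision
        draft[name] = text
        context += f"\n## {name}\n{text}"  # inject the finished section as context
    return draft


if __name__ == "__main__":
    # Trivial stubs so the sketch runs end to end; a real system would call an LLM.
    result = generate_draft(
        sections=["Introduction", "Method"],
        draft_section=lambda name, ctx: f"[{name} draft, {len(ctx)} chars of context]",
        score_section=lambda text: 1.0,  # pretend every section passes immediately
        revise_section=lambda text, score: text,
    )
    print(result)
```

The key design choice mirrored here is that evaluation sits inside the per-section loop rather than after the full draft, so metric feedback can steer each section before it becomes context for the next.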