To address the core limitations of a single large language model in complex task solving, namely coarse task decomposition, error accumulation in long-chain reasoning, and the absence of explicit cross-agent collaboration, this paper proposes a multi-agent collaborative method driven by large language models. A hierarchical, role-based agent architecture separates task decomposition, specialized reasoning, result verification, and decision fusion, enabling modular task solving and closed-loop orchestration of the full execution process. In addition, an efficient semantic communication mechanism transmits compressed reasoning states between agents without breaking intermediate logical dependencies, and a dynamic feedback iteration module adjusts the routing strategy, collaboration intensity, and reasoning paths in real time according to subtask progress and verification outcomes. Comparative experiments on mathematical reasoning, multi-step planning, and complex information integration show that, relative to a single large language model, the proposed method improves the average task completion rate by 21.3%, reduces the long-chain reasoning error rate by 18.7%, and achieves 92.6% cross-agent decision consistency. These results demonstrate that structured collaboration substantially improves the robustness and accuracy of complex task solving and offers a practical technical path for a range of intelligent systems.
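The role-based pipeline with closed-loop orchestration described above can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: all role names, function signatures, and the retry limit are assumptions, and the LLM-driven agents are stubbed out as simple deterministic functions so the control flow (plan, solve, verify, re-route, fuse) is visible.

```python
from dataclasses import dataclass

# Hypothetical sketch of the hierarchical, role-based architecture:
# a planner decomposes the task, a solver reasons over each subtask,
# a verifier checks results, and a fuser merges verified results into
# one decision. In the described system each role would be an LLM agent.

@dataclass
class Subtask:
    goal: str
    result: str = ""      # compressed reasoning state passed between roles
    verified: bool = False
    attempts: int = 0

def plan(task: str) -> list:
    """Planner role: coarse decomposition into ordered subtasks (stub)."""
    return [Subtask(goal=part.strip()) for part in task.split(";") if part.strip()]

def solve(sub: Subtask) -> str:
    """Solver role: specialized reasoning, stubbed as a string transform."""
    sub.attempts += 1
    return f"solution({sub.goal})"

def verify(sub: Subtask) -> bool:
    """Verifier role: accept only non-empty results (placeholder check)."""
    return bool(sub.result)

def fuse(subtasks) -> str:
    """Decision-fusion role: merge verified subtask results."""
    return " + ".join(s.result for s in subtasks if s.verified)

def run(task: str, max_rounds: int = 3) -> str:
    """Closed-loop orchestration: unverified subtasks are re-routed
    for another round, a stand-in for the dynamic feedback iteration."""
    subtasks = plan(task)
    for _ in range(max_rounds):
        pending = [s for s in subtasks if not s.verified]
        if not pending:
            break
        for sub in pending:
            sub.result = solve(sub)
            sub.verified = verify(sub)
    return fuse(subtasks)

print(run("extract facts; compute answer; check units"))
```

The key design point the sketch mirrors is that verification sits between solving and fusion, so only checked intermediate states reach the final decision, which is where the reduction in long-chain error accumulation would come from.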