Large Language Models (LLMs) have demonstrated remarkable capabilities in reasoning tasks, yet their ability to execute long-horizon processes with sustained accuracy remains a fundamental challenge. This survey provides a comprehensive examination of reasoning in LLMs, ranging from foundational prompting techniques to emerging massively decomposed agentic processes (MDAPs). We first establish a taxonomy that organizes reasoning approaches into three primary paradigms: prompting-based methods, including Chain-of-Thought, Tree of Thoughts, and Graph of Thoughts; training-based methods, encompassing reinforcement learning from human feedback, process reward models, and self-taught reasoning; and multi-agent systems that leverage decomposition and collaborative error correction. We analyze the compounding per-step error problem that prevents LLMs from scaling to extended sequential tasks, with recent experiments demonstrating performance collapse beyond a few hundred dependent steps. We then examine MAKER, a recent framework that completes a task requiring over one million LLM steps with zero errors by combining extreme task decomposition with multi-agent voting. Our analysis indicates that MDAPs represent a promising paradigm shift: away from relying solely on improving individual model capabilities and toward orchestrating ensembles of focused microagents. We synthesize empirical findings across major benchmarks, including GSM8K, MATH, MMLU, and PlanBench, and identify critical open challenges: compositional generalization, mitigation of error propagation, and the computational cost of inference-time scaling. This survey aims to provide researchers and practitioners with a unified perspective on the landscape of LLM reasoning and to illuminate pathways toward solving problems at organizational and societal scales.
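To make the compounding-error argument concrete, the following is a minimal back-of-the-envelope sketch (our illustration, not drawn from any cited experiment), assuming a constant, independent per-step error rate; the symbols $\varepsilon$ (per-step error rate) and $n$ (number of dependent steps) are our notation:

\[
P(\text{all } n \text{ steps correct}) = (1 - \varepsilon)^{n},
\qquad \text{e.g.}\quad (1 - 0.01)^{500} \approx 0.0066 .
\]

Even 99% per-step accuracy leaves under a 1% chance of completing 500 dependent steps flawlessly, which is why approaches such as MAKER aim to drive the effective per-step error rate toward zero through decomposition and voting, rather than relying on marginal gains in single-model accuracy.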