For decades, artificial intelligence transformed digital domains while the physical world, with its unforgiving dynamics and infinite variability, remained largely beyond reach. That boundary is now dissolving: humanoid robots work full shifts on assembly lines, autonomous vehicles make millions of safety-critical decisions daily, and robotic surgical platforms have performed over twenty million procedures worldwide. The convergence of foundation models, high-fidelity simulation, and embodied control is producing Physical AI: machines that perceive, reason, and act reliably in open environments. However, existing surveys treat individual components of this stack in isolation, covering vision-language-action architectures, world models, or sim-to-real transfer separately; none traces the full pipeline from sensor to deployment or grounds its analysis in commercial outcomes. Here we synthesise the end-to-end Physical AI technology stack: multimodal sensing and fusion; edge hardware and accelerators; world modelling and simulation; vision-language-action models; learning paradigms from reinforcement to imitation; and deployment infrastructure spanning safety assurance, fleet learning, and governance. Our analysis reveals three cross-cutting findings. First, foundation-model generalisation, not task-specific engineering, is the economic lever that separates scalable deployments from those that stall at pilot stage. Second, digital twins have become prerequisites rather than accelerants: no deployment at fleet scale proceeds without high-fidelity virtual rehearsal. Third, regulatory and assurance barriers, not algorithmic limitations, now gate the transition from pilot to production across every domain. We document these patterns through commercial case studies in manufacturing, logistics, autonomous mobility, agriculture, and healthcare, and propose a four-phase maturity taxonomy characterising adoption from research prototype to fleet-scale operation.
We conclude with six coupled research challenges and a roadmap anchored to regulatory milestones through 2030, offering researchers, industry leaders, and policymakers a practical map for scaling embodied intelligence from controlled environments to the complexity of the real world.