Large language models (LLMs) are rapidly transforming knowledge work, yet their implications for fundamental work design theory remain underexplored. This study examines how LLM integration affects the Job Characteristics Model (JCM), a foundational framework linking work design to employee outcomes. Using hierarchical linear modeling with a comprehensive simulated dataset (N = 10,000 knowledge workers across 30 organizational contexts), we analyze five major LLM architectures (GPT-4o, o1-preview, Gemini 1.5 Pro, Claude 3.5 Sonnet, and open-weight models) under varying implementation conditions. Results demonstrate that LLM augmentation substantially enhances core job characteristics—particularly skill variety (+0.95 SD on average, up to +1.15 SD for multimodal architectures), task significance (+0.63 SD), and feedback quality (+0.71 SD)—with architecture-specific patterns emerging based on reasoning capabilities, multimodal integration, and customization options. The motivating potential score (MPS) increased by approximately 60% on average (from baseline M = 106 to M = 170), with effects moderated by growth need strength (GNS), override authority levels, and advanced AI features. Multi-architecture portfolios achieved 23% higher MPS gains than single-architecture implementations (η² = 0.19, p < 0.001) but required 127% greater implementation investment. Trajectory analyses revealed sustained improvements over 24 months, with high-GNS workers showing accelerating benefits while low-GNS workers plateaued after 12 months. These findings suggest LLMs can fundamentally enrich work design when thoughtfully implemented, though benefits depend critically on architecture selection, worker characteristics, and organizational support structures. We propose an expanded theoretical framework integrating AI capabilities into JCM constructs and discuss implications for human-AI work design.
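The MPS figures above follow Hackman and Oldham's standard formula, MPS = ((skill variety + task identity + task significance) / 3) × autonomy × feedback. A minimal sketch of that computation, using the reported baseline and follow-up means (the individual subscale values shown are hypothetical illustrations, not the study's data):

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Hackman & Oldham's MPS from the five core job characteristics
    (each subscale typically rated on a 1-7 scale, so MPS ranges 1-343)."""
    return (skill_variety + task_identity + task_significance) / 3 * autonomy * feedback

def percent_change(baseline, follow_up):
    """Relative change between two mean MPS values, as a percentage."""
    return (follow_up - baseline) / baseline * 100

# Illustrative subscale profile (hypothetical, not from the dataset):
print(motivating_potential_score(5, 5, 5, 5, 5))  # → 125.0

# The abstract's headline gain, recovered from the reported means:
print(round(percent_change(106, 170), 1))  # → 60.4
```

Because autonomy and feedback enter the formula multiplicatively, modest gains on those two characteristics move MPS far more than equal gains on the three averaged subscales.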