Background: Experimental research on workplace motivation faces significant practical and ethical constraints. Random assignment to motivational interventions, manipulation of organizational contexts, and testing across diverse populations are often infeasible in field settings.

Objective: This paper investigates whether AI-simulated worker personas can serve as a complementary methodological tool for early-stage exploration of motivational interventions. We examine this question using beneficiary impact and job crafting interventions in a simulated fundraising context.

Method: We generated 240 diverse worker personas using three large language models (Claude, GPT-4, Llama), varying them systematically in personality traits, cultural background, age, and prior experience. Personas were randomly assigned to one of three conditions: (1) control training, (2) a beneficiary impact intervention, or (3) a job crafting reflection intervention. We measured motivation, anticipated performance, and response quality through both quantitative ratings and qualitative text analysis.

Results: Across all three LLMs, the beneficiary impact and job crafting conditions showed substantially higher motivation ratings than control (Cohen's d = 3.15 and 3.97, respectively, using the within-condition residual SD as the standardizer). Patterns were consistent across model architectures, with LLM choice explaining only 3-4% of variance. Individual differences moderated intervention effects in theoretically predictable ways: collectivist personas and those high in agreeableness showed stronger responses. Qualitative analysis revealed distinct psychological mechanisms and generated testable hypotheses about intervention processes.

Conclusions: AI-simulated personas can rapidly generate hypotheses and enable iterative exploration of motivational interventions. The method shows promise for early-stage intervention design, mechanism exploration, and boundary condition testing.
However, important questions remain about whether AI-generated effect sizes, moderator patterns, and psychological processes accurately reflect human responses. We provide recommendations for appropriate uses of AI simulation and identify critical needs for human validation research.
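The effect sizes above standardize the condition mean differences by the pooled within-condition (residual) SD rather than, say, the control-group SD. A minimal sketch of that computation follows; the function names and the ratings are illustrative, not the study's actual data or code.

```python
import math

def residual_sd(groups):
    """Pooled within-condition (residual) SD: square root of the
    within-group sum of squares divided by n_total - k degrees of
    freedom, where k is the number of conditions."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return math.sqrt(ss_within / (n_total - k))

def cohens_d(treatment, control, all_groups):
    """Standardized mean difference, using the pooled residual SD
    across all conditions as the standardizer."""
    mean_t = sum(treatment) / len(treatment)
    mean_c = sum(control) / len(control)
    return (mean_t - mean_c) / residual_sd(all_groups)

# Illustrative motivation ratings for the three conditions
control = [3.0, 3.2, 2.8, 3.1]
impact = [6.1, 6.3, 5.9, 6.0]
crafting = [6.5, 6.4, 6.6, 6.7]

d_impact = cohens_d(impact, control, [control, impact, crafting])
d_crafting = cohens_d(crafting, control, [control, impact, crafting])
```

Using the residual SD from all conditions (rather than one group's SD) gives a single common standardizer, which keeps the two intervention effect sizes directly comparable.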