Submitted:
01 March 2026
Posted:
02 March 2026
Abstract
The rapid integration of large language models (LLMs) into organizational workflows raises fundamental questions about the long-term effects of AI assistance on human creative capabilities. This article provides a comprehensive theoretical extension of Sun et al.'s (2025) field experiment, which demonstrated that LLM assistance enhances creative output during use but diminishes subsequent independent creativity—particularly for individuals with lower metacognitive ability. We develop the metacognitive paradox framework to explain this phenomenon: AI tools designed to augment human creativity may inadvertently suppress the very cognitive processes that sustain independent creative capacity. Drawing on metacognition theory, cognitive offloading research, automation and human factors literatures, and the componential model of creativity, we articulate the mechanisms through which LLM assistance affects creative cognition, specify temporal dynamics and boundary conditions, and generate testable propositions for future research. We explicitly compare our framework against alternative theoretical explanations—including cognitive load theory, motivational accounts, and expertise development perspectives—and provide methodological guidance for testing our propositions. Our analysis reveals that the relationship between AI assistance and human creativity is neither uniformly beneficial nor detrimental but contingent upon individual metacognitive capabilities, patterns of tool use, task characteristics, and organizational context. We conclude with implications for theory, research methodology, organizational practice, and the ethical dimensions of AI-augmented work.
