Agent-based computational economics (ACE) typically relies on hand-coded heuristics or stylized utility maximization to model firm behavior. We propose using large language models (LLMs) as economic agents: firms that reason in natural language about investment decisions under realistic market conditions. We design a simulation framework in which GPT-based agents, representing heterogeneous manufacturing firms, decide in each of five rounds whether to adopt CNN-based visual inspection technology as its cost declines. We test whether the emergent aggregate behavior aligns with propositions from production theory concerning scale economies in adoption, declining adoption thresholds, and capital--labor substitution. Across 150 firm-round decisions (30 firms \( \times \) 5 rounds), decision-level logistic regressions show that time strongly predicts adoption (\( \hat{\beta}_{\text{round}} = 0.88 \), \( p < 0.001 \); marginal effect: \( +13 \) percentage points per round), while firm size has no statistically significant effect (\( \hat{\beta}_{\log q} = -0.11 \), \( p = 0.55 \)). Among adopters, larger firms exhibit greater proportional QA labor displacement (\( \hat{\beta}_{\log q} = -0.05 \), \( p = 0.007 \)), consistent with a scale-dependent substitution elasticity. Robustness experiments across three prompt personas and six temperature settings reveal pronounced persona sensitivity: rational optimizers reach 100\% adoption by Round~4, while risk-averse owners never adopt. These personas are interpretable as distinct behavioral types (risk neutrality, moderate risk aversion, and high loss aversion). We also identify behavioral phenomena not predicted by standard theory, including status quo bias, production-disruption anxiety, and reasoning heterogeneity (97 distinct decision factors), suggesting that LLM agents capture forms of bounded rationality absent from stylized models. Our open-source framework provides a tool for hypothesis generation and behavioral prior elicitation in agent-based computational economics.
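The decision-level specification reported above, \( \text{logit}\,P(\text{adopt}_{it}) = \beta_0 + \beta_{\text{round}}\, t + \beta_{\log q} \log q_i \), can be illustrated with a minimal, self-contained sketch. The synthetic data-generating process, coefficient values, and variable names below are illustrative assumptions for exposition only, not the paper's actual data or estimation code:

```python
import math
import random

random.seed(0)

# Synthetic firm-round panel: 30 firms x 5 rounds = 150 decisions.
# Assumed DGP: adoption odds rise with the round, flat in firm size.
rows = []
for _ in range(30):
    log_q = random.uniform(2.0, 6.0)              # log firm output (size)
    for rnd in range(1, 6):
        p_true = 1.0 / (1.0 + math.exp(-(-2.7 + 0.9 * rnd)))
        y = 1 if random.random() < p_true else 0
        # Center covariates so plain gradient ascent converges quickly.
        rows.append((1.0, rnd - 3.0, log_q - 4.0, y))

# Fit logit(P(adopt)) = b0 + b1*round + b2*log_q by gradient ascent
# on the average log-likelihood (no external libraries needed).
beta = [0.0, 0.0, 0.0]
for _ in range(4000):
    grad = [0.0, 0.0, 0.0]
    for x0, x1, x2, y in rows:
        z = beta[0] * x0 + beta[1] * x1 + beta[2] * x2
        p = 1.0 / (1.0 + math.exp(-z))
        for j, xj in enumerate((x0, x1, x2)):
            grad[j] += (y - p) * xj / len(rows)
    beta = [b + 0.1 * g for b, g in zip(beta, grad)]

print(f"beta_round={beta[1]:+.2f}  beta_logq={beta[2]:+.2f}")
```

Under this assumed process the fitted round coefficient is positive while the size coefficient hovers near zero, the qualitative pattern the abstract reports; in practice one would use a standard package (e.g., statsmodels) to obtain standard errors and marginal effects.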