The increasing use of large language models (LLMs) in enterprises creates a need to select effectively between lower-cost models and more advanced ones. This article proposes a multicriteria decision-making framework for routing prompts to LLMs in an enterprise environment, taking into account organizational preferences regarding cost, response quality, business risk, response time, standardization, and creativity. The study adopts a design-and-evaluation approach. In the design phase, a mechanism was developed in which prompts are assessed against managerial routing criteria, weighted using the Analytic Hierarchy Process (AHP), and then directed to either a lower-cost or a more powerful model using the Simple Additive Weighting (SAW) method. In the evaluation phase, the solution was tested on a dataset of 100 business prompts and compared with two benchmark strategies: always-cheap and always-strong. The article’s contribution includes framing LLM routing as a managerial decision-support problem, operationalizing managerial routing criteria, and proposing evaluation metrics such as sufficiency rate, average cost per prompt, cost per sufficient response, and incremental cost of sufficiency gain. The results indicate that the proposed framework improves the cost–quality trade-off while maintaining an acceptable level of response sufficiency and limiting the cost of query handling.
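The AHP-plus-SAW routing mechanism described above can be sketched in a few lines. Everything in this example is an illustrative assumption rather than material from the study: the criterion names follow the six preferences listed in the abstract, but the weights (stand-ins for AHP-derived priorities), the per-criterion prompt scores, the convention that a higher score means the prompt more strongly demands the stronger model, and the routing threshold are all hypothetical.

```python
# Illustrative sketch of SAW-based prompt routing with AHP-derived weights.
# All weights, scores, and the threshold are hypothetical, not values from the article.

def saw_score(scores, weights):
    """Simple Additive Weighting: weighted sum of per-criterion scores (0-1 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "AHP priorities must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical AHP-derived priorities for the six managerial routing criteria.
WEIGHTS = {
    "cost": 0.25,
    "quality": 0.30,
    "business_risk": 0.20,
    "response_time": 0.10,
    "standardization": 0.10,
    "creativity": 0.05,
}

THRESHOLD = 0.5  # assumed cut-off: above it, route to the more powerful model

def route(prompt_scores, threshold=THRESHOLD):
    """Return the model tier for a prompt; scores express how strongly the
    prompt demands the stronger model on each criterion (assumed convention)."""
    return "strong" if saw_score(prompt_scores, WEIGHTS) > threshold else "cheap"

# Example: a routine, low-risk, highly standardized prompt.
routine = {
    "cost": 0.2, "quality": 0.3, "business_risk": 0.1,
    "response_time": 0.4, "standardization": 0.9, "creativity": 0.1,
}
print(route(routine))  # → cheap (SAW score 0.295, below the 0.5 threshold)
```

Under this convention, most routine prompts fall below the threshold and are handled by the lower-cost model, while prompts scoring high on quality need, business risk, or creativity exceed it and are escalated, which is how the framework trades cost against response sufficiency.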