Large language models (LLMs) have demonstrated logical reasoning abilities, but their inferences remain non-traceable and lack formal guarantees. We introduce eXa-LM, a controlled natural language (CNL) interface between LLMs and first-order logic solvers that creates an explicit, verifiable, and interpretable bridge between text and formal logic. It relies on three main components: (1) a reformulation prompt that constrains the LLM to produce a set of facts and rules in CNL; (2) the semantic analyzer eXaSem, which translates this CNL into a Prolog program made of extended Horn clauses; and (3) the logic engine eXaLog, which integrates a second-order meta-interpreter capable of inferring ontological properties. We evaluate eXa-LM on three standard benchmarks (PrOntoQA, ProofWriter, and FOLIO), comparing it to GPT-4o baselines including standard prompting, Chain-of-Thought, Logic-LM, LINC, and LLM-TP. Results show that eXa-LM matches or exceeds recent neuro-symbolic systems while providing full traceability of reasoning and intrinsic explainability. On FOLIO, eXa-LM achieves 92.9% accuracy, a +5.5-point gain over LLM-TP, the strongest competing GPT-4o-based method in our comparison. This approach demonstrates the feasibility of a transparent neuro-symbolic reasoning pipeline in which LLMs produce not direct inferences but formally controlled linguistic representations. eXa-LM opens the way to neuro-symbolic architectures that are safer, verifiable, and extensible, ultimately integrating hypothetical, abductive, or inductive reasoning. Program and data will be made publicly available upon publication.