Adapting language models to specialized domains remains challenging under limited computational resources. We introduce CoDES (Context-efficient Domain Ensemble System), a framework that improves small language model performance through domain-specific fine-tuning and weighted parameter ensembling. CoDES combines parameter-efficient adaptation via Low-Rank Adaptation (LoRA) with completion-only supervision, then merges two fine-tuned models through weighted parameter averaging to improve robustness and accuracy. We evaluate CoDES on two biomedical question-answering benchmarks, MedMCQA and MedQA. On MedMCQA, the ensemble reaches 74.8\% accuracy, approaching a 72B-parameter model (77.1\%) while consuming 2.5 times less energy. Consistent gains on MedQA further demonstrate the framework's generalizability across datasets and examination styles. Together, these results show that targeted domain adaptation combined with model ensembling offers a practical path to deploying competitive language model systems under realistic resource constraints.
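The weighted parameter averaging mentioned above can be sketched as a simple convex combination of matching parameters from two fine-tuned models. The sketch below is illustrative only, assuming the two models share an identical architecture and parameter names; the function name, the mixing weight `alpha`, and the toy scalar "parameters" (standing in for tensors) are not from the paper.

```python
def weighted_average(state_a, state_b, alpha=0.5):
    """Merge two models' parameters via a convex combination.

    alpha weights model A; (1 - alpha) weights model B.
    Assumes both state dicts share the same keys and shapes.
    """
    return {name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
            for name in state_a}

# Toy example: scalar values stand in for weight tensors.
model_a = {"layer.weight": 1.0, "layer.bias": 0.0}
model_b = {"layer.weight": 3.0, "layer.bias": 2.0}
merged = weighted_average(model_a, model_b, alpha=0.5)
# merged == {"layer.weight": 2.0, "layer.bias": 1.0}
```

In practice the same element-wise combination would be applied to full weight tensors (e.g. entries of a PyTorch `state_dict`) rather than scalars.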