The scarcity of high-quality, labeled audio data for legal proceedings remains a significant barrier to developing robust speech-to-text and speaker diarization systems for the judiciary. This paper introduces Deepcounsel, a high-fidelity synthetic speech dataset simulating courtroom environments. Using a multi-agent system powered by the Gemini 2.5 Pro model, we orchestrated complex interactions among eleven distinct roles, including judges, attorneys, witnesses, and court staff. By leveraging native multimodal generation, Deepcounsel provides a diverse range of legal terminology, emotional prosody, and multi-speaker overlaps. Our results demonstrate that synthetic datasets generated via multi-agent Large Language Models (LLMs) can serve as a viable proxy for training specialized legal AI models where real-world data is restricted by privacy laws.
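To make the multi-agent orchestration concrete, the following is a minimal, illustrative sketch of a judge-moderated turn-taking loop with occasional overlapping speech. The role names, turn structure, and overlap probability here are assumptions for illustration only; the actual system delegates each turn's content to the Gemini 2.5 Pro model rather than the placeholder text used below.

```python
import random
from dataclasses import dataclass

# Hypothetical subset of courtroom roles; the paper's full set of
# eleven roles is not enumerated in this sketch.
ROLES = ["judge", "prosecutor", "defense_attorney", "witness", "court_clerk"]

@dataclass
class Turn:
    speaker: str
    text: str
    overlaps_previous: bool = False  # marks multi-speaker overlap segments

def orchestrate(num_turns: int, overlap_prob: float = 0.1, seed: int = 0) -> list[Turn]:
    """Toy round-robin dialogue loop standing in for the LLM-driven
    agent interaction: each role takes a turn, and with probability
    `overlap_prob` a turn is flagged as overlapping the previous one."""
    rng = random.Random(seed)
    transcript: list[Turn] = []
    for i in range(num_turns):
        speaker = ROLES[i % len(ROLES)]
        # Placeholder utterance; in the real pipeline this would be
        # generated text (and audio) from the multimodal model.
        text = f"[{speaker} utterance {i}]"
        overlap = i > 0 and rng.random() < overlap_prob
        transcript.append(Turn(speaker, text, overlap))
    return transcript

transcript = orchestrate(10)
```

In a full implementation, each `Turn` would also carry prosody annotations and a synthesized audio segment, giving the aligned labels needed for diarization training.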