Three-dimensional medical image segmentation is critical for clinical applications, yet expert annotations are costly, driving the need for semi-supervised learning. Current semi-supervised methods struggle to robustly integrate diverse network architectures and to manage pseudo-label quality, especially in complex three-dimensional scenarios. We propose Dynamic Multi-Expert Diffusion Segmentation (DMED-Seg), a novel framework for semi-supervised three-dimensional medical image segmentation. DMED-Seg pairs a Diffusion Expert, which captures global context, with a Convolutional Expert, which extracts fine-grained local detail. A key innovation is the Dynamic Fusion Module (DFM), a lightweight Transformer that adaptively integrates multi-scale features and predictions from both experts according to their confidence. Complementing this, Confidence-Aware Consistency Learning uses DFM-derived confidence to improve pseudo-label quality on unlabeled data, while Inter-Expert Feature Alignment fosters synergistic learning between the experts via a contrastive loss. Extensive experiments on multiple public three-dimensional medical datasets show that DMED-Seg consistently outperforms state-of-the-art methods across a range of labeled-data ratios. Ablation studies confirm the efficacy of each proposed component, highlighting DMED-Seg as an effective and practical solution for three-dimensional medical image segmentation.
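The confidence-based expert fusion described above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's implementation: the Transformer-based Dynamic Fusion Module is replaced by a simple per-voxel confidence weighting (confidence taken as the max class probability), and all function names and shapes are illustrative.

```python
import numpy as np

def fuse_expert_predictions(p_diff, p_conv):
    """Illustrative confidence-weighted fusion of two experts'
    voxel-wise class-probability maps of shape (C, D, H, W).

    Confidence is approximated as the per-voxel maximum class
    probability; the two experts' confidences are softmax-normalized
    into fusion weights (a stand-in for the learned Transformer fusion).
    """
    c_diff = p_diff.max(axis=0)            # (D, H, W) confidence of Diffusion Expert
    c_conv = p_conv.max(axis=0)            # (D, H, W) confidence of Convolutional Expert
    w = np.stack([c_diff, c_conv])         # (2, D, H, W)
    w = np.exp(w) / np.exp(w).sum(axis=0, keepdims=True)  # softmax over experts
    fused = w[0] * p_diff + w[1] * p_conv  # broadcast over the class axis
    return fused, w

# Toy example: 2 classes over a 2x2x2 volume.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 2, 2, 2))  # (expert, C, D, H, W)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
fused, w = fuse_expert_predictions(probs[0], probs[1])
assert np.allclose(fused.sum(axis=0), 1.0)  # fused output is still a distribution
```

Because the fusion weights sum to one per voxel, the fused output remains a valid class distribution, which is what allows the same confidence signal to be reused for pseudo-label filtering in consistency learning.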