Large language model (LLM)-based multi-agent systems have demonstrated remarkable capabilities in collaborative task solving. However, the very mechanisms that facilitate seamless cooperation, such as shared contexts, role assignments, and iterative message passing, also present significant risks of unintentional information disclosure. We present MIN-Trust, a trust orchestration framework that enforces Minimum Necessary Information (MNI) constraints, an operationalization of the data minimization principle for inter-agent communication, while maintaining task effectiveness. Our approach introduces an MNI-Gate that automatically classifies information as essential, summarized, or pointer-referenced and filters it accordingly before transmission. Additionally, we propose a Trust-Gated Channel (TGC) that, counterintuitively, increases verification requirements rather than relaxing information access as inter-agent trust grows. Through experiments on four collaborative tasks using public benchmarks, we demonstrate that MIN-Trust reduces sensitive information exposure by 67.8% compared to baseline multi-agent frameworks while maintaining 93.3% of baseline task success. Our evidence traceability mechanism achieves 84.2% claim-to-source attribution, significantly outperforming conventional approaches. These results suggest that privacy-preserving multi-agent collaboration is achievable under synthetic benchmark conditions with moderate performance trade-offs.
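To make the two mechanisms concrete, the sketch below illustrates the intended behavior in Python. It is a hedged illustration only: the names (`Field`, `mni_gate`, `tgc_required_checks`), the threshold values, and the scoring inputs are all hypothetical assumptions, not the framework's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Disclosure(Enum):
    ESSENTIAL = auto()   # required verbatim for the receiver's subtask
    SUMMARIZED = auto()  # an abstracted summary suffices
    POINTER = auto()     # replace with an opaque reference the receiver must resolve


@dataclass
class Field:
    name: str
    value: str
    task_relevance: float  # 0..1, e.g. from a relevance classifier (assumed)
    sensitivity: float     # 0..1, e.g. from a sensitivity/PII classifier (assumed)


def mni_gate(fields: list[Field]) -> dict[str, str]:
    """Filter a message down to the minimum necessary information before sending.

    Thresholds below are illustrative placeholders, not values from the paper.
    """
    out: dict[str, str] = {}
    for f in fields:
        if f.task_relevance >= 0.8 and f.sensitivity < 0.3:
            out[f.name] = f.value                   # ESSENTIAL: pass verbatim
        elif f.task_relevance >= 0.4:
            out[f.name] = f"<summary of {f.name}>"  # SUMMARIZED: abstract the content
        else:
            out[f.name] = f"ptr://{f.name}"         # POINTER: opaque reference only
    return out


def tgc_required_checks(trust: float) -> int:
    """Trust-Gated Channel: verification burden grows, not shrinks, with trust.

    As inter-agent trust (0..1) rises, the number of claim-to-source checks
    required before a message is accepted increases, counteracting complacency
    in long-running agent pairs. The linear schedule here is an assumption.
    """
    return 1 + int(3 * trust)
```

In this reading, high trust does not grant broader access; it raises the attribution bar on whatever minimized content the MNI-Gate lets through, which is the counterintuitive coupling the abstract describes.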