Large Language Model (LLM)-based multi-agent systems have demonstrated strong capabilities in collaborative task-solving. However, a practical challenge emerges in extended collaboration: role drift, where agents gradually deviate from their designated responsibilities. This phenomenon manifests as boundary violations (e.g., a planner writing code), redundant work, conflicting decisions, and futile debates, ultimately degrading system performance. In this paper, we present RoleFix, a lightweight framework for detecting and repairing role drift in multi-agent collaboration. Our approach introduces: (1) a structured protocol requiring agents to declare their role, commitments, and dependencies at each turn; (2) a hybrid drift detector combining rule-based checks with LLM-based semantic judgment; and (3) a self-repair mechanism inspired by verbal reinforcement learning that triggers reflection, role reassignment, and execution resumption. Experiments on software engineering and research workflow tasks demonstrate that RoleFix reduces role drift incidents by 67.4% and improves task completion rates by 23.8 percentage points compared to baseline multi-agent systems, while introducing only 8.3% latency overhead.
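The abstract does not specify the protocol or detector internals; as a rough illustration, the per-turn declaration (contribution 1) and the rule-based half of the hybrid detector (contribution 2) might look like the following sketch. All names here (`TurnDeclaration`, `ROLE_CAPABILITIES`, `rule_based_drift_check`) are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical capability table: which action types each role may emit.
# In the paper's example, a planner writing code is a boundary violation.
ROLE_CAPABILITIES = {
    "planner": {"plan", "delegate"},
    "coder": {"write_code", "run_tests"},
    "reviewer": {"review", "comment"},
}

@dataclass
class TurnDeclaration:
    """Structured header an agent emits at each turn (sketch of contribution 1)."""
    agent_id: str
    role: str
    commitments: list   # artifacts the agent promises to produce this turn
    dependencies: list  # artifacts from other agents it relies on

def rule_based_drift_check(decl: TurnDeclaration, action_type: str) -> list:
    """Rule-based half of the hybrid detector: flag actions outside the
    declared role's capability set. The LLM-based semantic judgment the
    abstract mentions would run alongside this check and is omitted here."""
    allowed = ROLE_CAPABILITIES.get(decl.role, set())
    if action_type not in allowed:
        return [f"{decl.agent_id} (role={decl.role}) attempted "
                f"'{action_type}', outside capabilities {sorted(allowed)}"]
    return []
```

For example, `rule_based_drift_check(TurnDeclaration("a1", "planner", [], []), "write_code")` would return one violation, matching the planner-writing-code case from the abstract, while the same action from a `coder` role returns an empty list.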