As IoT and wireless sensor networks (WSNs) increasingly rely on federated intrusion detection, the ability to remove a client's contribution from a trained model without full retraining has become an important requirement. However, existing federated unlearning methods are not well suited to transformer-based intrusion detection systems, particularly when the unlearning trajectory may be manipulated and multiple removal requests must be processed under severe class imbalance. We present ARFU-IDS, a transformer-oriented, adversary-aware federated unlearning framework that combines attention-head attribution, dual-path layer criticality probing, trajectory verification, and conflict-aware scheduling. Specifically, the proposed Attention-Head Attribution Graph localizes removal-sensitive heads in transformer layers, Dual-Path Layer Criticality Probing separates task-critical layers from adversary-influenced layers, Manipulation-Resistant Iterative Verification with Audit validates whether the unlearning trajectory follows the expected optimization path, and a conflict-graph scheduler supports concurrent client removal while preserving rare-category performance. Experiments on UNSW-NB15, CICIoT2023, and IoTID20 show that ARFU-IDS achieves 87.1% Macro-F1 and 77.6% rare-category recall on UNSW-NB15, reduces the attack success rate to 8.2% at f = 0.1 and 9.7% at f = 0.2, and shortens concurrent unlearning latency by 43.4% compared with sequential FU-IDS. These findings suggest that ARFU-IDS offers a practical approach to robust federated unlearning in transformer-based IDSs for IoT and sensor-network environments.