CE-VFAL
Vertical Federated Learning (VFL) enables multiple participants to collaboratively train machine learning models on distinct feature sets of the same data samples. Because training updates are distributed across participants, this paradigm places a premium on secure and efficient communication. Nevertheless, the trained models are highly vulnerable to adversarial attacks at inference time, which can provoke misclassification. Adversarial Training (AT), which exposes models to intentionally crafted misleading examples during training, is widely regarded as the most effective method for enhancing model robustness. However, the significant communication cost entailed by generating such examples in the VFL setting remains an open challenge for developing a Vertical Federated Adversarial Learning (VFAL) framework. To this end, we introduce a Communication-Efficient Vertical Federated Adversarial Learning framework, named CE-VFAL. The proposed framework adopts a lazy propagation principle that confines most propagations during adversarial updates to the client models, thereby minimizing frequent client-server interactions. Moreover, CE-VFAL seamlessly integrates Zeroth-Order Optimization (ZOO) into the communication phase, reducing the communication load by transmitting only the scalar loss differences between the raw and perturbed embeddings for multi-point gradient estimation. Furthermore, rigorous theoretical analysis establishes a sublinear convergence rate by bounding the errors introduced by the multi-source approximate gradients. Extensive experiments corroborate the robust performance of CE-VFAL while significantly reducing communication costs.
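To illustrate the communication saving from ZOO, the following is a minimal sketch of multi-point zeroth-order gradient estimation: a client perturbs its embedding, the server returns only a scalar loss difference per probe, and the client reconstructs an approximate gradient. The function name `zoo_gradient` and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import random

def zoo_gradient(loss_fn, embedding, num_points=8, mu=1e-3, seed=None):
    """Multi-point zeroth-order estimate of the gradient of loss_fn at
    `embedding` (a list of floats). Illustrative sketch: only the scalar
    loss differences need to cross the client-server boundary, rather
    than a full gradient vector of the embedding's dimension."""
    rng = random.Random(seed)
    d = len(embedding)
    base_loss = loss_fn(embedding)  # loss at the raw embedding (one query)
    grad = [0.0] * d
    for _ in range(num_points):
        # Sample a random Gaussian probing direction.
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        perturbed = [e + mu * ui for e, ui in zip(embedding, u)]
        # Scalar loss difference: the only value the server must send back.
        diff = loss_fn(perturbed) - base_loss
        for i in range(d):
            grad[i] += (diff / mu) * u[i]
    # Average over probes to reduce the estimator's variance.
    return [g / num_points for g in grad]
```

For example, with the quadratic loss `0.5 * sum(e * e for e in embedding)` (true gradient equal to the embedding itself), increasing `num_points` drives the estimate toward the true gradient, at the cost of one scalar round-trip per probe.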
@inproceedings{CEVFAL2026TianxingMan,
}