Abstract: This study proposes a novel distributed online gradient descent algorithm incorporating a time-decaying forgetting-factor (FF) mechanism. The core innovation lies in introducing a ...
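The abstract above only names the ingredients (online gradient descent plus a time-decaying forgetting factor). A minimal sketch of how such a mechanism can look is below; the squared loss, the exponential decay schedule `gamma ** t`, and all step-size choices are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def ff_online_gd(stream, dim, eta=0.05, gamma=0.9):
    """Online gradient descent with a time-decaying forgetting factor.
    Illustrative sketch only: the loss, decay schedule, and step sizes
    are assumptions, not the proposed algorithm from the abstract."""
    w = np.zeros(dim)
    g_mem = np.zeros(dim)                   # forgetting-factor-weighted gradient memory
    for t, (x, y) in enumerate(stream, start=1):
        grad = (w @ x - y) * x              # gradient of 0.5 * (w.x - y)^2
        lam = gamma ** t                    # forgetting factor decays with time,
        g_mem = lam * g_mem + grad          # so stale gradients fade faster as t grows
        w = w - (eta / np.sqrt(t)) * g_mem  # diminishing step size
    return w

# Usage: recover a planted weight vector from a noiseless data stream.
rng = np.random.default_rng(0)
w_star = np.array([1.0, 0.0])
stream = [(x, x @ w_star) for x in rng.standard_normal((300, 2))]
w_hat = ff_online_gd(stream, dim=2)
```

The decaying factor means early iterates behave like momentum-smoothed gradient steps, while later iterates rely almost entirely on the freshest gradient.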
Abstract: Distributed minimax optimization is essential for robust federated learning, offering resilience against variability in data distributions. Most previous works focus only on learning ...
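For context, the standard baseline for distributed minimax problems of the form min_x max_y (1/n) Σ_i f_i(x, y) is gradient descent-ascent (GDA) with server-side gradient averaging. The sketch below shows that baseline on hypothetical quadratic client losses f_i(x, y) = 0.5x² + xy − 0.5y² + a_i·x − b_i·y; the losses, constants, and step size are all assumptions for illustration, not an algorithm from this abstract.

```python
import numpy as np

def distributed_gda(clients, steps=300, eta=0.1):
    """Distributed gradient descent-ascent: each client reports local
    gradients, the server averages them, then descends on the min
    variable x and ascends on the max variable y.  The quadratic client
    losses f_i(x, y) = 0.5*x^2 + x*y - 0.5*y^2 + a_i*x - b_i*y are an
    illustrative assumption."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx = np.mean([x + y + a for a, _ in clients])  # avg of df_i/dx
        gy = np.mean([x - y - b for _, b in clients])  # avg of df_i/dy
        x -= eta * gx   # descent step on x
        y += eta * gy   # ascent step on y
    return x, y

# Usage: two clients with (a_i, b_i) pairs; averages a_avg = 2, b_avg = 1.
# The saddle point solves x + y + a_avg = 0 and x - y - b_avg = 0,
# giving x* = (b_avg - a_avg)/2 and y* = -(a_avg + b_avg)/2.
clients = [(1.0, 0.0), (3.0, 2.0)]
x, y = distributed_gda(clients)
```

Because each f_i here is strongly convex in x and strongly concave in y, plain GDA with a small step contracts to the saddle point; heterogeneous (non-i.i.d.) client data changes only the averaged gradients, not the update rule.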