Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents

Abdullah Basar Akbay, Cihan Tepedelenlioglu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citations

Abstract

This study considers a federated learning setup in which cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. The agent incurs a cost, increasing in his minibatch size, associated with data collection, gradient computation, and communication. The agents are free to choose their minibatch sizes and may even opt out of training. To reduce his cost, an agent may shrink his minibatch, which in turn increases the noise level of his gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize their participation, but she cannot validate the agents' true minibatch sizes. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance to a reference constructed from the gradients provided by the other agents. It is shown that the proposed reward mechanism admits a cooperative Nash equilibrium in which the agents choose their minibatch sizes according to the server's requests.
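The distance-based scoring described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact mechanism: the quadratic loss, the leave-one-out mean used as the reference, and the reward rule `alpha / (1 + dist)` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a linear-regression task; dimensions, data, and the
# reward scale `alpha` are assumptions for this sketch, not from the paper.
d, n_agents = 5, 4
true_w = rng.normal(size=d)
w = np.zeros(d)  # current model iterate broadcast by the server

def minibatch_gradient(w, batch_size):
    """Stochastic gradient of 0.5*||Xw - y||^2 on a fresh minibatch.

    A larger batch_size yields a lower-variance (less noisy) gradient,
    but would cost the agent more to collect, compute, and communicate.
    """
    X = rng.normal(size=(batch_size, d))
    y = X @ true_w + 0.1 * rng.normal(size=batch_size)
    return X.T @ (X @ w - y) / batch_size

# Each agent strategically picks a minibatch size; agent 3 economizes.
batch_sizes = [50, 50, 50, 5]
grads = np.stack([minibatch_gradient(w, b) for b in batch_sizes])

# Server-side scoring: each agent's gradient is compared against a
# leave-one-out reference built from the other agents' gradients.
alpha = 1.0  # illustrative reward scale
rewards = []
for i in range(n_agents):
    reference = np.mean(np.delete(grads, i, axis=0), axis=0)
    dist = np.linalg.norm(grads[i] - reference)
    rewards.append(alpha / (1.0 + dist))  # closer to reference -> larger reward
```

Because the reference excludes the agent being scored, an agent cannot raise his own reward by simply echoing it; a noisier (smaller-minibatch) gradient tends to sit farther from the consensus of the others and earns less.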

Original language: English (US)
Title of host publication: 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022
Editors: Michael B. Matthews
Publisher: IEEE Computer Society
Pages: 1238-1242
Number of pages: 5
ISBN (Electronic): 9781665459068
DOIs
State: Published - 2022
Event: 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022 - Virtual, Online, United States
Duration: Oct 31 2022 - Nov 2 2022

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
Volume: 2022-October
ISSN (Print): 1058-6393

Conference

Conference: 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022
Country/Territory: United States
City: Virtual, Online
Period: 10/31/22 - 11/2/22

ASJC Scopus subject areas

  • Signal Processing
  • Computer Networks and Communications
