TY - GEN
T1 - Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
AU - Akbay, Abdullah Basar
AU - Tepedelenlioglu, Cihan
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - This study considers a federated learning setup where cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. As an increasing function of his minibatch size choice, the agent incurs a cost associated with the data collection, gradient computation, and communication. The agents have the freedom to choose their minibatch size and may even opt out from training. To reduce his cost, an agent may diminish his minibatch size, which may also cause an increase in the noise level of the gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize their participation, but she lacks the capability of validating the true minibatch sizes of the agents. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance to a reference which is constructed from the gradients provided by other agents. It is shown that the proposed reward mechanism has a cooperative Nash equilibrium in which the agents determine their minibatch size choices according to the requests of the server.
AB - This study considers a federated learning setup where cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. As an increasing function of his minibatch size choice, the agent incurs a cost associated with the data collection, gradient computation, and communication. The agents have the freedom to choose their minibatch size and may even opt out from training. To reduce his cost, an agent may diminish his minibatch size, which may also cause an increase in the noise level of the gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize their participation, but she lacks the capability of validating the true minibatch sizes of the agents. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance to a reference which is constructed from the gradients provided by other agents. It is shown that the proposed reward mechanism has a cooperative Nash equilibrium in which the agents determine their minibatch size choices according to the requests of the server.
UR - http://www.scopus.com/inward/record.url?scp=85150161806&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150161806&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF56349.2022.10051928
DO - 10.1109/IEEECONF56349.2022.10051928
M3 - Conference contribution
AN - SCOPUS:85150161806
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 1238
EP - 1242
BT - 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022
Y2 - 31 October 2022 through 2 November 2022
ER -