tensorflow, machine-learning, tensorflow-federated, federated-learning

Training the global and local model in federated learning


While studying Federated Learning, a few questions popped up that I would like to clarify.

  1. We first define the clients, and each client's data is split into a training set and a testing set. The training data are used to train the local models. Now, what is the testing data used for? Is it used to test the global model, or to test each local model?
  2. When training the global model, we first compute the resulting weights of each local model and then send them to the global model. When modeling the local clients, is there any validity check on the model itself before it is sent to the global model, or is it sent anyway and then updated by the global model?

Are there any papers explaining these points?


Solution

    1. Testing data is used to check your model's accuracy. This can be useful for both the local models and the global model. However, since the objective of federated learning is to build a single global model, I would use the test data with the global model. There are, however, some approaches in which each local model's accuracy against a test set is used to assign a weight to that local model before the "fusion" into the global model. This is sometimes referred to as weighted FedAvg (federated averaging); see the sketch after this list.
    2. In a "controlled" federated learning scenario, there is no reason to check each local model before it is sent to the master. However, in a realistic scenario, there are many security considerations to take into account, so you might need something more robust than a simple "validity check"; a minimal example of such a check follows below.