tensorflow-federated, federated-learning

Norm clipping technique in TFF


I'm training a DP federated learning model using the "DP-FedAvg" algorithm, which is based on the following paper:

Learning Differentially Private Recurrent Language Models

The paper proposes two norm clipping techniques, "flat clipping" and "per-layer clipping", and performs its experiments using "per-layer clipping".

In the case of TFF, when attaching a DP query and an aggregation process to the federated model, which clipping technique is used by default? Is there a way to specify the clipping technique?


Solution

  • You can get a basic recommended setup by using tff.learning.dp_aggregator, for example:

    iterative_process = tff.learning.build_federated_averaging_process(
        ...,
        model_update_aggregation_factory=tff.learning.dp_aggregator(...))
    

    For guidance on how to use it in learning algorithms in general, see the tutorial Tuning recommended aggregations for learning.
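    To make the call concrete, here is a minimal sketch; the parameter values are placeholders and the keyword names are my reading of the API, so check them against your TFF version:

    import tensorflow_federated as tff

    # Recommended DP setup: adaptive flat clipping followed by Gaussian noise.
    dp_agg = tff.learning.dp_aggregator(
        noise_multiplier=0.5,    # placeholder: noise stddev relative to the clip norm
        clients_per_round=100)   # placeholder: expected number of clients sampled per round

    This dp_agg is the object you would pass as model_update_aggregation_factory above.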

    The default clipping method corresponds to "flat clipping", as termed in the paper you linked. However, the clipping norm is not fixed; it is automatically adapted based on values seen in previous rounds of training. For details, see the documentation and the paper Differentially Private Learning with Adaptive Clipping.
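    If you want to control the adaptive clipping yourself rather than rely on the defaults, the underlying factory can be constructed directly. A sketch follows; the keyword names and values are my assumption based on the adaptive-clipping paper and may differ between TFF versions:

    import tensorflow_federated as tff

    # Adaptive (flat) clipping: the L2 clip norm is adjusted every round so that
    # roughly target_unclipped_quantile of client updates end up below it.
    dp_factory = tff.aggregators.DifferentiallyPrivateFactory.gaussian_adaptive(
        noise_multiplier=0.5,            # placeholder value
        clients_per_round=100,           # placeholder value
        initial_l2_norm_clip=0.1,
        target_unclipped_quantile=0.5)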

    If you want to use a fixed clipping norm my_clip_norm, you can look at the implementation and see what components can be modified. I believe you should be able to simply use:

    tff.aggregators.DifferentiallyPrivateFactory.gaussian_fixed(..., clip=my_clip_norm)
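    A minimal sketch of plugging that fixed-clip factory into the same training process (the noise and client-count values below are placeholders for your own privacy budget):

    import tensorflow_federated as tff

    my_clip_norm = 1.0  # fixed L2 norm bound applied to every client update
    dp_factory = tff.aggregators.DifferentiallyPrivateFactory.gaussian_fixed(
        noise_multiplier=0.5,    # placeholder value
        clients_per_round=100,   # placeholder value
        clip=my_clip_norm)

    iterative_process = tff.learning.build_federated_averaging_process(
        ...,  # model_fn, client/server optimizers, etc.
        model_update_aggregation_factory=dp_factory)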

    If you wanted to use some form of per-layer clipping, you would need to write your own aggregator. The implementation of tff.aggregators.DifferentiallyPrivateFactory could be a good starting point; see also the tutorial Implementing Custom Aggregations.
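    To illustrate the difference such a custom aggregator would have to implement, here is a plain TensorFlow sketch of the two clipping rules applied to a list of per-layer update tensors; it shows only the clipping math, not a full aggregation factory:

    import tensorflow as tf

    def flat_clip(update_tensors, clip_norm):
      # Flat clipping: rescale the whole update so its joint L2 norm is at most clip_norm.
      clipped, _ = tf.clip_by_global_norm(update_tensors, clip_norm)
      return clipped

    def per_layer_clip(update_tensors, per_layer_norms):
      # Per-layer clipping: clip each layer's tensor to its own L2 norm bound.
      return [tf.clip_by_norm(t, c) for t, c in zip(update_tensors, per_layer_norms)]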