Introducing TensorFlow Federated
March 06, 2019
Posted by Alex Ingerman (Product Manager) and Krzys Ostrowski (Research Scientist)

There are an estimated 3 billion smartphones in the world, and 7 billion connected devices. These phones and devices are constantly generating new data. Traditional analytics and machine learning need that data to be centrally collected before it is processed to yield insights, ML models and ultimately better products. This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn’t it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what’s been learned?

TensorFlow Federated (TFF) is an open source framework for experimenting with machine learning and other computations on decentralized data. It implements an approach called Federated Learning (FL), which enables many participating clients to train shared ML models, while keeping their data locally. We have designed TFF based on our experiences with developing the federated learning technology at Google, where it powers ML models for mobile keyboard predictions and on-device search. With TFF, we are excited to put a flexible, open framework for locally simulating decentralized computations into the hands of all TensorFlow users.
TensorFlow Federated enables developers to express and simulate federated learning systems. Pictured here, each phone trains the model locally (A). Their updates are aggregated (B) to form an improved shared model (C).
To illustrate the use of FL and TFF, let’s start with one of the most famous image datasets: MNIST. The original NIST dataset, from which MNIST was created, contains images of 810,000 handwritten digits, collected from 3,600 volunteers — and our task is to build an ML model that will recognize the digits. The traditional way we’d go about it is to apply an ML algorithm to the entire dataset at once. But what if we couldn’t combine all that data together — for example, because the volunteers did not agree to upload their raw data to a central server?

With TFF, we can express an ML model architecture of our choice, and then train it across data provided by all writers, while keeping each writer’s data separate and local. We show how to do that below with TFF’s Federated Learning (FL) API, using a version of the NIST dataset that has been processed by the Leaf project to separate the digits written by each volunteer.
import tensorflow_federated as tff

# Load simulation data. (`mnist` is a helper module from the accompanying
# tutorial; `sample_batch` and `train_data` are also constructed there.)
source, _ = tff.simulation.datasets.emnist.load_data()
def client_data(n):
  dataset = source.create_tf_dataset_for_client(source.client_ids[n])
  return mnist.keras_dataset_from_emnist(dataset).repeat(10).batch(20)

# Wrap a Keras model for use with TFF.
def model_fn():
  return tff.learning.from_compiled_keras_model(
      mnist.create_simple_keras_model(), sample_batch)

# Simulate a few rounds of training with the selected client devices.
trainer = tff.learning.build_federated_averaging_process(model_fn)
state = trainer.initialize()
for _ in range(5):
  state, metrics = trainer.next(state, train_data)
  print(metrics.loss)
You can see the rest in the federated MNIST classification tutorial.

In addition to the FL API, TFF comes with a set of lower-level primitives, which we call the Federated Core (FC) API. This API enables the expression of a broad range of computations over a decentralized dataset. Training an ML model with federated learning is one example of a federated computation; evaluating it over decentralized data is another.
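For the latter, the FL API includes a federated evaluation builder. As a minimal sketch, assuming the same model_fn as in the training snippet above, and a hypothetical test_data list of per-client datasets built the same way as train_data:

# Build a federated evaluation computation from the same model_fn.
evaluation = tff.learning.build_federated_evaluation(model_fn)

# Evaluate the current server model over decentralized client datasets.
# `test_data` is an assumed list of per-client tf.data.Datasets.
eval_metrics = evaluation(state.model, test_data)
print(eval_metrics)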

Let’s take a look at the FC API with a simple example. Suppose we have an array of sensors capturing temperature readings, and want to compute the average temperature across these sensors, without uploading their data to a central location. With the FC API, we can express a new data type, specifying its underlying data (tf.float32) and where that data lives (on distributed clients):
READINGS_TYPE = tff.FederatedType(tf.float32, tff.CLIENTS)
We can then specify a federated average function over that type:
@tff.federated_computation(READINGS_TYPE)
def get_average_temperature(sensor_readings):
  return tff.federated_average(sensor_readings)
After the federated computation is defined, TFF represents it in a form that can be run in a decentralized setting. TFF’s initial release includes a local-machine runtime that simulates the computation being executed across a set of clients holding the data, with each client computing its local contribution, and the centralized coordinator aggregating all the contributions. From the developer’s perspective, though, the federated computation can be seen as an ordinary function that happens to have inputs and outputs residing in different places (on individual clients and in the coordinating service, respectively).
An illustration of the get_average_temperature federated computation expression.
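In the local runtime, the computation can be invoked like an ordinary Python function, passing a list of per-client values:

get_average_temperature([68.5, 70.3, 69.8])
# returns approximately 69.53, computed as if the three readings
# lived on three separate client devices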
Expressing a simple variant of the Federated Averaging algorithm is also straightforward using TFF’s declarative model:
@tff.federated_computation(
  tff.FederatedType(DATASET_TYPE, tff.CLIENTS),
  tff.FederatedType(MODEL_TYPE, tff.SERVER, all_equal=True),
  tff.FederatedType(tf.float32, tff.SERVER, all_equal=True))
def federated_train(client_data, server_model, learning_rate):
  return tff.federated_average(
      tff.federated_map(local_train, [
          client_data,
          tff.federated_broadcast(server_model),
          tff.federated_broadcast(learning_rate)]))
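To make the data flow above concrete, here is a minimal, framework-free sketch of what one such round computes, using plain NumPy and a hypothetical linear model. The local_train and averaging steps below are illustrative stand-ins for tff.federated_map and tff.federated_average, not TFF’s implementation:

import numpy as np

def local_train(client_examples, model, learning_rate):
  # Per-client work (the federated_map step): one pass of gradient
  # descent on a least-squares objective over this client's examples.
  x, y = client_examples
  for xi, yi in zip(x, y):
    grad = 2 * (model @ xi - yi) * xi  # gradient of (model . xi - yi)^2
    model = model - learning_rate * grad
  return model

def federated_train_round(client_data, server_model, learning_rate):
  # Broadcast the server model to every client, train locally, then
  # average the locally trained models (the federated_average step).
  local_models = [local_train(data, server_model, learning_rate)
                  for data in client_data]
  return np.mean(local_models, axis=0)

# Example: five clients, each holding ten (feature, label) examples.
clients = [(np.random.randn(10, 3), np.random.randn(10)) for _ in range(5)]
model = np.zeros(3)
for _ in range(5):
  model = federated_train_round(clients, model, learning_rate=0.1)

Note that each client’s raw examples never leave local_train; only the locally trained models are aggregated, which is the essence of the federated approach.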
With TensorFlow Federated, we are taking a step towards making the technology accessible to a wider audience, and inviting community participation in developing federated learning research on top of an open, flexible platform. You can try out TFF in your browser, with just a few clicks, by walking through the tutorials. There are many ways to get involved: you can experiment with existing FL algorithms on your models, contribute new federated datasets and models to the TFF repository, add implementations of new FL algorithms, or extend existing ones with new features.

Over time, we’d like TFF runtimes to become available for the major device platforms, and to integrate other technologies that help protect sensitive user data, including differential privacy for federated learning (integrating with TensorFlow Privacy) and secure aggregation. We look forward to developing TFF together with the community, and enabling every developer to use federated technologies.

Ready to get started? Please visit https://www.tensorflow.org/federated/ and try out TFF today!

Acknowledgments
Creating TensorFlow Federated was a team effort. Special thanks to Brendan McMahan, Keith Rush, Michael Reneer, and Zachary Garrett, who all made significant contributions.