Differential Privacy Office Hours

Apply for access

Intended use

Inputs and outputs:

  • Users provide: A well-defined machine learning problem, a proposed model architecture, and evaluation metrics. Customers who also have data, or an expected data schema, will likely gain more from this engagement, but there is no expectation of data sharing.
  • Users receive: Advice on how to use TensorFlow Privacy to train the model in a manner that offers differential privacy.
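To give a flavor of what this training looks like: TensorFlow Privacy implements differentially private stochastic gradient descent (DP-SGD), which clips each per-example gradient to a fixed L2 norm and adds calibrated Gaussian noise before averaging. The sketch below shows that core aggregation step in plain Python; the function name and the `clip_norm`/`noise_multiplier` values are illustrative, not part of the TensorFlow Privacy API (there, you would use a drop-in optimizer such as DPKerasSGDOptimizer instead).

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Illustrative DP-SGD aggregation: clip each per-example gradient to
    L2 norm `clip_norm`, sum, add Gaussian noise with std-dev
    `noise_multiplier * clip_norm`, then average over the batch."""
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down (never up) so the clipped gradient has norm <= clip_norm.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    n = len(per_example_grads)
    return [(summed[i] + rng.gauss(0.0, noise_multiplier * clip_norm)) / n
            for i in range(dim)]

# First gradient has L2 norm 5 and gets clipped; the second is left alone.
noisy_avg = dp_sgd_step([[3.0, 4.0], [0.3, 0.4]])
```

The clipping bounds any single example's influence on the update, and the noise masks what influence remains; the privacy guarantee is then accounted for across all training steps.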

Industries and functions

This experiment is aimed at customers who wish to better understand the privacy of their models and how to measure and improve it. Customers who wish to train and deploy models based on sensitive training data (e.g. health records, personal email, personal photos) without compromising the privacy of that data should be especially interested in this experiment.

As part of the application to participate in this experiment, we will ask about your use case, data types, and other relevant details to ensure that the experiment is a good fit for you.

What data/models do I need?

There are no strict requirements on the data and models needed to use TensorFlow Privacy, and we encourage all interested users to contact us. However, in our experience, TensorFlow Privacy is (currently) most effective in the following settings:

  • Larger training sets (ideally 10^5 to 10^6 examples or more).
  • Smaller models (ideally under 10^6 parameters).
  • Classification or regression tasks rather than generative models.
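The preference for larger datasets has a simple arithmetic explanation: the Gaussian noise DP-SGD adds has a fixed scale per step, so after averaging over a batch its per-coordinate standard deviation shrinks in proportion to the batch size. A back-of-the-envelope sketch (the function name and parameter defaults here are illustrative assumptions, not TensorFlow Privacy recommendations):

```python
def noise_on_avg_gradient(batch_size, clip_norm=1.0, noise_multiplier=1.1):
    """Per-coordinate std-dev of the DP noise left on the *averaged*
    gradient. It shrinks as 1/batch_size, so larger datasets (which
    support larger batches and more steps at a given privacy budget)
    leave the training signal less distorted."""
    return noise_multiplier * clip_norm / batch_size

small_batch = noise_on_avg_gradient(64)    # noticeable noise per update
large_batch = noise_on_avg_gradient(4096)  # 64x less noise per update
```

The same reasoning disfavors very large models: the noise is added to every parameter's gradient, so total distortion grows with parameter count while the clipped signal does not.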

What skills do I need?

As with all AI Workshop experiments, successful users are likely to be conversant with core AI concepts and skills, both to deploy the experiment technology and to interact with our AI researchers and engineers.

In particular, users of this experiment should have experience writing and training models using TensorFlow.