As part of our pursuit to democratize AI, Google Cloud AI Workshop offers customers, partners, researchers, and developers the opportunity to experiment with cutting-edge AI innovations. Our AI researchers and engineers are building new concepts, new techniques, and new applications at the furthest extents of science and technology — and want you to try them out.
The end goal? To connect research and practice so that we can address enterprise AI challenges together.
What is AI Workshop?
A gallery of experiments
At the core of AI Workshop are the exploratory technologies that our researchers and engineers have chosen to share with you. Browse through them below.
A forum for conversation
AI Workshop brings Google's AI researchers and engineers together with the users who innovate with their products. By interacting more directly, these groups produce new ideas, ask hard questions, and provide honest feedback.
An evolving platform
AI Workshop is not static. We aim to launch new content regularly and rapidly. Our researchers and engineers update and improve their experiments based on your feedback and tests. We expect that some of these experiments will graduate to become new products and that some won’t — and we’re OK with that.
What is an experiment?
Experiments are novel technologies that represent some of the latest work from our researchers and engineers, who are focused on the biggest challenges in AI and machine learning. These experiments reflect the magnitude of those ambitions.
Works in progress
Experiments are not products. They may still be rough around the edges, and are supported directly by our research teams. We will do our very best to ensure a great experience, but there is no guarantee of availability, reliability, performance, or robustness against bias. They might be changed in backward-incompatible ways. They are not subject to any SLA or deprecation policy. There is no commitment that they will become products or product features in the future. We strongly recommend that they NOT be used in production environments or for essential workflows.
From APIs and services to tools, experiments may take many forms. They may handle different (or many) types of data and may apply to a range of industries and functions.
Free to use
We do not charge for participation in the experiments themselves. However, there may be associated charges if you use other Cloud services as part of your workflow. These may include, but are not limited to, storage, compute, or other Cloud AI products (such as API calls).
Who can use an experiment?
In general, technically advanced users will reap the most benefit: they will be able to set up, run, evaluate, and troubleshoot our more sophisticated experiments.
Working on frontiers
Given their early-stage nature, experiments are best suited for customers who are also working in experimental stages rather than in production environments. If you're exploring new uses of data, building prototypes and proofs of concept, and developing academic research, you could be a match for AI Workshop.
Willing to help
All participants must register as Google Cloud customers in order to gain access to the experiments. Usage of the experiments is governed by the Pre-GA Terms and the Confidential Information provisions (as "Google Confidential Information") of your Google Cloud Platform License Agreement.
PubMed Semantic Search
The PubMed Semantic Search experiment offers an API to submit natural language queries and receive relevant publications and highlighted evidence from the PubMed index, which includes over 30 million papers. The tool leverages state-of-the-art natural language understanding to process complex biomedical queries and help scientists and researchers quickly arrive at relevant papers and answers.
Abstractive Document Summarization
Generate natural language summaries of text documents. Submit a text document (truncated at ~800 words) and receive back a summary of ~200 words.
Hard-attention Explanations for Image Classification
Customers can use an API that classifies an image while also explaining which parts of the image contributed to the classification. Previous approaches to such explanations suffered from either brittle explanations or low classification performance.
Labeling Images with Semi-supervised Learning
Submit a few labeled images (as few as 5 per class) and a large number of unlabeled images (hundreds of thousands, or more), and receive back predicted labels for all unlabeled images.
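To make the idea concrete, here is a minimal sketch of semi-supervised labeling with very few labels per class: assign each unlabeled point the class of its nearest labeled centroid in embedding space. This is an illustrative stand-in, not the experiment's actual method or API, and the "embeddings" are toy 2-D vectors.

```python
import numpy as np

def propagate_labels(labeled_x, labeled_y, unlabeled_x):
    """Assign each unlabeled point the class of its nearest labeled centroid.

    A deliberately simple stand-in for the experiment's semi-supervised
    technique; real systems iterate on model predictions.
    """
    classes = sorted(set(labeled_y))
    centroids = np.stack([
        labeled_x[[i for i, y in enumerate(labeled_y) if y == c]].mean(axis=0)
        for c in classes
    ])
    # Distance from every unlabeled point to every class centroid.
    dists = np.linalg.norm(unlabeled_x[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Two toy "image embedding" clusters with two labeled examples each.
rng = np.random.default_rng(0)
labeled_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labeled_y = ["cat", "cat", "dog", "dog"]
unlabeled_x = rng.normal([5.0, 5.0], 0.3, size=(10, 2))
print(propagate_labels(labeled_x, labeled_y, unlabeled_x))
```

The real experiment scales this idea to hundreds of thousands of images and far richer representations.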
Automated Image Captions and Descriptions
This experiment leverages Google's state-of-the-art multimodal understanding models to describe images conceptually. These descriptions can power applications where a fluent and accurate natural language description of an image matters.
NLU Models in Google Sheets
Use cutting-edge NLU models from the comfort of Google Sheets. Write responses, select a model, select a ranking method, and send a query. Useful for bot making, games, and other semantic experiments.
Semi-Supervised Learning with Graphs
This experiment takes in a graph with a small percentage of labeled instances (seeds) and propagates labels to all unlabeled items, based on user-provided similarity weights between related items. It can be especially useful when customers have a graph of their data (or can easily generate one), but find labeling each node expensive or time consuming.
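The propagation idea can be sketched in a few lines: repeatedly replace each unlabeled node's class distribution with the weighted average of its neighbors' distributions, keeping the seed nodes clamped. This is a generic label-propagation sketch under those assumptions, not the experiment's implementation; the chain graph and weights below are toy data.

```python
import numpy as np

def label_propagation(weights, seed_labels, n_classes, iters=50):
    """Propagate seed labels over a weighted graph (illustrative sketch).

    weights: symmetric (n, n) similarity matrix.
    seed_labels: dict mapping node index -> class index for the few seeds.
    Returns a predicted class per node.
    """
    n = weights.shape[0]
    dist = np.full((n, n_classes), 1.0 / n_classes)  # uniform start
    row_norm = weights / weights.sum(axis=1, keepdims=True)
    for _ in range(iters):
        for node, cls in seed_labels.items():  # clamp the seeds
            dist[node] = np.eye(n_classes)[cls]
        dist = row_norm @ dist                 # average neighbors' beliefs
    for node, cls in seed_labels.items():
        dist[node] = np.eye(n_classes)[cls]
    return dist.argmax(axis=1)

# A 6-node chain: nodes 0 and 5 are seeds for classes 0 and 1.
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(label_propagation(w, {0: 0, 5: 1}, n_classes=2))
```

Nodes closer to seed 0 end up in class 0, and nodes closer to seed 5 in class 1, mirroring how the experiment fills in labels from a small set of seeds.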
Natural Language Emotion Classification
Understanding the emotional content of text can provide valuable insights about users or the content, especially in areas such as customer feedback, reviews, customer support, and product branding. This experiment classifies text by emotions such as joy, amusement, gratitude, surprise, disapproval, sadness, anger, and confusion.
Augmenting Data with Semantic Features
With this experiment, customers can convert sparse data (such as URLs, queries, or texts) into dense features that are semantically meaningful, easy to debug, and easy to interpret. Submit a dataset with sparse features and labels, and we return a lookup table that can be used to augment other datasets with semantic features generated from the same sparse features.
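One simple way to picture such a lookup table is target encoding: map each sparse feature value to the distribution of labels observed alongside it. This is only an illustrative analogy, assuming toy query strings and category labels, not the experiment's actual encoding.

```python
from collections import defaultdict

def build_semantic_lookup(values, labels):
    """Build a lookup table from sparse feature values to dense vectors.

    Illustrative only: each dense vector here is the distribution of labels
    seen alongside that feature value (a form of target encoding).
    """
    classes = sorted(set(labels))
    counts = defaultdict(lambda: [0] * len(classes))
    for value, label in zip(values, labels):
        counts[value][classes.index(label)] += 1
    return {value: [c / sum(cs) for c in cs] for value, cs in counts.items()}

# Sparse feature: a query string; label: which product category it led to.
queries = ["shoes", "shoes", "laptop", "shoes", "laptop"]
labels = ["apparel", "apparel", "electronics", "electronics", "electronics"]
table = build_semantic_lookup(queries, labels)
print(table["shoes"])  # label distribution for the query "shoes"
```

The resulting table can then be joined against other datasets that share the same sparse feature, which is the reuse pattern the experiment describes.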
Generating Specialized Knowledge Graphs
Knowledge graphs are valuable artifacts, but constructing one for a particular domain can be prohibitively expensive. This project aims to provide an automated means of producing a knowledge graph of entities/topics/concepts simply by analyzing a corpus of documents uploaded by the user.
Learning Effective Loss Functions
Every time someone trains a machine learning model, they need to pick a loss function to minimize over a set of training data. This experiment provides an efficient and data-driven way to learn an effective loss function, either on-the-fly during training, or based on results of complete training runs (as in traditional hyperparameter tuning).
Federated Learning Office Hours
An emerging approach to machine learning, called federated learning, enables machine learning on decentralized datasets. This approach can help to protect data privacy while also improving local speed and performance. This experiment allows you to speak with Google experts about your work in federated learning, including the use of the open-source TensorFlow Federated library.
Summarizing Multiple Short Texts
This experiment allows customers to train a summarizer: a model that ingests short pieces of text (like restaurant reviews) and outputs a canonical summary of all of those pieces (like a single summary review of that restaurant).
Differential Privacy Office Hours
Machine learning models can memorize data, which can bring about privacy challenges. But there exist techniques to measure such memorization and train models with greater levels of privacy. This experiment allows users to consult with Google experts on how to train machine learning models such that they are differentially private, using the open-source TensorFlow Privacy library.
Counting Discrete Actions in Videos
With this experiment, customers can count how many times a specific action occurs in one or more video clips. Customers bring a video clip with one cycle of the action, as well as (unlabeled) clips where this action (may) occur. For each clip they receive a count of the number of times the action occurs.
Video Label Propagator
With this experiment, customers can label each frame in many related videos by only labeling frames in one video. Our system propagates the labels from one video to all the other videos.
Label Error Detection for Images
With this experiment, customers bring their labeled training images, and we run them through a series of quality-check tools. We return a list of training images that are potentially mislabeled.
Semantic Similarity for Natural Language
For a given piece of text, find the most similar or related text items among a list of candidates. Built from correlations in natural language usage, this experiment helps to connect items based on meaning and usage rather than simple keywords.
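The core ranking step can be sketched as cosine similarity between embeddings: encode the query and each candidate as vectors, then rank candidates by the angle between them. The vectors below are hand-written toy stand-ins for what a real sentence encoder would produce.

```python
import numpy as np

def most_similar(query_vec, candidate_vecs, k=2):
    """Rank candidates by cosine similarity to the query embedding.

    Embeddings here are toy vectors; a real system would obtain them from
    a trained text-encoder model rather than hand-written numbers.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per candidate
    order = np.argsort(-scores)[:k]     # indices of the top-k candidates
    return order, scores[order]

candidates = np.array([
    [0.9, 0.1, 0.0],   # "How do I reset my password?"
    [0.1, 0.9, 0.1],   # "Store opening hours"
    [0.8, 0.2, 0.1],   # "I forgot my login credentials"
])
idx, scores = most_similar(np.array([1.0, 0.0, 0.1]), candidates)
print(idx)  # the two password-related items rank ahead of the unrelated one
```

Because similarity is computed in embedding space, the two password-related candidates rank together even though they share no keywords, which is exactly the behavior the experiment highlights.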
Mixed Integer Linear Program Solver
Solves mixed integer linear programs: optimization problems that minimize or maximize a linear objective over continuous and integer variables, subject to linear constraints. Google uses this solver every day for large-scale, business-critical optimization challenges. Applications include assignment, scheduling, packing, and flow problems.
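To show the shape of the problem the solver handles, here is a toy mixed integer program solved by brute force. Real MILP solvers use branch-and-bound and cutting planes, not enumeration; this sketch only illustrates what "continuous plus integer variables under linear constraints" means.

```python
def solve_tiny_milp():
    """Brute-force a toy mixed integer linear program (illustration only).

    Maximize 3x + 2y  subject to  x + y <= 4  and  x <= 2.5,
    where x is continuous and y is a non-negative integer.
    """
    best = (float("-inf"), None)
    for y in range(0, 5):            # enumerate the integer variable
        x = min(2.5, 4 - y)          # best feasible continuous x given y
        if x < 0:
            continue                 # no feasible x for this y
        value = 3 * x + 2 * y
        best = max(best, (value, (x, y)))
    return best

value, (x, y) = solve_tiny_milp()
print(value, x, y)
```

Production problems have thousands of variables and constraints, which is where a dedicated solver like the one in this experiment earns its keep.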
Image Classification with Confidence Scores
AI systems can be a lot more useful when we know how much we can rely on a certain prediction. This experiment generates an AI model that returns both the predicted class and a well-calibrated confidence score of the prediction.
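One common post-hoc route to better-calibrated confidence is temperature scaling: divide the model's logits by a single temperature fit on held-out data before the softmax. The sketch below assumes toy logits and an arbitrary temperature of 2.0; it is not the experiment's method, just an illustration of how calibration softens overconfident scores.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def predict_with_confidence(logits, temperature=1.0):
    """Return (predicted class index, confidence) from model logits.

    A temperature above 1 softens the distribution, reducing
    overconfident probabilities without changing the predicted class.
    """
    probs = softmax(logits, temperature)
    return int(probs.argmax()), float(probs.max())

cls, raw_conf = predict_with_confidence([4.0, 1.0, 0.5], temperature=1.0)
cls2, calibrated = predict_with_confidence([4.0, 1.0, 0.5], temperature=2.0)
print(cls, round(raw_conf, 3), round(calibrated, 3))
```

The prediction stays the same under both temperatures; only the reported confidence changes, which is the point of calibration.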
Interpretable Image Classification with Prototypes
We can better explain model predictions by surfacing the most similar items from training data. Customers bring their data, and we return a classifier that not only predicts the output, but also related examples (prototypes) that explain the decision.
Augmented Learning for Image Classification
Improve image classification performance when data is insufficient or labels are incomplete or imperfect. Bring your as-is training data and receive access to two CNN models: one trained with just your data, and a second trained with our augmented learning technique.
Turbo Image Filter
This experiment leverages the Google Vision API and embedding models to filter images for objects of interest. Potential applications include accelerating the data labeling process and swiftly sorting or archiving images.