Creative analysis at scale with Google Cloud and machine learning

This article is intended for advertising professionals who are interested in learning how to automate creative insights by using the machine learning (ML) and data warehousing capabilities of Google Cloud Platform (GCP). The article explores how to take advantage of cloud technology in order to quickly receive creative insights, effectively measure the quality of those creatives, and then tailor your advertising efforts accordingly. Learn how to extract and process visual metadata from ads (your creatives) at scale so that you can better understand which images and videos resonate with your customers.

This article outlines a system to:

  • Use the machine learning APIs for parsing creatives.
  • Implement a Cloud Pub/Sub pipeline to enable processing at scale.
  • Query advertising data and image content for analytics.

The system described here can help you:

  • Enable custom analysis of raw data by internal and external data science teams.
  • Display a macro view of creative elements and relevant statistics across a network, advertiser, or campaign for interactive analysis.
  • Provide scalable analyses using machine learning models that you can share with stakeholders who are focused on optimizing their creatives.

This article assumes that you are using Google Marketing Platform for your advertising.

Introduction

Advertisers increasingly focus on targeting and attribution, even though as much as 75% of an ad's impact is driven by the quality of the creative. Despite this, most analytics efforts still focus on when and where the creative ran instead of on its content. This isn't due to a lack of effort, but to how difficult creatives are to analyze.

To get any insights from the data, you must first manually tag individual creatives. As a result, you can analyze only a small, hand-picked set of creatives, which is expensive and time consuming. In addition, the upload process for creative assets doesn't lend itself to a consistent taxonomy that includes the creative ID, so connecting performance metrics back to a specific creative is often impossible.

Ideally, you want to be able to answer questions like the following:

  • Does showing the brand logo within the first 5 seconds of a video increase brand recall?
  • Do creatives with natural, outdoor images like trees or beaches outperform creatives with city images?
  • What keywords occur in your creatives and how do they impact metrics?
  • Do creatives with multiple happy faces lead to higher engagement?

Today, the only way to answer these questions is to inspect each creative and answer them manually, which is neither fast nor scalable. The solution presented here rethinks this process so that analysts can process all of their creatives in a few days. It dramatically increases the breadth and depth of the available data, at a scale that would be infeasible or prohibitively expensive with manual tagging.

This article outlines a way to automate these creative insights at scale using GCP technologies such as Cloud Pub/Sub, App Engine, Cloud Vision, BigQuery, and Data Studio.

Data pipeline for creative insights

The data pipeline runs primarily on App Engine, using Cloud Pub/Sub as the message queue service to process many creatives at a time. This pipeline also reads and writes to Cloud Datastore as a way of coordinating parallel tasks across various App Engine workers during the pipeline phases.

In the pipeline, creative images and videos are fetched from Google Marketing Platform and copied to Cloud Storage. The images and videos are also run through the Cloud Vision and Video Intelligence APIs. The raw data coming out of those services is then written to BigQuery. To help you understand how to implement such a pipeline, the next section looks at each component in greater detail.
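As a concrete illustration, the copy step might look like the following Python sketch, which downloads a creative and uploads it to a Cloud Storage bucket. The project ID, bucket name, and creative URL are placeholder assumptions; in practice, the list of creatives to copy would come from your Campaign Manager account.

```python
# Minimal sketch: copy a creative asset into Cloud Storage for later analysis.
# The project ID, bucket name, and creative URL below are placeholders.
import requests
from google.cloud import storage


def copy_creative_to_gcs(creative_id, creative_url, bucket_name, project="my-gcp-project"):
    """Downloads a creative and uploads it to gs://<bucket_name>/creatives/<creative_id>."""
    response = requests.get(creative_url, timeout=30)
    response.raise_for_status()

    client = storage.Client(project=project)
    blob = client.bucket(bucket_name).blob(f"creatives/{creative_id}")
    blob.upload_from_string(
        response.content,
        content_type=response.headers.get("Content-Type", "application/octet-stream"),
    )
    return f"gs://{bucket_name}/creatives/{creative_id}"


if __name__ == "__main__":
    # Hypothetical example values; a real pipeline would read these from Campaign Manager.
    uri = copy_creative_to_gcs(
        creative_id="12345",
        creative_url="https://example.com/cbalm_lipgloss_300x300.png",
        bucket_name="my-creative-assets",
    )
    print(f"Copied creative to {uri}")
```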

Automating a solution

The proposed solution has three stages:

  1. Take advantage of existing machine learning capabilities through the Cloud Vision and Video Intelligence APIs.
  2. Use Cloud Pub/Sub to create a scalable pipeline to process the creatives at your disposal.
  3. Store, query, and visualize the resulting insights by using BigQuery and Data Studio.

Step 1: Take advantage of machine learning APIs

Consider the following example creative:

ad for a skin balm

In Campaign Manager, you get the following ad-related information:

  • Creative ID
  • Creative size
  • Advertising metrics such as click-through rate (CTR). A full list of metrics is available in Campaign Manager.
| Creative ID | Creative name | Creative size | Impressions | Clicks | Click-through rate |
|---|---|---|---|---|---|
| 12345 | cbalm_lipgloss_300x300.png | 300x300 | 80,734,829 | 174,696 | 0.2% |

Currently, naming conventions are your best option for extracting information about the content of the creative itself. By contrast, this article uses the Cloud Vision API to automatically extract key features from this creative as shown in the following figure:

extracting key features from a creative by using the Cloud Vision API

To understand the nuances of the content, you can try this extraction for yourself by uploading your creative image.

In an instant, you've used an API to analyze the content of the creative, and you can begin to understand how the quality of these creatives impacts ad recall and performance. It becomes much easier for you to ask questions of the dataset.
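For instance, the following Python sketch calls the Cloud Vision API on a creative that has already been copied to Cloud Storage and returns the labels, logos, text, and faces it finds. The gs:// URI is a placeholder, and the snippet assumes the google-cloud-vision client library (version 2 or later) with application default credentials.

```python
# Minimal sketch: extract labels, logos, text, and faces from one creative
# with the Cloud Vision API. The gs:// URI below is a placeholder.
from google.cloud import vision


def annotate_creative(gcs_uri):
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))

    labels = client.label_detection(image=image).label_annotations
    logos = client.logo_detection(image=image).logo_annotations
    texts = client.text_detection(image=image).text_annotations
    faces = client.face_detection(image=image).face_annotations

    return {
        "labels": [(label.description, label.score) for label in labels],
        "logos": [(logo.description, logo.score) for logo in logos],
        # The first text annotation, if any, is the full block of text in the creative.
        "text": texts[0].description if texts else "",
        "face_count": len(faces),
    }


if __name__ == "__main__":
    result = annotate_creative("gs://my-creative-assets/creatives/12345")
    print(result)
```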

Step 2: Set up Cloud Pub/Sub to automate the creative pipeline

To automate the creative pipeline at scale, you can take advantage of Cloud Pub/Sub, a messaging service that ingests event streams and delivers them asynchronously to subscribers. Relying on Cloud Pub/Sub for delivery of event data frees you to efficiently process thousands of creatives, with the results ultimately landing in BigQuery for analysis. The following diagram shows an outline of the pipeline setup used to implement the creative analysis.

Technology stack and architecture

architecture diagram of the creative analysis pipeline

The preceding architecture diagram refers to the following GCP components.

| GCP component | Pipeline role |
|---|---|
| Cloud Pub/Sub | Asynchronous job and message queue. In this instance, we suggest creating four topics to manage creatives, jobs, creative metadata, and Vision API data. Each topic has associated subscribers that are triggered when a new message arrives. |
| Cloud Storage | Media file storage for the images and videos used in the pipeline. |
| App Engine | Computing resources to process business logic. |
| Cloud Datastore | Basic telemetry on jobs, coordination of actions across pipeline steps, and message aggregation. |
| Stackdriver Logging | Logging resource for health checks, debugging, and status. |
| BigQuery | Data store for the resulting pipeline outputs; feeds into Data Studio for analysis. |
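To make the Cloud Pub/Sub piece concrete, the following sketch creates the four suggested topics and publishes one message per creative so that downstream App Engine workers can process creatives in parallel. The project ID, topic names, and message schema are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch: create the suggested Pub/Sub topics and publish one message
# per creative. Project ID, topic names, and message schema are assumptions.
import json

from google.api_core.exceptions import AlreadyExists
from google.cloud import pubsub_v1

PROJECT = "my-gcp-project"
TOPICS = ["creatives", "jobs", "creative-metadata", "vision-api-data"]

publisher = pubsub_v1.PublisherClient()


def create_topics():
    for topic in TOPICS:
        topic_path = publisher.topic_path(PROJECT, topic)
        try:
            publisher.create_topic(request={"name": topic_path})
        except AlreadyExists:
            pass  # Topic was created on a previous run.


def publish_creative(creative_id, gcs_uri):
    """Publishes a single-creative message to the 'creatives' topic."""
    topic_path = publisher.topic_path(PROJECT, "creatives")
    message = {"creative_id": creative_id, "gcs_uri": gcs_uri}
    future = publisher.publish(topic_path, data=json.dumps(message).encode("utf-8"))
    return future.result()  # Blocks until the message is accepted.


if __name__ == "__main__":
    create_topics()
    publish_creative("12345", "gs://my-creative-assets/creatives/12345")
```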

Key considerations for the pipeline

  • The start and end of the pipeline are monolithic units of work. The middle of the pipeline processes creatives in parallel, where each Cloud Pub/Sub message carries a single creative (a minimal worker sketch follows this list).
  • Jobs are treated as long-running processes (can take hours/days per job).
  • The pipeline runs ad hoc, so new data in Google Marketing Platform isn't reflected until the next run.
  • The pipeline requires appropriate permissions for the service account associated with the GCP project running this infrastructure, both from the Google Marketing Platform properties (for example, Campaign Manager) and from the source and destination GCP projects for services such as Cloud Storage and BigQuery.
  • The pipeline supports processing a single creative as well as batch updates.
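For the middle of the pipeline, each App Engine worker can be a small push-subscription handler that receives one Pub/Sub message per creative and records its progress in Cloud Datastore. The following Flask sketch shows the general shape; the route, the Datastore kind, and the commented-out process_creative() call are placeholder assumptions.

```python
# Minimal sketch of an App Engine push handler: one Pub/Sub message = one creative.
# Route name, Datastore kind, and the process_creative() call are assumptions.
import base64
import json

from flask import Flask, request
from google.cloud import datastore

app = Flask(__name__)
db = datastore.Client()


@app.route("/pubsub/creatives", methods=["POST"])
def handle_creative_message():
    envelope = request.get_json()
    payload = json.loads(base64.b64decode(envelope["message"]["data"]).decode("utf-8"))
    creative_id = payload["creative_id"]

    # Record basic telemetry so other pipeline steps can see this creative's status.
    entity = datastore.Entity(key=db.key("CreativeTask", creative_id))
    entity.update({"status": "processing", "gcs_uri": payload["gcs_uri"]})
    db.put(entity)

    # Placeholder for the real work: call the Vision API and write results to BigQuery.
    # process_creative(creative_id, payload["gcs_uri"])

    entity["status"] = "done"
    db.put(entity)

    return ("", 204)  # Any 2xx response acknowledges the message.
```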

Step 3: Visualize creative insights with Data Studio and BigQuery

For preliminary analysis, you can use the raw data from the Cloud Vision and Video Intelligence APIs directly. For mathematical modeling and machine learning, however, we recommend that you feature-engineer this data into a suitable format. This solution uses a modified one-hot approach, expressing each feature of the creative as a number from 0 through 1. The following table shows some examples of how this data gets converted; a code sketch of these conversions follows the table.

| API data for each creative | One-hot representation |
|---|---|
| Set of plain-text labels describing what content was seen in the creative (for example, beach or car), along with a confidence score (0 to 1) for each label. | The feature set is the entire dictionary of labels across all creatives; each creative gets a 0 if a label was not present, and the confidence score (0 to 1) if it was. |
| Logo size in pixels, and location of its top-left corner in pixels. | One feature for the logo's relative size, calculated as (logo width × height) / (creative width × height). Four more features, one for each quadrant, with 1 if the logo was in that quadrant and 0 otherwise. |
| List of all faces in the creative. | One feature for the number of faces, normalized from 0 through 1 with a logistic function such that 0 represents no faces, 0.5 represents 1 or 2 faces, and 0.99 represents a large crowd. |
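The following Python sketch shows one plausible way to implement these three conversions. The label vocabulary, the logistic steepness, the use of the logo's center point to assign a quadrant, and the field names are assumptions made for illustration, not part of a prescribed schema.

```python
# Minimal sketch of the feature engineering described above.
# Label vocabulary, logistic steepness, and field names are illustrative assumptions.
import math


def label_features(creative_labels, vocabulary):
    """One column per label in the vocabulary: confidence score if present, else 0."""
    scores = dict(creative_labels)  # for example {"beach": 0.92, "car": 0.71}
    return {f"label_{label}": scores.get(label, 0.0) for label in vocabulary}


def logo_features(logo_box, creative_width, creative_height):
    """Relative logo size plus one indicator per quadrant of the creative."""
    x, y, w, h = logo_box  # top-left corner and size, in pixels
    features = {"logo_relative_size": (w * h) / (creative_width * creative_height)}
    # Assumption: assign the quadrant based on where the logo's center falls.
    center_x, center_y = x + w / 2, y + h / 2
    quadrant_origins = [(0, 0), (creative_width / 2, 0),
                        (0, creative_height / 2), (creative_width / 2, creative_height / 2)]
    for quadrant, (left, top) in enumerate(quadrant_origins, start=1):
        in_quadrant = (left <= center_x < left + creative_width / 2
                       and top <= center_y < top + creative_height / 2)
        features[f"logo_quadrant_{quadrant}"] = 1.0 if in_quadrant else 0.0
    return features


def face_count_feature(num_faces, steepness=1.5):
    """Logistic squashing: 0 faces -> 0, 1 or 2 faces -> about 0.5, a crowd -> near 1."""
    if num_faces == 0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-steepness * (num_faces - 1.5)))
```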

When these steps are completed, you can use BigQuery to pull together information about a creative from the Cloud Vision API (Label, Score, Color, Prominence) along with the data from Google Marketing Platform, specifically Campaign Manager (Creative ID, Click-through rate). Here's an example table of information; a sample query follows it.

| Creative ID | Label | Score | Color | Prominence | Click-through rate |
|---|---|---|---|---|---|
| 12345 | pirate | 0.7 | red | 0.8 | 0.2% |
| | hat | 0.8 | white | 0.1 | |
| 67890 | torso | 0.6 | white | 0.4 | 0.3% |
| | horse | 0.9 | sky blue | 0.4 | |
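A query that produces this kind of table might look like the following sketch, which uses the BigQuery client library for Python. The dataset, table, and column names (for example, creative_insights.vision_labels) are hypothetical; substitute the names your own pipeline writes to.

```python
# Minimal sketch: join Vision API labels with Campaign Manager metrics in BigQuery.
# The dataset, table, and column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

QUERY = """
SELECT
  labels.creative_id,
  labels.label,
  labels.score,
  labels.color,
  labels.prominence,
  metrics.click_through_rate
FROM
  `my-gcp-project.creative_insights.vision_labels` AS labels
JOIN
  `my-gcp-project.creative_insights.campaign_manager_metrics` AS metrics
ON
  labels.creative_id = metrics.creative_id
ORDER BY
  labels.creative_id, labels.score DESC
"""

for row in client.query(QUERY):
    print(row.creative_id, row.label, row.score, row.color,
          row.prominence, row.click_through_rate)
```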
Now you can turn this data into creative insights and answer the questions posed at the beginning of this article.

Start by asking: Do creatives with natural, outdoor images like trees or beaches outperform creatives with city images? Using anonymized data from a car manufacturer, the following graph plots the number of occurrences of a label against the creative's performance index.

graph that plots the number of occurrences of a label against the creative's performance index

  • In the upper-right quadrant of the graph, the labels engineering, blueprint, and schematic occur frequently and perform well. The results suggest that showing engineering diagrams and blueprints tells the customer that the car is a precise and well-engineered product.
  • In the bottom-right quadrant, the labels executive and dress shoes occur frequently but perform poorly. You might choose to avoid these types of creatives in the future.
  • In the upper-left quadrant, the labels journey, forest, people, body, and sunglasses don't occur frequently, but they outperform their peers. These labels show opportunities that you might have overlooked. For example, journey + forest might evoke ads with a winding road and breathtaking scenery, and people + body + sunglasses might suggest that the human model is important. Contrast those labels with dress shoes + executive, and you can see that a casual image works better for this brand's ad campaign.

Now ask yourself another question: What keywords occur in your creatives, and how do they impact metrics? Using the same data from BigQuery, you can create the following graph, which plots the number of occurrences of certain keywords against the performance of their corresponding creatives. As the graph shows, creatives with the keywords now and sale tend to do better than creatives with exclusive or off.

creatives with keywords "now" and "sale" tend to do better than creatives with "exclusive" or "off"

Using these insights, you can better understand the impact of all of your creatives and can make better decisions around creative deployment and testing.
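If you want to reproduce this kind of chart yourself, one rough approach is to aggregate label occurrences and average performance in BigQuery and then plot the result, for example with pandas and matplotlib as sketched below. The table and column names are again hypothetical, and the performance index stands in for whichever normalized metric you choose.

```python
# Minimal sketch: plot label occurrence counts against average creative performance.
# Table and column names are hypothetical; requires pandas, matplotlib, and db-dtypes.
import matplotlib.pyplot as plt
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

QUERY = """
SELECT
  labels.label,
  COUNT(*) AS occurrences,
  AVG(metrics.performance_index) AS avg_performance
FROM
  `my-gcp-project.creative_insights.vision_labels` AS labels
JOIN
  `my-gcp-project.creative_insights.campaign_manager_metrics` AS metrics
ON
  labels.creative_id = metrics.creative_id
GROUP BY
  labels.label
"""

df = client.query(QUERY).to_dataframe()

ax = df.plot.scatter(x="occurrences", y="avg_performance")
for _, row in df.iterrows():
    ax.annotate(row["label"], (row["occurrences"], row["avg_performance"]), fontsize=7)
plt.xlabel("Number of occurrences")
plt.ylabel("Performance index")
plt.savefig("label_performance.png")
```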

What's next
