Infrastructure options for ad servers (part 3)

This article covers the features and products of Google Cloud Platform (GCP) that you can use when building your ad-serving platform.

This article is part of a series. See the overview for the ad-tech terminology used throughout the series.

Ad serving is the process of displaying an ad (typically, the most relevant one) from one of many advertisers to a customer on a publisher's property. The property might be a website, an app, or a game.

The complexity of an ad server depends on the features that it offers. In industry terms, an ad server is a tool for publishers and advertisers to manage campaigns, ads, and ad trafficking. An ad server doesn't merely display an ad the way that a billboard displays a static ad on the roadside. Displaying ads is the core function, but ad-tech platforms such as Google Ad Manager have more functions and offer more benefits than just showing customers a unique ad.

This article looks at how to approach building a robust ad-serving platform that includes the following primary functions:

  • Managing campaigns, accounts, billing, and reporting.
  • Selecting the most relevant ad.
  • Displaying the ad to the targeted user.
  • Managing events such as impressions, clicks, or conversions.
  • Publishing relevant operations to analytic data stores.

Ad servers typically process tens of thousands of ad requests per second, with each response sent within a few milliseconds. Being able to respond to so many requests so quickly means that availability, scalability, and low latency are all key requirements of an ad-serving solution. And because no single server solution can meet all of these requirements, this article looks at the implications of designing a distributed system.

There are two different types of ad servers:

  • Sell-side: These servers enable publishers to maximize their ad revenue by managing their advertisers directly from the ad server's UI. A sell-side server often hosts the ads, but can also host the references to an ad. Some ad servers will provide a self-service UI for buyers as well.
  • Buy-side: These servers enable advertisers, marketing departments, or ad agencies to manage their ad updates. Instead of providing the publishers with the actual ads, these platform users provide a snippet of code that communicates with the buy-side ad server to retrieve the ad.

The following diagram depicts the architecture of one possible ad-server system.

Possible implementation of an ad-server system

The main entry points to the ad-server platform are served by Cloud Load Balancing:

  • To request the ad.
  • To fetch the creative. The ads are fetched from the nearest Cloud CDN cache.
  • To track events such as impressions and (unique) user actions such as clicks.

Requests to the load balancer are logged either through HTTP(S) load balancing logging and monitoring or by custom code running on the collectors. The resulting events are published to Cloud Pub/Sub and then processed by Cloud Dataflow for analytics and user profiling.

The workers that serve the requests leverage:

  • Budget information.
  • (Unique) user profiles.
  • Campaign details.
  • Counters.
  • Trained models to make selections.

Some of these choices need adapting to your specific requirements:

  • You might not want multiple, different databases for ad selection, and you might be willing to accept compromises on hierarchical or relational features for the sake of simplicity. In that case, you could use Cloud Bigtable or Cloud Spanner.
  • You might need sub-millisecond read latency when processing a bid request, and be able to accept additional operational overhead. In that case, you could use third-party regional in-memory databases with local replication where possible.
  • You might want to run the event collectors on preemptible VMs to minimize costs. If you don't need to do any online processing, you could capture your events using Stackdriver Logging and analyze them with BigQuery.

Managing campaigns

To manage campaigns, advertisers need a user frontend that consists, at a minimum, of a web UI. The frontend must also provide some reporting capabilities. For recommended options, see user frontend (in part 1).

The shared hierarchy of resources set up through this UI includes advertisers, campaigns, budgets, creatives, and billing. When you create this hierarchy, the system records a set of rules that form part of the ad selection decision process. This data is stored in the metadata management store (in part 1).

Most ad servers handle billions of ad requests per day. It is inadvisable to place this load directly on the database that stores the advertiser rules and metadata (unless you decide to use Cloud Spanner). Instead, consider one of the options covered by the heavy-read storing patterns (in part 1).
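To make this concrete, here is a minimal sketch of one possible heavy-read pattern: an in-memory copy of the campaign rules that a background thread refreshes periodically, so the request path never queries the metadata database directly. The fetch_campaign_rules() helper, its return format, and the refresh interval are hypothetical.

# Minimal sketch of a heavy-read pattern (hypothetical helper and names).
# A background thread refreshes an in-memory copy of the campaign rules so
# that the ad-serving path reads local memory instead of the metadata store.
import threading
import time

_campaign_rules = {}            # local copy read by the ad-serving path
_REFRESH_INTERVAL_SECONDS = 60

def fetch_campaign_rules():
    """Placeholder: read advertiser, campaign, and budget rules from the metadata store."""
    return {"campaign_123": {"max_cpm": 2.5, "daily_budget": 1000.0}}

def _refresh_loop():
    global _campaign_rules
    while True:
        _campaign_rules = fetch_campaign_rules()    # atomic reference swap
        time.sleep(_REFRESH_INTERVAL_SECONDS)

threading.Thread(target=_refresh_loop, daemon=True).start()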

Selecting the most relevant ad

Receiving and parsing ad requests

Requests are sent through an ad tag that is placed on the publisher's property. Ad tags look like this:

<script src="[URL_TO_YOUR_AD_SERVER]?key=value"></script>

When the tag is loaded, it triggers an ad request to the ad server. The request contains information such as HTTP headers, user agent, page context, IP address, targeting information, possibly a user identifier, and ad details, including its size.

The ad server must provide an HTTP endpoint [URL_TO_YOUR_AD_SERVER] to receive and process these requests. Recommended options are detailed in handling requests (in part 1).
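As an illustration, the following sketch shows a minimal ad-request endpoint using Flask. The parameter names, header names, and the select_ad() helper are assumptions for this example, not a fixed contract.

# Minimal sketch of an ad-request endpoint (Flask used for illustration).
# Parameter names, header names, and the select_ad() helper are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def select_ad(ad_request):
    """Placeholder for the selection logic described in the next sections."""
    return {"creative_url": "https://cdn.example.com/creative_123.jpg"}

@app.route("/ad")
def handle_ad_request():
    ad_request = {
        "user_id": request.cookies.get("uid"),                       # cookie or device ID
        "user_agent": request.headers.get("User-Agent"),
        "ip": request.headers.get("X-Forwarded-For", request.remote_addr),
        "page_url": request.args.get("page"),                        # page context from the ad tag
        "ad_size": request.args.get("size"),                         # requested creative size
    }
    return jsonify(select_ad(ad_request))

if __name__ == "__main__":
    app.run(port=8080)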

Profiling the user

Different ad servers have their own ways of selecting ads. This logic is not within the scope of this article. However, understanding the user is key to displaying relevant ads. This solution assumes that an advanced user taxonomy is a requirement.

An ad server working with many publishers is likely to recognize the same user on different properties. The (unique) user identifier could be a web cookie or a mobile device ID that the user can replace.

The ad server can enrich the information provided by the ad request (IP address, page information) with this user identifier and use it to search a data store. The ad server must be able to look up a user ID across millions of rows, possibly spanning terabytes of data, return an answer within milliseconds, and keep management overhead to a minimum.

(Unique) user profile store (in part 1) provides an overview. Although you can use any of the options mentioned in that article, we recommend Cloud Bigtable for ad serving (see the sketch after this list) because it:

  • Is a wide-column NoSQL database that can store petabytes of data.
  • Retrieves rows in single-digit milliseconds.
  • Supports regional replication.
  • Is fully managed.
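A minimal lookup sketch using the Cloud Bigtable Python client is shown below. The project, instance, table, and column-family names are illustrative, as is the assumption that user segments are stored one per column qualifier.

# Minimal sketch of a (unique) user profile lookup in Cloud Bigtable.
# Project, instance, table, and column-family names are illustrative.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("profile-instance").table("user_profiles")

def lookup_profile(user_id):
    row = table.read_row(user_id.encode("utf-8"))        # row key = user identifier
    if row is None:
        return {}
    segment_cells = row.cells.get("segments", {})        # column family "segments"
    return {qualifier.decode(): cells[0].value.decode()
            for qualifier, cells in segment_cells.items()}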

Selecting campaigns and ads

The ad selection process is performed in a few steps, as described in ad selection (in part 1):

  1. Profile the user by using the (unique) user profile store (in part 1).
  2. Select the matching campaigns and ads by using a copy of the metadata management store data (in part 1). Copies are made by using one of the heavy-read storing patterns (in part 1).
  3. Filter the relevant campaigns and ads by using the context stores (in part 1).
  4. Choose an ad.

After the ad server selects campaigns and ads that match user segments, it validates them against the context store data values—for example, frequency capping, blacklists, or exhausted budget. The ad server then selects the best ad out of the remaining campaigns and ads. The logic of that final selection is entirely up to you. For example, you can weigh campaigns, selecting the one with the highest CPM, or the one with the largest remaining budget. You could also calculate the ad's CTR potential, or combine parameters for a blended weighting.
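For example, a blended weighting of CPM and remaining budget could look like the following sketch; the weights and field names are illustrative.

# Minimal sketch of the final choice: weight eligible ads by CPM and by the
# share of budget still remaining. Weights and field names are illustrative.
def choose_ad(eligible_ads, cpm_weight=0.7, budget_weight=0.3):
    def score(ad):
        budget_share = ad["remaining_budget"] / ad["daily_budget"]
        return cpm_weight * ad["cpm"] + budget_weight * budget_share
    return max(eligible_ads, key=score) if eligible_ads else None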

Some of the more advanced selection systems use machine learning to recommend ads on a per-user or per-segment basis. Machine learning workflows are not detailed in this article, but you can read more about building machine learning capabilities (in part 2).

Serving the selected ad to the targeted user

Up to this point, the detailed steps could be considered as belonging to the publisher-side ad-serving workload. After the ad is selected, however, the ad server returns a link or a piece of HTML code, which can point to either:

  • A location on the publisher's ad server.
  • An external location that might belong to the advertiser. Such an ad server is considered a buy-side ad server, and implements a model known as third-party ad serving (3PAS). With this location, advertisers can update their ads without having to communicate with the publisher's ad server. This location also lets them manage their own tracking.

Whether marketers prefer to host their own creatives or have them hosted on a publisher's ad server, the processes leading up to the serving of the ad remain the same.
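As a rough illustration, the returned HTML might be built as in the following sketch. The field names (click_url, creative_url, third_party_tag, and so on) are hypothetical.

# Rough sketch of the returned HTML. The first branch delegates rendering to
# a buy-side (3PAS) ad server; the second links a creative hosted behind the
# publisher's CDN and adds a 1x1 impression tracker. Field names are hypothetical.
def render_ad_markup(ad):
    if ad.get("third_party_tag"):
        return ad["third_party_tag"]        # advertiser-provided snippet, returned as-is
    return (
        f'<a href="{ad["click_url"]}">'
        f'<img src="{ad["creative_url"]}" width="{ad["width"]}" height="{ad["height"]}">'
        f'</a>'
        f'<img src="{ad["impression_tracker_url"]}" width="1" height="1">'
    )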

Storing creatives

Creatives are media files such as videos or images. Storing these items requires an object store that is both scalable and highly available.

Cloud Storage is the recommended store. It is built to host petabytes of raw or unstructured data, and it offers options for backup and archiving. Using Cloud Storage alone, you can manage the lifecycle of your creatives, moving them from hot to cold storage with Nearline and Coldline to reduce costs.
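For example, a lifecycle policy that moves older creatives to colder storage classes could be configured with the Cloud Storage Python client as in the following sketch; the bucket name and age thresholds are illustrative.

# Minimal sketch of a lifecycle policy that moves older creatives to colder
# storage classes. Bucket name and age thresholds are illustrative.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-creatives-bucket")
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)    # rarely served creatives
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)   # archived creatives
bucket.patch()                                                     # apply the new rules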

Delivering creatives

Even though object storage such as Cloud Storage is globally available, it tends to add networking latency due to distance. Also, object storage can be more expensive than serving ads through a content delivery network (CDN).

Because ad pixels and creatives are often public content, you can use one of two GCP solutions for caching content in Cloud Storage:

  • Making the objects public: With cache control, Cloud Storage can use the existing Google infrastructure to cache content, but with limited CDN-like features and no way to force a creative to expire globally (a minimal sketch of this option appears after this list).

  • Pairing Cloud Load Balancing and Cloud Storage: Content hosted on Cloud Storage can use Cloud Load Balancing with Cloud CDN enabled. Compared to Cloud Storage egress, Cloud CDN offers discounted networking pricing, as well as support for signed URLs, cache keys, and cache invalidation.

    The following diagram illustrates this second solution.

    Pairing Cloud Load Balancing and Cloud Storage
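Here is a minimal sketch of the first option, which marks a creative public and sets Cache-Control metadata so that Google's edge infrastructure can cache it; the bucket and object names are illustrative.

# Minimal sketch of the first option: make a creative public and set
# Cache-Control so Google's edge infrastructure can cache it.
# Bucket and object names are illustrative.
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-creatives-bucket").blob("creative_123.jpg")
blob.cache_control = "public, max-age=3600"    # cacheable for up to one hour
blob.patch()                                   # persist the metadata change
blob.make_public()                             # allow anonymous reads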

For performance comparisons against other providers, take a look at these third-party Cedexis reports.

Managing ad events

Different types of events are useful to the system, as in the following examples (a hypothetical event payload is sketched after the list):

  1. Ad request: Upon receiving an ad request from a cooking website for user_ABC, the system can improve the user_ABC segments by adding something like Cooking > Indian cuisine or some other piece of information that reflects user interest.
  2. Ad impression: In a CPM model, an ad is shown to a targeted user. The system records the impression so the remaining budget can be updated.
  3. Ad click: A user clicks on an ad. This action can influence ad-optimization results, especially if several (unique) users click the same ad within a set period of time.
  4. Conversion: A user clicks an ad and performs an expected action on the advertiser's property, such as buying something or signing up.
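A hypothetical event payload, shared by ad requests, impressions, clicks, and conversions, might look like the following sketch; all field names are illustrative.

# Hypothetical event payload shared by ad requests, impressions, clicks, and
# conversions. All field names are illustrative.
import json
import time

def build_event(event_type, user_id, campaign_id, ad_id, extra=None):
    return json.dumps({
        "type": event_type,                    # "ad_request" | "impression" | "click" | "conversion"
        "user_id": user_id,                    # web cookie or mobile device ID
        "campaign_id": campaign_id,
        "ad_id": ad_id,
        "timestamp": int(time.time() * 1000),  # event time in milliseconds
        "extra": extra or {},                  # for example, page context or conversion value
    })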

When handling events, finding the right balance between price, data freshness, and operational overhead is important at every step of the process, especially when:

  • Collecting and ingesting the high volume and velocity of events that come from impressions, clicks, conversions, and ad requests.
  • Processing events in real time or offline in order to extract values and calculate counters such as budgets, caps, and click-through rates.
  • Exporting results to various stores in real time or offline in order to reconcile billing or enforce proper serving operations.

For more details, see event lifecycle (in part 2).

We recommend the following architecture for capturing and processing real-time events:

Architecture for capturing and processing real-time events

With this architecture:

  • Impressions and clicks trigger requests to an HTTP endpoint, served either by static code hosted on Cloud Storage or by a web server collector hosted on Google Kubernetes Engine (GKE).
  • Request events are logged through the load balancer HTTP logs, or the events captured by the collectors are published to Cloud Pub/Sub.
  • Cloud Dataflow subscribes to the Cloud Pub/Sub topic and processes the events.
  • Cloud Dataflow then exports the raw and/or processed events to BigQuery for analytics and to the intelligence (context) database to update the remaining budgets. A minimal pipeline sketch follows this list.
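The Cloud Dataflow step that writes raw events to BigQuery could look like the following minimal sketch, written with the Apache Beam Python SDK. The subscription path, table name, and schema are illustrative, and the budget-update branch is omitted.

# Minimal sketch of the Cloud Dataflow step (Apache Beam Python SDK): read
# events from Cloud Pub/Sub and write them to BigQuery. The subscription,
# table, and schema are illustrative; the budget-update branch is omitted.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | "ReadEvents" >> beam.io.ReadFromPubSub(
           subscription="projects/my-project/subscriptions/ad-events")
     | "ParseJson" >> beam.Map(json.loads)
     | "SelectFields" >> beam.Map(lambda e: {
           "type": e["type"], "user_id": e["user_id"],
           "campaign_id": e["campaign_id"], "ad_id": e["ad_id"],
           "timestamp": e["timestamp"]})
     | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
           table="my-project:ads.raw_events",
           schema="type:STRING,user_id:STRING,campaign_id:STRING,"
                  "ad_id:STRING,timestamp:INTEGER"))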

The choice between hosting static code on Cloud Storage or GKE collectors depends on your requirements:

  • If you must perform additional backend actions, use GKE.
  • If you are concerned about operational overhead, use Cloud Storage.
  • If you are okay with occasionally having requests retried, but need to reduce the costs of running GKE or Compute Engine collectors, use preemptible VMs, as shown in the compute platform section (in part 1).
