
I/O Adventure Google Cloud architecture

July 7, 2022
https://storage.googleapis.com/gweb-cloudblog-publish/images/A9YucjGLu5HUHVf.max-2600x2600.png
Valentin Deleplace

Developer Advocate

How Google Cloud infrastructure powers the Google I/O Adventure online conference experience

Since 2020, many conferences have moved online – either fully or partially – as organizers, presenters, and attendees all reimagine how we approach our life and our work. Google I/O Adventure is a virtual conference experience that brings some of the best parts of real-world events into the digital world.


Inside I/O Adventure, event attendees can see product demos, chat with Googlers and other attendees, earn virtual swag, engage with the developer community, create a personal avatar, and look for easter eggs.


This post details how we’re using Google Cloud to power the I/O Adventure experience.


The frontend consists of static assets that would be sufficient for the attendees to enjoy the experience solo, in a sort of “offline mode”.

https://storage.googleapis.com/gweb-cloudblog-publish/images/image9_OWYUq4i.max-2000x2000.png

The graphics and animations in the browser are rendered using popular libraries: React, PixiJS, GSAP, and Spine.


If the experience offered nothing more than this “offline mode” containing only static assets and links to external resources, then a minimal web server would be sufficient for the backend.
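
For illustration, a minimal sketch of such a static-asset backend in Go could look like the following; the ./public directory and port are placeholders, not the actual I/O Adventure hosting setup.

```go
// Minimal sketch of a static-asset web server, assuming the frontend
// bundle lives in ./public. Illustration only, not the actual
// I/O Adventure hosting setup.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve every file in ./public (HTML, JS, images, sprite sheets, ...).
	fs := http.FileServer(http.Dir("./public"))
	http.Handle("/", fs)

	log.Println("serving static assets on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```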

https://storage.googleapis.com/gweb-cloudblog-publish/images/image3_UJzRcZX.max-1300x1300.png

Of course it’s more fun to be immersed in the same world as other attendees, and to interact with them by text chat and voice chat! 

https://storage.googleapis.com/gweb-cloudblog-publish/images/image13_kzjfKuT.max-2000x2000.png

For this multiplayer online experience, we needed a more sophisticated backend, with game servers deployed as stateful pods in Google Kubernetes Engine (GKE).

https://storage.googleapis.com/gweb-cloudblog-publish/images/image2_r5xdMWJ.max-1400x1400.png

The conference world map is large, with 12 different zones:

https://storage.googleapis.com/gweb-cloudblog-publish/images/image1_4gA6Akt.max-2000x2000.png

Each zone of the map is powered by a different, independent pod:

https://storage.googleapis.com/gweb-cloudblog-publish/images/image4_j4AGmIA.max-2000x2000.png

This means that any given attendee is connected to a single game server, depending on their zone (their location in the virtual world).


When there is high traffic, attendees are dispatched to one of several shards for each zone:
https://storage.googleapis.com/gweb-cloudblog-publish/images/image10_2BOAedO.max-2000x2000.png

For I/O’22 we decided to overprovision by launching many shard servers before the event started, ready to be used immediately when needed. The scalability strategy was simply to fill a shard with attendees until the capacity threshold was reached, and then to start using the next, empty shard.
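
As a sketch of this “fill a shard, then spill over to the next one” strategy, the dispatcher can simply walk a zone’s pre-provisioned shards in order and pick the first one below capacity. The types, capacity threshold, and shard names below are assumptions for illustration; the post doesn’t describe the actual data structures.

```go
// Hypothetical sketch of the "fill a shard, then use the next one"
// dispatch strategy. Capacity, names, and counts are illustrative assumptions.
package dispatch

const shardCapacity = 150 // assumed per-shard attendee threshold

type Shard struct {
	Name      string // e.g. "zone-cloud-shard-3"
	Attendees int    // current number of connected attendees
}

// pickShard returns the first shard of a zone that still has room,
// so earlier shards fill up before later (empty) ones start being used.
func pickShard(zoneShards []Shard) (Shard, bool) {
	for _, s := range zoneShards {
		if s.Attendees < shardCapacity {
			return s, true
		}
	}
	return Shard{}, false // all pre-provisioned shards are full
}
```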


Shard servers are stateful. Each of them powers its own small world autonomously, requiring minimal communication with the other shards. The state of the shard (such as the current position of its attendees) is maintained in memory by the shard server executable, which is written in Go.
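
The shard server’s Go source isn’t shown in this post, but an in-memory world state of this kind often boils down to a map guarded by a mutex. The types and field names below are assumptions, purely for illustration.

```go
// Illustrative sketch of per-shard, in-memory state held by the Go shard
// server process. The actual server's data model is not public; types and
// fields here are assumptions.
package shard

import "sync"

type Position struct {
	X, Y float64
}

type ShardState struct {
	mu        sync.RWMutex
	positions map[string]Position // attendee ID -> current position
}

func NewShardState() *ShardState {
	return &ShardState{positions: make(map[string]Position)}
}

// Move records an attendee's latest position; it would be called when the
// server receives a movement action over the attendee's WebSocket connection.
func (s *ShardState) Move(attendeeID string, p Position) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.positions[attendeeID] = p
}
```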


The shard servers share some information (for example, the number of attendees connected to a given shard) with a central server, which is responsible for routing new attendees to a shard. This information is maintained in a global Memorystore for Redis instance.
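
As a hedged sketch of that reporting path, a shard server could periodically write its attendee count to the shared Memorystore for Redis instance using the go-redis client. The key layout, reporting interval, and Memorystore address are assumptions.

```go
// Hypothetical sketch: each shard server periodically reports its attendee
// count to the shared Memorystore for Redis instance, where the central
// routing server can read it. Key layout, interval, and address are assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

// reportAttendeeCount writes the shard's current attendee count every few
// seconds. A short TTL lets a stale entry disappear if the shard stops reporting.
func reportAttendeeCount(ctx context.Context, rdb *redis.Client, shardID string, count func() int) {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	key := fmt.Sprintf("shard:%s:attendees", shardID) // assumed key layout
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := rdb.Set(ctx, key, count(), 10*time.Second).Err(); err != nil {
				log.Println("report failed:", err)
			}
		}
	}
}

func main() {
	// Placeholder address of the Memorystore for Redis instance.
	rdb := redis.NewClient(&redis.Options{Addr: "10.0.0.3:6379"})
	count := func() int { return 42 } // would read the live in-memory shard state
	reportAttendeeCount(context.Background(), rdb, "zone-cloud-shard-1", count)
}
```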

https://storage.googleapis.com/gweb-cloudblog-publish/images/image6_r7s7our.max-2000x2000.png

Once an attendee’s client browser has been assigned to a shard, it establishes a WebSocket connection with the shard server and communicates bidirectionally throughout the experience, sending attendee actions and receiving environment state updates.
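
The wire protocol isn’t shown in this post, but the server side of such a connection can be sketched in Go with the widely used gorilla/websocket package (the library choice and message format are assumptions, not necessarily what the real shard server uses).

```go
// Illustrative sketch of a shard server's WebSocket endpoint: it reads
// attendee actions and sends back updates. The library choice
// (gorilla/websocket) and the message handling are assumptions.
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// The real deployment would restrict allowed origins.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func handleAttendee(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade failed:", err)
		return
	}
	defer conn.Close()

	for {
		// Read an attendee action (movement, chat message, ...).
		_, action, err := conn.ReadMessage()
		if err != nil {
			return // attendee disconnected
		}
		// The real server would update the shard state and broadcast
		// environment updates to nearby attendees; here we just echo.
		if err := conn.WriteMessage(websocket.TextMessage, action); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", handleAttendee)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```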

https://storage.googleapis.com/gweb-cloudblog-publish/images/image8_DCSQsnb.max-1300x1300.png

Each GKE Node has a local Redis instance used to communicate with a Voice server.
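
The post doesn’t detail the messages exchanged with the voice server; as a purely hypothetical sketch, a shard could publish voice-room membership changes to the node-local Redis instance via pub/sub, with the voice server subscribing. Channel name and payload are invented for illustration.

```go
// Purely hypothetical sketch of shard <-> voice-server messaging over the
// node-local Redis instance, using Redis pub/sub. Channel name and payload
// format are invented for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	local := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // node-local Redis

	// Voice-server side: listen for room membership updates.
	sub := local.Subscribe(ctx, "voice:rooms")
	go func() {
		for msg := range sub.Channel() {
			fmt.Println("voice server received:", msg.Payload)
		}
	}()
	time.Sleep(100 * time.Millisecond) // let the subscription settle (demo only)

	// Shard-server side: announce that an attendee joined a voice room.
	local.Publish(ctx, "voice:rooms", `{"attendee":"a123","room":"zone-cloud-table-4","op":"join"}`)
	time.Sleep(time.Second) // give the subscriber time to print (demo only)
}
```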

https://storage.googleapis.com/gweb-cloudblog-publish/images/image7_MzPHOlo.max-1800x1800.png
https://storage.googleapis.com/gweb-cloudblog-publish/images/image14_3sJJxL3.max-1700x1700.png

To simplify the architecture, all of the servers are located in the same Google Cloud region (us-central1). This design choice provides low-latency communication among all of the server components. It also means that attendees in Europe, Africa, Asia, Oceania, and South America connect to distant overseas servers, which is acceptable: interactions like other attendees’ movements and text chat messages tolerate up to several hundred milliseconds of latency.

https://storage.googleapis.com/gweb-cloudblog-publish/images/image11_TmUMnR8.max-1900x1900.png

To access I/O Adventure, attendees need to log in with a Google account. For this, we use the Firebase Authentication service. All avatars are customizable with hats, skin color, hand accessories, and so on. These avatar features, as well as game progress and completed quests, form an attendee profile that we store in a Firestore document database. Optionally, attendees can link their Google Developer Profile to their avatar, providing relevant information for their badge, which is visible to other attendees.
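
As a hedged sketch, persisting such a profile from Go with the Firestore client library might look like the following; the collection name, document fields, and project ID are assumptions.

```go
// Hypothetical sketch of persisting an attendee profile in Firestore.
// Collection name, field names, and project ID are assumptions.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/firestore"
)

type AttendeeProfile struct {
	Hat           string   `firestore:"hat"`
	SkinColor     string   `firestore:"skinColor"`
	HandAccessory string   `firestore:"handAccessory"`
	Quests        []string `firestore:"completedQuests"`
}

func main() {
	ctx := context.Background()
	client, err := firestore.NewClient(ctx, "my-gcp-project") // placeholder project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The document ID would be the attendee's Firebase Authentication UID.
	uid := "firebase-uid-123"
	profile := AttendeeProfile{Hat: "flutter-cap", SkinColor: "#c58c85", Quests: []string{"easter-egg-1"}}
	if _, err := client.Collection("attendeeProfiles").Doc(uid).Set(ctx, profile); err != nil {
		log.Fatal(err)
	}
}
```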

https://storage.googleapis.com/gweb-cloudblog-publish/images/image12_gSTzDjp.max-1300x1300.png

In addition to I/O Adventure’s core servers and components, the conference experience also leverages dynamic elements, including:

  • Technical sessions (YouTube)

  • Flutter coding challenge (DartPad)

  • In-game experiences (for Material Design, Google Cloud, Google Pay, etc.)

  • Links to open more content (interactive experiences, codelabs, or documentation) in a new browser tab


Most of these integrations are handled directly in I/O Adventure’s frontend (i.e. in the browser), decoupled from the core server architecture.

https://storage.googleapis.com/gweb-cloudblog-publish/images/image5_7cCz65c.max-2000x2000.png

Conclusion

Building the I/O Adventure web experience was a huge effort led by Googler Tom Greenaway and the Google I/O team, and built by the talented designers and developers from the Set Snail studio.


It was a success! The servers, all fully hosted on Google Cloud, handled the load gracefully, and the social media coverage was very positive. It turns out people love swag, even in virtual form!
