
A little light reading: New, interesting and hands-on stories from around Google

June 18, 2019
Google Cloud Content & Editorial

There’s much more going on in the wide world of Google than cloud computing alone, so we’ve rounded up some recent favorite stories to share with you. Take a look at what’s happening in our developer community and in the AI lab, and find some projects to tackle for fun and skill-building.

Build a machine learning model (in less than an hour)
If you’re interested in AI and machine learning but haven’t dived into the details yet, check out this session from Google I/O ‘19, where developer advocate Sara Robinson built an ML model from scratch on stage. The talk is intended for ML beginners, experts, and anyone in between. You’ll get a brief high-level overview of what ML is (essentially, matrix multiplication) and which Google products can help you add ML to your apps. The session walks through coding, training, and deploying a model using a public BigQuery dataset of Stack Overflow questions.
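To give a flavor of the general pattern, here’s a minimal sketch in Python: pull question titles from the public Stack Overflow dataset in BigQuery, then train a small Keras text classifier. The query, labels, and model here are simplified stand-ins, not the session’s exact code.

```python
# Sketch: query the public Stack Overflow dataset and train a tag classifier.
from google.cloud import bigquery
import tensorflow as tf

client = bigquery.Client()

# Keep only single-tag questions for a few common tags, for simplicity.
query = """
    SELECT title, tags
    FROM `bigquery-public-data.stackoverflow.posts_questions`
    WHERE tags IN ('python', 'javascript', 'java')
    LIMIT 5000
"""
df = client.query(query).to_dataframe()

labels = {'python': 0, 'javascript': 1, 'java': 2}
y = df['tags'].map(labels).values

# Vectorize question titles into fixed-length integer sequences.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=10000, output_sequence_length=50)
vectorizer.adapt(df['title'].values)
x = vectorizer(df['title'].values)

# A small bag-of-embeddings classifier over the titles.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=3, validation_split=0.1)
```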

Let your code fly free
The Flutter framework offers a UI toolkit for developers to build web, mobile, and desktop apps from a single codebase. Flutter started with the goal of making iOS-Android cross-platform development easier, but its focus has expanded beyond mobile: the open-source project, developed at Google, now powers the Google Home Hub. Last month, the first technical preview of Flutter for web arrived for early adopters to try out, particularly for interactive content.

File under easy listening (and speaking)
“Translatotron” may sound like a friendly traveling robot, but it’s actually an experimental speech-to-speech translation system that works differently from the systems developed over the past few decades. Those systems usually run in three stages: transcribe the source speech to text, translate that text into the target language, then synthesize speech from the translated text. This works well, but Translatotron doesn’t divide the task into separate stages, which means it avoids errors compounding between recognition and translation, and offers faster inference. It works directly on spectrograms, taking one in and producing one out, with a trained neural vocoder to convert the output spectrogram into a waveform and a speaker encoder to preserve the source speaker’s voice. Check out the full post for details and audio clips demonstrating the system.
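If it helps to see the difference in shape, here’s a purely conceptual sketch contrasting the two approaches; every function name is a hypothetical placeholder, not a real API from the post.

```python
# Conceptual sketch only: all functions below are hypothetical placeholders.

def cascaded_translation(source_audio):
    """Traditional pipeline: each stage can compound the previous one's errors."""
    source_text = speech_to_text(source_audio)   # speech recognition
    target_text = translate_text(source_text)    # machine translation
    return text_to_speech(target_text)           # speech synthesis

def direct_translation(source_audio):
    """Translatotron-style: one sequence-to-sequence model, spectrogram in,
    spectrogram out, so there is no intermediate text to accumulate errors."""
    source_spectrogram = to_spectrogram(source_audio)
    # A speaker encoder can condition the model to keep the source voice.
    speaker_embedding = speaker_encoder(source_audio)
    target_spectrogram = seq2seq_model(source_spectrogram, speaker_embedding)
    # A neural vocoder converts the output spectrogram back to a waveform.
    return neural_vocoder(target_spectrogram)
```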

See how huge image datasets come together
Last month, Google AI engineers released Google-Landmarks-v2, a new landmark recognition dataset that follows up on last year’s Google-Landmarks. That one was the largest available at the time, but the new version is even bigger: more than 5 million images, roughly double the first release. To advance research on instance-level recognition (recognizing a specific instance of an object, such as a particular landmark) and image retrieval, ever-larger datasets matter, since they add variety and help train better systems. This new dataset brings more diversity of images and greater challenges for technologists and tools. Creating it involved crowdsourcing landmark labels from the photography community and drawing on photos from public institutions. Make sure to check out the accompanying Kaggle competitions, one on image retrieval and one on image recognition.
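For a sense of what “image retrieval” means in practice, here’s a minimal, self-contained sketch of the nearest-neighbor ranking at the heart of such systems; the embeddings are random stand-ins, and none of this comes from the dataset release itself.

```python
# Sketch of instance-level image retrieval: embed each image with some model,
# then rank the index by cosine similarity to a query embedding.
import numpy as np

def cosine_ranking(query_embedding, index_embeddings):
    """Return index positions sorted from most to least similar."""
    q = query_embedding / np.linalg.norm(query_embedding)
    idx = index_embeddings / np.linalg.norm(index_embeddings, axis=1, keepdims=True)
    similarities = idx @ q
    return np.argsort(-similarities)

# Usage with random stand-in embeddings (a real system would use a trained model):
index = np.random.randn(1000, 128)   # 1,000 indexed images, 128-d embeddings
query = np.random.randn(128)
top_10 = cosine_ranking(query, index)[:10]
```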

Make a do-it-yourself cloud at home
Forget building a treehouse or hanging a flat-screen TV: here’s a tutorial for building a smart home cloud that connects all your devices securely. The device cloud it describes uses GCP components, including Firebase, to create a serverless setup that can tell when devices are offline, provision them to individual users, and more. Along the way, you’ll get a look at Cloud IoT Core for connecting devices, plus Cloud Functions for moving data between Cloud IoT Core and Firebase.
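To give a flavor of the glue involved, here’s a minimal sketch of a background Cloud Function in Python that relays telemetry from the Pub/Sub topic Cloud IoT Core publishes to into Firestore; the collection and field names are illustrative assumptions, not taken from the tutorial.

```python
# Sketch: a Pub/Sub-triggered Cloud Function that writes device telemetry
# from Cloud IoT Core into Firestore. Collection/field names are illustrative.
import base64
import json

from google.cloud import firestore

db = firestore.Client()

def relay_telemetry(event, context):
    """Triggered by a message on the IoT Core telemetry Pub/Sub topic."""
    payload = json.loads(base64.b64decode(event['data']).decode('utf-8'))
    # Cloud IoT Core attaches the device ID as a Pub/Sub message attribute.
    device_id = event['attributes']['deviceId']
    # Record the latest reading and a last-seen timestamp per device.
    db.collection('devices').document(device_id).set({
        'last_seen': firestore.SERVER_TIMESTAMP,
        'telemetry': payload,
    }, merge=True)
```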

That’s a wrap for this edition. Let us know what you’re reading!
