Finding Pete’s Dragon with Google Cloud Vision API
Product Manager, Google Cloud Platform
Director of Google ZOO
In a world where seeing is believing, people of all ages are looking for new ways to interact with their favorite stories and characters. And machine learning presents an opportunity to make this a reality.
Disney’s "Pete's Dragon" arrives in U.S. theaters in 3D this Friday. To promote the film, Disney collaborated with Google and MediaMonks to create "Dragon Spotting," a digital experience that uses Google Cloud Vision API to bring the magic of "Pete's Dragon" to life.
Via dragonspotting.com, people set out on a quest to find Elliot using their smartphones. They’re prompted to seek common items, such as couches, bicycles or trees, near their homes or around the neighborhood. Once they find the quest object, they can view Elliot in augmented reality through the lens of their Android mobile device. Users with iOS devices can have the same kind of experience by taking and uploading images of items prompted on the site to see if Elliot is hiding nearby.
To work, the mobile website needed to recognize everyday objects from a mobile camera with a high degree of accuracy. Through a simple REST API, the game uses Cloud Vision API’s Label Detection feature to identify objects, dubbed “entities,” in the user’s field of vision. The API returns the list of entities identified within the image, and the website then checks whether the desired object appears in that list. For example, if the user needs to find a “couch,” the website checks against a list of accepted responses: “chair,” “futon,” “couch,” “sofa.” As soon as a recognized entity matches the desired object, Elliot is revealed!
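The matching step described above can be sketched in a few lines of Python. This is a simplified illustration, not Disney's actual code: the synonym lists and helper names are hypothetical, and a sample JSON response stands in for a real call to the Vision API's `images:annotate` endpoint with a `LABEL_DETECTION` request.

```python
# Hypothetical sketch of the entity-matching logic. In production, the
# camera frame would be base64-encoded and POSTed to the Cloud Vision API;
# here a hard-coded sample response lets the matching step run on its own.

# Each quest object maps to the label descriptions that count as a match
# (illustrative values, modeled on the "chair, futon, couch, sofa" example).
SYNONYMS = {
    "couch": {"chair", "futon", "couch", "sofa"},
    "bicycle": {"bicycle", "bike", "cycling"},
    "tree": {"tree", "plant", "woody plant"},
}

def extract_labels(annotate_response: dict) -> list[str]:
    """Pull the label descriptions out of a Vision API-style response."""
    annotations = annotate_response.get("labelAnnotations", [])
    return [label["description"].lower() for label in annotations]

def is_match(desired_object: str, labels: list[str]) -> bool:
    """True when any returned entity is an accepted synonym for the quest object."""
    accepted = SYNONYMS.get(desired_object, {desired_object})
    return any(label in accepted for label in labels)

# Sample response shaped like the Vision API's JSON (scores are illustrative).
sample_response = {
    "labelAnnotations": [
        {"description": "Furniture", "score": 0.97},
        {"description": "Sofa", "score": 0.93},
        {"description": "Living room", "score": 0.88},
    ]
}

labels = extract_labels(sample_response)
print(is_match("couch", labels))  # "Sofa" is an accepted synonym -> True
```

Once `is_match` returns `True`, the site can trigger the augmented-reality reveal of Elliot.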
Disney’s creative application of Google’s Cloud Vision API shows how machine learning can enable developers to build innovative and engaging experiences for marketing campaigns.
Don’t miss your chance to see Elliot and check out the site! You can also learn more about Cloud Vision API and test it for yourself. We look forward to seeing how you build the next generation of applications that can see, hear and understand the world.