How OPPO enhances AI capabilities on mobile devices with Google Vertex AI

May 4, 2023
Hongyu Li

Senior Algorithm Engineer, OPPO

Leslie Li

Head of AI Platform, OPPO

Consumers today have more options than ever, which means businesses need to be dedicated to bringing the best possible device performance to end users. At leading mobile device manufacturer OPPO, we’re constantly exploring ways to make better use of the latest technologies, including cloud and AI. One example is our AndesBrain strategy, which aims to make end devices smarter by integrating cloud tools with mobile hardware in the development of on-device AI models.

OPPO adopted this strategy because we believe in the potential of AI capabilities on mobile devices. On one hand, running AI models on end devices can better protect user privacy by keeping user data on the device instead of sending it to the cloud. On the other hand, the computing capabilities of mobile chips are rapidly increasing to support more complex AI models. By linking cloud platforms with mobile chips for AI model training, we can leverage cloud computing resources to develop high-performance machine learning models that adapt to different mobile hardware.

In 2022, OPPO started implementing this AI engineering strategy on StarFire, our self-developed machine learning platform that merges cloud and end devices and forms one of the six capabilities of AndesBrain. Through StarFire, we’re able to take advantage of various advanced cloud technologies to meet our development needs. To facilitate the AI model development process and enhance AI capabilities on mobile devices, we’ve collaborated with Google Cloud and Qualcomm Technologies to bring Google Cloud Vertex AI Neural Architecture Search (Vertex AI NAS) to a smartphone for the first time. Let’s explore what we learned.

Challenges of developing AI models on mobile devices

One major bottleneck in developing AI models for mobile devices is the limited computing capability of mobile chips compared to computer chips. Before using Vertex AI NAS, OPPO’s engineers mainly used two methods to develop AI models that mobile devices can support. One is simplifying the neural networks trained on cloud platforms through network pruning or model compression to make them suitable for mobile chips. The other is adopting lighter neural network architectures built on techniques like depthwise separable convolutions.
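To make the second method concrete, here is a minimal PyTorch sketch of a depthwise separable convolution block. The class name, channel sizes, and cost comparison are illustrative, not OPPO’s production code:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_channels applies one filter per input channel (depthwise).
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False,
        )
        # The 1x1 pointwise convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# A standard 3x3 conv from 64 to 128 channels needs 64*128*9 = 73,728 weights;
# this factorization needs 64*9 + 64*128 = 8,768, roughly 8x fewer.
block = DepthwiseSeparableConv(64, 128)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 56, 56])
```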

These two methods come with three challenges:

  1. Long development time: To see if an AI model runs smoothly on a mobile device, we need to repeatedly run tests and manually adjust the model according to the hardware characteristics. Because each mobile device has different computing capabilities and memory, this customization requires significant labor and leads to long development times.

  2. Lower accuracy: Due to their limited computing capabilities, mobile devices only support lighter AI models. However, after AI models trained on cloud platforms are pruned or compressed, their accuracy decreases. We might be able to develop an AI model with 95% accuracy in a cloud environment, but that model can’t run on end devices without such processing.

  3. Performance trade-offs: For each AI model on mobile devices, we need to strike a balance among accuracy, latency, and power consumption. High accuracy, low latency, and low power consumption can’t all be achieved at the same time, so performance compromises are inevitable.

Advantages of Vertex AI NAS for AI model development

Neural architecture search technology was first developed by the Google Brain team in 2017 to train AI to optimize the performance of neural networks according to developers’ needs. By automatically discovering and designing the best neural network architecture for a specific task, neural architecture search enables developers to achieve better AI model performance with far less manual design work.
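Conceptually, a NAS system repeats a sample-evaluate-update loop over a search space. The sketch below shows the simplest (random-search) form of that loop, with a made-up search space and a toy reward; production services like Vertex AI NAS use far more sophisticated search controllers, and none of these names come from the Vertex AI API:

```python
import random

# Hypothetical search space: each candidate is one choice per dimension.
SEARCH_SPACE = {
    "num_blocks": [2, 3, 4],
    "kernel_size": [3, 5, 7],
    "width_multiplier": [0.5, 0.75, 1.0],
}

def sample_architecture() -> dict:
    """Sample one candidate architecture from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch: dict) -> float:
    """Stand-in for training a candidate and scoring it.

    A real NAS service trains each candidate (often briefly, as a proxy
    task) and folds task accuracy and hardware measurements into a reward.
    This toy version just prefers wider, cheaper candidates.
    """
    return arch["width_multiplier"] - 0.01 * arch["kernel_size"] * arch["num_blocks"]

best_arch, best_reward = None, float("-inf")
for _ in range(100):
    arch = sample_architecture()
    reward = evaluate(arch)
    if reward > best_reward:
        best_arch, best_reward = arch, reward

print(best_arch, best_reward)
```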

Vertex AI NAS is currently the only fully managed neural architecture search service available on a public cloud platform. As OPPO’s machine learning platform StarFire is cloud-based, we can easily connect Vertex AI NAS with our platform to develop AI models. On top of that, we chose to adopt Vertex AI NAS for on-device AI model development because of the following three advantages:

  1. Automated neural network design: As mentioned, developing AI models on mobile devices can be labor intensive and time consuming. Because the neural network design is automated through Vertex AI NAS, we can greatly reduce development time and easily adapt an AI model to different mobile chips.

  2. Custom reward parameters: Vertex AI NAS supports custom reward parameters, which is rare among NAS tools. This means that we can freely add the search constraints that we need our AI models to be optimized for. By leveraging this feature, we added power as a search constraint and successfully lowered the energy consumption of our AI model on mobile devices by 27%.

  3. No need to compress AI models for mobile devices: Based on the real-time rewards sent back from the connected mobile chips, Vertex AI NAS can directly design a neural network architecture suitable for mobile devices. The result runs on end devices without further processing, which saves time and effort in AI model adaptation.

How OPPO uses Vertex AI NAS to enhance energy efficiency of AI models on mobile devices

Lowering power consumption is key to providing an excellent user experience for AI models on mobile devices, particularly compute-intensive models for multimedia and image processing. If an AI model consumes too much power, mobile devices can overheat and quickly drain their batteries. That is why OPPO’s primary aim in using Vertex AI NAS is to boost the energy efficiency of AI processing on mobile devices.

To achieve this goal, we first added power as a custom search constraint to Vertex AI NAS, which supports only latency and memory rewards by default. This way, Vertex AI NAS can search for neural networks based on the power, latency, and memory rewards, letting us reduce the power consumption of our AI models while reaching our desired levels of latency and memory consumption.
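As a rough illustration of what folding a hardware measurement into the reward can look like, here is a sketch of a soft-constraint reward in the style popularized by MnasNet (reward = accuracy * (metric / target)^beta). The targets, the exponent, and the function name are our illustrative assumptions, not OPPO’s actual configuration or the Vertex AI NAS API:

```python
def combined_reward(
    accuracy: float,
    latency_ms: float,
    memory_mb: float,
    power_mw: float,
    target_latency_ms: float = 20.0,
    target_memory_mb: float = 50.0,
    target_power_mw: float = 300.0,
    beta: float = -0.07,
) -> float:
    """Fold hardware measurements into a single scalar reward.

    beta < 0 penalizes candidates that exceed a target and mildly rewards
    those that stay under it, steering the search toward architectures
    that balance accuracy against on-device cost.
    """
    penalty = (
        (latency_ms / target_latency_ms) ** beta
        * (memory_mb / target_memory_mb) ** beta
        * (power_mw / target_power_mw) ** beta
    )
    return accuracy * penalty

# A slightly less accurate candidate that stays within the power budget
# outscores a more accurate but power-hungry one.
print(combined_reward(0.95, latency_ms=25, memory_mb=40, power_mw=450))  # ~0.923
print(combined_reward(0.93, latency_ms=18, memory_mb=40, power_mw=280))  # ~0.956
```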

Then, we connected the StarFire platform with Vertex AI NAS through Cloud Storage. At the same time, StarFire is linked to a smartphone equipped with Qualcomm’s Snapdragon 8 Gen 2 chipset through the SDK provided by Qualcomm. Under this structure, Vertex AI NAS continually sends the latest neural network architecture via Cloud Storage to StarFire, which then exports the model to the chipset for testing. The test results are sent back to Vertex AI NAS through StarFire and Cloud Storage, allowing it to conduct the next round of architecture search based on the rewards.
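At a high level, StarFire’s side of that loop amounts to reading the latest candidate from a shared bucket, measuring it on the device, and writing the rewards back. Below is a minimal sketch using the google-cloud-storage Python client; the bucket name, object paths, and the run_on_device stand-in are hypothetical, not the actual StarFire integration:

```python
from google.cloud import storage

# Hypothetical bucket and object names for the exchange.
BUCKET = "starfire-nas-exchange"
ARCH_BLOB = "candidates/latest_architecture.json"
REWARD_BLOB = "rewards/latest_rewards.json"

def fetch_candidate(client: storage.Client) -> str:
    """Download the latest candidate architecture published by the NAS service."""
    return client.bucket(BUCKET).blob(ARCH_BLOB).download_as_text()

def run_on_device(architecture_json: str) -> str:
    """Stand-in for exporting the model to the chipset via the vendor SDK
    and measuring latency, memory, and power on the device."""
    # Placeholder values; real numbers come from the on-device test run.
    return '{"latency_ms": 18.0, "memory_mb": 40.0, "power_mw": 280.0}'

def publish_rewards(client: storage.Client, rewards_json: str) -> None:
    """Upload on-device measurements for the next search round."""
    client.bucket(BUCKET).blob(REWARD_BLOB).upload_from_string(
        rewards_json, content_type="application/json"
    )

if __name__ == "__main__":
    client = storage.Client()  # Uses Application Default Credentials.
    candidate = fetch_candidate(client)
    publish_rewards(client, run_on_device(candidate))
```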

This process was repeated until we achieved our target. In the end, we realized a 27% reduction in the power consumption of our AI model and a 40% reduction in computing latency, while maintaining the same accuracy as before the optimization.

[Image: https://storage.googleapis.com/gweb-cloudblog-publish/images/oppo.max-1000x1000.jpg]

Broadening the application range

The first successful AI model optimization through Vertex AI NAS is truly exciting for us. We plan to deploy this energy-efficient AI model on our future smartphones and implement the same model training process, supported by Vertex AI NAS, in the algorithm development of our other AI products. Besides power, we also hope to add other reward parameters, such as bandwidth and operator friendliness, as search constraints in Vertex AI NAS for more comprehensive model optimization.

Vertex AI NAS has significantly facilitated the optimization of our AI capabilities on smartphones, and we believe there is still great potential to explore. We will continue collaborating with Google Cloud to expand our use of Vertex AI NAS. For developers interested in adopting Vertex AI NAS, we advise identifying the most relevant hardware reward parameters before launching development, and becoming familiar with how to build search spaces if custom search constraints are needed.


Special thanks to Yuwei Liu, Senior Hardware Engineer at OPPO, for contributing to this post.