The following tutorials step through the stages of a Vertex AI Neural Architecture Search run. We encourage you to run these tutorials rather than only read through them; working through each stage will help you integrate your own docker with the Neural Architecture Search service.
These tutorials are lightweight and require only a few cloud CPUs, which your project should have by default without any additional quota requests.
- Run baseline training on Google Cloud using your docker.
- Create search spaces.
- Run an architecture search on Google Cloud.
- Add a latency constraint (FLOPS-based or device-based) to the search.
After running the tutorials, read the best practices and suggested workflow before launching your first Neural Architecture Search run.