Sample Questions

These questions are examples of the types of questions you might see on the exams, provided to help familiarize you with the exam format. Each includes a descriptive scenario and a question that typically asks you to choose the appropriate course of action. To answer a question, you may need to evaluate diagrams, code snippets, or case studies. Some questions are brief, like those included here; others are more complex and will take longer to read and answer.
You are designing a mobile chat application and want to guarantee that a message can only be read by the recipient. What should you do?
  • Encrypt the message client side using block-based encryption with a shared key.
  • Tag messages client side with the originating user identifier and the destination user.
  • Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
  • Use public key infrastructure (PKI) to encrypt the message client side using the recipient’s public key.
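
For context, the last option describes standard public key cryptography: anything encrypted with the recipient's public key can be decrypted only with the matching private key, so only the recipient can read the message. Below is a minimal Java sketch of that flow using the standard Java Cryptography Architecture; the generated key pair and hard-coded message are illustrative stand-ins for a real PKI, where the public key would come from the recipient's certificate.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class PkiSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the recipient's key pair; in a real PKI the public
        // key would be fetched from a directory or certificate, and the
        // private key would never leave the recipient's device.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair recipientKeys = generator.generateKeyPair();

        // The sender encrypts with the recipient's PUBLIC key.
        Cipher encryptor = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        encryptor.init(Cipher.ENCRYPT_MODE, recipientKeys.getPublic());
        byte[] ciphertext = encryptor.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        // Only the recipient's PRIVATE key can decrypt.
        Cipher decryptor = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        decryptor.init(Cipher.DECRYPT_MODE, recipientKeys.getPrivate());
        System.out.println(new String(decryptor.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
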
Your testing tools have identified the following code snippets as having high execution times. For which snippet can you refactor the visible code to remove a design flaw that is causing the high execution time?

A.

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CloudSqlServlet extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException, ServletException
    {
        String sql = "SELECT user_ip, timestamp FROM visits "
            + "LIMIT ? OFFSET ?";

        String url = System.getProperty("ae-cloudsql.cloudsql-database-url");

        try {
            Class.forName("com.mysql.jdbc.GoogleDriver");

            Connection conn = DriverManager.getConnection(url);
            PreparedStatement prepared = conn.prepareStatement(sql);

            // Read the table 100 rows at a time.
            ArrayList<String> userIps = new ArrayList<>();
            for (int i = 0; i < 100; ++i) {
                prepared.setInt(1, 100);
                prepared.setInt(2, i * 100);
                ResultSet rs = prepared.executeQuery();
                while (rs.next()) {
                    userIps.add(rs.getString("user_ip"));
                }
            }

            StringBuilder out = new StringBuilder();
            for (String userIp : userIps) {
                out.append(userIp);
            }
            resp.getWriter().print(out.toString());
        } catch (ClassNotFoundException | SQLException e) {
            throw new ServletException(e);
        }
    }
}

B.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FibServlet extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException, ServletException
    {
        int n = Integer.parseInt(req.getParameter("n"));
        resp.getWriter().print(fibonacci(n).toString());
    }

    // Cache of previously computed Fibonacci values, shared across requests.
    private final Map<Integer, Integer> fibCache = new ConcurrentHashMap<>();

    private Integer fibonacci(int n) {
        if (n > 100) { return -1; }
        if (n == 0) { return 0; }
        if (n == 1) { return 1; }

        Integer f = fibCache.get(n);
        if (f == null) {
            f = fibonacci(n - 1) + fibonacci(n - 2);
            fibCache.put(n, f);
        }
        return f;
    }
}

C.

import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FileUploader extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException, ServletException
    {
        String bucketName = req.getRequestURI();

        // Get a fresh file listing (helper defined elsewhere in this class).
        List<String> filenames = getFileNamesInGcs(bucketName);
        String files = "<table>"
            + "<thead>"
            + "<tr><td>FileName</td></tr>"
            + "</thead>"
            + "<tbody>";
        for (String filename : filenames) {
            files += "<tr><td>" + filename + "</td></tr>";
        }
        files += "</tbody></table>";
        resp.getWriter().print(files);
    }
}

D.

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.JobReference;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BigQueryServlet extends HttpServlet {

    @Override
    public void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws IOException, ServletException
    {
        Bigquery bigquery = createAuthorizedClient();

        // Print the datasets available in the "publicdata" project to the console.
        listDatasets(bigquery, "publicdata");

        // Start a query job (helpers and PROJECT_ID are defined elsewhere in this class).
        String querySql = "SELECT TOP(word, 50), COUNT(*) FROM "
            + "[publicdata:samples.shakespeare]";
        JobReference jobId = startQuery(bigquery, PROJECT_ID, querySql);
        resp.getWriter().print(jobId.toString());
    }
}
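As a study aid: if the design flaw were a loop of paginated queries like the one in snippet A, a common refactor is to fetch the same rows with a single query, replacing 100 database round trips with one. The sketch below is illustrative only (the class name is hypothetical), not an answer key.

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CloudSqlServletRefactored extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException, ServletException
    {
        // One query for the same 10,000 rows the original loop fetched
        // with 100 separate LIMIT/OFFSET round trips.
        String sql = "SELECT user_ip FROM visits LIMIT 10000";
        String url = System.getProperty("ae-cloudsql.cloudsql-database-url");

        try {
            Class.forName("com.mysql.jdbc.GoogleDriver");
            try (Connection conn = DriverManager.getConnection(url);
                 PreparedStatement prepared = conn.prepareStatement(sql);
                 ResultSet rs = prepared.executeQuery()) {
                StringBuilder out = new StringBuilder();
                while (rs.next()) {
                    out.append(rs.getString("user_ip"));
                }
                resp.getWriter().print(out.toString());
            }
        } catch (ClassNotFoundException | SQLException e) {
            throw new ServletException(e);
        }
    }
}
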
You are collecting terabytes of data in BigQuery in a single table. You want to reduce costs and improve query performance. What should you do?
  • Create a table for each month of data and then UNION the tables in queries.
  • Create a partitioned table and use the _PARTITIONTIME pseudo column.
  • Copy the data to Google Cloud Storage and split into daily files. Then use federated data sources.
  • Process the data using Dataflow and push the data to Google Cloud Storage.
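
For reference, the _PARTITIONTIME pseudo column mentioned in the second option restricts a query to specific partitions, so BigQuery scans only the matching days of data rather than the entire table. Here is a sketch in the style of snippet D above, reusing its hypothetical createAuthorizedClient and startQuery helpers; the project, dataset, table, and date range are assumptions.

public class PartitionedQueryServlet extends HttpServlet {

    @Override
    public void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws IOException, ServletException
    {
        Bigquery bigquery = createAuthorizedClient();

        // Filtering on _PARTITIONTIME prunes the scan to two daily
        // partitions instead of the entire multi-terabyte table.
        String querySql = "SELECT user_ip, timestamp "
            + "FROM [myproject:mydataset.visits] "
            + "WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2017-01-01') "
            + "AND TIMESTAMP('2017-01-02')";
        JobReference jobId = startQuery(bigquery, PROJECT_ID, querySql);
        resp.getWriter().print(jobId.toString());
    }
}
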
You are architecting a solution to process data coming from multiple sources. You are using a large Apache Kafka cluster to handle your ingest requirements, but you still need to identify the backend systems that will process the data. Your company wants to make programmatic decisions in real time as the data arrives. What should you do?
  • Load the data into a Google Cloud Dataproc Hadoop cluster and query it with Hive.
  • Stream the data directly into Google BigQuery for analysis.
  • Create a streaming Google Cloud Dataflow pipeline to process the data.
  • Load the data into Cloud Bigtable for analysis via the HBase API.
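
To make the streaming pipeline option concrete, the sketch below shows a minimal Apache Beam pipeline (the programming model Google Cloud Dataflow runs) that consumes a Kafka topic and windows the unbounded stream so decisions can be made as data arrives. The broker address, topic name, window size, and final aggregation are illustrative assumptions.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.joda.time.Duration;

public class KafkaStreamingPipeline {
    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(
            PipelineOptionsFactory.fromArgs(args).create());

        pipeline
            // Consume events from the existing Kafka cluster.
            .apply(KafkaIO.<String, String>read()
                .withBootstrapServers("kafka-broker:9092")  // assumption
                .withTopic("events")                        // assumption
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata())
            .apply(Values.<String>create())
            // Window the unbounded stream so per-window results are
            // produced continuously as data arrives.
            .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
            .apply(Count.<String>perElement());
            // A real pipeline would branch here into whatever transforms
            // implement the business decisions, then write results out.

        pipeline.run();
    }
}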