Google Certified Professional Cloud Architect Q&A
These questions were compiled to prepare for the exam. None of the answers is 100% guaranteed (there is no official Q&A list from Google), but they were used to prepare (about 5 hours, including googling context for some questions) and to pass the real exam, which contains 50 questions, in under 45 minutes of the allotted 120.
Other sites either have unverified answers or long discussions about which answer is correct. This page collects the best answers as the outcome of those discussions.
Question #1
Your company's test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to the cloud, to reduce the amount of time it takes to fully test a change to the system, while changing the tests as little as possible.
Which cloud infrastructure should you recommend?
- A. Google Compute Engine unmanaged instance groups and Network Load Balancer
- B. Google Compute Engine managed instance groups with auto-scaling
- C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test
- D. Google App Engine with Google StackDriver for logging
B. Google Compute Engine managed instance groups with auto-scaling
Question #2
A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform.
What should you do?
- A. Help the engineer to convert his websocket code to use HTTP streaming
- B. Review the encryption requirements for websocket connections with the security team
- C. Meet with the cloud operations team and the engineer to discuss load balancer options
- D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
C. Meet with the cloud operations team and the engineer to discuss load balancer options
Question #3
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
- A. • Append metadata to file body • Compress individual files • Name files with serverName-Timestamp • Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.
- B. • Batch every 10,000 events with a single manifest file for metadata • Compress event files and manifest file into a single archive file • Name files using serverName-EventSequence • Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
- C. • Compress individual files • Name files with serverName-EventSequence • Save files to one bucket • Set custom metadata headers for each object after saving
- D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
The file names should not follow an ascending sequence such as a timestamp: sequential names concentrate writes on a narrow range of the Cloud Storage index, so they need to be randomized.
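As an illustration, a minimal sketch of the random-prefix naming (the bucket and server names are hypothetical):
# Derive a short random hex prefix so object names are spread evenly
# across Cloud Storage's key range instead of hotspotting one shard.
PREFIX=$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')
gsutil cp event-000123.gz gs://example-events/${PREFIX}-server42-event-000123.gz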
Question #4
A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network's origin.
What should you do?
- A. Search for Create VM entry in the Stackdriver alerting console
- B. Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry
- C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry
- D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list.
C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry
Question #5
You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region.
What steps must you take?
- A. Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
- B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
- C. Create an image file from the root disk with Linux dd command, create a new virtual machine instance in the US-East region
- D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
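One way to realize this flow with gcloud (project, zone, and resource names are hypothetical; the modern equivalent of the "image file in Cloud Storage" is a custom image created from the snapshot and shared across projects):
# 1. Snapshot the production root disk in us-central1
gcloud compute disks snapshot prod-vm --snapshot-names=prod-root-snap \
    --zone=us-central1-a --project=prod-project
# 2. Create a reusable image from the snapshot
gcloud compute images create prod-root-image --source-snapshot=prod-root-snap \
    --project=prod-project
# 3. Boot the copy in a different project and region from that image
gcloud compute instances create prod-copy --zone=us-east1-b \
    --image=prod-root-image --image-project=prod-project --project=other-project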
Question #6
Your company runs several databases on a single MySQL instance. They need to take backups of a specific database at regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance.
How should you configure the storage?
- A. Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots.
- B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.
- C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.
- D. Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use LVM to create snapshots to send to Cloud Storage
C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.
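For illustration, a minimal sketch of this approach (the bucket, mount point, and database names are hypothetical):
# Mount the bucket with Cloud Storage FUSE, then stream the dump into it;
# the backup is written straight to GCS with no intermediate local disk I/O.
gcsfuse example-backup-bucket /mnt/backups
mysqldump --databases exampledb | gzip > /mnt/backups/exampledb-$(date +%F).sql.gz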
A. Requires further work to retrieve the snapshot and can affect disk performance.
B. You cannot attach a Local SSD to an instance that is already running, and the total time would be 'backup to SSD' + 'copy to GCS'. Also, if you stop the VM before the dump is transferred to GCS, the dump is lost.
C. Recommended by Google and faster than B because it is a single action. https://cloud.google.com/storage/docs/gcs-fuse
D. You cannot configure RAID on persistent disks; you can only choose zonal or regional (effectively RAID 1).
Question #7
You are helping the QA team to roll out a new load-testing tool to test the scalability of your primary cloud services that run on Google Compute Engine with Cloud Bigtable.
Which three requirements should they include? Choose 3 answers.
- A. Ensure that the load tests validate the performance of Cloud Bigtable
- B. Create a separate Google Cloud project to use for the load-testing environment
- C. Schedule the load-testing tool to regularly run against the production environment
- D. Ensure all third-party systems your services use is capable of handling high load
- E. Instrument the production services to record every transaction for replay by the load-testing tool
- F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
B. Create a separate Google Cloud project to use for the load-testing environment
E. Instrument the production services to record every transaction for replay by the load-testing tool
F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
Question #8
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You provision the Google Cloud Resource Manager and set up yourself as the org admin.
What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
- A. Org viewer, project owner
- B. Org viewer, project viewer
- C. Org admin, project browser
- D. Project owner, network admin
B. Org viewer, project viewer
Provide least required permission. The team should not be able to change anything, only browse.
Question #9
Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced.
Which two actions can you take? Choose 2 answers.
- A. Ensure every code check-in is peer reviewed by a security SME
- B. Use source code security analyzers as part of the CI/CD pipeline
- C. Ensure you have stubs to unit test all interfaces between components
- D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline
- E. Run a vulnerability security scanner as part of your continuous-integration /continuous-delivery (CI/CD) pipeline
B. Use source code security analyzers as part of the CI/CD pipeline
E. Run a vulnerability security scanner as part of your continuous-integration /continuous-delivery (CI/CD) pipeline
Question #10
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?
- A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size=10
- B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
- C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
- D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
Question #11
Your marketing department wants to send out a promotional email campaign. The development team wants to minimize direct operational management. They project a wide range of possible customer responses, from 100 to 500,000 click-throughs per day. The link leads to a simple website that explains the promotion and collects user information and preferences.
Which infrastructure should you recommend? Choose 2 answers
- A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
- B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
- C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
- D. Use a single Compute Engine virtual machine (VM) to host a web server, backed by Google Cloud SQL.
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
Question #12
Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose? Choose 2 answers.
- A. Compute Engine with containers
- B. Google Kubernetes Engine with containers
- C. Google App Engine Standard Environment
- D. Compute Engine with custom instance types
- E. Compute Engine with managed instance groups
B. Google Kubernetes Engine with containers
C. Google App Engine Standard Environment
Question #13
One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes to the application data.
How can you design your logging system to verify authenticity of your logs?
- A. Write the log concurrently in the cloud and on premises
- B. Use a SQL database and limit who can modify the log table
- C. Digitally sign each timestamp and log entry and store the signature
- D. Create a JSON dump of each log entry and store it in Google Cloud Storage
D. Create a JSON dump of each log entry and store it in Google Cloud Storage
Question #14
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?
- A. Configure a new load balancer for the new version of the API
- B. Reconfigure old clients to use a new endpoint for the new API
- C. Have the old API forward traffic to the new API based on the path
- D. Use separate backend pools for each API path behind the load balancer
D. Use separate backend pools for each API path behind the load balancer
Question #15
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?
- A. Load data into Google BigQuery
- B. Insert data into Google Cloud SQL
- C. Put flat files into Google Cloud Storage
- D. Stream data into Google Cloud Datastore
A. Load data into Google BigQuery
Question #16
The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? Choose 3 answers.
- A. Port the application code to run on Google App Engine
- B. Integrate Cloud Dataflow into the application to capture real-time metrics
- C. Instrument the application with a monitoring tool like Stackdriver Debugger
- D. Select an automation framework to reliably provision the cloud infrastructure
- E. Deploy a continuous integration tool with automated testing in a staging environment
- F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
A. Port the application code to run on Google App Engine
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
Question #17
A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed.
What is the most likely cause of this problem?
import news
from flask import Flask, redirect, request
from flask.ext.api import status
from google.appengine.api import users

app = Flask(__name__)
sessions = {}

@app.route('/')
def homepage():
    user = users.get_current_user()
    if not user:
        return 'Invalid login', status.HTTP_401_UNAUTHORIZED
    if user not in sessions:
        sessions[user] = {'viewed': []}
    news_articles = news.get_new_news(user, sessions[user]['viewed'])
    sessions[user]['viewed'] += [n['id'] for n in news_articles]
    return news.render(news_articles)

if __name__ == '__main__':
    app.run()
- A. The session variable is local to just a single instance
- B. The session variable is being overwritten in Cloud Datastore
- C. The URL of the API needs to be modified to prevent caching
- D. The HTTP Expires header needs to be set to -1 stop caching
A. The session variable is local to just a single instance
Question #18
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.
What should you do?
- A. Direct them to download and install the Google StackDriver logging agent
- B. Send them a list of online resources about logging best practices
- C. Help them define their requirements and assess viable logging tools
- D. Help them upgrade their current tool to take advantage of any new features
C. Help them define their requirements and assess viable logging tools
Question #19
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvements to the QA/Test processes have already accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? Choose 2 answers.
- A. Introduce a green-blue deployment model
- B. Replace the QA environment with canary releases
- C. Fragment the monolithic platform into microservices
- D. Reduce the platform's dependency on relational database systems
- E. Replace the platform's relational database systems with a NoSQL database
A. Introduce a green-blue deployment model
C. Fragment the monolithic platform into microservices
Question #20
To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department.
Which two steps should you take? Choose 2 answers.
- A. Use the --no-auto-delete flag on all persistent disks and stop the VM
- B. Use the --auto-delete flag on all persistent disks and terminate the VM
- C. Apply VM CPU utilization label and include it in the BigQuery billing export
- D. Use Google BigQuery billing export and labels to associate cost to groups
- E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM
- F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM
A. Use the --no-auto-delete flag on all persistent disks and stop the VM
D. Use Google BigQuery billing export and labels to associate cost to groups
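A sketch of both steps (instance, disk, and label values are hypothetical):
# Keep the persistent disk (and its state) when the VM is stopped or deleted
gcloud compute instances set-disk-auto-delete dev-vm --disk=dev-data \
    --no-auto-delete --zone=us-central1-a
gcloud compute instances stop dev-vm --zone=us-central1-a
# Label resources so the BigQuery billing export can group cost by team
gcloud compute instances update dev-vm --zone=us-central1-a \
    --update-labels=team=payments,env=dev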
Question #21
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations.
Which database type should you use?
- A. Flat file
- B. NoSQL
- C. Relational
- D. Blobstore
B. NoSQL
Question #22
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly.
What should you do?
- A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
- B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
- C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
- D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
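For reference, a hedged example of such a rule; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges (the network and tag names are hypothetical):
# Without this rule, health checks fail and the autohealer keeps
# recreating the instances, which matches the symptom described.
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default --allow=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend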
Question #23
You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery.
What should you do to fix the script?
- A. Install the latest BigQuery API client library for Python.
- B. Run your script on a new virtual machine with the BigQuery access scope enabled.
- C. Create a new service account with BigQuery access and execute your script with that user.
- D. Install the bq component for gcloud with the command gcloud components install bq.
B. Run your script on a new virtual machine with the BigQuery access scope enabled.
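A minimal sketch of creating a VM with the BigQuery access scope enabled (names are hypothetical); access scopes are set when the instance is created, hence the new VM:
gcloud compute instances create bq-client-vm --zone=us-central1-a \
    --scopes=https://www.googleapis.com/auth/bigquery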
Question #24
Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords
What authentication strategy should they use?
- A. Use G Suite Password Sync to replicate passwords into Google.
- B. Federate authentication via SAML 2.0 to the existing Identity Provider.
- C. Provision users in Google using the Google Cloud Directory Sync tool.
- D. Ask users to set their Google password to match their corporate password.
B. Federate authentication via SAML 2.0 to the existing Identity Provider.
Question #25
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in.
Which technology should they use for this?
- A. Google Cloud Dataproc.
- B. Google Cloud Dataflow.
- C. Google Container Engine with Bigtable.
- D. Google Compute Engine with Google BigQuery.
B. Google Cloud Dataflow.
Question #26
Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update.
What strategy should you take?
- A. Work with your ISP to diagnose the problem.
- B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application.
- C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment.
- D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem.
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment.
Question #27
A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?
- A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
- B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine.
- C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux.
- D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk.
- E. In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service.
A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
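A sketch of the same remediation from the command line, assuming the data disk is attached as /dev/sdb with no partition table (names are hypothetical); both steps work while the VM is running:
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a
# Grow the ext4 filesystem online to fill the enlarged disk
sudo resize2fs /dev/sdb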
Question #28
Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.
How should you design your architecture?
- A. Create a tokenizer service and store only tokenized data.
- B. Create separate projects that only process credit card data.
- C. Create separate subnetworks and isolate the components that process credit card data.
- D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data.
- E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor.
A. Create a tokenizer service and store only tokenized data.
Question #29
You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?
- A. Google Cloud SQL.
- B. Google Cloud Bigtable.
- C. Google Cloud Storage.
- D. Google Cloud Datastore.
B. Google Cloud Bigtable.
Question #30
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.
What should you do?
- A. Write a lifecycle management rule in XML and push it to the bucket with gsutil.
- B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
- C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days.
- D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
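A minimal sketch, assuming the backup bucket is gs://backups. The rule below (saved as lifecycle.json) deletes objects once they are older than 90 days:
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}
Apply it to the bucket with:
gsutil lifecycle set lifecycle.json gs://backups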
Question #31
Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.
Which product should you use?
- A. Google Cloud Dataflow.
- B. Google Cloud Dataproc.
- C. Google Compute Engine.
- D. Google Kubernetes Engine.
B. Google Cloud Dataproc.
Question #32
The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?
- A. Increase the virtual machine's memory to 64 GB.
- B. Create a new virtual machine running PostgreSQL.
- C. Dynamically resize the SSD persistent disk to 500 GB.
- D. Migrate their performance metrics warehouse to BigQuery.
- E. Modify all of their batch jobs to use bulk inserts into the database.
C. Dynamically resize the SSD persistent disk to 500 GB.
Question #33
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?
- A. Google BigQuery.
- B. Google Cloud SQL.
- C. Google Cloud Bigtable.
- D. Google Cloud Storage.
C. Google Cloud Bigtable.
Question #34
Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load.
What should you do?
- A. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones.
- B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones.
- C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones.
- D. Capture existing users input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing user's usage of the app, and deploy enough resources to handle 200% of expected load.
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones.
Question #35
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
COPY ./src
RUN apt-get update && apt-get install -y python python-pip
RUN pip install -r requirements.txt
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality. Which two actions should you take? Choose 2 answers.
- A. Remove Python after running pip.
- B. Remove dependencies from requirements.txt.
- C. Use a slimmed-down base image like Alpine Linux.
- D. Use larger machine types for your Google Container Engine node pools.
- E. Copy the source after the package dependencies (Python and pip) are installed.
C. Use a slimmed-down base image like Alpine Linux.
E. Copy the source after the package dependencies (Python and pip) are installed.
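A hedged sketch of a Dockerfile applying both answers (the base image and paths are assumptions, not from the question):
FROM python:2.7-alpine                # C: slimmed-down base image
COPY requirements.txt .
RUN pip install -r requirements.txt   # layer is cached unless requirements.txt changes
COPY ./src ./src                      # E: copy source last, so code edits do not invalidate the pip layer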
Question #36
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future.
What should you do?
- A. Deploy fewer changes to production.
- B. Deploy smaller changes to production.
- C. Increase the load on your test and staging environments.
- D. Deploy changes to a small subset of users before rolling out to production.
D. Deploy changes to a small subset of users before rolling out to production.
Question #37
A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases.
What should you do?
- A. Set timeouts on your application so that you can fail requests faster.
- B. Send custom metrics for each of your requests to Stackdriver Monitoring.
- C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high.
- D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice.
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice.
Question #38
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future.
What should you do?
- A. Use a different database.
- B. Choose larger instances for your database.
- C. Create snapshots of your database more regularly.
- D. Implement routinely scheduled failovers of your databases.
B. Choose larger instances for your database.
Question #39
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings.
Which approach should you use?
- A. Grant the security team access to the logs in each Project.
- B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery.
- C. Configure Stackdriver Monitoring for all Projects with the default retention policies.
- D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.
Question #40
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4TB, and large updates are frequent. Replication requires private address space communication.
Which networking approach should you use?
- A. Google Cloud Dedicated Interconnect.
- B. Google Cloud VPN connected to the data center network.
- C. A NAT and TLS translation gateway installed on-premises.
- D. A Google Compute Engine instance with a VPN server installed connected to the data center network.
A. Google Cloud Dedicated Interconnect.
Question #41
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process.
What should you do?
- A. Create custom Google Stackdriver alerts and send them to the auditor.
- B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor.
- C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view.
- D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket.
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor.
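A sketch of such an export, assuming a BigQuery dataset iam_audit in example-project and a filter that captures IAM policy changes:
gcloud logging sinks create iam-audit-sink \
    bigquery.googleapis.com/projects/example-project/datasets/iam_audit \
    --log-filter='protoPayload.methodName:"SetIamPolicy"'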
Question #42
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.
Where should you store the credentials?
- A. In the source code.
- B. In an environment variable.
- C. In a secret management system.
- D. In a config file that has restricted access through ACLs.
C. In a secret management system.
Question #43
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager.
What are two business risks of migrating to Cloud Deployment Manager? Choose 2 answers.
- A. Cloud Deployment Manager uses Python.
- B. Cloud Deployment Manager APIs could be deprecated in the future.
- C. Cloud Deployment Manager is unfamiliar to the company's engineers.
- D. Cloud Deployment Manager requires a Google APIs service account to run.
- E. Cloud Deployment Manager can be used to permanently delete cloud resources.
- F. Cloud Deployment Manager only supports automation of Google Cloud resources.
C. Cloud Deployment Manager is unfamiliar to the company's engineers.
F. Cloud Deployment Manager only supports automation of Google Cloud resources.
Question #44
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:
1. Be based on open-source technology for cloud portability
2. Dynamically scale compute capacity based on demand
3. Support continuous software delivery
4. Run multiple segregated copies of the same application stack
5. Deploy application bundles using dynamic templates
6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?
- A. Google Kubernetes Engine, Jenkins, and Helm.
- B. Google Kubernetes Engine and Cloud Load Balancing.
- C. Google Kubernetes Engine and Cloud Deployment Manager.
- D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing.
D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing.
Question #45
You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted.
What should you do?
- A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory.
- B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service.
- C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.
- D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url.
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.
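The same setup from the command line, for example (the instance name and script path are hypothetical):
# Attach a shutdown script that runs when the instance is preempted
gcloud compute instances create preemptible-worker --preemptible \
    --zone=us-central1-a \
    --metadata-from-file shutdown-script=shutdown.sh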
Question #46
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier.
How should you configure the network?
- A. Add each tier to a different subnetwork.
- B. Set up software based firewalls on individual VMs.
- C. Add tags to each tier and set up routes to allow the desired traffic flow.
- D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.
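A minimal sketch with hypothetical tags web, api, and db; because no web-to-db rule exists, that path stays blocked by the implied deny:
gcloud compute firewall-rules create allow-web-to-api \
    --allow=tcp:8080 --source-tags=web --target-tags=api
gcloud compute firewall-rules create allow-api-to-db \
    --allow=tcp:3306 --source-tags=api --target-tags=db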
Question #47
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team.
Which three actions should you take? Choose 3 answers.
- A. Use Stackdriver Logging to search for the module log entries.
- B. Read the debug GCE Activity log using the API or Cloud Console.
- C. Use gcloud or Cloud Console to connect to the serial console and observe the logs.
- D. Identify whether a live migration event of the failed server occurred, using the activity log.
- E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics.
- F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen.
A. Use Stackdriver Logging to search for the module log entries.
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs.
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics.
Question #48
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup.
Which two steps should you take? Choose 2 answers.
- A. Load logs into Google BigQuery.
- B. Load logs into Google Cloud SQL.
- C. Import logs into Google Stackdriver.
- D. Insert logs into Google Cloud Bigtable.
- E. Upload log files into Google Cloud Storage.
A. Load logs into Google BigQuery.
E. Upload log files into Google Cloud Storage.
Question #49
You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week.
What should you do?
- A. Log in to a server, and iterate on the fix locally.
- B. Revert the source code change, and rerun the deployment pipeline.
- C. Log into the servers with the bad code change, and swap in the previous code.
- D. Change the instance group template to the previous one, and delete all instances.
D. Change the instance group template to the previous one, and delete all instances.
Question #50
Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
- A. Multiple Organizations with multiple Folders.
- B. Multiple Organizations, one for each department.
- C. A single Organization with Folders for each department.
- D. A single Organization with multiple projects, each with a central owner.
C. A single Organization with Folders for each department.
Question #51
You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace.
What should you do?
- A. Upload missing JAR files and redeploy your application.
- B. Digitally sign all of your JAR files and redeploy your application.
- C. Recompile the CLoakedServlet class using an MD5 hash instead of SHA-1.
B. Digitally sign all of your JAR files and redeploy your application.
Question #52
You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user.
What should you do?
- A. Tag messages client side with the originating user identifier and the destination user.
- B. Encrypt the message client side using block-based encryption with a shared key.
- C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
- D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
Question #53
As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication.
What should they do?
- A. Configure their replication to use UDP.
- B. Configure a Google Cloud Dedicated Interconnect.
- C. Restore their database daily using Google Cloud SQL.
- D. Add additional VPN connections and load balance them.
- E. Send the replicated transaction to Google Cloud Pub/Sub.
B. Configure a Google Cloud Dedicated Interconnect.
Question #54
Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis.
What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?
- A. Hash all data using SHA256.
- B. Encrypt all data using elliptic curve cryptography.
- C. De-identify the data with the Cloud Data Loss Prevention API.
- D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers.
D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers.
Question #55
You are using Cloud Shell and need to install a custom utility for use in a few weeks.
Where can you store the file so it is in the default execution path and persists across sessions?
- A. ~/bin
- B. Cloud Storage
- C. /google/scripts
- D. /usr/local/bin
A. ~/bin
Question #56
You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20Gbps. You want to follow Google-recommended practices.
How should you set up the connection?
- A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
- B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
- C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.
- D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
Question #57
You are analyzing and defining business processes to support your startup's trial usage of GCP, and you don't yet know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices.
What should you do?
- A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management.
- B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
- C. Utilize free tier and committed use discounts. Provision a staff position for service cost management.
- D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.
B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
Question #58
You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production.
What should you do?
- A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back.
- B. Use Spinnaker to deploy builds to production and run tests on production deployments.
- C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout.
- D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
Question #59
You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs.
What should you do?
- A. Grant your colleague the IAM role of project Viewer.
- B. Perform a rolling restart on the instance group.
- C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys.
- D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys.
C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys.
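A sketch of the SSH-key step (the file name, username, and key are assumptions; the file must hold lines in username:KEY format):
# colleague-keys.txt contains e.g. "alex:ssh-rsa AAAA... alex"
# Note: this replaces the existing ssh-keys value, so include current keys too.
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=colleague-keys.txt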
Question #60
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI DSS-compliant.
Which of the following is most accurate?
- A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
- B. Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.
- C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
- D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
Question #61
Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time. You want to use Google-recommended practices to detect anomalies in your company data.
What should you do?
- A. Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.
- B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
- C. Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.
- D. Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.
B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
Question #62
Google Cloud Platform resources are managed hierarchically using organizations, folders, and projects.
When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?
- A. The effective policy is determined only by the policy set at the node.
- B. The effective policy is the policy set at the node and restricted by the policies of its ancestors.
- C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors.
- D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors.
C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors.
Question #63
You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premises systems remain reachable during this period.
How should you organize your networking in Google Cloud?
- A. Use the same IP range on Google Cloud as you use on-premises.
- B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises.
- C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises.
- D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises.
C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises.
Question #64
You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore.
What should you do?
- A. Point gcloud datastore create-indexes to your configuration file.
- B. Upload the configuration file to App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes.
- C. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file.
- D. Create an HTTP request to the built-in python module to send the index configuration file to your application.
A. Point gcloud datastore create-indexes to your configuration file.
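For example, with a hypothetical index.yaml describing a composite index:
indexes:
- kind: Task
  properties:
  - name: done
  - name: priority
    direction: desc
Point the command at the file to deploy the indexes:
gcloud datastore create-indexes index.yaml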
Question #65
You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage.
What should you do?
- A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
- B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.
- C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
- D. Deploy the application on two Compute Engine instance groups, each in separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
Question #66
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public Internet.
What should you do?
- A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.
- B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
- C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
- D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
Question #67
You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance.
How should you install the software?
- A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
- B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil.
- C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud.
- D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
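A sketch of the two relevant steps (the subnet, region, and object names are hypothetical):
# Let VMs with only internal IPs reach Google APIs such as Cloud Storage
gcloud compute networks subnets update example-subnet --region=us-central1 \
    --enable-private-ip-google-access
# From the VM, pull the installer over the internal path
gsutil cp gs://example-installers/tool.deb /tmp/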
Question #68
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices.
What should you do?
- A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
- B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
- C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.
- D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
Question #69
You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime.
What should you do?
- A. Use kubectl set image deployment/echo-deployment <new-image>
- B. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster.
- C. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file>
- D. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>
A. Use kubectl set image deployment/echo-deployment <new-image>
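Note that the full syntax also names the container inside the pod spec. A minimal sketch, assuming a container named echo and a hypothetical image tag:

```
# Triggers a rolling update of the deployment.
kubectl set image deployment/echo-deployment echo=gcr.io/my-project/echo:v2

# Observe the rollout, and roll back if the new version misbehaves.
kubectl rollout status deployment/echo-deployment
kubectl rollout undo deployment/echo-deployment
```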
Question #70
Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.
How should you configure users' access roles?
- A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
- B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
- C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
- D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
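For illustration, the two bindings would look roughly like this (project IDs and group address are hypothetical):

```
# jobUser on the billing project: members can run (and be billed for) queries there.
gcloud projects add-iam-policy-binding billing-project \
    --member="group:analysts@example.com" --role="roles/bigquery.jobUser"

# dataViewer on the data projects: members can read, but not edit, the datasets.
gcloud projects add-iam-policy-binding data-project \
    --member="group:analysts@example.com" --role="roles/bigquery.dataViewer"
```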
Question #71
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account.
How should you have users upload images?
- A. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.
- B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
- C. Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity.
- D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
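A minimal sketch of generating such a URL with gsutil, assuming a service account key file with write access to a hypothetical bucket:

```
# Produces a URL that allows an unauthenticated PUT upload for 24 hours.
gsutil signurl -m PUT -d 24h sa-key.json gs://painting-uploads/photo.jpg
```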
Question #72
Your web application must comply with the requirements of the European Union's General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application.
What should you do?
- A. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features.
- B. Enable the relevant GDPR compliance setting within the GCPConsole for each of the services in use within your application.
- C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.
- D. Define a design for the security of data in your web application that meets GDPR requirements.
D. Define a design for the security of data in your web application that meets GDPR requirements.
Question #73
You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region.
What should you do?
- A. Configure a Cloud SQL instance with high availability enabled.
- B. Configure a Cloud Spanner instance with a regional instance configuration.
- C. Set up SQL Server on Compute Engine with Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets.
- D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
Question #74
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application.
What should you do?
- A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
- B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
- C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
- D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
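A minimal sketch of that flow (cluster name and zone are hypothetical):

```
# Provision the cluster with gcloud, then hand the Deployment file to kubectl.
gcloud container clusters create app-cluster --zone=us-central1-a
gcloud container clusters get-credentials app-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml
```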
Question #75
You need to evaluate your team's readiness for a new GCP project. You must perform the evaluation and create a skills gap plan that incorporates the business goal of cost optimization. Your team has deployed two GCP projects successfully to date.
What should you do?
- A. Allocate budget for team training. Set a deadline for the new GCP project.
- B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
- C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
- D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job role.
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
Question #76
You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically "scales to zero" so you don't incur costs when there is no activity.
Which primary compute resource should you choose?
- A. Cloud Functions
- B. Compute Engine
- C. Kubernetes Engine
- D. App Engine flexible environment
A. Cloud Functions
Question #77
You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore.
What should you do?
- A. Create the Key object for each Entity and run a batch get operation.
- B. Create the Key object for each Entity and run multiple get operations, one operation for each entity.
- C. Use the identifiers to create a query filter and run a batch query operation.
- D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity.
A. Create the Key object for each Entity and run a batch get operation.
Question #78
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys.
What should you do?
- A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
- B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
- C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
- D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
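A sketch of what that looks like, assuming a hypothetical bucket; the key must be a base64-encoded AES-256 key:

```
# Add this under the [GSUtil] section of ~/.boto:
#
#   [GSUtil]
#   encryption_key = <base64-encoded-256-bit-AES-key>

# Objects uploaded afterwards are encrypted with the customer-supplied key.
gsutil cp ./backup/*.dat gs://secure-uploads/
```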
Question #79
Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency.
How should they capture the KPIs?
- A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
- B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
- C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
- D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
Question #80
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application.
Which set of steps should you take?
- A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new production releases.
- B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
- C. Perform the following: 1. Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing.
- D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all of the dependencies, and tag it with "latest". 3. Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to "Always". Restart the pods to automatically deploy new production releases.
C. Perform the following: 1. Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing.
Question #81
Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management.
What should you do?
- A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
- B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
- C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
- D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
Question #82
You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a specific part of the application is not responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds. The application writes logs to standard output. You want to inspect the logs to find the cause of the issue.
Which approach can you take?
- A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
- B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
- C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs.
- D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
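Two ways to get at those container logs, with hypothetical pod and container names:

```
# Show logs from the previous (crashed) container instance of a pod.
kubectl logs checkout-7d4b9c6f5-abcde --previous

# Or query the same container logs in Stackdriver from the CLI.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.container_name="checkout"' \
  --limit=50
```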
Question #83
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability.
What should you do?
- A. Create a read replica instance in a different region.
- B. Create a failover replica instance in a different region.
- C. Create a read replica instance in the same region, but in a different zone.
- D. Create a failover replica instance in the same region, but in a different zone.
D. Create a failover replica instance in the same region, but in a different zone.
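A sketch with a hypothetical instance name; current Cloud SQL instances expose this as an availability type, while legacy first-generation MySQL instances used an explicit failover replica:

```
# Current instances: enable the built-in regional HA configuration.
gcloud sql instances patch crm-db --availability-type=REGIONAL

# Legacy first-generation MySQL equivalent: a dedicated failover replica.
gcloud sql instances create crm-db-failover \
    --master-instance-name=crm-db --replica-type=FAILOVER
```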
Question #84
Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application's performance.
What should you do?
- A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template.
- B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image.
- C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
- D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
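The image-to-template-to-group chain, sketched with hypothetical resource names:

```
# Custom image from the existing disk.
gcloud compute images create app-image \
    --source-disk=app-disk --source-disk-zone=us-central1-a

# Instance template from the image, then an autoscaled managed instance group.
gcloud compute instance-templates create app-template --image=app-image
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling app-mig \
    --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.65
```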
Question #85
Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths and ports you authorize, but you don't want to rely on static IP addresses or subnets because the app can autoscale.
How should you restrict communications?
- A. Use separate VPCs to restrict traffic.
- B. Use firewall rules based on network tags attached to the compute instances.
- C. Use Cloud DNS and only allow connections from authorized hostnames.
- D. Use service accounts, and configure the web application so that particular service accounts have access.
B. Use firewall rules based on network tags attached to the compute instances.
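Because tags follow the instances, the rule keeps working as the group autoscales. A minimal example (network, tags, and port are hypothetical):

```
# Allow only web-tier instances to reach db-tier instances on one port.
gcloud compute firewall-rules create allow-web-to-db \
    --network=app-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:5432 --source-tags=web --target-tags=db
```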
Question #86
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases, ensure that you don't run out of storage, keep CPU usage below 75% of the available cores, and keep replication lag below 60 seconds.
What are the correct steps to meet your requirements?
- A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
- B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master.
- C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag.
- D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
Question #87
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database that can operate on hundreds of terabytes of data.
What is the Google-recommended tool for such applications?
- A. Cloud Spanner, because it is globally distributed.
- B. Cloud SQL, because it is a fully managed relational database.
- C. Cloud Firestore, because it offers real-time synchronization across devices.
- D. BigQuery, because it is designed for large-scale processing of tabular data.
D. BigQuery, because it is designed for large-scale processing of tabular data.
Question #88
You have deployed an application to Kubernetes Engine, and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post-mortem.
What should you do?
- A. Use gcloud sql instances restart.
- B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role.
- C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud SQL.
- D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.
C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud SQL.
Question #89
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage.
What is the Google- recommended way for your application to authenticate to the required Google Cloud services?
- A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
- B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.
- C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
- D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
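The binding itself is a one-liner (project ID and service account are hypothetical):

```
# Grant the VMs' service account the right to publish to Pub/Sub topics.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:app-vm@my-project.iam.gserviceaccount.com" \
    --role="roles/pubsub.publisher"
```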
Question #90
You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.
How should you deploy the VPN?
- A. Use VPC Network Peering between the VPC and the on-premises network.
- B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
- C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
- D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
Question #91
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed. You want to optimize storage and follow Google-recommended practices.
What should you do?
- A. Configure the expiration time for your tables at 45 days.
- B. Make the tables time-partitioned, and configure the partition expiration at 45 days.
- C. Rely on BigQuery's default behavior to prune application logs older than 45 days.
- D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days.
B. Make the tables time-partitioned, and configure the partition expiration at 45 days.
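A sketch of creating such a table (dataset and table names are hypothetical; the expiration is in seconds, and 45 days = 3,888,000 seconds):

```
bq mk --table \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=3888000 \
    logs_dataset.app1_logs
```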
Question #92
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
- A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
- B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command.
- C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command.
- D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
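The same pairing can also be done from the CLI; a sketch with hypothetical names and thresholds:

```
# Pod level: scale replicas of a deployment on CPU utilization.
kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=20

# Node level: enable the cluster autoscaler on the node pool.
gcloud container clusters update app-cluster --zone=us-central1-a \
    --enable-autoscaling --min-nodes=1 --max-nodes=10
```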
Question #93
You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?
- A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
- B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
- C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
- D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
Question #94
Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.
How should you design to meet Google best practices?
- A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
- B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
- C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
- D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
Question #95
Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance.
What should you do?
- A. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found.
- B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
- C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
- D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for our REST API.
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
Question #96
Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month.
What should you do?
- A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user.
- B. In the BigQuery interface, execute a query on the JOBS table to get the required information.
- C. Use "bq show" to list all jobs. Per job, use "bq ls" to list job information and get the required information.
- D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
Question #97
You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the startup time for VMs in the instance group.
What should you do?
- A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies.
- B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
- C. Use Puppet to create the managed instance group and install the OS package dependencies.
- D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
Question #98
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables. You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
- A. Create a group per country. Add analysts to their respective country-groups. Create a single group "all_analysts", and add all country-groups as members. Grant the "all_analysts" group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
- B. Create a group per country. Add analysts to their respective country-groups. Create a single group "all_analysts", and add all country-groups as members. Grant the "all_analysts" group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group.
- C. Create a group per country. Add analysts to their respective country-groups. Create a single group "all_analysts", and add all country-groups as members. Grant the "all_analysts" group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group.
- D. Create a group per country. Add analysts to their respective country-groups. Create a single group "all_analysts", and add all country-groups as members. Grant the "all_analysts" group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
A. Create a group per country. Add analysts to their respective country-groups. Create a single group "all_analysts", and add all country-groups as members. Grant the "all_analysts" group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
Question #99
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads, identified as follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; and 200 GB of customer session state data that allows customers to restart sessions even if offline for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
- A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
- B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
- C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
- D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
Question #100
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches.
Which feature of Kubernetes should you use to accomplish this?
- A. StatefulSets.
- B. Role-based access control.
- C. Container environment variables.
- D. Persistent Volumes.
A. StatefulSets.
Question #101
You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.
What should you do?
- A. Customize the cache keys to omit the protocol from the key.
- B. Shorten the expiration time of the cached objects.
- C. Make sure the HTTP(S) header "Cache-Region" points to the closest region of your users.
- D. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.
A. Customize the cache keys to omit the protocol from the key.
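A sketch of that change on a hypothetical backend service, so http:// and https:// requests share cache entries:

```
gcloud compute backend-services update web-backend \
    --global --no-cache-key-include-protocol
```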
Question #102
Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.
How should you collect these logs from both VMs and services?
- A. All admin and VM system logs are automatically collected by Stackdriver.
- B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.
- C. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.
- D. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.
Question #103
You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.
What should you do?
- A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
- B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
- C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications.
- D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
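A minimal sketch with hypothetical version IDs and a 10% canary:

```
# Deploy the new version without shifting traffic to it.
gcloud app deploy --version=v2 --no-promote

# Split production traffic between the current and new versions.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1
```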
Question #104
All Compute Engine instances in your VPC should be able to connect to an Active Directory server on specific ports. Any other traffic emerging from your instances is not allowed. You want to enforce this using VPC firewall rules.
How should you configure the firewall rules?
- A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances.
- B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active Directory traffic for all instances.
- C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block all traffic for all instances.
- D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 1000 to block all traffic for all instances.
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances.
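Lower priority numbers win, so the allow rule at 100 takes precedence over the deny rule at 1000. A sketch with an illustrative network, AD server address, and ports:

```
# Catch-all deny at low priority (high number).
gcloud compute firewall-rules create deny-all-egress \
    --network=app-vpc --direction=EGRESS --action=DENY \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=1000

# Narrow allow for Active Directory traffic at higher priority (lower number).
gcloud compute firewall-rules create allow-ad-egress \
    --network=app-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:88,tcp:389,tcp:636 --destination-ranges=10.10.0.5/32 --priority=100
```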
Question #105
Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results.
What should the customer do to improve their model's results over time?
- A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.
- B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.
- C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance.
- D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.
D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.
Question #106
A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically.
How should you deploy to GKE?
- A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
- B. Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
- C. Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load balance the HTTPS traffic.
- D. Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
Question #107
You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operations reliability and end-to-end in-transit encryption based on Google best practices.
What should you do?
- A. Create a cross-region load balancer with URL Maps.
- B. Create an HTTPS load balancer with URL maps.
- C. Create appropriate instance groups and instances. Configure SSL proxy load balancing.
- D. Create a global forwarding rule. Configure SSL proxy balancing.
B. Create an HTTPS load balancer with URL maps.
Question #108
You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429.
How should you handle these types of errors?
- A. Use gRPC instead of HTTP for better performance.
- B. Implement retry logic using a truncated exponential backoff strategy.
- C. Make sure the Cloud Storage bucket is multi-regional for geo-redundancy.
- D. Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
B. Implement retry logic using a truncated exponential backoff strategy.
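A minimal shell sketch of truncated exponential backoff around a gsutil call (bucket name and cap are hypothetical):

```
backoff=1
max_backoff=64
until gsutil cp ./report.csv gs://my-bucket/; do
  sleep $(( backoff + RANDOM % 2 ))   # wait, with a little jitter
  backoff=$(( backoff * 2 ))          # double the wait each attempt...
  if (( backoff > max_backoff )); then backoff=$max_backoff; fi  # ...up to a cap
done
```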
Question #109
You need to develop procedures to test a disaster plan for a mission-critical application. You want to use Google-recommended practices and native capabilities within GCP.
What should you do?
- A. Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your tests.
- B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.
- C. Use gcloud scripts to automate service provisioning. Use Activity Logs monitor and debug your tests.
- D. Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.
B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.
Question #110
Your company creates rendering software which users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices.
How should you store the files?
- A. Save the files in a Multi-Regional Cloud Storage bucket.
- B. Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region.
- C. Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region.
- D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
Question #111
Your company acquired a healthcare startup and must retain its customers' medical information for up to 4 more years, depending on when it was created. Your corporate policy is to securely retain this data, and then delete it as soon as regulations allow.
Which approach should you take?
- A. Store the data in Google Drive and manually delete records as they expire.
- B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely.
- C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire.
- D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.
C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire.
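A sketch of such a lifecycle rule (bucket name hypothetical; age is in days, and 4 years is roughly 1461 days):

```
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 1461}}]}
EOF
gsutil lifecycle set lifecycle.json gs://medical-records
```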
Question #112
You are deploying a PHP App Engine Standard service with SQL as the backend. You want to minimize the number of queries to the database.
What should you do?
- A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL.
- B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query results.
- C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called "cached-queries".
- D. Set the memcache service level to shared. Create a key called "cached-queries", and return database values from the key before using a query to Cloud SQL.
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL.
Question #113
You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP, leveraging Google best practices.
What should you do?
- A. Using the Cron service provided by App Engine, publishing messages directly to a message-processing utility service running on Compute Engine instances.
- B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
- C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
- D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
Question #114
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company's mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-Mbps internet connection.
What actions will meet your company's needs?
- A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
- B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
- C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
- D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
Question #115
You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
- A. Cloud Pub/Sub alone.
- B. Cloud Pub/Sub to Cloud Dataflow.
- C. Cloud Pub/Sub to Stackdriver.
- D. Cloud Pub/Sub to Cloud SQL.
B. Cloud Pub/Sub to Cloud Dataflow.
Question #116
The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources.
What Google domain and project structure should you recommend?
- A. Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.
- B. Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.
- C. Create a single G Suite account to manage users with each stage of each application in its own project.
- D. Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.
C. Create a single G Suite account to manage users with each stage of each application in its own project.
Question #117
A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly.
What three steps should you take to diagnose the problem? Choose 3 answers.
- A. Delete the virtual machine (VM) and disks and create a new one.
- B. Delete the instance, attach the disk to a new VM, and investigate.
- C. Take a snapshot of the disk and connect to a new machine to investigate.
- D. Check inbound firewall rules for the network the machine is connected to.
- E. Connect the machine to another network with very simple firewall rules and investigate.
- F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.
C. Take a snapshot of the disk and connect to a new machine to investigate.
D. Check inbound firewall rules for the network the machine is connected to.
F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.
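For option F, the serial-console steps look roughly like this (instance name and zone are hypothetical):

```
# Dump the serial port output for the unresponsive instance.
gcloud compute instances get-serial-port-output db-vm --zone=us-central1-a

# Enable and open the interactive serial console.
gcloud compute instances add-metadata db-vm --zone=us-central1-a \
    --metadata=serial-port-enable=TRUE
gcloud compute connect-to-serial-port db-vm --zone=us-central1-a
```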
Question #118
JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data.
What service account key-management strategy should you recommend?
- A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).
- B. Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.
- C. Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.
- D. Deploy a custom authentication service on GCE/Google Kubernetes Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.
A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).
Question #119
JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals.
Which metrics should you track?
- A. Error rates for requests from Asia.
- B. Latency difference between US and Asia.
- C. Total visits, error rates, and latency from Asia.
- D. Total visits and average latency for users from Asia.
- E. The number of character sets present in the database.
D. Total visits and average latency for users from Asia.
Question #120
The migration of JencoMart's application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram (not reproduced in this document). You want to maximize throughput.
What are three potential bottlenecks? Choose 3 answers.
- A. A single VPN tunnel, which limits throughput.
- B. A tier of Google Cloud Storage that is not suited for this task.
- C. A copy command that is not suited to operate over long distances.
- D. Fewer virtual machines (VMs) in GCP than on-premises machines.
- E. A separate storage layer outside the VMs, which is not suited for this task.
- F. Complicated internet connectivity between the on-premises infrastructure and GCP.
A. A single VPN tunnel, which limits throughput.
B. A tier of Google Cloud Storage that is not suited for this task.
F. Complicated internet connectivity between the on-premises infrastructure and GCP.
Question #121
JencoMart wants to move their User Profiles database to Google Cloud Platform.
Which Google Database should they use?
- A. Cloud Spanner.
- B. Google BigQuery.
- C. Google Cloud SQL.
- D. Google Cloud Datastore.
D. Google Cloud Datastore.
Question #122
Mountkirk Games wants you to design their new testing strategy.
How should the test coverage differ from their existing backends on the other platforms?
- A. Tests should scale well beyond the prior approaches.
- B. Unit tests are no longer required, only end-to-end tests.
- C. Tests should be applied after the release is in the production environment.
- D. Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.
A. Tests should scale well beyond the prior approaches.
Question #123
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way.
How should you design the process?
- A. Create a scalable environment in GCP for simulating production load.
- B. Use the existing infrastructure to test the GCP-based backend at scale.
- C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
- D. Create a set of static environments in GCP to test different levels of load, for example: high, medium, and low.
A. Create a scalable environment in GCP for simulating production load.
Question #124
Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:
- Services are deployed redundantly across multiple regions in the US and Europe
- Only frontend services are exposed on the public internet
- They can provide a single frontend IP for their fleet of services
- Deployment artifacts are immutable
Which set of products should they use?
- A. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine.
- B. Google Cloud Storage, Google App Engine, Google Network Load Balancer.
- C. Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer.
- D. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager.
C. Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer.
Question #125
Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times.
What should they investigate first?
- A. Verify that the database is online.
- B. Verify that the project quota hasn't been exceeded.
- C. Verify that the new feature code did not introduce any performance bugs.
- D. Verify that the load-testing team is not running their tool against production.
B. Verify that the project quota hasn't been exceeded.
Question #126
Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.
What should you do to isolate development environments from staging and production?
- A. Create a project for development and test and another for staging and production.
- B. Create a network for development and test and another for staging and production.
- C. Create one subnetwork for development and another for staging and production.
- D. Create one project for development, a second for staging and a third for production.
D. Create one project for development, a second for staging and a third for production.
Question #127
Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements.
Which combination of Google technologies will meet all of their requirements?
- A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL.
- B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery.
- C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow.
- D. Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow.
- E. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc.
B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery.
Question #128
For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.
Which two steps should be part of their migration plan? (Choose two.)
- A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
- B. Write a schema migration plan to denormalize data for better performance in BigQuery.
- C. Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
- D. Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.
- E. Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.
A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
B. Write a schema migration plan to denormalize data for better performance in BigQuery.
Question #129
For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games.
Considering the Mountkirk Games business and technical requirements, what should you do?
- A. Create network load balancers. Use preemptible Compute Engine instances.
- B. Create network load balancers. Use non-preemptible Compute Engine instances.
- C. Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.
- D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.
D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.
Question #130
For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available.
Which two steps should they take? (Choose two.)
- A. Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.
- B. Begin packaging their game backend artifacts in container images and running them on Kubernetes Engine to improve the ability to scale up or down based on game activity.
- C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.
- D. Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.
- E. Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.
B. Begin packaging their game backend artifacts in container images and running them on Kubernetes Engine to improve the ability to scale up or down based on game activity.
C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.
Question #131
For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform's resilience to changes in mobile network latency.
What should you do?
- A. Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.
- B. Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.
- C. Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.
- D. Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.
A. Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.
Question #132
For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games.
Considering the business and technical requirements, what should you do?
- A. Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.
- B. Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.
- C. Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.
- D. Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.
D. Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.
Question #133
For this question, refer to the Mountkirk Games case study.
Which managed storage option meets Mountkirk's technical requirement for storing game activity in a time series database service?
- A. Cloud Bigtable.
- B. Cloud Spanner.
- C. BigQuery.
- D. Cloud Datastore.
A. Cloud Bigtable.
Question #134
For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API. You want to follow Google-recommended practices.
How should you design the backend?
- A. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.
- B. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.
- C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.
- D. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.
C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.
Question #135
TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure. You want to allow analysts to centrally query the vehicle data.
Which architecture should you recommend?
(The answer choices for this question are architecture diagrams that are not reproduced here.)
A.
Question #136
The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework.
Which method should they use?
- A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.
- B. Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.
- C. Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public.
- D. Use Google Container Engine with a Django Python container. Focus on an API for the public.
- E. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.
A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.
Question #137
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data.
What should you do?
- A. Build or leverage an OAuth-compatible access control system.
- B. Build SAML 2.0 SSO compatibility into your authentication system.
- C. Restrict data access based on the source IP address of the partner systems.
- D. Create secondary credentials for each dealer that can be given to the trusted third party.
A. Build or leverage an OAuth-compatible access control system.
Question #138
TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, or about 40 TB per hour.
How should you design the data ingestion?
- A. Vehicles write data directly to GCS.
- B. Vehicles write data directly to Google Cloud Pub/Sub.
- C. Vehicles stream data directly to Google BigQuery.
- D. Vehicles continue to write data using the existing system (FTP).
B. Vehicles write data directly to Google Cloud Pub/Sub.
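For context, a minimal sketch of the Pub/Sub side of such an ingestion pipeline; the topic and subscription names are hypothetical:

```
# Create a topic for vehicle telemetry and a pull subscription for the ETL consumers
gcloud pubsub topics create vehicle-telemetry
gcloud pubsub subscriptions create telemetry-etl --topic=vehicle-telemetry

# Smoke-test the topic from the CLI (vehicles would use a client library instead)
gcloud pubsub topics publish vehicle-telemetry --message="vehicle_id=42,ts=1514764800"
```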
Question #139
You analyzed TerramEarth's business requirement to reduce downtime, and found that the majority of the time savings can be achieved by reducing customers' wait time for parts. You have decided to focus on reducing the 3-week aggregate reporting time.
Which modifications to the company's processes should you recommend?
- A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
- B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
- C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
- D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
Question #140
Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
- A. Opex/capex allocation, LAN changes, capacity planning.
- B. Capacity planning, TCO calculations, opex/capex allocation.
- C. Capacity planning, utilization measurement, data center expansion.
- D. Data Center expansion, TCO calculations, utilization measurement.
B. Capacity planning, TCO calculations, opex/capex allocation.
Question #141
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections.
What should you do?
- A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.
- B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket.
- C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
- D. Directly transfer the files to different Google Cloud Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
D. Directly transfer the files to different Google Cloud Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
Question #142
TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100 K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?
- A. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
- B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
- C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.
- D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region bucket and use a Cloud Dataproc cluster to finish the job.
D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region bucket and use a Cloud Dataproc cluster to finish the job.
Question #143
TerramEarth has equipped all connected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs.
What should they do?
- A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.
- B. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Google BigQuery.
- C. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Cloud Bigtable.
- D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.
D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.
Question #144
Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation.
Which two architectures should you consider? (Choose two.)
- A. Treat every microservice call between modules on the vehicle as untrusted.
- B. Require IPv6 for connectivity to ensure a secure address space.
- C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
- D. Use a functional programming language to isolate code execution cycles.
- E. Use multiple connectivity subsystems for redundancy.
- F. Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.
A. Treat every microservice call between modules on the vehicle as untrusted.
C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
Question #145
Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field.
How can you accomplish this goal?
- A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
- B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
- C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.
- D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
Question #146
For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery.
What should you do?
- A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
- B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
- C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
- D. Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
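As a rough sketch of the partition-expiration half of this answer (dataset, table, and schema are hypothetical; 36 months is approximated as 94,608,000 seconds):

```
# Create a day-partitioned table whose partitions expire after ~36 months
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_expiration=94608000 \
  mydataset.eu_telemetry \
  vehicle_id:STRING,ts:TIMESTAMP,payload:STRING
```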
Question #147
For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.
Which two actions should you take?
- A. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".
- B. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Coldline", and Action: "Set to Nearline", and create a second GCS life-cycle rule with Age: "91", Storage Class: "Coldline", and Action: "Set to Nearline".
- C. Create a Cloud Storage lifecycle rule with Age: "90", Storage Class: "Standard", and Action: "Set to Nearline", and create a second GCS life-cycle rule with Age: "91", Storage Class: "Nearline", and Action: "Set to Coldline".
- D. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Nearline", and Action: "Delete".
A. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".
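A minimal lifecycle config implementing answer A might look like this (bucket name hypothetical):

```
cat > lifecycle.json <<'EOF'
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 365, "matchesStorageClass": ["COLDLINE"]}
      }
    ]
  }
}
EOF
gsutil lifecycle set lifecycle.json gs://terramearth-telemetry
```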
Question #148
For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth.
Considering the TerramEarth business and technical requirements, what should you do?
- A. Replace the existing data warehouse with BigQuery. Use table partitioning.
- B. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.
- C. Replace the existing data warehouse with BigQuery. Use federated data sources.
- D. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine preemptible instance with 32 CPUs.
A. Replace the existing data warehouse with BigQuery. Use table partitioning.
Question #149
For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.
What should you do?
- A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
- B. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.
- C. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.
- D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
Question #150
For this question, refer to the TerramEarth case study.
Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?
- A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
- B. Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
- C. Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.
- D. Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.
A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
Question #151
For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?
- A. Google Kubernetes Engine with an SSL Ingress.
- B. Cloud IoT Core with public/private key pairs.
- C. Compute Engine with project-wide SSH keys.
- D. Compute Engine with specific SSH keys.
B. Cloud IoT Core with public/private key pairs.
Question #152
The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP). The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects.
What can they do?
- A. Grant the operations engineer access to use Google Cloud Shell.
- B. Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
- C. Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.
- D. Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.
B. Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
Question #153
At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center.
How should he proceed?
- A. Create a cron script using gsutil to copy the files to a Coldline Storage bucket.
- B. Create a cron script using gsutil to copy the files to a Regional Storage bucket.
- C. Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.
- D. Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
A. Create a cron script using gsutil to copy the files to a Coldline Storage bucket.
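A minimal version of that cron approach (bucket name and paths hypothetical):

```
# One-time: create the archive bucket in the Coldline storage class
gsutil mb -c coldline gs://dress4win-db-archive

# Crontab entry: copy the compressed backups every night at 02:00
0 2 * * * gsutil -m cp /backups/*.tar.gz gs://dress4win-db-archive/
```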
Question #154
Dress4Win has asked you to recommend machine types they should deploy their application servers to.
How should you proceed?
- A. Perform a mapping of the on-premises physical hardware cores and RAM to the nearest machine types in the cloud.
- B. Recommend that Dress4Win deploy application servers to machine types that offer the highest RAM to CPU ratio available.
- C. Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the desired performance is reached.
- D. Identify the number of virtual cores and RAM associated with the application server virtual machines, align them to a custom machine type in the cloud, monitor performance, and scale the machine types up until the desired performance is reached.
D. Identify the number of virtual cores and RAM associated with the application server virtual machines, align them to a custom machine type in the cloud, monitor performance, and scale the machine types up until the desired performance is reached.
Question #155
As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load.
They want to ensure that:
* The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day
* Their administrators are notified automatically when their application reports errors.
* They can filter their aggregated logs down in order to debug one piece of the application across many hosts
Which Google StackDriver features should they use?
- A. Logging, Alerts, Insights, Debug.
- B. Monitoring, Trace, Debug, Logging.
- C. Monitoring, Logging, Alerts, Error Reporting.
- D. Monitoring, Logging, Debug, Error Reporting.
C. Monitoring, Logging, Alerts, Error Reporting.
Question #156
Dress4Win would like to become familiar with deploying applications to the cloud by successfully deploying some applications quickly, as is. They have asked for your recommendation.
What should you advise?
- A. Identify self-contained applications with external dependencies as a first move to the cloud.
- B. Identify enterprise applications with internal dependencies and recommend these as a first move to the cloud.
- C. Suggest moving their in-house databases to the cloud and continue serving requests to on-premises applications.
- D. Recommend moving their message queuing servers to the cloud and continue handling requests to on-premises applications.
C. Suggest moving their in-house databases to the cloud and continue serving requests to on-premises applications.
Question #157
Dress4Win has asked you for advice on how to migrate their on-premises MySQL deployment to the cloud. They want to minimize downtime and performance impact to their on-premises solution during the migration.
Which approach should you recommend?
- A. Create a dump of the on-premises MySQL master server, and then shut it down, upload it to the cloud environment, and load into a new MySQL cluster.
- B. Set up a MySQL replica server/slave in the cloud environment, and configure it for asynchronous replication from the MySQL master server on-premises until cutover.
- C. Create a new MySQL cluster in the cloud, configure applications to begin writing to both on-premises and cloud MySQL masters, and destroy the original cluster at cutover.
- D. Create a dump of the MySQL replica server into the cloud environment, load it into Google Cloud Datastore, and configure applications to read/write to Cloud Datastore at cutover.
B. Set up a MySQL replica server/slave in the cloud environment, and configure it for asynchronous replication from the MySQL master server on-premises until cutover.
Question #158
Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy.
What should they do?
- A. Install the Stackdriver agent on all of the legacy web servers.
- B. In the Cloud Platform Console download the list of the uptime servers' IP addresses and create an inbound firewall rule.
- C. Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https:// cloud.google.com/monitoring).
- D. Configure their legacy web servers to allow requests that contain user-Agent HTTP header when the value matches GoogleStackdriverMonitoring- UptimeChecks (https://cloud.google.com/monitoring).
B. In the Cloud Platform Console download the list of the uptime servers' IP addresses and create an inbound firewall rule.
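A sketch of such a rule, assuming you have downloaded the uptime-check source ranges from the Stackdriver documentation (the ranges shown are placeholders):

```
gcloud compute firewall-rules create allow-uptime-checks \
  --direction=INGRESS --action=ALLOW --rules=tcp:80,tcp:443 \
  --source-ranges=<uptime-check-ip-range-1>,<uptime-check-ip-range-2>
```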
Question #159
As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in.
Which configuration should Dress4Win use?
- A. Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.
- B. Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.
- C. Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.
- D. Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.
A. Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.
Question #160
Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs.
Which additional testing methods should the developers employ to prevent an outage?
- A. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
- B. They should add additional unit tests and production scale load tests on their cloud staging environment.
- C. They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.
- D. They should add canary tests so developers can measure how much of an impact the new release causes to latency.
B. They should add additional unit tests and production scale load tests on their cloud staging environment.
Question #161
You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority.
Which cloud services should you choose?
- A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
- B. Google Cloud Storage Nearline to store the data, and gsutil to access the data.
- C. Google Bigtable with US or EU as location to store the data, and gcloud to access the data.
- D. BigQuery to store the data, and a web server cluster in a managed instance group to access the data. Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.
A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
Question #162
The current Dress4Win system architecture has high latency for some customers because it is located in one data center. As part of a future evaluation and optimization for performance in the cloud, Dress4Win wants to distribute its system architecture across multiple locations on Google Cloud Platform.
Which approach should they use?
- A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.
- B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.
- C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
- D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of a separate managed instance groups.
A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.
Question #163
For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months.
How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?
- A. Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.
- B. Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.
- C. Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.
- D. Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.
D. Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.
Question #164
For this question, refer to the Dress4Win case study.
Considering the given business requirements, how would you automate the deployment of web and transactional data layers?
- A. Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.
- B. Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.
- C. Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.
- D. Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.
A. Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.
Question #165
For this question, refer to the Dress4Win case study.
Which of the compute services should be migrated as is and would still be an optimized architecture for performance in the cloud?
- A. Web applications deployed using App Engine standard environment.
- B. RabbitMQ deployed using an unmanaged instance group.
- C. Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode.
- D. Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types.
D. Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types.
Question #166
For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.
What should you do?
- A. Use Stackdriver Trace to create a trace list analysis.
- B. Use Stackdriver Monitoring to create a dashboard on the project's activity.
- C. Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.
- D. Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.
D. Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.
Question #167
For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.
Considering Dress4Win's business and technical requirements, what should you do?
- A. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.
- B. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.
- C. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google's default encryption at rest when storing files in Cloud Storage.
- D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.
D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.
Question #168
For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.
What change in the on-premises architecture should you make?
- A. Replace RabbitMQ with Google Pub/Sub.
- B. Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.
- C. Resize compute resources to match predefined Compute Engine machine types.
- D. Containerize the microservices and host them in Google Kubernetes Engine.
D. Containerize the microservices and host them in Google Kubernetes Engine.
Question #169
Which of the following services provides real-time messaging?
- A. Cloud Pub/Sub.
- B. BigQuery.
- C. App Engine.
- D. Datastore.
A. Cloud Pub/Sub.
Question #170
Which of the following tasks would Nearline Storage be well suited for?
- A. A mounted Linux file system.
- B. Image assets for a high traffic website.
- C. Frequently read files.
- D. Infrequently read data backups.
D. Infrequently read data backups.
Question #171
Which of the following products will allow you to administer your projects through a browser-based command line?
- A. Cloud Datastore.
- B. Cloud Command-line.
- C. Cloud Terminal.
- D. Cloud Shell.
D. Cloud Shell.
Question #172
Cloud SQL is based on which database engine?
- A. Microsoft SQL Server.
- B. MySQL.
- C. Oracle.
- D. Informix.
B. MySQL.
Question #173
Which of the following products will allow you to perform live debugging without stopping your application?
- A. App Engine Active Debugger (AEAD).
- B. Stackdriver Debugger.
- C. Code Inspector.
- D. Pause IT.
B. Stackdriver Debugger.
Question #174
Which of these options is not a valid Cloud Storage class?
- A. Glacier Storage.
- B. Nearline Storage.
- C. Coldline Storage.
- D. Regional Storage.
A. Glacier Storage.
Question #175
Regarding Cloud Storage, which option allows any user to access a Cloud Storage resource for a limited time, using a specific URL?
- A. Open Buckets.
- B. Temporary Resources.
- C. Signed URLs.
- D. Temporary URLs.
C. Signed URLs.
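For context, gsutil can mint such a URL from a service account key; a sketch with placeholder names:

```
# Generate a URL that grants anyone GET access to the object for 10 minutes
gsutil signurl -d 10m /path/to/service-account-key.json gs://my-bucket/private-object.png
```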
Question #176
Of the options given, which is a NoSQL database?
- A. Cloud Datastore.
- B. Cloud SQL.
- C. All of the given options.
- D. Cloud Storage.
A. Cloud Datastore.
Question #177
Container Engine allows orchestration of what type of containers?
- A. Blue Whale.
- B. LXC.
- C. BSD Jails.
- D. Docker.
D. Docker.
Question #178
Regarding Cloud IAM, what type of role(s) are available?
- A. Basic roles and Compiled roles.
- B. Primitive roles and Predefined roles.
- C. Simple roles.
- D. Basic roles and Curated roles.
B. Primitive roles and Predefined roles.
Question #179
Which of the following products will allow you to host a static website?
- A. Cloud SDK.
- B. Cloud Endpoints.
- C. Cloud Storage.
- D. Cloud Datastore.
C. Cloud Storage.
Question #180
Container Engine is built on which open source system?
- A. Swarm.
- B. Kubernetes.
- C. Docker Orchestrate.
- D. Mesos.
B. Kubernetes.
Question #181
Cloud Source Repositories provide a hosted version of which version control system?
- A. Git.
- B. RCS.
- C. SVN.
- D. Mercurial.
A. Git.
Question #182
Which of the following is an analytics data warehouse?
- A. Cloud SQL.
- B. BigQuery.
- C. Datastore.
- D. Cloud Storage.
B. BigQuery.
Question #183
Which service offers the ability to create and run virtual machines?
- A. Google Virtualization Engine.
- B. Compute Containers.
- C. VM Engine.
- D. Compute Engine.
D. Compute Engine.
Question #184
Which of the following is not helpful for mitigating the impact of an unexpected failure or reboot?
- A. Use persistent disks.
- B. Configure tags and labels.
- C. Use startup scripts to re-configure the system as needed.
- D. Back up your data.
B. Configure tags and labels.
Question #185
Single sign-on (SSO) with G Suite is based on _____?
- A. SAML2.
- B. JWT.
- C. Service accounts.
- D. JSON.
A. SAML2.
Question #186
Which tool allows you to sync data in your Google domain with Active Directory?
- A. Google Cloud Directory Sync (GCDS).
- B. Google Active Directory (GAD).
- C. Google Domain Sync Service.
- D. Google LDAP Sync.
A. Google Cloud Directory Sync (GCDS).
Question #187
Regarding Cloud Storage: which of the following allows for time-limited access to buckets and objects without a Google account?
- A. Signed URLs.
- B. gsutil.
- C. Single sign-on.
- D. Temporary Storage Accounts.
A. Signed URLs.
Question #188
Which of the following is a virtual machine instance that can be terminated by Compute Engine without warning?
- A. A preemptible VM.
- B. A shared-core VM.
- C. A high-cpu VM.
- D. A standard VM.
A. A preemptible VM.
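Creating one is a single flag at instance creation; a sketch with placeholder names:

```
gcloud compute instances create batch-worker-1 \
  --zone=us-central1-a \
  --machine-type=n1-standard-1 \
  --preemptible
```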
Question #189
Regarding Compute Engine: What is a managed instance group?
- A. A managed instance group combines existing instances of different configurations into one manageable group.
- B. A managed instance group uses an instance template to create identical instances.
- C. A managed instance group creates a firewall around instances.
- D. A managed instance group is a set of servers used exclusively for batch processing.
B. A managed instance group uses an instance template to create identical instances.
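A minimal sketch of the template-plus-group pattern (all names are placeholders):

```
# The template captures one identical configuration...
gcloud compute instance-templates create web-template \
  --machine-type=n1-standard-2 \
  --image-family=debian-9 --image-project=debian-cloud

# ...and the managed group stamps out identical instances from it
gcloud compute instance-groups managed create web-mig \
  --zone=us-central1-a --template=web-template --size=3
```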
Question #190
What type of firewall rule(s) does Google Cloud's networking support?
- A. deny.
- B. allow, deny & filtered.
- C. allow.
- D. allow & deny.
D. allow & deny.
Question #191
How are subnetworks different from legacy networks?
- A. They're the same, only the branding is different.
- B. Each subnetwork controls the IP address range used for instances that are allocated to that subnetwork.
- C. With subnetworks IP address allocation occurs at the global network level.
- D. Legacy networks are the preferred way to create networks.
B. Each subnetwork controls the IP address range used for instances that are allocated to that subnetwork.
Question #192
Which of the following is not a valid metric for triggering autoscaling?
- A. Google Cloud Pub/Sub queuing.
- B. Average CPU utilization.
- C. Stackdriver Monitoring metrics.
- D. App Engine Task Queues.
D. App Engine Task Queues.
Question #193
Which of the following features makes applying firewall settings easier?
- A. Service accounts.
- B. Tags.
- C. Metadata.
- D. Labels.
B. Tags.
Question #194
What option does Cloud SQL offer to help with high availability?
- A. Point-in-time recovery.
- B. The AlwaysOn setting.
- C. Snapshots.
- D. Failover replicas.
D. Failover replicas.
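For the legacy Cloud SQL for MySQL high-availability setup, the failover replica was created roughly like this (instance names hypothetical):

```
gcloud sql instances create mysql-failover \
  --master-instance-name=mysql-primary \
  --replica-type=FAILOVER
```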
Question #195
Regarding Compute Engine: when executing a startup script on a Linux server, which user does the instance execute the script as?
- A. ubuntu.
- B. The Google provided "gceinstance" user.
- C. Whatever user you specify in the console.
- D. root.
D. root.
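A sketch of attaching a startup script at instance creation; because it runs as root, no sudo is needed inside the script (names are placeholders):

```
cat > startup.sh <<'EOF'
#!/bin/bash
# Runs as root on every boot
apt-get update && apt-get install -y nginx
EOF

gcloud compute instances create web-1 \
  --zone=us-central1-a \
  --metadata-from-file startup-script=startup.sh
```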
Question #196
Which of the following methods will not cause a shutdown script to be executed?
- A. When an instance shuts down through a request to the guest operating system.
- B. A preemptible instance being terminated.
- C. An instances.reset API call.
- D. Shutting down via the cloud console.
C. An instances.reset API call.
Question #197
Which type of account would you use in code when you want to interact with Google Cloud services?
- A. Google group.
- B. Service account.
- C. Code account.
- D. Google account.
B. Service account.
Question #198
Which of the following is not an IAM best practice?
- A. Use primitive roles by default.
- B. Treat each component of your application as a separate trust boundary.
- C. Grant roles at the smallest scope needed.
- D. Restrict who has access to create and manage service accounts in your project.
A. Use primitive roles by default.
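For contrast with the flagged anti-pattern, a predefined role granted at project scope to a group rather than an individual (all names hypothetical):

```
gcloud projects add-iam-policy-binding my-project \
  --member=group:storage-admins@example.com \
  --role=roles/storage.objectAdmin
```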
Question #199
Which of the following would not reduce your recovery time in the event of a disaster?
- A. Make it as easy as possible to adjust the DNS record to cut over to your warm standby server.
- B. Replace your warm standby server with a hot standby server.
- C. Use a highly preconfigured machine image for deploying new instances.
- D. Replace your active/active hybrid production environment (on-premises and GCP) with a warm standby server.
D. Replace your active/active hybrid production environment (on-premises and GCP) with a warm standby server.
Question #200
Which of the following is not a best practice for mitigating Denial of Service attacks on your Google Cloud infrastructure?
- A. Block SYN floods using Cloud Router.
- B. Isolate your internal traffic from the external world.
- C. Scale to absorb the attack.
- D. Reduce the attack surface for your GCE deployment.
A. Block SYN floods using Cloud Router.
Question #201
Which is the fastest instance storage option that will still be available when an instance is stopped?
- A. Local SSD.
- B. Standard Persistent Disk.
- C. SSD Persistent Disk.
- D. RAM disk.
C. SSD Persistent Disk.
Question #202
Which of these statements about Microsoft licenses is true?
- A. You can migrate your existing Microsoft application licenses to Compute Engine instances, but not your Microsoft Windows licenses.
- B. You can migrate your existing Microsoft Windows and Microsoft application licenses to Compute Engine instances.
- C. You cannot migrate your existing Microsoft Windows or Microsoft application licenses to Compute Engine instances.
- D. You can migrate your existing Microsoft Windows licenses to Compute Engine instances, but not your Microsoft application licenses.
B. You can migrate your existing Microsoft Windows and Microsoft application licenses to Compute Engine instances.
Question #203
Which database services support standard SQL queries?
- A. Cloud Bigtable and Cloud SQL.
- B. Cloud Spanner and Cloud SQL.
- C. Cloud SQL and Cloud Datastore.
- D. Cloud SQL.
B. Cloud Spanner and Cloud SQL.
Question #204
Which statement about IP addresses is false?
- A. You are charged for a static external IP address for every hour it is in use.
- B. You are not charged for ephemeral IP addresses.
- C. Google Compute Engine supports only IPv4 addresses, not IPv6.
- D. You are charged for a static external IP address when it is assigned but unused.
A. You are charged for a static external IP address for every hour it is in use.
Question #205
Which Google Cloud Platform service requires the least management because it takes care of the underlying infrastructure for you?
- A. Container Engine.
- B. Cloud Engine.
- C. App Engine.
- D. Docker containers running on Cloud Engine.
C. App Engine.
Question #206
To ensure that your application will handle the load even if an entire zone fails, what should you do?
- A. Don't select the "Multizone" option when creating your managed instance group.
- B. Spread your managed instance group over two zones and overprovision by 100%.
- C. Create a regional unmanaged instance group and spread your instances across multiple zones.
- D. Overprovision your regional managed instance group by at least 50%.
B. Spread your managed instance group over two zones and overprovision by 100%.
Question #207
If you do not grant a user named Bob permission to access a Cloud Storage bucket, but then use an ACL to grant access to an object inside that bucket to Bob, what will happen?
- A. Bob will be able to access all of the objects inside the bucket because he was granted access to at least one object in the bucket.
- B. Bob will be able to access the object because bucket and object ACLs are independent of each other.
- C. Bob will not be able to access the object because he does not have access to the bucket.
- D. It is not possible to grant access to an object when it is inside a bucket for which a user does not have access.
B. Bob will be able to access the object because bucket and object ACLs are independent of each other.
Question #208
To set up a virtual private network between your office network and Google Cloud Platform and have the routes automatically updated when the network topology changes, what is the minimal number of each type of component you need to implement?
- A. 2 Cloud VPN Gateways and 1 Peer Gateway.
- B. 1 Cloud VPN Gateway, 1 Peer Gateway, and 1 Cloud Router.
- C. 2 Peer Gateways and 1 Cloud Router.
- D. 2 Cloud VPN Gateways and 1 Cloud Router.
B. 1 Cloud VPN Gateway, 1 Peer Gateway, and 1 Cloud Router.
Question #209
Which of the following statements about encryption on GCP is not true?
- A. Google Cloud Platform encrypts customer data stored at rest by default.
- B. Each encryption key is itself encrypted with a set of master keys.
- C. If you want to manage your own encryption keys for data on Google Cloud Storage, the only option is Customer-Managed Encryption Keys (CMEK) using Cloud KMS.
- D. Data in Google Cloud Platform is broken into subfile chunks for storage, and each chunk is encrypted at the storage level with an individual encryption key.
C. If you want to manage your own encryption keys for data on Google Cloud Storage, the only option is Customer-Managed Encryption Keys (CMEK) using Cloud KMS.
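Besides CMEK via Cloud KMS, Cloud Storage also accepts customer-supplied encryption keys (CSEK). Setting a Cloud KMS key as a bucket's default is one gsutil command; a sketch with placeholder resource names (the Cloud Storage service agent must also be granted access to the key):

```
gsutil kms encryption \
  -k projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key \
  gs://my-bucket
```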
Question #210
Which database service requires that you configure a failover replica to make it highly available?
- A. Cloud Spanner.
- B. Cloud SQL.
- C. BigQuery.
- D. Cloud Datastore.
B. Cloud SQL.
Question #211
Which of these is not a principle you should apply when setting roles and permissions?
- A. Whenever possible, assign roles to groups instead of to individuals.
- B. Grant users the appropriate permissions to facilitate least privilege.
- C. Whenever possible, assign primitive roles rather than predefined roles.
- D. Audit all policy changes by checking the Cloud Audit Logs.
C. Whenever possible, assign primitive roles rather than predefined roles.
Question #212
Which of these is not a recommended method of authenticating an application with a Google Cloud service?
- A. Use the gcloud and/or gsutil commands.
- B. Request an OAuth2 access token and use it directly.
- C. Embed the service account's credentials in the application's source code.
- D. Use one of the Google Cloud Client Libraries.
C. Embed the service account's credentials in the application's source code.
Question #213
What are two different features that fully isolate groups of VM instances?
- A. Firewall rules and subnetworks.
- B. Networks and subnetworks.
- C. Subnetworks and projects.
- D. Projects and networks.
D. Projects and networks.
Question #214
Suppose you have a web server that is working properly, but you can't connect to the VM instance over SSH. Which of these troubleshooting methods can you use without disrupting production traffic? (Select 3 answers.)
- A. Create a snapshot of the disk and use it to create a new disk; then attach the new disk to a new instance.
- B. Use netcat to try to connect to port 22.
- C. Access the serial console output.
- D. Create a startup script to collect information.
A. Create a snapshot of the disk and use it to create a new disk; then attach the new disk to a new instance.
B. Use netcat to try to connect to port 22.
C. Access the serial console output.
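Sketches of the non-disruptive checks (instance and disk names hypothetical):

```
# Read the serial console output without touching the instance
gcloud compute instances get-serial-port-output web-1 --zone=us-central1-a

# Snapshot the boot disk and clone it for offline inspection
gcloud compute disks snapshot web-1 --zone=us-central1-a --snapshot-names=web-1-debug
gcloud compute disks create web-1-copy --zone=us-central1-a --source-snapshot=web-1-debug

# Probe port 22 from another host
nc -vz <instance-external-ip> 22
```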
Question #215
To configure Stackdriver to monitor a web server and let you know if it goes down, what steps do you need to take? (Select 2 answers.)
- A. Install the Stackdriver Logging Agent on the web server.
- B. Create an alerting policy.
- C. Install the Stackdriver Monitoring Agent on the web server.
- D. Create an uptime check.
B. Create an alerting policy.
D. Create an uptime check.
Question #216
Which of these tools can you use to copy data from AWS S3 to Cloud Storage? (Select 2 answers.)
- A. Cloud Storage Transfer Service.
- B. S3 Storage Transfer Service.
- C. Cloud Storage Console.
- D. gsutil.
A. Cloud Storage Transfer Service.
D. gsutil.
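The gsutil route is a one-liner once AWS credentials are in your boto config; bucket names are placeholders:

```
# Mirror an S3 bucket into Cloud Storage, copying in parallel
gsutil -m rsync -r s3://my-aws-bucket gs://my-gcs-bucket
```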
Question #217
What are two of the actions you can take to troubleshoot a virtual machine instance that won't start up at all? (Select 2 answers.)
- A. Increase the CPU and memory on the instance by changing the machine type.
- B. Validate that your disk has a valid file system.
- C. Examine your virtual machine instance's serial port output.
- D. Connect to your virtual machine instance using SSH.
B. Validate that your disk has a valid file system.
C. Examine your virtual machine instance's serial port output.
Question #218
Which statements about application load testing are true? (Select 2 answers.)
- A. You should test at the maximum load that you expect to encounter.
- B. You should test at 50% more than the maximum load that you expect to encounter.
- C. It is not necessary to test sudden increases in traffic since GCP scales seamlessly.
- D. Your load tests should include testing sudden increases in traffic.
A. You should test at the maximum load that you expect to encounter.
D. Your load tests should include testing sudden increases in traffic.
Question #219
Which of these statements about resilience testing are true? (Select 2 answers.)
- A. In a resilience test, your application should keep running with little or no downtime.
- B. To test the resilience of an autoscaling instance group, you can terminate a random instance within that group.
- C. In order for an application to survive instance failures, it should not be stateless.
- D. Resilience testing is the same as disaster recovery testing.
A. In a resilience test, your application should keep running with little or no downtime.
B. To test the resilience of an autoscaling instance group, you can terminate a random instance within that group.
Question #220
Which combination of Stackdriver services will alert you about errors generated by your applications and help you locate the root cause in the code?
- A. Monitoring, Trace, and Debugger.
- B. Monitoring and Error Reporting.
- C. Debugger and Error Reporting.
- D. Alerts and Debugger.
C. Debugger and Error Reporting.
Question #221
If you have configured Stackdriver Logging to export logs to BigQuery, but log entries are not getting exported to BigQuery, what is the most likely cause?
- A. The Cloud Data Transfer Service has not been enabled.
- B. There isn't a firewall rule allowing traffic between Stackdriver and BigQuery.
- C. Stackdriver Logging does not have permission to write to the BigQuery dataset.
- D. The size of the Stackdriver log entries being exported exceeds the maximum capacity of the BigQuery dataset.
C. Stackdriver Logging does not have permission to write to the BigQuery dataset.
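The usual fix is granting the sink's writer identity edit rights on the destination; a sketch with placeholder names:

```
# Look up the sink's writer identity (a service account)
gcloud logging sinks describe my-bq-sink

# Grant that service account write access to BigQuery in the project
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:<writer-identity-from-above> \
  --role=roles/bigquery.dataEditor
```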
Question #222
You can use Stackdriver to monitor virtual machines on which cloud platforms?
- A. Google Cloud Platform, Microsoft Azure.
- B. Google Cloud Platform.
- C. Google Cloud Platform, Microsoft Azure, Amazon Web Services.
- D. Google Cloud Platform, Amazon Web Services.
D. Google Cloud Platform, Amazon Web Services.
Question #222
To minimize the risk of someone changing your log files to hide their activities, which of the following principles would help? (Select 3 answers.)
- A. Restrict usage of the owner role for projects and log buckets.
- B. Require two people to inspect the logs.
- C. Implement object versioning on the log-buckets.
- D. Encrypt the logs using Cloud KMS.
A. Restrict usage of the owner role for projects and log buckets.
B. Require two people to inspect the logs.
C. Implement object versioning on the log-buckets.
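Enabling versioning on a log bucket is a single gsutil call (bucket name hypothetical):

```
gsutil versioning set on gs://audit-log-archive
```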
Question #223
If network traffic between one Google Compute Engine instance and another instance is being dropped, what is the most likely cause?
- A. The instances are on a network with low bandwidth.
- B. The TCP keep-alive setting is too short.
- C. The instances are on a default network with no additional firewall rules.
- D. A firewall rule was deleted.
D. A firewall rule was deleted.
Question #224
Which of the following practices can help you develop more secure software? (Select 3 answers.)
- A. Penetration tests.
- B. Integrating static code analysis tools into your CI/CD pipeline.
- C. Encrypting your source code.
- D. Peer review of code.
A. Penetration tests.
B. Integrating static code analysis tools into your CI/CD pipeline.
D. Peer review of code.
Question #225
Which two places hold information you can use to monitor the effects of a Cloud Storage lifecycle policy on specific objects? (Select 2 answers.)
- A. Cloud Storage Lifecycle Monitoring.
- B. Expiration time metadata.
- C. Access logs.
- D. Lifecycle config file.
B. Expiration time metadata.
C. Access logs.
Question #226
If you have object versioning enabled on a multi-regional bucket, what will the following lifecycle config file do?
{"lifecycle": { "rule": [ { "action": {"type": "Delete"}, "condition": { "age": 30, "isLive": true } }, { "action": { "type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": { "age": 365, "matchesStorageClass": ["MULTI_REGIONAL"] } } ] } }
- A. Archive objects older than 30 days (the second rule doesn't do anything).
- B. Delete objects older than 30 days (the second rule doesn't do anything).
- C. Archive objects older than 30 days and move objects to Coldline Storage after 365 days.
- D. Delete objects older than 30 days and move objects to Coldline Storage after 365 days.
B. Delete objects older than 30 days (the second rule doesn't do anything).
Question #227
Which of the following statements about Stackdriver Trace are true? (Select 2 answers.)
- A. Stackdriver Trace tracks the performance of the virtual machines running the application.
- B. Stackdriver Trace tracks the latency of incoming requests.
- C. Applications in App Engine automatically submit traces to Stackdriver Trace. Applications outside of App Engine need to use the Trace SDK or Trace API.
- D. To make an application work with Stackdriver Trace, you need to add instrumentation code using the Trace SDK or Trace API, even if the application is in App Engine.
B. Stackdriver Trace tracks the latency of incoming requests.
C. Applications in App Engine automatically submit traces to Stackdriver Trace. Applications outside of App Engine need to use the Trace SDK or Trace API.
Question #227
You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?
- A. Google Cloud SQL.
- B. Google Cloud Bigtable.
- C. Google Cloud Storage.
- D. Google Cloud Datastore.
B. Google Cloud Bigtable.
Question #228
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?
- A. Google BigQuery.
- B. Google Cloud SQL.
- C. Google Cloud Bigtable.
- D. Google Cloud Storage.
C. Google Cloud Bigtable.