Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) Questions and Answers

Question 4

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

Options:

A.

Verify that the database is online.

B.

Verify that the project quota hasn't been exceeded.

C.

Verify that the new feature code did not introduce any performance bugs.

D.

Verify that the load-testing team is not running their tool against production.

Question 5

Refer to the Altostrat Media case study for the following solution regarding API management and cost control.

Altostrat is using Apigee for API management and wants to ensure their APIs are protected from overuse and abuse. You need to implement an Apigee feature to control the total number of API calls for cost management. What should you do?

Options:

A.

Set up API key validation.

B.

Integrate OAuth 2.0 authorization.

C.

Configure Quota policies.

D.

Activate XML threat protection.

Question 6

For this question, refer to the Mountkirk Games case study.

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

Options:

A.

Create a scalable environment in GCP for simulating production load.

B.

Use the existing infrastructure to test the GCP-based backend at scale.

C.

Build stress tests into each component of your application using resources internal to GCP to simulate load.

D.

Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Question 7

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?

Options:

A.

Use Explainable AI.

B.

Use Vision AI.

C.

Use Google Cloud’s operations suite.

D.

Use Jupyter Notebooks.

Question 8

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

Options:

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.

Question 9

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

Options:

A.

Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.

B.

Use gcloud compute instances list to list the virtual machine instances that have the idle: true label set.

C.

Use the gcloud recommender command to list the idle virtual machine instances.

D.

From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.
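For context on option C: Active Assist's idle-VM recommender can be queried directly from the CLI. A minimal sketch, where the project ID and zone are placeholders:

```
# List idle-VM recommendations for one zone of a project
# (MY_PROJECT and us-central1-a are placeholders).
gcloud recommender recommendations list \
    --project=MY_PROJECT \
    --location=us-central1-a \
    --recommender=google.compute.instance.IdleResourceRecommender
```

The recommender is per-zone, so checking a fleet means iterating over the zones where the encoding VMs run.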

Question 10

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?

Options:

A.

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B.

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C.

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D.

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

Question 11

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

Options:

A.

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Question 12

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

Options:

A.

Apply a Cloud Armor security policy to external load balancers using a named IP list for Fastly.

B.

Apply a Cloud Armor security policy to external load balancers using the IP addresses that Fastly has published.

C.

Apply a VPC firewall rule on port 443 for Fastly IP address ranges.

D.

Apply a VPC firewall rule on port 443 for network resources tagged with sourceiplist-fastly.

Question 13

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

Options:

A.

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Question 14

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

Options:

A.

Error rates for requests from Asia

B.

Latency difference between US and Asia

C.

Total visits, error rates, and latency from Asia

D.

Total visits and average latency for users in Asia

E.

The number of character sets present in the database

Question 15

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

Options:

A.

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.

Use Firebase Authentication for EHR's user facing applications.

D.

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.

Use GKE private clusters for all Kubernetes workloads.

Question 16

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

Options:

A.

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

Question 17

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Question 18

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

Options:

A.

Increase the Pub/Sub Total Timeout retry value.

B.

Move from a Pub/Sub subscriber pull model to a push model.

C.

Turn off Pub/Sub message batching.

D.

Create a backup Pub/Sub message queue.

Question 19

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

Options:

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with front end instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
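For reference on option A: restricting external IPs this way maps to the compute.vmExternalIpAccess list constraint, which denies external IPs on every instance except those explicitly allowed. A hedged sketch (the project, zone, and instance names are placeholders):

```
# Allow external IPs only on a named frontend instance; for a list constraint,
# setting allowed values denies everything else.
# (ehr-portal, us-central1-a, and frontend-1 are placeholders.)
gcloud resource-manager org-policies allow \
    constraints/compute.vmExternalIpAccess \
    projects/ehr-portal/zones/us-central1-a/instances/frontend-1 \
    --project=ehr-portal
```

In practice the allow list would be maintained per environment, so frontend instances can be added without touching the policy's default-deny behavior.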

Question 20

For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR's network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flows from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G

C.

Add three new Cloud VPN connections

D.

Add a new Carrier Peering connection

Question 21

TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

Options:

A.

Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.

B.

Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud Storage

C.

Make sure there are no other users consuming the 1 Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.

D.

Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage
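The arithmetic behind this question: even with the 1-Gbps link fully dedicated to the transfer, moving 1 PB takes roughly three months, which misses the one-month deadline and is why an offline option is on the table. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope transfer time for 1 PB over a 1 Gbps link,
# assuming 100% link utilization and no protocol overhead.
data_bits = 1e15 * 8          # 1 PB (decimal) expressed in bits
link_bps = 1e9                # 1 Gbps
seconds = data_bits / link_bps
days = seconds / 86400
print(f"{days:.0f} days")     # ~93 days, well past a one-month deadline
```

Real throughput would be lower still once TCP overhead and competing traffic are factored in, which widens the gap further.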

Question 22

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

Options:

A.

Use a private cluster with a private endpoint with master authorized networks configured.

B.

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.

Use a private cluster with a public endpoint with master authorized networks configured.

D.

Use a public cluster with master authorized networks enabled and firewall rules.

Question 23

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

Options:

A.

Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.

B.

Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.

C.

Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.

D.

Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.

Question 24

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

Options:

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

Question 25

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

Options:

A.

Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.

B.

Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

C.

Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.

D.

Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.

Question 26

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.

Which two actions should you take?

Options:

A.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and Action: “Delete”.

B.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and Action: “Set to Nearline”.

C.

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and Action: “Set to Coldline”.

D.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and Action: “Delete”.
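Lifecycle rules like the ones described in these options are supplied to gsutil as a JSON document. A minimal sketch of a two-rule configuration; the ages and storage classes here are illustrative placeholders, not the answer key:

```python
import json

# Two illustrative lifecycle rules: downgrade the storage class after 30 days,
# delete after 365. (Values are placeholders, not the exam's answer key.)
lifecycle = {
    "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}},
        {"action": {"type": "Delete"},
         "condition": {"age": 365}},
    ]
}
print(json.dumps(lifecycle, indent=2))
# Apply with: gsutil lifecycle set lifecycle.json gs://my-bucket
```

Note that each rule pairs exactly one action with one condition block, which is why the exam options describe two separate rules rather than one.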

Question 27

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?

Options:

A.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

B.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

C.

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

D.

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
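A useful detail behind the partition-expiration options: bq expresses expiration in seconds, so 36 months is roughly 94,608,000 seconds (3 × 365 × 86,400). A hedged sketch of the two pieces option C describes, with dataset, table, and bucket names as placeholders:

```
# Time-partitioned table whose partitions expire after ~36 months
# (eu_dataset.race_telemetry is a placeholder name).
bq mk --table \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=94608000 \
    eu_dataset.race_telemetry

# Cloud Storage side: a lifecycle rule with a Delete action and an Age
# condition of ~1095 days, applied with:
#   gsutil lifecycle set lifecycle.json gs://eu-race-data
```

Partition expiration deletes data automatically as each partition ages out, which is what makes it a fit for a rolling retention requirement.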

Question 28

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices. Considering the technical requirements, which components should you use for the ingestion of the data?

Options:

A.

Google Kubernetes Engine with an SSL Ingress

B.

Cloud IoT Core with public/private key pairs

C.

Compute Engine with project-wide SSH keys

D.

Compute Engine with specific SSH keys

Question 29

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution at minimum cost.

What should you do?

Options:

A.

Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

B.

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

C.

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.

D.

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

Question 30

Your company has a Kubernetes application that pulls messages from Pub/Sub and stores them in Firestore. Because the application is simple, it was deployed as a single pod. The infrastructure team has analyzed Pub/Sub metrics and discovered that the application cannot process the messages in real time. Most of them wait for minutes before being processed. You need to scale the elaboration process that is I/O-intensive. What should you do?

Options:

A.

Configure Kubernetes autoscaling based on the subscription/push_request metric.

B.

Use the --enable-autoscaling flag when you create the Kubernetes cluster.

C.

Configure Kubernetes autoscaling based on the subscription/num_undelivered_messages metric.

D.

Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent 50 to configure a Kubernetes autoscaling deployment.
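Scaling on Pub/Sub backlog (the idea behind option C) is typically done with a HorizontalPodAutoscaler targeting the external Stackdriver metric, which assumes the Custom Metrics Stackdriver Adapter is installed in the cluster. A minimal sketch, with the deployment and subscription names as placeholders:

```yaml
# HPA scaling a worker deployment on Pub/Sub backlog
# (pubsub-worker and my-subscription are placeholders).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription
      target:
        type: AverageValue
        averageValue: "100"
```

For an I/O-bound worker like this one, a backlog metric scales with the actual work queued, whereas CPU-based autoscaling would barely react.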

Question 31

Your company has a support ticketing solution that uses App Engine Standard. The project that contains the App Engine application already has a Virtual Private Cloud (VPC) network fully connected to the company’s on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine application to communicate with a database that is running in the company’s on-premises environment. What should you do?

Options:

A.

Configure private services access

B.

Configure private Google access for on-premises hosts only

C.

Configure serverless VPC access

D.

Configure private Google access

Question 32

Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?

Options:

A.

1. Install the Cloud Ops agent on all instances.

2. Create a sink to export logs into a partitioned BigQuery table.

3. Set a time_partitioning_expiration of 30 days.

B.

1. Install the Cloud Ops agent on all instances.

2. Create a sink to export logs into a regional Cloud Storage bucket.

3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.

4. Configure a retention policy at the bucket level to create a lock.

C.

1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table.

2. Set a time_partitioning_expiration of 30 days.

D.

1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket.

2. Create a sink to export logs into a regional Cloud Storage bucket.

3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.

Question 33

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

Options:

A.

A single VPN tunnel, which limits throughput

B.

A tier of Google Cloud Storage that is not suited for this task

C.

A copy command that is not suited to operate over long distances

D.

Fewer virtual machines (VMs) in GCP than on-premises machines

E.

A separate storage layer outside the VMs, which is not suited for this task

F.

Complicated internet connectivity between the on-premises infrastructure and GCP

Question 34

For this question, refer to the JencoMart case study.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? (Choose 3 answers.)

Options:

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Question 35

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

Options:

A.

Cloud Spanner

B.

Google BigQuery

C.

Google Cloud SQL

D.

Google Cloud Datastore

Question 36

Your company has a Google Cloud project that uses BigQuery for data warehousing. The VPN tunnel between the on-premises environment and Google Cloud is configured with Cloud VPN. Your security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing. What should you do?

Options:

A.

Configure VPC Service Controls and configure Private Google Access for on-premises hosts.

B.

Create a service account, grant the BigQuery JobUser role and Storage Object Viewer role to the service account, and remove all other Identity and Access Management (IAM) access from the project.

C.

Configure Private Google Access.

D.

Configure Private Service Connect.

Question 37

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords. What authentication strategy should they use?

Options:

A.

Use G Suite Password Sync to replicate passwords into Google.

B.

Federate authentication via SAML 2.0 to the existing Identity Provider.

C.

Provision users in Google using the Google Cloud Directory Sync tool.

D.

Ask users to set their Google password to match their corporate password.

Question 38

Your company creates rendering software which users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices.

How should you store the files?

Options:

A.

Save the files in a Multi-Regional Cloud Storage bucket.

B.

Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region.

C.

Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region.

D.

Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.

Question 39

Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company’s data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?

Options:

A.

Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.

B.

Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.

C.

Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space.

D.

Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.

Question 40

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.

How should you configure users’ access roles?

Options:

A.

Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.

B.

Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.

C.

Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.

D.

Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
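The role pairs in these options correspond to project-level IAM bindings. A sketch of how the split in option C would be granted, with the group address and project IDs as placeholders:

```
# Let the group run (and be billed for) query jobs only in the billing project
# (billing-project and analysts@example.com are placeholders)...
gcloud projects add-iam-policy-binding billing-project \
    --member="group:analysts@example.com" \
    --role="roles/bigquery.jobUser"

# ...and read, but not edit, the datasets in each data project.
gcloud projects add-iam-policy-binding data-project-1 \
    --member="group:analysts@example.com" \
    --role="roles/bigquery.dataViewer"
```

With no job-related role on the data projects, queries can only be launched (and billed) from the billing project, while dataViewer keeps the datasets read-only.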

Question 41

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

Options:

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Questions 42

For this question, refer to the Cymbal Retail case study. Cymbal's generative AI models require high-performance storage for temporary files generated during model training and inference. These files are ephemeral and frequently accessed and modified. You need to select a storage solution that minimizes latency and cost and maximizes performance for generative AI workloads. What should you do?

Options:

A.

Use a Cloud Storage bucket in the same region as your virtual machines. Configure lifecycle policies to delete files after processing.

B.

Use Filestore to store temporary files.

C.

Use performance persistent disks.

D.

Use Local SSDs attached to the VMs running the generative AI models.

Questions 43

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform’s resilience to changes in mobile network latency. What should you do?

Options:

A.

Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.

B.

Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.

C.

Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.

D.

Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

Questions 44

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available. Which two steps should they take? (Choose two.)

Options:

A.

Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.

B.

Begin packaging their game backend artifacts in container images and running them on Kubernetes Engine to improve the availability to scale up or down based on game activity.

C.

Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.

D.

Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.

E.

Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

Questions 45

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

Options:

A.

Create network load balancers. Use preemptible Compute Engine instances.

B.

Create network load balancers. Use non-preemptible Compute Engine instances.

C.

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.

Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.
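The managed-instance-group-with-autoscaling half of options C and D can be sketched with `gcloud`. A sketch, assuming a hypothetical pre-existing instance template `game-backend-template` and the region `us-central1`:

```shell
# Create a regional managed instance group from the template.
gcloud compute instance-groups managed create game-backend-mig \
  --template=game-backend-template \
  --size=3 \
  --region=us-central1

# Autoscale on CPU utilization, between 3 and 20 instances.
gcloud compute instance-groups managed set-autoscaling game-backend-mig \
  --region=us-central1 \
  --min-num-replicas=3 \
  --max-num-replicas=20 \
  --target-cpu-utilization=0.6
```

The group would then be registered as a backend of a global external load balancer, which routes players to the nearest healthy region.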

Questions 46

For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk’s technical requirement for storing game activity in a time series database service?

Options:

A.

Cloud Bigtable

B.

Cloud Spanner

C.

BigQuery

D.

Cloud Datastore

Questions 47

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?

Options:

A.

Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.

B.

Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.

C.

Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.

D.

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

Questions 48

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.

Which two steps should be part of their migration plan? (Choose two.)

Options:

A.

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.

B.

Write a schema migration plan to denormalize data for better performance in BigQuery.

C.

Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.

D.

Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E.

Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Questions 49

For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API.

You want to follow Google-recommended practices. How should you design the backend?

Options:

A.

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.

B.

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.

C.

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.

D.

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

Questions 50

Refer to the Altostrat Media case study for the following solution.

Altostrat is concerned about sophisticated, multi-vector Distributed Denial of Service (DDoS) attacks targeting various layers of their infrastructure. DDoS attacks could potentially disrupt video streaming and cause financial losses. You need to mitigate this risk. What should you do?

Options:

A.

Set up VPC Service Controls to restrict access to sensitive resources and prevent data exfiltration.

B.

Configure Cloud Next Generation Firewall (NGFW) with custom rules to filter malicious traffic at the network level.

C.

Deploy Google Cloud Armor with pre-configured and custom rules for L3/L4 and L7 protection.

D.

Activate Security Command Center to monitor security posture and detect potential threats.
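For context, a Cloud Armor policy of the kind option C describes is created and populated with `gcloud`. A sketch with hypothetical resource names; `sqli-stable` is one of the preconfigured WAF expressions Google ships:

```shell
# Create a security policy for the streaming frontends.
gcloud compute security-policies create streaming-edge-policy \
  --description="L7 protection for streaming frontends"

# Deny requests matching a preconfigured WAF rule (SQL injection here);
# similar rules exist for XSS, RFI/LFI, and other attack classes.
gcloud compute security-policies rules create 1000 \
  --security-policy=streaming-edge-policy \
  --expression="evaluatePreconfiguredExpr('sqli-stable')" \
  --action=deny-403

# Attach the policy to the backend service behind the external load balancer.
gcloud compute backend-services update streaming-backend \
  --global \
  --security-policy=streaming-edge-policy
```

L3/L4 volumetric protection comes with the global external load balancer itself; the attached policy adds the L7 filtering layer.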

Questions 51

Altostrat's development team is using a microservices architecture for their application. You need to select the most suitable testing approach to ensure that individual microservices function correctly in isolation. What should you do?

Options:

A.

Run unit testing.

B.

Use load testing.

C.

Perform end-to-end testing.

D.

Execute integration testing.

Questions 52

Altostrat stores a large library of media content, including sensitive interviews and documentaries, in Cloud Storage. They are concerned about the confidentiality of this content and want to protect it from unauthorized access. You need to implement a Google-recommended solution that is easy to integrate and provides Altostrat with control and auditability of the encryption keys. What should you do?

Options:

A.

Configure Cloud Storage to use server-side encryption with Google-managed encryption keys. Create a bucket policy to restrict access to only authorized Google groups and required service accounts.

B.

Use Cloud Storage default encryption at rest. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.

C.

Implement client-side encryption before uploading it to Cloud Storage. Store the encryption keys in a HashiCorp Vault instance deployed on Google Kubernetes Engine (GKE). Implement fine-grained access control to sensitive Cloud Storage buckets using IAM roles.

D.

Use customer-managed encryption keys (CMEK) for all Cloud Storage buckets storing sensitive media content. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.
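The CMEK approach described in option D can be sketched with `gcloud`: create a Cloud KMS key, then set it as the bucket's default encryption key. All names, locations, and the project ID below are hypothetical:

```shell
# Create a key ring and an encryption key in Cloud KMS.
gcloud kms keyrings create media-keyring --location=us
gcloud kms keys create media-key \
  --keyring=media-keyring \
  --location=us \
  --purpose=encryption

# Make the key the bucket's default encryption key; new objects
# written to the bucket are then encrypted with it.
gcloud storage buckets update gs://altostrat-media-archive \
  --default-encryption-key=projects/PROJECT_ID/locations/us/keyRings/media-keyring/cryptoKeys/media-key
```

Note that the Cloud Storage service agent for the project must also be granted the Encrypter/Decrypter role on the key, and Cloud KMS audit logs give the key-usage auditability the scenario asks for.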

Questions 53

Refer to the Altostrat Media case study for the following solution regarding the performance analysis of their media processing pipeline.

Altostrat needs to analyze the performance of its media processing pipeline running on a Java-based Cloud Run function. You need to select the most effective tool for the task. What should you do?

Options:

A.

Query logs in Cloud Logging.

B.

Analyze the data via Cloud Profiler.

C.

Instrument the code to use Cloud Trace.

D.

Inspect data from Snapshot Debugger.

Questions 54

Refer to the Altostrat Media case study for the following solutions regarding cost optimization for batch processing and microservices testing strategies.

Altostrat is experiencing fluctuating computational demands for its batch processing jobs. These jobs are not time-critical and can tolerate occasional interruptions. You want to optimize cloud costs and address batch processing needs. What should you do?

Options:

A.

Configure reserved VM instances.

B.

Deploy spot VM instances.

C.

Set up standard VM instances.

D.

Use Cloud Run functions.

Questions 55

You need to optimize batch file transfers into Cloud Storage for Mountkirk Games' new Google Cloud solution. The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract, transform, load (ETL) tool. What should you do?

Options:

A.

Use gsutil to batch move files in sequence.

B.

Use gsutil to batch copy the files in parallel.

C.

Use gsutil to extract the files as the first part of ETL.

D.

Use gsutil to load the files as the last part of ETL.
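For context on the options above, gsutil's `-m` flag is what enables parallel transfers. A sketch, with hypothetical local paths and bucket names:

```shell
# -m runs the copy with multiple threads/processes; cp stages the
# batch files into a Cloud Storage prefix for the ETL tool to pick up.
gsutil -m cp /data/batch/*.csv gs://mountkirk-stats-staging/incoming/
```

For large individual files, parallel composite uploads can help further, e.g. `gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" cp ...`.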

Questions 56

Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the user's perspective. What should you do?

Options:

A.

Create CPU Utilization and Request Latency as service level indicators.

B.

Create GKE CPU Utilization and Memory Utilization as service level indicators.

C.

Create Request Latency and Error Rate as service level indicators.

D.

Create Server Uptime and Error Rate as service level indicators.

Questions 57

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to provide insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

Options:

A.

Use Stackdriver Trace to create a trace list analysis.

B.

Use Stackdriver Monitoring to create a dashboard on the project’s activity.

C.

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.

D.

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.

Questions 58

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.

What change in the on-premises architecture should you make?

Options:

A.

Replace RabbitMQ with Google Pub/Sub.

B.

Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.

C.

Resize compute resources to match predefined Compute Engine machine types.

D.

Containerize the microservices and host them in Google Kubernetes Engine.

Questions 59

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

Options:

A.

Web applications deployed using App Engine standard environment

B.

RabbitMQ deployed using an unmanaged instance group

C.

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Questions 60

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win's business and technical requirements, what should you do?

Options:

A.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.

C.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements.

Utilize Google’s default encryption at rest when storing files in Cloud Storage.

D.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Questions 61

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.

Migrate the web application layer to App Engine, MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Questions 62

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

Options:

A.

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.

B.

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.

C.

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.

D.

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.

Questions 63

For this question, refer to the Dress4Win case study.

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

Options:

A.

Google Cloud Storage Coldline to store the data, and gsutil to access the data.

B.

Google Cloud Storage Nearline to store the data, and gsutil to access the data.

C.

Google Bigtable with US or EU as the location to store the data, and gcloud to access the data.

D.

BigQuery to store the data, and a web server cluster in a managed instance group to access the data.

E.

Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.
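For reference, the Coldline approach in option A can be sketched with gsutil. Bucket name and location are hypothetical:

```shell
# Create a Coldline bucket for infrequently accessed archive data.
gsutil mb -c coldline -l US gs://dress4win-audit-records/

# Optionally lock in a 10-year retention period so records cannot be
# deleted or overwritten before the audit horizon.
gsutil retention set 10y gs://dress4win-audit-records/
```

Coldline trades a per-GB retrieval charge and minimum storage duration for a much lower at-rest price, which fits a rarely viewed 10-year archive.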

Questions 64

For this question, refer to the Dress4Win case study.

Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?

Options:

A.

Install the Stackdriver agent on all of the legacy web servers.

B.

In the Cloud Platform Console, download the list of the uptime-check servers' IP addresses and create an inbound firewall rule.

C.

Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).

D.

Configure their legacy web servers to allow requests that contain the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).
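Option B's firewall approach would look roughly like the following sketch. The network name is hypothetical, and `UPTIME_CHECK_RANGES` is only a placeholder for the uptime-check source IP ranges Google publishes:

```shell
# Allow inbound HTTP(S) from the uptime-check source ranges so the
# probes can reach the legacy web servers directly.
gcloud compute firewall-rules create allow-uptime-checks \
  --network=legacy-net \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:443 \
  --source-ranges="UPTIME_CHECK_RANGES"
```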

Exam Name: Google Certified Professional - Cloud Architect (GCP)
Last Update: Mar 15, 2026
Questions: 333
