
Professional-Cloud-Developer Google Certified Professional - Cloud Developer Questions and Answers

Question 4

Which service should HipLocal use for their public APIs?

Options:

A.

Cloud Armor

B.

Cloud Functions

C.

Cloud Endpoints

D.

Shielded Virtual Machines

Question 5

You are deploying your application on a Compute Engine instance that communicates with Cloud SQL. You will use Cloud SQL Proxy to allow your application to communicate with the database using the service account associated with the application’s instance. You want to follow the Google-recommended best practice of providing minimum access for the role assigned to the service account. What should you do?

Options:

A.

Assign the Project Editor role.

B.

Assign the Project Owner role.

C.

Assign the Cloud SQL Client role.

D.

Assign the Cloud SQL Editor role.

Question 6

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

Options:

A.

Cloud Profiler

B.

Cloud Monitoring

C.

Cloud Trace

D.

Cloud Logging

Question 7

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

Options:

A.

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.

Question 8

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

Options:

A.

Use App Engine for autoscaling.

B.

Use Cloud Functions for autoscaling.

C.

Use a Compute Engine cluster for the service.

D.

Use a dedicated Compute Engine virtual machine instance for the service.

Question 9

You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?

Options:

A.

Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.

B.

Use Bigtable, and assign the roles/bigtable.viewer role to the service account.

C.

Use Firestore in Native mode and assign the roles/datastore.user role to the service account.

D.

Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.

Question 10

You have written a Cloud Function that accesses other Google Cloud resources. You want to secure the environment using the principle of least privilege. What should you do?

Options:

A.

Create a new service account that has Editor authority to access the resources. The deployer is given permission to get the access token.

B.

Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to get the access token.

C.

Create a new service account that has Editor authority to access the resources. The deployer is given permission to act as the new service account.

D.

Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to act as the new service account.

Question 11

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

Options:

A.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate as this account and authenticate using the Cloud SQL Proxy.

D.

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.

Question 12

You are creating an App Engine application that writes a file to any user's Google Drive.

How should the application authenticate to the Google Drive API?

Options:

A.

With an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access token for each user.

B.

With an OAuth Client ID with delegated domain-wide authority.

C.

With the App Engine service account and https://www.googleapis.com/auth/drive.file scope that generates a signed JWT.

D.

With the App Engine service account with delegated domain-wide authority.

Question 13

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

Options:

A.

Create manual subnets.

B.

Create an auto mode subnet.

C.

Create multiple peered VPCs.

D.

Provision a single instance for NAT.

Question 14

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

Options:

A.

Take frequent snapshots of all of the VMs.

B.

Install the Stackdriver Logging agent on the VMs.

C.

Install the Stackdriver Monitoring agent on the VMs.

D.

Use Stackdriver Trace to look for performance bottlenecks.

Question 15

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

Options:

A.

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.

Question 16

You are developing a marquee stateless web application that will run on Google Cloud. The rate of the incoming user traffic is expected to be unpredictable, with no traffic on some days and large spikes on other days. You need the application to automatically scale up and down, and you need to minimize the cost associated with running the application. What should you do?

Options:

A.

Build the application in Python with Firestore as the database. Deploy the application to Cloud Run.

B.

Build the application in C# with Firestore as the database. Deploy the application to App Engine flexible environment.

C.

Build the application in Python with CloudSQL as the database. Deploy the application to App Engine standard environment.

D.

Build the application in Python with Firestore as the database. Deploy the application to a Compute Engine managed instance group with autoscaling.

Question 17

The development teams in your company want to manage resources from their local environments. You have been asked to enable developer access to each team’s Google Cloud projects. You want to maximize efficiency while following Google-recommended best practices. What should you do?

Options:

A.

Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project ID.

B.

Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project Number.

C.

Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project ID.

D.

Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project Number.

Question 18

Your team is developing a new application using a PostgreSQL database and Cloud Run. You are responsible for ensuring that all traffic is kept private on Google Cloud. You want to use managed services and follow Google-recommended best practices. What should you do?

Options:

A.

1. Enable Cloud SQL and Cloud Run in the same project.

2. Configure a private IP address for Cloud SQL. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Configure Cloud Run to use the connector to connect to Cloud SQL.

B.

1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud Run in the same project.

2. Configure a private IP address for the VM. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Configure Cloud Run to use the connector to connect to the VM hosting PostgreSQL.

C.

1. Use Cloud SQL and Cloud Run in different projects.

2. Configure a private IP address for Cloud SQL. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to connect to Cloud SQL.

D.

1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different projects.

2. Configure a private IP address for the VM. Enable private services access.

3. Create a Serverless VPC Access connector.

4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL

Question 19

You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?

Options:

A.

Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod is failing.

B.

Create the Deployment with a livenessProbe for the container that will fail if the container can't connect to the database. Configure a Prestop lifecycle handler that runs the shutdown script if the container is failing.

C.

Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.

D.

Create the Deployment with an initContainer that checks the service availability. Configure a Prestop lifecycle handler that runs the shutdown script if the Pod is failing.
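Option B combines a livenessProbe with a PreStop handler; a minimal sketch of that Deployment shape, where the image, port, health path, and shutdown-script path are illustrative assumptions rather than details from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-client-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: db-client-app
  template:
    metadata:
      labels:
        app: db-client-app
    spec:
      containers:
      - name: app
        image: gcr.io/example-project/app:latest  # illustrative image
        livenessProbe:             # fails when the database is unreachable
          httpGet:
            path: /db-health      # assumed endpoint that tests the DB connection
            port: 8080
          periodSeconds: 10
          failureThreshold: 3
        lifecycle:
          preStop:                 # runs before the kubelet stops the container
            exec:
              command: ["/bin/sh", "-c", "/app/graceful-shutdown.sh"]
```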

Question 20

Which service should HipLocal use to enable access to internal apps?

Options:

A.

Cloud VPN

B.

Cloud Armor

C.

Virtual Private Cloud

D.

Cloud Identity-Aware Proxy

Question 21

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

Options:

A.

Migrate the database to Bigtable and use it to serve all global user traffic.

B.

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Question 22

You are designing a deployment technique for your new applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for both new and existing applications. You need to test against the full production load prior to launch. What should you do?

Options:

A.

Use canary deployment

B.

Use blue/green deployment

C.

Use rolling updates deployment

D.

Use A/B testing with traffic mirroring during deployment

Question 23

You are using Cloud Run to host a global ecommerce web application. Your company's design team is creating a new color scheme for the web app. You have been tasked with determining whether the new color scheme will increase sales. You want to conduct testing on live production traffic. How should you design the study?

Options:

A.

Use an external HTTP(S) load balancer to route a predetermined percentage of traffic to two different color schemes of your application. Analyze the results to determine whether there is a statistically significant difference in sales.

B.

Use an external HTTP(S) load balancer to route traffic to the original color scheme while the new deployment is created and tested. After testing is complete, reroute all traffic to the new color scheme. Analyze the results to determine whether there is a statistically significant difference in sales.

C.

Enable a feature flag that displays the new color scheme to half of all users. Monitor sales to see whether they increase for this group of users.

D.

Use an external HTTP(S) load balancer to mirror traffic to the new version of your application. Analyze the results to determine whether there is a statistically significant difference in sales.

Question 24

You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to Cloud Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do?

Options:

A.

Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API.

B.

Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data Studio.

C.

Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send notifications using Cloud Tasks.

D.

Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based metrics from the logs, and correlate them with the Prometheus data.

Question 25

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

Options:

A.

Cloud Spanner

B.

Cloud Datastore

C.

Cloud Memorystore as a cache

D.

Separate Cloud SQL clusters for each region

Question 26

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

Options:

A.

Use Google App Engine services.

B.

Use serverless Google Cloud Functions.

C.

Use Knative to build and deploy serverless applications.

D.

Use Google Kubernetes Engine for automated deployments.

E.

Use a large Google Compute Engine cluster for deployments.

Question 27

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

Options:

A.

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.

Use the Cloud Natural Language Processing API for de-identification of the review dataset.

Question 28

Which database should HipLocal use for storing user activity?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Question 29

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

Options:

A.

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.

Migrate data to Firestore in Native mode and set up instan

Question 30

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

Options:

A.

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.

Use Memorystore to store session information and CloudSQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Question 31

You configured your Compute Engine instance group to scale automatically according to overall CPU usage. However, your application’s response latency increases sharply before the cluster has finished adding instances. You want to provide a more consistent latency experience for your end users by changing the configuration of the instance group autoscaler. Which two configuration changes should you make? (Choose two.)

Options:

A.

Add the label “AUTOSCALE” to the instance group template.

B.

Decrease the cool-down period for instances added to the group.

C.

Increase the target CPU usage for the instance group autoscaler.

D.

Decrease the target CPU usage for the instance group autoscaler.

E.

Remove the health-check for individual VMs in the instance group.

Question 32

You have an analytics application that runs hundreds of queries on BigQuery every few minutes using the BigQuery API. You want to find out how much time these queries take to execute. What should you do?

Options:

A.

Use Stackdriver Monitoring to plot slot usage.

B.

Use Stackdriver Trace to plot API execution time.

C.

Use Stackdriver Trace to plot query execution time.

D.

Use Stackdriver Monitoring to plot query execution times.

Question 33

You have an HTTP Cloud Function that is called via POST. Each submission’s request body has a flat, unnested JSON structure containing numeric and text data. After the Cloud Function completes, the collected data should be immediately available for ongoing and complex analytics by many users in parallel. How should you persist the submissions?

Options:

A.

Directly persist each POST request’s JSON data into Datastore.

B.

Transform the POST request’s JSON data, and stream it into BigQuery.

C.

Transform the POST request’s JSON data, and store it in a regional Cloud SQL cluster.

D.

Persist each POST request’s JSON data as an individual file within Cloud Storage, with the file name containing the request identifier.
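Option B's transform-and-stream step can be sketched as below. Only the pure transform is shown in full; the field names and the trailing client call are illustrative assumptions, not details from the question.

```python
import json
from datetime import datetime, timezone

def to_bigquery_row(request_body: bytes) -> dict:
    """Validate a flat JSON POST body and add an ingestion timestamp."""
    data = json.loads(request_body)
    # The question specifies flat, unnested JSON; reject anything else.
    if any(isinstance(v, (dict, list)) for v in data.values()):
        raise ValueError("expected a flat JSON object")
    data["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return data

row = to_bigquery_row(b'{"amount": 12.5, "note": "ok"}')

# Inside the Cloud Function, the row would then be streamed to BigQuery
# (assumed client usage):
#   from google.cloud import bigquery
#   bigquery.Client().insert_rows_json("dataset.submissions", [row])
```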

Question 34

You are creating a Google Kubernetes Engine (GKE) cluster and run this command:

[command not shown in this excerpt]

The command fails with the error:

[error message not shown in this excerpt]

You want to resolve the issue. What should you do?

Options:

A.

Request additional GKE quota in the GCP Console.

B.

Request additional Compute Engine quota in the GCP Console.

C.

Open a support case to request additional GKE quota.

D.

Decouple services in the cluster, and rewrite new clusters to function with fewer cores.

Question 35

Your team is developing an application in Google Cloud that executes with user identities maintained by Cloud Identity. Each of your application’s users will have an associated Pub/Sub topic to which messages are published, and a Pub/Sub subscription where the same user will retrieve published messages. You need to ensure that only authorized users can publish and subscribe to their own specific Pub/Sub topic and subscription. What should you do?


Options:

A.

Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level.

B.

Grant the user identity the pubsub.publisher and pubsub.subscriber roles at the project level.

C.

Grant the user identity a custom role that contains the pubsub.topics.create and pubsub.subscriptions.create permissions.

D.

Configure the application to run as a service account that has the pubsub.publisher and pubsub.subscriber roles.

Question 36

You are developing an application that reads credit card data from a Pub/Sub subscription. You have written code and completed unit testing. You need to test the Pub/Sub integration before deploying to Google Cloud. What should you do?

Options:

A.

Create a service to publish messages, and deploy the Pub/Sub emulator. Generate random content in the publishing service, and publish to the emulator.

B.

Create a service to publish messages to your application. Collect the messages from Pub/Sub in production, and replay them through the publishing service.

C.

Create a service to publish messages, and deploy the Pub/Sub emulator. Collect the messages from Pub/Sub in production, and publish them to the emulator.

D.

Create a service to publish messages, and deploy the Pub/Sub emulator. Publish a standard set of testing messages from the publishing service to the emulator.

Question 37

Your company has a data warehouse that keeps your application information in BigQuery. The BigQuery data warehouse keeps 2 PB of user data. Recently, your company expanded your user base to include EU users and needs to comply with these requirements:

Your company must be able to delete all user account information upon user request.

All EU user data must be stored in a single region specifically for EU users.

Which two actions should you take? (Choose two.)

Options:

A.

Use BigQuery federated queries to query data from Cloud Storage.

B.

Create a dataset in the EU region that will keep information about EU users only.

C.

Create a Cloud Storage bucket in the EU region to store information for EU users only.

D.

Re-upload your data using a Cloud Dataflow pipeline, filtering your user records out.

E.

Use DML statements in BigQuery to update/delete user records based on their requests.
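Option E's DML-based deletion can be illustrated with a local SQLite stand-in; the table and column names are made up, but BigQuery accepts the same `DELETE ... WHERE` shape for erasing a user's records on request.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("u1", "a@example.com"), ("u2", "b@example.com")],
)

# Delete all account information for a user upon request.
conn.execute("DELETE FROM users WHERE user_id = ?", ("u1",))
remaining = [row[0] for row in conn.execute("SELECT user_id FROM users")]
```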

Question 38

You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don't have the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You want to configure the architecture using managed services where possible. What should you do?

Options:

A.

Host the application on Compute Engine, and configure Cloud Endpoints for your application.

B.

Host the application on Google Kubernetes Engine and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.

C.

Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication.

D.

Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates.

Question 39

You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices?

Options:

A.

Configure your Cloud Run service with a Cloud SQL connection.

B.

Configure your Cloud Run service to use a Serverless VPC Access connector.

C.

Configure your application to use the Cloud SQL Java connector.

D.

Configure your application to connect to an instance of the Cloud SQL Auth proxy.

Question 40

You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can scale horizontally, and each instance of your application needs to have a stable network identity and its own persistent disk.

Which GKE object should you use?

Options:

A.

Deployment

B.

StatefulSet

C.

ReplicaSet

D.

ReplicationController

Question 41

You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want to calculate the sum of all transaction amounts for each unique account number efficiently.

Which data structure should you use?

Options:

A.

A linked list

B.

A hash table

C.

A two-dimensional array

D.

A comma-delimited string
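Option B's hash table maps directly onto a dictionary keyed by account number, giving an O(1) lookup per log line; a minimal sketch with made-up log rows:

```python
from collections import defaultdict

log_lines = [
    "2024-01-01T10:00:00,ACCT-1,25.00",
    "2024-01-01T10:05:00,ACCT-2,10.50",
    "2024-01-01T10:07:00,ACCT-1,4.25",
]

# Hash table: account number -> running sum of transaction amounts.
totals = defaultdict(float)
for line in log_lines:
    _timestamp, account, amount = line.split(",")
    totals[account] += float(amount)
```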

Question 42

You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service?

Options:

A.

Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services

B.

Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh

C.

Install cert-manager on GKE to automatically renew the SSL certificates.

D.

Install Anthos Service Mesh, and enable mTLS in your Service Mesh.

Question 43

You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure that all applications can access the resources needed to run. What should you do? (Choose two.)

Options:

A.

Specify the resource limits and requests in the object specifications.

B.

Create a namespace for each team, and attach resource quotas to each namespace.

C.

Create a LimitRange to specify the default compute resource requirements for each namespace.

D.

Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace.

E.

Use the Anthos Policy Controller to enforce label annotations on all namespaces. Use taints and tolerations to allow resource sharing for namespaces.
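The ResourceQuota and LimitRange objects named in options B and C look like this in practice; the namespace name and all numeric values below are illustrative assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a           # one namespace per team
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
  - type: Container
    defaultRequest:           # applied when a Pod omits resource requests
      cpu: 250m
      memory: 256Mi
    default:                  # applied when a Pod omits resource limits
      cpu: 500m
      memory: 512Mi
```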

Question 44

You need to copy directory local-scripts and all of its contents from your local workstation to a Compute Engine virtual machine instance.

Which command should you use?

Options:

A.

gsutil cp --project "my-gcp-project" -r ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

B.

gsutil cp --project "my-gcp-project" -R ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

C.

gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

D.

gcloud compute mv --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"

Question 45

Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store users’ session and shopping cart information while following Google-recommended best practices?

Options:

A.

Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL.

B.

Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID.

C.

Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances.

D.

Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in Firestore.

Question 46

You need to migrate an internal file upload API with an enforced 500-MB file size limit to App Engine.

What should you do?

Options:

A.

Use FTP to upload files.

B.

Use CPanel to upload files.

C.

Use signed URLs to upload files.

D.

Change the API to be a multipart file upload API.

Question 47

Your company’s corporate policy states that there must be a copyright comment at the very beginning of all source files. You want to write a custom step in Cloud Build that is triggered by each source commit. You need the trigger to validate that the source contains a copyright, and to add one for subsequent steps if it is missing. What should you do?

Options:

A.

Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.

B.

Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files do not need to be committed back to the source repository.

C.

Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are written back to the Cloud Storage bucket.

D.

Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.
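The /workspace approach in options A and B boils down to a small script run inside the custom build-step container. A sketch, where the copyright line, the .py-only filter, and the directory layout are assumptions for illustration (Cloud Build mounts the checked-out source at /workspace):

```python
import os
import tempfile

COPYRIGHT = "# Copyright 2024 Example Corp. All rights reserved.\n"

def ensure_copyright(path: str) -> bool:
    """Prepend the copyright line if missing; return True if the file changed."""
    with open(path, encoding="utf-8") as f:
        content = f.read()
    if content.startswith(COPYRIGHT):
        return False
    with open(path, "w", encoding="utf-8") as f:
        f.write(COPYRIGHT + content)
    return True

def scan(workspace: str) -> list:
    """Walk the build workspace and fix every matching source file."""
    changed = []
    for root, _dirs, files in os.walk(workspace):
        for name in files:
            if name.endswith(".py"):  # illustrative: only Python sources
                path = os.path.join(root, name)
                if ensure_copyright(path):
                    changed.append(path)
    return changed

# Demo on a throwaway directory standing in for /workspace:
ws = tempfile.mkdtemp()
with open(os.path.join(ws, "main.py"), "w", encoding="utf-8") as f:
    f.write("print('hi')\n")
changed = scan(ws)
```

A second run over the same workspace changes nothing, which is why the step is safe to run on every commit.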

Question 48

Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal changes to existing data analytics jobs and existing architecture. How should you proceed with the migration?

Options:

A.

Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery instead of the on-premises Hadoop environment.

B.

Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into the new one in Compute Engine instances.

C.

Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.

D.

Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on that data.

Question 49

You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do?

Options:

A.

Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity required.

B.

Use Cloud Trace to monitor the requests per sec of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required.

C.

Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum capacity required.

D.

Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.

Questions 50

You have two tables in an ANSI SQL-compliant database with identical columns that you need to quickly combine into a single table, removing duplicate rows from the result set.

What should you do?

Options:

A.

Use the JOIN operator in SQL to combine the tables.

B.

Use nested WITH statements to combine the tables.

C.

Use the UNION operator in SQL to combine the tables.

D.

Use the UNION ALL operator in SQL to combine the tables.
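For reference, UNION deduplicates the combined result set, while UNION ALL keeps every row from both tables. A minimal sketch using Python's built-in sqlite3 module (table names and data are illustrative):

```python
import sqlite3

# In-memory database with two identically-structured tables that share one row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, name TEXT);
    CREATE TABLE t2 (id INTEGER, name TEXT);
    INSERT INTO t1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO t2 VALUES (2, 'b'), (3, 'c');
""")

# UNION removes duplicate rows across the two tables.
union_rows = conn.execute("SELECT * FROM t1 UNION SELECT * FROM t2").fetchall()

# UNION ALL keeps every row, including the duplicate (2, 'b').
union_all_rows = conn.execute("SELECT * FROM t1 UNION ALL SELECT * FROM t2").fetchall()

print(len(union_rows))      # 3 distinct rows
print(len(union_all_rows))  # 4 rows in total
```

A plain JOIN matches rows across tables rather than stacking them, which is why it does not apply here.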

Questions 51

You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. The application exposes an HTTP-based health check at /healthz. You want to use this health check endpoint to determine whether traffic should be routed to the pod by the load balancer.

Which code snippet should you include in your Pod configuration?

[Image: four candidate Pod configuration snippets, Options A–D]

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D
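For reference, a readinessProbe is the Kubernetes mechanism that gates load-balancer traffic on an HTTP health check (a livenessProbe would instead restart the container on failure). A configuration sketch, with illustrative names and an assumed container port:

```yaml
spec:
  containers:
  - name: my-app                      # illustrative name
    image: gcr.io/my-project/my-app:latest
    readinessProbe:                   # pod receives traffic only while this passes
      httpGet:
        path: /healthz
        port: 8080                    # assumed container port
      initialDelaySeconds: 5
      periodSeconds: 10
```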

Questions 52

You work at a rapidly growing financial technology startup. You manage the payment processing application written in Go and hosted on Cloud Run in the Singapore region (asia-southeast1). The payment processing application processes data stored in a Cloud Storage bucket that is also located in the Singapore region.

The startup plans to expand further into the Asia Pacific region. You plan to deploy the Payment Gateway in Jakarta, Hong Kong, and Taiwan over the next six months. Each location has data residency requirements that require customer data to reside in the country where the transaction was made. You want to minimize the cost of these deployments. What should you do?

Options:

A.

Create a Cloud Storage bucket in each region, and create a Cloud Run service of the payment processing application in each region.

B.

Create a Cloud Storage bucket in each region, and create three Cloud Run services of the payment processing application in the Singapore region.

C.

Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run services of the payment processing application in the Singapore region.

D.

Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run revisions of the payment processing application in the Singapore region.

Questions 53

You recently developed a new application. You want to deploy the application on Cloud Run without a Dockerfile. Your organization requires that all container images are pushed to a centrally managed container repository. How should you build your container using Google Cloud services? (Choose two.)

Options:

A.

Push your source code to Artifact Registry.

B.

Submit a Cloud Build job to push the image.

C.

Use the pack build command with pack CLI.

D.

Include the --source flag with the gcloud run deploy CLI command.

E.

Include the --platform=kubernetes flag with the gcloud run deploy CLI command.

Questions 54

You are a developer at a large organization. You have an application written in Go running in a production Google Kubernetes Engine (GKE) cluster. You need to add a new feature that requires access to BigQuery. You want to grant BigQuery access to your GKE cluster following Google-recommended best practices. What should you do?

Options:

A.

Create a Google service account with BigQuery access. Add the JSON key to Secret Manager, and use the Go client library to access the JSON key.

B.

Create a Google service account with BigQuery access. Add the Google service account JSON key as a Kubernetes secret, and configure the application to use this secret.

C.

Create a Google service account with BigQuery access. Add the Google service account JSON key to Secret Manager, and use an init container to access the secret for the application to use.

D.

Create a Google service account and a Kubernetes service account. Configure Workload Identity on the GKE cluster, and reference the Kubernetes service account on the application Deployment.
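For reference, Workload Identity links a Kubernetes service account to a Google service account through an annotation, so pods obtain Google Cloud credentials without any JSON key to manage. A configuration sketch with illustrative names and project IDs:

```yaml
# Kubernetes service account annotated to impersonate a Google service account.
# The Google service account must also grant roles/iam.workloadIdentityUser
# to this KSA, and hold the required BigQuery role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bq-reader-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: bq-reader@my-project.iam.gserviceaccount.com
---
# Pods run as that KSA and pick up BigQuery credentials automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      serviceAccountName: bq-reader-ksa
      containers:
      - name: go-app
        image: gcr.io/my-project/go-app:latest
```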

Questions 55

You have recently instrumented a new application with OpenTelemetry, and you want to check the latency of your application requests in Trace. You want to ensure that a specific request is always traced. What should you do?

Options:

A.

Wait 10 minutes, then verify that Trace captures those types of requests automatically.

B.

Write a custom script that sends this type of request repeatedly from your dev project.

C.

Use the Trace API to apply custom attributes to the trace.

D.

Add the X-Cloud-Trace-Context header to the request with the appropriate parameters.
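For reference, the X-Cloud-Trace-Context header carries a value of the form TRACE_ID/SPAN_ID;o=1, where TRACE_ID is a 32-character hex string and o=1 forces the request to be traced. A minimal sketch of constructing it (helper name is illustrative):

```python
import random
import uuid

def forced_trace_header() -> str:
    """Build an X-Cloud-Trace-Context value that forces tracing.

    Format: TRACE_ID/SPAN_ID;o=1
    - TRACE_ID: 32-character hex string identifying the trace
    - SPAN_ID:  decimal span identifier
    - o=1:      trace-enabled flag, forcing this request to be traced
    """
    trace_id = uuid.uuid4().hex             # 32 hex characters
    span_id = random.randint(1, 2**63 - 1)  # decimal span id
    return f"{trace_id}/{span_id};o=1"

headers = {"X-Cloud-Trace-Context": forced_trace_header()}
print(headers["X-Cloud-Trace-Context"])
```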

Questions 56

You are a developer at a financial institution. You use Cloud Shell to interact with Google Cloud services. User data is currently stored on an ephemeral disk; however, a recently passed regulation mandates that you can no longer store sensitive information on an ephemeral disk. You need to implement a new storage solution for your user data. You want to minimize code changes. Where should you store your user data?

Options:

A.

Store user data on a Cloud Shell home disk, and log in at least every 120 days to prevent its deletion.

B.

Store user data on a persistent disk in a Compute Engine instance.

C.

Store user data in BigQuery tables.

D.

Store user data in a Cloud Storage bucket.

Questions 57

You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?

Options:

A.

Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.

B.

Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script.

C.

Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script.

D.

Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.

Exam Name: Google Certified Professional - Cloud Developer
Last Update: May 1, 2024
Questions: 254
