
Professional-Data-Engineer Google Professional Data Engineer Exam Questions and Answers

Questions 4

All Google Cloud Bigtable client requests go through a front-end server ______ they are sent to a Cloud Bigtable node.

Options:

A.

before

B.

after

C.

only if

D.

once

Questions 5

You want to use a BigQuery table as a data sink. In which writing mode(s) can you use BigQuery as a sink?

Options:

A.

Both batch and streaming

B.

BigQuery cannot be used as a sink

C.

Only batch

D.

Only streaming

Questions 6

What is the recommended way to switch between SSD and HDD storage for your Google Cloud Bigtable instance?

Options:

A.

create a third instance and sync the data from the two storage types via batch jobs

B.

export the data from the existing instance and import the data into a new instance

C.

run parallel instances where one is HDD and the other is SDD

D.

the selection is final and you must resume using the same storage type

Questions 7

What are two of the characteristics of using online prediction rather than batch prediction?

Options:

A.

It is optimized to handle a high volume of data instances in a job and to run more complex models.

B.

Predictions are returned in the response message.

C.

Predictions are written to output files in a Cloud Storage location that you specify.

D.

It is optimized to minimize the latency of serving predictions.

Questions 8

What Dataflow concept determines when a Window's contents should be output based on certain criteria being met?

Options:

A.

Sessions

B.

OutputCriteria

C.

Windows

D.

Triggers
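
For context, a trigger is attached to a windowing strategy and controls when each window emits results. A minimal sketch in the Apache Beam Python SDK (the programming model behind Dataflow), with illustrative window sizes and data:

    import apache_beam as beam
    from apache_beam.transforms.window import FixedWindows
    from apache_beam.transforms.trigger import (
        AfterWatermark, AfterProcessingTime, AccumulationMode)

    with beam.Pipeline() as p:
        windowed = (
            p
            | beam.Create([("k", 1), ("k", 2)])
            | beam.WindowInto(
                FixedWindows(60),  # 60-second fixed windows
                # fire early results every 30s of processing time,
                # then again when the watermark passes the window end
                trigger=AfterWatermark(early=AfterProcessingTime(30)),
                accumulation_mode=AccumulationMode.DISCARDING)
            | beam.CombinePerKey(sum))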

Questions 9

When you store data in Cloud Bigtable, what is the recommended minimum amount of stored data?

Options:

A.

500 TB

B.

1 GB

C.

1 TB

D.

500 GB

Questions 10

Which action can a Cloud Dataproc Viewer perform?

Options:

A.

Submit a job.

B.

Create a cluster.

C.

Delete a cluster.

D.

List the jobs.

Questions 11

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

Options:

A.

Store the common data in BigQuery as partitioned tables.

B.

Store the common data in BigQuery and expose authorized views.

C.

Store the common data encoded as Avro in Google Cloud Storage.

D.

Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

Questions 12

Which of these is not a supported method of putting data into a partitioned table?

Options:

A.

If you have existing data in a separate file for each day, then create a partitioned table and upload each file into the appropriate partition.

B.

Run a query to get the records for a specific day from an existing table and for the destination table, specify a partitioned table ending with the day in the format "$YYYYMMDD".

C.

Create a partitioned table and stream new records to it every day.

D.

Use ORDER BY to put a table's rows into chronological order and then change the table's type to "Partitioned".
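
For reference, loading a day's file into a specific partition uses the "$YYYYMMDD" decorator mentioned in option B. A hedged sketch with the google-cloud-bigquery client; project, dataset, table, and file names are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV)

    # the "$20240101" decorator targets a single day's partition,
    # as with the bq CLI
    with open("events_20240101.csv", "rb") as f:
        job = client.load_table_from_file(
            f, "my_project.my_dataset.events$20240101", job_config=job_config)
    job.result()  # wait for the load job to finish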

Questions 13

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?

Options:

A.

Export the data into a Google Sheet for visualization.

B.

Create an additional table with only the necessary columns.

C.

Create a view on the table to present to the visualization tool.

D.

Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.
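
To illustrate the view-based approach from option C, here is a minimal sketch using the google-cloud-bigquery client; all identifiers are hypothetical. Because BigQuery storage is columnar, queries through the view scan only the columns it references:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE VIEW my_dataset.sales_summary AS
        SELECT customer_id, region, total_spend
        FROM my_dataset.customer_master
    """).result()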

Questions 14

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

Options:

A.

Attach the timestamp to each message in the Cloud Pub/Sub subscriber application as it is received.

B.

Attach the timestamp and Package ID to the outbound message from each publisher device as it is sent to Cloud Pub/Sub.

C.

Use the NOW() function in BigQuery to record the event’s time.

D.

Use the automatically generated timestamp from Cloud Pub/Sub to order the data.
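
To make option B concrete: Pub/Sub lets a publisher attach arbitrary string attributes to each message at publish time. A sketch with the google-cloud-pubsub client; topic and attribute names are assumptions:

    import json, time
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "package-tracking")

    payload = json.dumps({"lat": 40.7, "lng": -74.0}).encode("utf-8")
    future = publisher.publish(
        topic_path,
        payload,
        event_timestamp=str(int(time.time() * 1000)),  # device-side event time
        package_id="PKG-12345",
    )
    print(future.result())  # server-assigned message ID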

Questions 15

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?

Options:

A.

Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

B.

Cloud Pub/Sub, Cloud Dataflow, and Local SSD

C.

Cloud Pub/Sub, Cloud SQL, and Cloud Storage

D.

Cloud Load Balancing, Cloud Dataflow, and Cloud Storage

Questions 16

Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do?

Options:

A.

Redefine the schema by evenly distributing reads and writes across the row space of the table.

B.

The performance issue should be resolved over time as the size of the Bigtable cluster is increased.

C.

Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.

D.

Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.
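
For background on option A's idea of spreading load across the row space, one common pattern is prefixing the row key with a short hash instead of a sequentially increasing ID. A minimal illustrative sketch; names are hypothetical:

    import hashlib

    def make_row_key(user_id: str, event_ts_ms: int) -> bytes:
        # a short hash prefix distributes writes across tablets instead of
        # concentrating them on the node holding the highest keys
        prefix = hashlib.md5(user_id.encode()).hexdigest()[:4]
        return f"{prefix}#{user_id}#{event_ts_ms}".encode()

    print(make_row_key("user42", 1700000000000))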

Questions 17

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?

Options:

A.

Use Google Stackdriver Audit Logs to review data access.

B.

Get the identity and access management (IAM) policy of each table.

C.

Use Stackdriver Monitoring to see the usage of BigQuery query slots.

D.

Use the Google Cloud Billing API to see what account the warehouse is being billed to.

Questions 18

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

Options:

A.

Put the data into Google Cloud Storage.

B.

Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.

C.

Tune the Cloud Dataproc cluster so that there is just enough disk for all data.

D.

Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.

Questions 19

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

Options:

A.

Add capacity (memory and disk space) to the database server by the order of 200.

B.

Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.

C.

Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.

D.

Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Questions 20

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

Options:

A.

Run a local version of Jupyter on the laptop.

B.

Grant the user access to Google Cloud Shell.

C.

Host a visualization tool on a VM on Google Compute Engine.

D.

Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.

Questions 21

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

Options:

A.

Load data into different partitions.

B.

Load data into a different dataset for each client.

C.

Put each client’s BigQuery dataset into a different table.

D.

Restrict a client’s dataset to approved users.

E.

Only allow a service account to access the datasets.

F.

Use the appropriate identity and access management (IAM) roles for each client’s users.

Questions 22

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?

Options:

A.

Linear regression

B.

Logistic classification

C.

Recurrent neural network

D.

Feedforward neural network

Questions 23

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

Options:

A.

Issue a command to restart the database servers.

B.

Retry the query with exponential backoff, up to a cap of 15 minutes.

C.

Retry the query every second until it comes back online to minimize staleness of data.

D.

Reduce the query frequency to once every hour until the database comes back online.
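
For reference, exponential backoff (option B) doubles the wait after each failed attempt up to a cap. A minimal sketch, assuming a hypothetical query_temperature() database call:

    import random, time

    def query_temperature():
        """Hypothetical database call; raises ConnectionError while the DB is down."""
        ...

    def query_with_backoff(max_delay=15 * 60):
        delay = 1
        while True:
            try:
                return query_temperature()
            except ConnectionError:
                time.sleep(delay + random.uniform(0, 1))  # jitter avoids thundering herds
                delay = min(delay * 2, max_delay)  # cap the wait at 15 minutes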

Questions 24

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

Options:

A.

Use federated data sources, and check data in the SQL query.

B.

Enable BigQuery monitoring in Google Stackdriver and create an alert.

C.

Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.

D.

Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.
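
To illustrate the dead-letter pattern from option D, here is a hedged Apache Beam (Python SDK) sketch: rows that fail parsing are routed to a side output and written to a separate error table. Bucket and table names are assumptions, and both tables are assumed to exist:

    import csv
    import apache_beam as beam

    class ParseCsv(beam.DoFn):
        def process(self, line):
            try:
                fields = next(csv.reader([line]))
                yield {"name": fields[0], "value": int(fields[1])}
            except (ValueError, IndexError):
                # corrupt row: send it to the dead-letter side output
                yield beam.pvalue.TaggedOutput("dead_letter", {"raw": line})

    with beam.Pipeline() as p:
        results = (
            p
            | beam.io.ReadFromText("gs://my-bucket/dump/*.csv")
            | beam.ParDo(ParseCsv()).with_outputs("dead_letter", main="parsed"))
        results.parsed | "GoodRows" >> beam.io.WriteToBigQuery(
            "my_dataset.good_rows",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
        results.dead_letter | "BadRows" >> beam.io.WriteToBigQuery(
            "my_dataset.dead_letter",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)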

Questions 25

You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

Options:

A.

Update the current pipeline and use the drain flag.

B.

Update the current pipeline and provide the transform mapping JSON object.

C.

Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.

D.

Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.

Questions 26

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data imports successfully; however, the imported data does not match the source file byte for byte. What is the most likely cause of this problem?

Options:

A.

The CSV data loaded in BigQuery is not flagged as CSV.

B.

The CSV data has invalid rows that were skipped on import.

C.

The CSV data loaded in BigQuery is not using BigQuery’s default encoding.

D.

The CSV data has not gone through an ETL phase before loading into BigQuery.

Questions 27

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values but the property ‘date_released’ does not. A typical query would ask for all movies with actor= ordered by date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Questions 28

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)

Options:

A.

Redis

B.

HBase

C.

MySQL

D.

MongoDB

E.

Cassandra

F.

HDFS with Hive

Questions 29

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:

    The user profile: What the user likes and doesn’t like to eat

    The user account information: Name, address, preferred meal times

    The order information: When orders are made, from where, to whom

The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Bigtable

D.

Cloud Datastore

Questions 30

Your company produces 20,000 files every hour. Each data file is formatted as a comma-separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.)

Options:

A.

Introduce data compression for each file to increase the rate of file transfer.

B.

Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.

C.

Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel.

D.

Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.

E.

Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.
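
For context on option C's parallel-upload idea, the gsutil equivalent is gsutil -m cp; below is a Python sketch using the transfer-manager helpers available in recent versions of the google-cloud-storage library. Bucket and paths are assumptions:

    from pathlib import Path
    from google.cloud.storage import Client, transfer_manager

    bucket = Client().bucket("my-ingest-bucket")
    filenames = [p.name for p in Path("/data/outbox").glob("*.csv")]

    # upload many small files concurrently instead of one at a time
    results = transfer_manager.upload_many_from_filenames(
        bucket, filenames, source_directory="/data/outbox", max_workers=16)
    for name, result in zip(filenames, results):
        if isinstance(result, Exception):
            print(f"failed: {name}: {result}")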

Questions 31

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?

Options:

A.

Rewrite the job in Pig.

B.

Rewrite the job in Apache Spark.

C.

Increase the size of the Hadoop cluster.

D.

Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Questions 32

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do?

Options:

A.

Change the processing job to use Google Cloud Dataproc instead.

B.

Manually start the Cloud Dataflow job each morning when you get into the office.

C.

Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.

D.

Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Questions 33

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

Options:

A.

Load the data every 30 minutes into a new partitioned table in BigQuery.

B.

Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery

C.

Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore

D.

Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Questions 34

Cloud Dataproc charges you only for what you really use with _____ billing.

Options:

A.

month-by-month

B.

minute-by-minute

C.

week-by-week

D.

hour-by-hour

Questions 35

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

Options:

A.

Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.

B.

Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.

C.

Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.

D.

Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
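
To make option A concrete, a view can compute FullName on read, so no data is rewritten or stored twice. A minimal sketch; the dataset name is an assumption:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE VIEW hr.users_with_fullname AS
        SELECT FirstName,
               LastName,
               CONCAT(FirstName, ' ', LastName) AS FullName
        FROM hr.Users
    """).result()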

Questions 36

You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?

Options:

A.

Convert all daily log tables into date-partitioned tables

B.

Convert the sharded tables into a single partitioned table

C.

Enable query caching so you can cache data from previous months

D.

Create separate views to cover each month, and query from these views
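
For reference, sharded LOGS_yyyymmdd tables can be consolidated into one date-partitioned table, recovering each row's date from _TABLE_SUFFIX. A hedged sketch; dataset and column names are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE TABLE logs.all_logs
        PARTITION BY log_date AS
        SELECT PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS log_date, *
        FROM `my-project.logs.LOGS_*`
    """).result()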

Questions 37

You are designing the architecture to process your data from Cloud Storage to BigQuery by using Dataflow. The network team provided you with the Shared VPC network and subnetwork to be used by your pipelines. You need to enable the deployment of the pipeline on the Shared VPC network. What should you do?

Options:

A.

Assign the compute.networkUser role to the Dataflow service agent.

B.

Assign the compute.networkUser role to the service account that executes the Dataflow pipeline.

C.

Assign the dataflow.admin role to the Dataflow service agent.

D.

Assign the dataflow.admin role to the service account that executes the Dataflow pipeline.

Questions 38

Your organization is modernizing their IT services and migrating to Google Cloud. You need to organize the data that will be stored in Cloud Storage and BigQuery, and you need to enable a data mesh approach to share the data between the sales, product design, and marketing departments. What should you do?

Options:

A.

1. Create a project for storage of the data for your organization.

2. Create a central Cloud Storage bucket with three folders to store the files for each department.

3. Create a central BigQuery dataset with tables prefixed with the department name.

4. Give viewer rights on the storage project to the users of your departments.

B.

1. Create a project for storage of the data for each of your departments.

2. Enable each department to create Cloud Storage buckets and BigQuery datasets.

3. Create user groups for authorized readers for each bucket and dataset.

4. Enable the IT team to administer the user groups to add or remove users as the departments request.

C.

1. Create multiple projects for storage of the data for each of your departments' applications.

2. Enable each department to create Cloud Storage buckets and BigQuery datasets.

3. Publish the data that each department shared in Analytics Hub.

4. Enable all departments to discover and subscribe to the data they need in Analytics Hub.

D.

1. Create multiple projects for storage of the data for each of your departments' applications.

2. Enable each department to create Cloud Storage buckets and BigQuery datasets.

3. In Dataplex, map each department to a data lake and the Cloud Storage buckets, and map the BigQuery datasets to zones.

4. Enable each department to own and share the data of their data lakes.

Questions 39

You are using Cloud Bigtable to persist and serve stock market data for each of the major indices. To serve the trading application, you need to access only the most recent stock prices that are streaming in. How should you design your row key and tables to ensure that you can access the data with the simplest query?

Options:

A.

Create one unique table for all of the indices, and then use the index and timestamp as the row key design

B.

Create one unique table for all of the indices, and then use a reverse timestamp as the row key design.

C.

For each index, have a separate table and use a timestamp as the row key design

D.

For each index, have a separate table and use a reverse timestamp as the row key design
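
For background, a reverse timestamp makes the newest rows sort first in Bigtable's lexicographic key order, so the latest price is simply the first row scanned. An illustrative sketch; the constant and names are hypothetical:

    import time

    MAX_TS = 10**13  # any constant comfortably above current epoch milliseconds

    def row_key(index_symbol: str) -> bytes:
        # larger (newer) timestamps produce smaller reverse values,
        # so they sort to the top of the table
        reverse_ts = MAX_TS - int(time.time() * 1000)
        return f"{index_symbol}#{reverse_ts}".encode()

    print(row_key("SP500"))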

Questions 40

You are migrating your data warehouse to BigQuery. You have migrated all of your data into tables in a dataset. Multiple users from your organization will be using the data. They should only see certain tables based on their team membership. How should you set user permissions?

Options:

A.

Assign the users/groups data viewer access at the table level for each table

B.

Create SQL views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the SQL views

C.

Create authorized views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the authorized views

D.

Create authorized views for each team in datasets created for each team. Assign the authorized views data viewer access to the dataset in which the data resides. Assign the users/groups data viewer access to the datasets in which the authorized views reside
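
To illustrate the authorized-view mechanics behind options C and D: the view is created in its own dataset, and the view itself (not the users) is granted access to the source dataset, so readers of the view never need access to the raw tables. A hedged sketch with google-cloud-bigquery; all identifiers are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()

    # create the team-facing view in the team's own dataset
    view = bigquery.Table("my-project.team_sales.orders_view")
    view.view_query = "SELECT order_id, total FROM `my-project.warehouse.orders`"
    client.create_table(view)

    # authorize the view against the source dataset
    source = client.get_dataset("my-project.warehouse")
    entries = list(source.access_entries)
    entries.append(bigquery.AccessEntry(
        None, "view",
        {"projectId": "my-project", "datasetId": "team_sales",
         "tableId": "orders_view"}))
    source.access_entries = entries
    client.update_dataset(source, ["access_entries"])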

Questions 41

You have an upstream process that writes data to Cloud Storage. This data is then read by an Apache Spark job that runs on Dataproc. These jobs are run in the us-central1 region, but the data could be stored anywhere in the United States. You need to have a recovery process in place in case of a catastrophic single region failure. You need an approach with a maximum of 15 minutes of data loss (RPO=15 mins). You want to ensure that there is minimal latency when reading the data. What should you do?

Options:

A.

1. Create a dual-region Cloud Storage bucket in the us-central1 and us-south1 regions.

2. Enable turbo replication.

3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in the us-south1 region.

4. In case of a regional failure, redeploy your Dataproc cluster to the us-south1 region and continue reading from the same bucket.

B.

1. Create a dual-region Cloud Storage bucket in the us-central1 and us-south1 regions.

2. Enable turbo replication.

3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in the same region.

4. In case of a regional failure, redeploy the Dataproc clusters to the us-south1 region and read from the same bucket.

C.

1. Create a Cloud Storage bucket in the US multi-region.

2. Run the Dataproc cluster in a zone in the us-central1 region, reading data from the US multi-region bucket.

3. In case of a regional failure, redeploy the Dataproc cluster to the us-central2 region and continue reading from the same bucket.

D.

1. Create two regional Cloud Storage buckets, one in the us-central1 region and one in the us-south1 region.

2. Have the upstream process write data to the us-central1 bucket. Use the Storage Transfer Service to copy data hourly from the us-central1 bucket to the us-south1 bucket.

3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in that region.

4. In case of regional failure, redeploy …

Questions 42

You want to optimize your queries for cost and performance. How should you structure your data?

Options:

A.

Partition table data by create_date, location_id and device_version

B.

Partition table data by create_date; cluster table data by location_id and device_version.

C.

Cluster table data by create_date, location_id, and device_version.

D.

Cluster table data by create_date; partition by location_id and device_version.
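
For reference, option B's layout expressed as BigQuery DDL, sketched with hypothetical table and column names:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE TABLE telemetry.events
        PARTITION BY DATE(create_date)
        CLUSTER BY location_id, device_version AS
        SELECT * FROM telemetry.events_staging
    """).result()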

Questions 43

You are designing a Dataflow pipeline for a batch processing job. You want to mitigate multiple zonal failures at job submission time. What should you do?

Options:

A.

Specify a worker region by using the --region flag.

B.

Set the pipeline staging location as a regional Cloud Storage bucket.

C.

Submit duplicate pipelines in two different zones by using the --zone flag.

D.

Create an Eventarc trigger to resubmit the job in case of zonal failure when submitting the job.
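
To make option A concrete: passing a worker region (rather than a zone) lets Dataflow choose a healthy zone within that region at submission time. A sketch with Beam's Python pipeline options; project and bucket values are assumptions:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",          # worker region; no zone pinning
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        p | beam.Create([1, 2, 3]) | beam.Map(print)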

Questions 44

You need to choose a database to store time series CPU and memory usage for millions of computers. You need to store this data in one-second interval samples. Analysts will be performing real-time, ad hoc analytics against the database. You want to avoid being charged for every query executed and ensure that the schema design will allow for future growth of the dataset. Which database and data model should you choose?

Options:

A.

Create a table in BigQuery, and append the new samples for CPU and memory to the table

B.

Create a wide table in BigQuery, create a column for the sample value at each second, and update the row with the interval for each second

C.

Create a narrow table in Cloud Bigtable with a row key that combines the Compute Engine computer identifier with the sample time at each second.

D.

Create a wide table in Cloud Bigtable with a row key that combines the computer identifier with the sample time at each minute, and combine the values for each second as column data.
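
For context on option C's narrow-table design, each row holds one sample keyed by computer ID and second-level timestamp. A hedged sketch with the google-cloud-bigtable client; instance, table, and column-family names are assumptions:

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("metrics-instance").table("cpu_memory")

    # one row per machine per second: computer_id#epoch_seconds
    row_key = f"vm-1234#{1700000000}".encode()
    row = table.direct_row(row_key)
    row.set_cell("stats", b"cpu", b"0.73")
    row.set_cell("stats", b"mem", b"0.41")
    row.commit()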

Questions 45

You stream order data by using a Dataflow pipeline, and write the aggregated result to Memorystore. You provisioned a Memorystore for Redis instance with Basic Tier, 4 GB capacity, which is used by 40 clients for read-only access. You are expecting the number of read-only clients to increase significantly to a few hundred, and you need to be able to support the demand. You want to ensure that read and write access availability is not impacted, and any changes you make can be deployed quickly. What should you do?

Options:

A.

Create multiple new Memorystore for Redis instances with Basic Tier (4 GB capacity). Modify the Dataflow pipeline and new clients to use all instances.

B.

Create a new Memorystore for Redis instance with Standard Tier. Set capacity to 4 GB and read replicas to "No read replicas (high availability only)". Delete the old instance.

C.

Create a new Memorystore for Memcached instance. Set a minimum of three nodes and memory per node to 4 GB. Modify the Dataflow pipeline and all clients to use the Memcached instance. Delete the old instance.

D.

Create a new Memorystore for Redis instance with Standard Tier. Set capacity to 5 GB and create multiple read replicas. Delete the old instance.

Questions 46

Which row keys are likely to cause a disproportionate number of reads and/or writes on a particular node in a Bigtable cluster (select 2 answers)?

Options:

A.

A sequential numeric ID

B.

A timestamp followed by a stock symbol

C.

A non-sequential numeric ID

D.

A stock symbol followed by a timestamp

Questions 47

What is the HBase Shell for Cloud Bigtable?

Options:

A.

The HBase shell is a GUI based interface that performs administrative tasks, such as creating and deleting tables.

B.

The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables.

C.

The HBase shell is a hypervisor based shell that performs administrative tasks, such as creating and deleting new virtualized instances.

D.

The HBase shell is a command-line tool that performs only user account management functions to grant access to Cloud Bigtable instances.

Questions 48

Which of the following IAM roles does your Compute Engine account require to be able to run pipeline jobs?

Options:

A.

dataflow.worker

B.

dataflow.compute

C.

dataflow.developer

D.

dataflow.viewer

Questions 49

You have an on-premises Apache Kafka cluster with topics containing web application logs. You need to replicate the data to Google Cloud for analysis in BigQuery and Cloud Storage. The preferred replication method is mirroring, to avoid deployment of Kafka Connect plugins.

What should you do?

Options:

A.

Deploy a Kafka cluster on GCE VM Instances. Configure your on-prem cluster to mirror your topics to the cluster running in GCE. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.

B.

Deploy a Kafka cluster on GCE VM Instances with the Pub/Sub Kafka connector configured as a Sink connector. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.

C.

Deploy the Pub/Sub Kafka connector to your on-prem Kafka cluster and configure Pub/Sub as a Source connector. Use a Dataflow job to read from Pub/Sub and write to GCS.

D.

Deploy the Pub/Sub Kafka connector to your on-prem Kafka cluster and configure Pub/Sub as a Sink connector. Use a Dataflow job to read from Pub/Sub and write to GCS.

Questions 50

You work for an advertising company, and you’ve developed a Spark ML model to predict click-through rates at advertisement blocks. You’ve been developing everything at your on-premises data center, and now your company is migrating to Google Cloud. Your data warehouse will be migrated to BigQuery. You periodically retrain your Spark ML models, so you need to migrate existing training pipelines to Google Cloud. What should you do?

Options:

A.

Use Cloud ML Engine for training existing Spark ML models

B.

Rewrite your models on TensorFlow, and start using Cloud ML Engine

C.

Use Cloud Dataproc for training existing Spark ML models, but start reading data directly from BigQuery

D.

Spin up a Spark cluster on Compute Engine, and train Spark ML models on the data exported from BigQuery

Questions 51

You need to compose visualization for operations teams with the following requirements:

    Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)

    The report must not be more than 3 hours delayed from live data.

    The actionable report should only show suboptimal links.

    Most suboptimal links should be sorted to the top.

    Suboptimal links can be grouped and filtered by regional geography.

    User response time to load the report must be <5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do?

Options:

A.

Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.

B.

Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.

C.

Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.

D.

Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.

Questions 52

MJTelco is building a custom interface to share data. They have these requirements:

    They need to do aggregations over their petabyte-scale datasets.

    They need to scan specific time range rows with a very fast response time (milliseconds).

Which combination of Google Cloud Platform products should you recommend?

Options:

A.

Cloud Datastore and Cloud Bigtable

B.

Cloud Bigtable and Cloud SQL

C.

BigQuery and Cloud Bigtable

D.

BigQuery and Cloud Storage

Questions 53

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

Options:

A.

Rowkey: date#device_id; Column data: data_point

B.

Rowkey: date; Column data: device_id, data_point

C.

Rowkey: device_id; Column data: date, data_point

D.

Rowkey: data_point; Column data: device_id, date

E.

Rowkey: date#data_point; Column data: device_id

Questions 54

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion. What should you do?

Options:

A.

Create a table called tracking_table and include a DATE column.

B.

Create a partitioned table called tracking_table and include a TIMESTAMP column.

C.

Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.

D.

Create a table called tracking_table with a TIMESTAMP column to represent the day.

Questions 55

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

Options:

A.

Ensure all the tables are included in a global dataset.

B.

Ensure each table is included in a dataset for a region.

C.

Adjust the settings for each table to allow a related region-based security group view access.

D.

Adjust the settings for each view to allow a related region-based security group view access.

E.

Adjust the settings for each dataset to allow a related region-based security group view access.

Questions 56

MJTelco’s Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?

Options:

A.

The zone

B.

The number of workers

C.

The disk size per worker

D.

The maximum number of workers
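
For reference, the autoscaling ceiling in Beam's Python pipeline options is max_num_workers; raising it gives the Dataflow autoscaler room to scale up. Values here are illustrative:

    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        max_num_workers=100,  # ceiling the autoscaler may scale up to
    )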

Questions 57

You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

Options:

A.

Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.

B.

Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.

C.

Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.

D.

Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Exam Name: Google Professional Data Engineer Exam
Last Update: Dec 4, 2024
Questions: 372
