Free Professional-Data-Engineer Exam Braindumps

Pass your Google Professional Data Engineer exam with these free Questions and Answers

QUESTION 61

- (Exam Topic 6)
You need to choose a database for a new project that has the following requirements:
- Fully managed
- Able to automatically scale up
- Transactionally consistent
- Able to scale up to 6 TB
- Able to be queried using SQL

Which database do you choose?

A. Cloud SQL
B. Cloud Bigtable
C. Cloud Spanner
D. Cloud Datastore

Correct Answer: C
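
To illustrate the choice, here is a minimal sketch (instance, database, and table names are hypothetical) of querying Cloud Spanner with standard SQL using the google-cloud-spanner Python client:

```python
from google.cloud import spanner

# Hypothetical instance and database IDs, used only for illustration.
client = spanner.Client()
instance = client.instance("my-instance")
database = instance.database("loans-db")

# Cloud Spanner is queried with SQL; a read-only snapshot suffices here.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql("SELECT id, amount FROM Applications LIMIT 10")
    for row in rows:
        print(row)
```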

QUESTION 62

- (Exam Topic 5)
Which is not a valid reason for poor Cloud Bigtable performance?

A. The workload isn't appropriate for Cloud Bigtable.
B. The table's schema is not designed correctly.
C. The Cloud Bigtable cluster has too many nodes.
D. There are issues with the network connection.

Correct Answer: C
Having too many nodes is not a documented cause of poor performance; the valid reason listed in the documentation is the opposite: the Cloud Bigtable cluster doesn't have enough nodes. If your Cloud Bigtable cluster is overloaded, adding more nodes can improve performance. Use the monitoring tools to check whether the cluster is overloaded.
Reference: https://cloud.google.com/bigtable/docs/performance
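
As a hedged sketch of the remediation described above (adding nodes to an overloaded cluster), the google-cloud-bigtable admin client can resize a cluster; the project, instance, cluster IDs, and node increment below are assumptions for illustration:

```python
from google.cloud import bigtable

# admin=True is required for instance/cluster administration calls.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-bigtable-instance")   # hypothetical instance ID
cluster = instance.cluster("my-bigtable-cluster")    # hypothetical cluster ID

cluster.reload()          # fetch current settings, including the node count
cluster.serve_nodes += 2  # add nodes to relieve an overloaded cluster
cluster.update()          # apply the new node count
```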

QUESTION 63

- (Exam Topic 6)
You work for a bank. You have a labelled dataset that contains information on loan applications that have already been granted and whether those applications defaulted. You have been asked to train a model to predict default rates for credit applicants.
What should you do?

A. Increase the size of the dataset by collecting additional data.
B. Train a linear regression to predict a credit default risk score.
C. Remove the bias from the data and collect applications that have been declined loans.
D. Match loan applicants with their social profiles to enable feature engineering.

Correct Answer: B
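
As a minimal, hedged illustration of option B (the file name, feature columns, and use of scikit-learn are assumptions, not part of the question), a regression model can be fit on the labelled loan data to produce a default-risk score:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical labelled dataset: features of granted applications plus a
# 0/1 "defaulted" label that serves as the target for a risk score.
df = pd.read_csv("granted_loans.csv")
X = df[["income", "loan_amount", "credit_history_months"]]  # assumed feature columns
y = df["defaulted"]

model = LinearRegression()
model.fit(X, y)

# Predicted values can be read as a continuous default-risk score.
risk_scores = model.predict(X)
```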

QUESTION 64

- (Exam Topic 1)
You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

A. Update the current pipeline and use the drain flag.
B. Update the current pipeline and provide the transform mapping JSON object.
C. Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.
D. Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.

Correct Answer: D
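
A minimal sketch of the first step of option D: creating a fresh Pub/Sub subscription on the same topic for the replacement pipeline (project, topic, and subscription names are hypothetical). The new, incompatible Dataflow job would read from this subscription before the old job is cancelled, so no messages are lost:

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "events-topic")                # hypothetical
subscription_path = subscriber.subscription_path("my-project", "events-sub-v2") # hypothetical

# Create a second subscription on the same topic; the updated pipeline reads
# from it while the old pipeline is cancelled.
subscriber.create_subscription(name=subscription_path, topic=topic_path)
```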

QUESTION 65

- (Exam Topic 6)
You need to copy millions of sensitive patient records from a relational database to BigQuery. The total size of the database is 10 TB. You need to design a solution that is secure and time-efficient. What should you do?

A. Export the records from the database as an Avro file. Upload the file to GCS using gsutil, and then load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.
B. Export the records from the database as an Avro file. Copy the file onto a Transfer Appliance and send it to Google, and then load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.
C. Export the records from the database into a CSV file. Create a public URL for the CSV file, and then use Storage Transfer Service to move the file to Cloud Storage. Load the CSV file into BigQuery using the BigQuery web UI in the GCP Console.
D. Export the records from the database as an Avro file. Create a public URL for the Avro file, and then use Storage Transfer Service to move the file to Cloud Storage. Load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.

Correct Answer: A
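
To make option A concrete, here is a hedged sketch of the final load step using the google-cloud-bigquery Python client rather than the web UI (the bucket, file, and table names are assumptions):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Avro is self-describing, so no explicit schema is needed for the load job.
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO)

# Hypothetical GCS URI (uploaded earlier with gsutil) and destination table.
uri = "gs://my-secure-bucket/patient_records.avro"
table_id = "my-project.clinical.patient_records"

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # wait for the load to complete
```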

