Free DP-203 Exam Braindumps

Pass your Data Engineering on Microsoft Azure exam with these free Questions and Answers

QUESTION 61

- (Exam Topic 3)
You have a data model that you plan to implement in a data warehouse in Azure Synapse Analytics as shown in the following exhibit.
[Exhibit: data model diagram]
All the dimension tables will be less than 2 GB after compression, and the fact table will be approximately 6 TB.
Which type of table should you use for each table? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
[Exhibit: answer area]
Solution:
[Exhibit: solution selections]

Does this meet the goal?

  A. Yes
  B. No

Correct Answer: A
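The exhibits are not reproduced here, but the sizing in the question maps directly to the standard Synapse distribution guidance: dimension tables smaller than 2 GB after compression are good candidates for replicated tables, while a fact table of roughly 6 TB should be hash-distributed on a high-cardinality join key. A minimal T-SQL sketch of those two table types, using hypothetical table and column names:

```sql
-- Hypothetical dimension table (< 2 GB compressed): replicate a full copy
-- to every compute node so joins to it avoid data movement.
CREATE TABLE dbo.DimProduct
(
    ProductKey  INT           NOT NULL,
    ProductName NVARCHAR(100) NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);

-- Hypothetical ~6 TB fact table: hash-distribute on a column that is
-- frequently joined on and spreads rows evenly across distributions.
CREATE TABLE dbo.FactSales
(
    SaleKey    BIGINT         NOT NULL,
    ProductKey INT            NOT NULL,
    Amount     DECIMAL(18, 2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(ProductKey),
    CLUSTERED COLUMNSTORE INDEX
);
```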

QUESTION 62

- (Exam Topic 3)
You have an Azure Synapse Analytics dedicated SQL pool named SA1 that contains a table named Table1. You need to identify tables that have a high percentage of deleted rows. What should you run?
A) [Exhibit: query option A]
B) [Exhibit: query option B]
C) [Exhibit: query option C]
D) [Exhibit: query option D]

  A. Option A
  B. Option B
  C. Option C
  D. Option D

Correct Answer: B
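The option exhibits are not reproduced here. In a dedicated SQL pool, the usual way to find tables with a high percentage of deleted rows is to aggregate the columnstore row-group metadata and compare deleted_rows to total_rows. The following is a sketch of that approach, not necessarily the exact text of option B:

```sql
-- Aggregate columnstore row-group stats per table and compute the
-- percentage of logically deleted rows; tables with a high percentage
-- are candidates for an index rebuild to reclaim space.
SELECT
    s.name AS schema_name,
    t.name AS table_name,
    SUM(rg.deleted_rows) AS deleted_rows,
    SUM(rg.total_rows)   AS total_rows,
    100.0 * SUM(rg.deleted_rows)
          / NULLIF(SUM(rg.total_rows), 0) AS deleted_pct
FROM sys.pdw_nodes_column_store_row_groups AS rg
JOIN sys.pdw_nodes_tables AS nt
    ON  rg.object_id       = nt.object_id
    AND rg.pdw_node_id     = nt.pdw_node_id
    AND rg.distribution_id = nt.distribution_id
JOIN sys.pdw_table_mappings AS mp
    ON nt.name = mp.physical_name
JOIN sys.tables AS t
    ON mp.object_id = t.object_id
JOIN sys.schemas AS s
    ON t.schema_id = s.schema_id
GROUP BY s.name, t.name
ORDER BY deleted_pct DESC;
```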

QUESTION 63

- (Exam Topic 3)
You are performing exploratory analysis of the bus fare data in an Azure Data Lake Storage Gen2 account by using an Azure Synapse Analytics serverless SQL pool.
You execute the Transact-SQL query shown in the following exhibit.
[Exhibit: Transact-SQL query]
What do the query results include?

  A. Only CSV files in the tripdata_2020 subfolder.
  B. All files that have file names that begin with "tripdata_2020".
  C. All CSV files that have file names that contain "tripdata_2020".
  D. Only CSV files that have file names that begin with "tripdata_2020".

Correct Answer: D
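The query exhibit is not reproduced here, but answer D implies the OPENROWSET bulk path ends in a wildcard such as tripdata_2020*.csv, which matches only .csv files whose names begin with "tripdata_2020"; a file that merely contains that string, or that has another extension, is excluded. A serverless SQL pool sketch with an assumed storage URL and folder layout:

```sql
-- The storage account, container, and folder names below are assumptions,
-- not the exam's values; only the wildcard pattern matters for the answer.
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://contosolake.dfs.core.windows.net/data/busfare/tripdata_2020*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS [result];
```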

QUESTION 64

- (Exam Topic 3)
A company has a real-time data analysis solution that is hosted on Microsoft Azure. The solution uses Azure Event Hubs to ingest data and an Azure Stream Analytics cloud job to analyze the data. The cloud job is configured to use 120 Streaming Units (SU).
You need to optimize performance for the Azure Stream Analytics job.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  A. Implement event ordering.
  B. Implement Azure Stream Analytics user-defined functions (UDF).
  C. Implement query parallelization by partitioning the data output.
  D. Scale the SU count for the job up.
  E. Scale the SU count for the job down.
  F. Implement query parallelization by partitioning the data input.

Correct Answer: DF
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
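A job can only use extra streaming units if the query is parallelizable across the input's partitions. A sketch of a partition-aligned Stream Analytics query (compatibility level 1.1 and earlier, where PARTITION BY must be written explicitly; the input, output, and column names here are assumptions):

```sql
-- Each Event Hubs partition is processed independently, so the job can
-- scale out across its SUs instead of funneling through a single node.
SELECT
    TollBoothId,
    PartitionId,
    COUNT(*) AS entry_count
INTO Output
FROM Input PARTITION BY PartitionId
GROUP BY TumblingWindow(minute, 3), TollBoothId, PartitionId
```

Under compatibility level 1.2, partitioning is implicit when the query keys align with the input partitions, but the principle is the same: partition the input, then scale up the SU count.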

QUESTION 65

- (Exam Topic 3)
You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.
Does this meet the goal?

  A. Yes
  B. No

Correct Answer: B
If you need to transform data in a way that Data Factory does not support natively, create a custom activity with your own data-processing logic and use that activity in the pipeline, rather than an Azure Databricks notebook. For example, you can create a custom activity that runs R scripts on an HDInsight cluster with R installed.
Reference:
https://docs.microsoft.com/en-US/azure/data-factory/transform-data

