Free AWS-Certified-Machine-Learning-Specialty Exam Braindumps

Pass your AWS Certified Machine Learning - Specialty exam with these free Questions and Answers

QUESTION 36

A retail company wants to update its customer support system. The company wants to implement automatic routing of customer claims to different queues to prioritize the claims by category.
Currently, an operator manually performs the category assignment and routing. After the operator classifies and routes the claim, the company stores the claim’s record in a central database. The claim’s record includes the claim’s category.
The company has no data science team or experience in the field of machine learning (ML). The company’s small development team needs a solution that requires no ML expertise.
Which solution meets these requirements?

  1. A. Export the database to a .csv file with two columns: claim_label and claim_text. Use the Amazon SageMaker Object2Vec algorithm and the .csv file to train a model. Use SageMaker to deploy the model to an inference endpoint. Develop a service in the application to use the inference endpoint to process incoming claims, predict the labels, and route the claims to the appropriate queue.
  2. B. Export the database to a .csv file with one column: claim_text. Use the Amazon SageMaker Latent Dirichlet Allocation (LDA) algorithm and the .csv file to train a model. Use the LDA algorithm to detect labels automatically. Use SageMaker to deploy the model to an inference endpoint. Develop a service in the application to use the inference endpoint to process incoming claims, predict the labels, and route the claims to the appropriate queue.
  3. C. Use Amazon Textract to process the database and automatically detect two columns: claim_label and claim_text. Use Amazon Comprehend custom classification and the extracted information to train the custom classifier. Develop a service in the application to use the Amazon Comprehend API to process incoming claims, predict the labels, and route the claims to the appropriate queue.
  4. D. Export the database to a .csv file with two columns: claim_label and claim_text. Use Amazon Comprehend custom classification and the .csv file to train the custom classifier. Develop a service in the application to use the Amazon Comprehend API to process incoming claims, predict the labels, and route the claims to the appropriate queue.

Correct Answer: D
Amazon Comprehend custom classification requires no ML expertise: the team exports the already-labeled claims (claim_label, claim_text) and Comprehend handles training and hosting the classifier. Amazon Textract extracts text from scanned documents and images, not from a database, and the SageMaker options require ML knowledge the team does not have.
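For context, a minimal sketch of what the winning approach could look like with boto3 is shown below. The classifier name, role ARN, bucket, and endpoint ARN are hypothetical placeholders, and in practice the training job must finish and an endpoint must be created (create_endpoint) before the classification call can run.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Train a custom classifier from the exported CSV (claim_label, claim_text).
# The role ARN, bucket, and names are hypothetical placeholders.
comprehend.create_document_classifier(
    DocumentClassifierName="claim-router",
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendS3Access",
    InputDataConfig={"S3Uri": "s3://example-bucket/claims/claims.csv"},
    LanguageCode="en",
)

# Once training completes and an endpoint has been created for the
# classifier, the application's routing service can classify new claims.
response = comprehend.classify_document(
    Text="My order arrived damaged and I would like a refund.",
    EndpointArn=(
        "arn:aws:comprehend:us-east-1:123456789012:"
        "document-classifier-endpoint/claim-router"
    ),
)
print(response["Classes"])  # predicted labels with confidence scores
```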

QUESTION 37

A Machine Learning Specialist must build out a process to query a dataset on Amazon S3 using Amazon Athena. The dataset contains more than 800,000 records stored as plaintext CSV files. Each record contains 200 columns and is approximately 1.5 MB in size. Most queries will span only 5 to 10 columns.
How should the Machine Learning Specialist transform the dataset to minimize query runtime?

  1. A. Convert the records to Apache Parquet format
  2. B. Convert the records to JSON format
  3. C. Convert the records to GZIP CSV format
  4. D. Convert the records to XML format

Correct Answer: A
Apache Parquet is a columnar storage format, so Athena reads only the 5 to 10 columns each query references instead of all 200, which sharply reduces the data scanned and therefore the query runtime. Columnar formats also compress well, further reducing both the data scanned by Amazon Athena and your S3 storage. Supported compression formats include GZIP, LZO, Snappy (for Parquet), and ZLIB.
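As a reference point, one common way to perform the conversion is an Athena CTAS (CREATE TABLE AS SELECT) statement, which rewrites the CSV table as Snappy-compressed Parquet in a single query. A sketch using boto3 follows; the database, table, and bucket names are placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# CTAS query: rewrite the plaintext CSV table as Snappy-compressed Parquet.
# "sales_db", "records_csv", and the S3 paths are hypothetical placeholders.
ctas = """
CREATE TABLE sales_db.records_parquet
WITH (
    format = 'PARQUET',
    parquet_compression = 'SNAPPY',
    external_location = 's3://example-bucket/records-parquet/'
) AS
SELECT * FROM sales_db.records_csv
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```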

QUESTION 38

A company supplies wholesale clothing to thousands of retail stores. A data scientist must create a model that predicts the daily sales volume for each item for each store. The data scientist discovers that more than half of the stores have been in business for less than 6 months. Sales data is highly consistent from week to week. Daily data from the database has been aggregated weekly, and weeks with no sales are omitted from the current dataset. Five years (100 MB) of sales data is available in Amazon S3.
Which factors will adversely impact the performance of the forecast model to be developed, and which actions should the data scientist take to mitigate them? (Choose two.)

  1. A. Detecting seasonality for the majority of stores will be an issue. Request categorical data to relate new stores with similar stores that have more historical data.
  2. B. The sales data does not have enough variance. Request external sales data from other industries to improve the model's ability to generalize.
  3. C. Sales data is aggregated by week. Request daily sales data from the source database to enable building a daily model.
  4. D. The sales data is missing zero entries for item sales. Request that item sales data from the source database include zero entries to enable building the model.
  5. E. Only 100 MB of sales data is available in Amazon S3. Request 10 years of sales data, which would provide 200 MB of training data for the model.

Correct Answer: CD
The model must predict daily sales volume, so weekly aggregation destroys the required granularity, and omitting zero-sales weeks biases the model toward overestimating demand. Requesting daily data that includes explicit zero entries addresses both problems.
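To illustrate the zero-entry mitigation, here is a minimal pandas sketch that reindexes sparse sales records onto a complete daily calendar; the column names and values are invented for the example.

```python
import pandas as pd

# Hypothetical daily sales records; days with no sales are absent.
sales = pd.DataFrame(
    {"date": pd.to_datetime(["2023-01-01", "2023-01-03", "2023-01-06"]),
     "units_sold": [12, 7, 4]}
).set_index("date")

# Reindex onto a complete daily range, recording zero for missing days,
# so the forecasting model sees explicit zero-sales observations.
full_range = pd.date_range(sales.index.min(), sales.index.max(), freq="D")
daily = sales.reindex(full_range, fill_value=0)
print(daily)
```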

QUESTION 39

A Machine Learning Specialist is working with a media company to perform classification on popular articles from the company's website. The company is using random forests to classify how popular an article will be before it is published. A sample of the data being used is shown below.
Given the dataset, the Specialist wants to convert the Day_Of_Week column to binary values. What technique should be used to convert this column to binary values?
[Exhibit: sample rows from the dataset, including the Day_Of_Week column]

  1. A. Binarization
  2. B. One-hot encoding
  3. C. Tokenization
  4. D. Normalization transformation

Correct Answer: B
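As a quick illustration, one-hot encoding the day-of-week column with pandas might look like the following sketch (the sample values are assumptions, since the exhibit is not reproduced here):

```python
import pandas as pd

# Hypothetical sample mirroring the exhibit's Day_Of_Week column.
df = pd.DataFrame({"Day_Of_Week": ["Monday", "Tuesday", "Monday", "Sunday"]})

# One-hot encoding: each category becomes its own 0/1 indicator column.
encoded = pd.get_dummies(df, columns=["Day_Of_Week"])
print(encoded)
```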

QUESTION 40

A Machine Learning Specialist is using an Amazon SageMaker notebook instance in a private subnet of a corporate VPC. The ML Specialist has important data stored on the Amazon SageMaker notebook instance's Amazon EBS volume and needs to take a snapshot of that EBS volume. However, the ML Specialist cannot find the Amazon SageMaker notebook instance's EBS volume or Amazon EC2 instance within the VPC.
Why is the instance not visible in the VPC?

  1. A. Amazon SageMaker notebook instances are based on the EC2 instances within the customer account, but they run outside of VPCs.
  2. B. Amazon SageMaker notebook instances are based on the Amazon ECS service within customer accounts.
  3. C. Amazon SageMaker notebook instances are based on EC2 instances running within AWS service accounts.
  4. D. Amazon SageMaker notebook instances are based on AWS ECS instances running within AWS service accounts.

Correct Answer: C
SageMaker notebook instances run on EC2 instances in AWS-managed service accounts, not in the customer's account, so neither the instance nor its EBS volume appears in the customer's VPC or EC2 console.
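Because the volume cannot be snapshotted from the customer account, a common workaround is to copy the notebook's persistent directory (/home/ec2-user/SageMaker) to Amazon S3 from within the notebook itself. A minimal sketch follows; the bucket name is a placeholder.

```python
import boto3
from pathlib import Path

# Run inside the SageMaker notebook instance: copy the persistent working
# directory to S3, since the underlying EBS volume cannot be snapshotted
# directly from the customer account.
s3 = boto3.client("s3")
bucket = "example-backup-bucket"  # hypothetical placeholder

root = Path("/home/ec2-user/SageMaker")
for path in root.rglob("*"):
    if path.is_file():
        key = f"notebook-backup/{path.relative_to(root)}"
        s3.upload_file(str(path), bucket, key)
```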

