Free AWS-Certified-Database-Specialty Exam Braindumps

Pass your AWS Certified Database - Specialty exam with these free Questions and Answers

QUESTION 36

After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

  1. A. The restored DB instance does not have Enhanced Monitoring enabled
  2. B. The production DB instance is using a custom parameter group
  3. C. The restored DB instance is using the default security group
  4. D. The production DB instance is using a custom option group

Correct Answer: C
https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
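When an RDS DB instance is restored from a snapshot, it is attached to the default VPC security group rather than the custom group used by the production instance, so the inbound rules that allowed the team's traffic are missing. Below is a minimal boto3 sketch of the fix; the instance identifier and security group ID are hypothetical placeholders.

```python
# Minimal sketch (boto3 assumed): attach the production security group to the
# restored DB instance so the Development team can connect.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="restored-dev-db",        # hypothetical restored instance name
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical SG with the required inbound rules
    ApplyImmediately=True,
)
```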

QUESTION 37

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?

  1. A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
  2. B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
  3. C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
  4. D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp multipart upload command to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Correct Answer: B
https://aws.amazon.com/blogs/database/new-aws-dms-and-aws-snowball-integration-enables-mass-database-mi
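Only part of this flow is easy to show in code, but as a hedged illustration, the sketch below uses boto3 (an assumed SDK choice; the question does not name a tool) to define the AWS DMS S3 target endpoint with SSE-KMS enabled, which covers the encryption-at-rest requirement once the Snowball Edge data lands in Amazon S3. The bucket name, role ARN, and KMS key ARN are hypothetical placeholders.

```python
# Minimal sketch (boto3 assumed): define an AWS DMS target endpoint that writes
# to Amazon S3 with SSE-KMS encryption.
import boto3

dms = boto3.client("dms")

dms.create_endpoint(
    EndpointIdentifier="s3-datalake-target",   # hypothetical endpoint name
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "example-datalake-bucket",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access",
        "EncryptionMode": "sse-kms",
        "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
    },
)
```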

QUESTION 38

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?

  1. A. Ensure the table is always provisioned to meet peak needs
  2. B. Allow burst capacity to handle the additional load
  3. C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
  4. D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Correct Answer: D
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html#bp-partition
"DynamoDB provides some flexibility in your per-partition throughput provisioning by providing burst capacity. Whenever you're not fully using a partition's throughput, DynamoDB reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes. DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table. DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice. Note that these burst capacity details might change in the future."
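Burst capacity alone cannot absorb a sustained 10x increase, which is why pre-provisioning for the known peak and scaling back afterwards is the safer choice. A minimal boto3 sketch follows; the table name and capacity figures are hypothetical placeholders.

```python
# Minimal sketch (boto3 assumed): raise provisioned throughput ahead of the
# 3-day event, then lower it again once traffic returns to normal.
import boto3

dynamodb = boto3.client("dynamodb")

def set_capacity(table_name: str, read_units: int, write_units: int) -> None:
    """Update a provisioned-mode table to the given read/write capacity."""
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

set_capacity("web-portal-transactions", 10_000, 10_000)  # before the event (~10x normal load)
# ... after the 3-day event ...
set_capacity("web-portal-transactions", 1_000, 1_000)    # back to steady-state capacity
```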

QUESTION 39

A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users’ devices read the latest statuses of their teammates from the table using the BatchGetItem operation.
Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation.
Which recommendation would resolve this issue?

  1. A. Ensure the DynamoDB table is configured to be always consistent.
  2. B. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.
  3. C. Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.
  4. D. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.

Correct Answer: D
https://docs.aws.amazon.com/ja_jp/amazondynamodb/latest/developerguide/API_BatchGetItem_v20111205.htm
"By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables."
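As a concrete illustration, the boto3 call below (an assumed SDK choice; the table and key names are hypothetical) sets ConsistentRead to true so BatchGetItem returns strongly consistent results.

```python
# Minimal sketch (boto3 assumed): strongly consistent BatchGetItem so devices
# read the latest teammate statuses.
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.batch_get_item(
    RequestItems={
        "TeamStatus": {                         # hypothetical table name
            "Keys": [
                {"PlayerId": {"S": "player-1"}},
                {"PlayerId": {"S": "player-2"}},
            ],
            "ConsistentRead": True,             # default is False (eventually consistent)
        }
    }
)
teammate_statuses = response["Responses"]["TeamStatus"]
```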

QUESTION 40

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.
Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

  1. A. Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.
  2. B. Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.
  3. C. Create additional readers to cater to the different scenarios.
  4. D. Use custom endpoints to satisfy the different workloads.

Correct Answer: D
https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-c
"You can now create custom endpoints for Amazon Aurora databases. This allows you to distribute and load balance workloads across different sets of database instances in your Aurora cluster. For example, you may provision a set of Aurora Replicas to use an instance type with higher memory capacity in order to run an analytics workload. A custom endpoint can then help you route the analytics workload to these appropriately-configured instances, while keeping other instances in your cluster isolated from this workload. As you add or remove instances from the custom endpoint to match your workload, the endpoint helps spread the load around."
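A hedged sketch of how this might look with boto3 (an assumption; the cluster and instance identifiers are hypothetical): a READER custom endpoint that contains only the two small replicas, so the HR reporting workload never lands on the four large OLTP nodes.

```python
# Minimal sketch (boto3 assumed): create a custom reader endpoint limited to the
# two small Aurora replicas reserved for the HR reporting workload.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster_endpoint(
    DBClusterIdentifier="core-aurora-cluster",          # hypothetical cluster name
    DBClusterEndpointIdentifier="hr-reporting",          # hypothetical custom endpoint name
    EndpointType="READER",
    StaticMembers=["aurora-small-node-1", "aurora-small-node-2"],  # the two small replicas
)
# HR reporting applications connect to the resulting
# hr-reporting.cluster-custom-... endpoint instead of the cluster reader endpoint.
```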

