Free AWS-Certified-Developer-Associate Exam Braindumps

Pass your Amazon AWS Certified Developer - Associate exam with these free Questions and Answers

Page 9 of 26
QUESTION 36

An Amazon Kinesis Data Firehose delivery stream is receiving customer data that contains personally identifiable information. A developer needs to remove pattern-based customer identifiers from the data and store the modified data in an Amazon S3 bucket.
What should the developer do to meet these requirements?

  1. A. Implement Kinesis Data Firehose data transformation as an AWS Lambda function. Configure the function to remove the customer identifiers. Set an Amazon S3 bucket as the destination of the delivery stream.
  2. B. Launch an Amazon EC2 instance. Set the EC2 instance as the destination of the delivery stream. Run an application on the EC2 instance to remove the customer identifiers. Store the transformed data in an Amazon S3 bucket.
  3. C. Create an Amazon OpenSearch Service instance. Set the OpenSearch Service instance as the destination of the delivery stream. Use search and replace to remove the customer identifiers. Export the data to an Amazon S3 bucket.
  4. D. Create an AWS Step Functions workflow to remove the customer identifiers. As the last step in the workflow, store the transformed data in an Amazon S3 bucket. Set the workflow as the destination of the delivery stream.

Correct Answer: A
Amazon Kinesis Data Firehose is a service that delivers real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Amazon Kinesis Data Analytics. The developer can implement Kinesis Data Firehose data transformation as an AWS Lambda function. The function can remove pattern-based customer identifiers from the data and return the modified data to Kinesis Data Firehose. The developer can set an Amazon S3 bucket as the destination of the delivery stream. References:
✑ [What Is Amazon Kinesis Data Firehose? - Amazon Kinesis Data Firehose]
✑ [Data Transformation - Amazon Kinesis Data Firehose]
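The transformation described in option A can be sketched as a Firehose processing Lambda. This is a minimal illustration, not production code: the regex below is a hypothetical pattern (US-style SSNs) standing in for whatever identifier format the company actually needs to remove, but the record envelope (base64 `data`, `recordId`, `result` of `Ok`/`Dropped`/`ProcessingFailed`) is the shape Firehose expects from a transformation function.

```python
import base64
import json
import re

# Hypothetical identifier pattern for illustration (digits in a 3-2-4
# grouping); a real deployment would match the company's actual
# customer-identifier format.
IDENTIFIER_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda.

    Firehose invokes the function with a batch of base64-encoded
    records; each record must be returned with its original recordId
    and a result of 'Ok', 'Dropped', or 'ProcessingFailed'.
    """
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        scrubbed = IDENTIFIER_PATTERN.sub("[REDACTED]", payload)
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(scrubbed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Firehose then writes the returned (scrubbed) records to the configured S3 destination; no separate export step is needed.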

QUESTION 37

When using the AWS Encryption SDK, how does the developer keep track of the data encryption keys used to encrypt data?

  1. A. The developer must manually keep track of the data encryption keys used for each data object.
  2. B. The SDK encrypts the data encryption key and stores it (encrypted) as part of the returned ciphertext.
  3. C. The SDK stores the data encryption keys automatically in Amazon S3.
  4. D. The data encryption key is stored in the user data for the EC2 instance.

Correct Answer: B
This solution will meet the requirements by using AWS Encryption SDK, which is a client-side encryption library that enables developers to encrypt and decrypt data using data encryption keys that are protected by AWS Key Management Service (AWS KMS). The SDK encrypts the data encryption key with a customer master key (CMK) that is managed by AWS KMS, and stores it (encrypted) as part of the returned ciphertext. The developer does not need to keep track of the data encryption keys used to encrypt data, as they are stored with the encrypted data and can be retrieved and decrypted by using AWS KMS when needed. Option A is not optimal because it will require manual tracking of the data encryption keys used for each data object, which is error-prone and inefficient. Option C is not optimal because it will store the data encryption keys automatically in Amazon S3, which is unnecessary and insecure as Amazon S3 is not designed for storing encryption keys. Option D is not optimal because it will store the data encryption key in the user data for the EC2 instance, which is also unnecessary and insecure as user data is not encrypted by default.
References: [AWS Encryption SDK], [AWS Key Management Service]
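The envelope-encryption message format described above can be illustrated with a toy sketch. This is NOT real cryptography (XOR stands in for the cipher and for the KMS wrap/unwrap calls); it only shows the structural point of answer B: the per-message data key is wrapped under the master key and carried inside the ciphertext, so nothing needs to be tracked separately.

```python
import os

def toy_encrypt(plaintext: bytes, master_key: bytes) -> bytes:
    """Toy illustration of envelope encryption (NOT real cryptography).

    A fresh data key encrypts the payload; the data key itself is then
    "wrapped" under the master key and prepended to the ciphertext, so
    the message carries everything needed to decrypt it later -- the
    pattern the AWS Encryption SDK uses with AWS KMS.
    """
    data_key = os.urandom(32)
    wrapped_key = bytes(a ^ b for a, b in zip(data_key, master_key))
    body = bytes(p ^ data_key[i % 32] for i, p in enumerate(plaintext))
    return wrapped_key + body  # encrypted data key travels with the data

def toy_decrypt(message: bytes, master_key: bytes) -> bytes:
    wrapped_key, body = message[:32], message[32:]
    # Analogue of calling KMS Decrypt on the wrapped data key.
    data_key = bytes(a ^ b for a, b in zip(wrapped_key, master_key))
    return bytes(c ^ data_key[i % 32] for i, c in enumerate(body))
```

Because the wrapped data key rides along in the message header, the decrypt side needs only access to the master key, exactly as the SDK's real ciphertext format works.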

QUESTION 38

A company has installed smart meters in all its customer locations. The smart meters measure power usage at 1-minute intervals and send the usage readings to a remote endpoint for collection. The company needs to create an endpoint that will receive the smart meter readings and store the readings in a database. The company wants to store the location ID and timestamp information.
The company wants to give its customers low-latency access to their current usage and historical usage on demand. The company expects demand to increase significantly. The solution must not impact performance or include downtime while scaling.
Which solution will meet these requirements MOST cost-effectively?

  1. A. Store the smart meter readings in an Amazon RDS database. Create an index on the location ID and timestamp columns. Use the columns to filter on the customers' data.
  2. B. Store the smart meter readings in an Amazon DynamoDB table. Create a composite key by using the location ID and timestamp columns. Use the columns to filter on the customers' data.
  3. C. Store the smart meter readings in Amazon ElastiCache for Redis. Create a sorted set key by using the location ID and timestamp columns. Use the columns to filter on the customers' data.
  4. D. Store the smart meter readings in Amazon S3. Partition the data by using the location ID and timestamp columns. Use Amazon Athena to filter on the customers' data.

Correct Answer: B
The solution that will meet the requirements most cost-effectively is to store the smart meter readings in an Amazon DynamoDB table. Create a composite key by using the location ID and timestamp columns. Use the columns to filter on the customers’ data. This way, the company can leverage the scalability, performance, and low latency of DynamoDB to store and retrieve the smart meter readings. The company can also use the composite key to query the data by location ID and timestamp efficiently. The other options either involve more expensive or less scalable services, or do not provide low-latency access to the current usage.
Reference: Working with Queries in DynamoDB
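The composite-key design in option B can be sketched with a toy in-memory model. This is illustrative only (a real table would be created and queried through boto3); it shows why partition key = location ID, sort key = timestamp makes per-customer time-range lookups cheap: all of one location's readings live in one partition, ordered by timestamp.

```python
from collections import defaultdict

class MeterTable:
    """Toy in-memory model of the DynamoDB design in option B:
    partition key = location ID, sort key = timestamp."""

    def __init__(self):
        self._partitions = defaultdict(dict)  # location_id -> {ts: reading}

    def put_item(self, location_id, timestamp, reading):
        self._partitions[location_id][timestamp] = reading

    def query(self, location_id, ts_from, ts_to):
        # Analogue of a Query with a KeyConditionExpression like
        #   location_id = :loc AND ts BETWEEN :from AND :to
        # DynamoDB serves this from a single partition, with items
        # already sorted by the sort key.
        items = self._partitions[location_id]
        return sorted((ts, r) for ts, r in items.items()
                      if ts_from <= ts <= ts_to)
```

The same query against RDS would need an index scan across all customers' rows, and S3 + Athena would add seconds of query latency, which is why DynamoDB is the low-latency, scale-friendly fit here.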

QUESTION 39

A developer wants to insert a record into an Amazon DynamoDB table as soon as a new file is added to an Amazon S3 bucket.
Which set of steps would be necessary to achieve this?

  1. A. Create an event with Amazon EventBridge that will monitor the S3 bucket and then insert the records into DynamoDB.
  2. B. Configure an S3 event to invoke an AWS Lambda function that inserts records into DynamoDB.
  3. C. Create an AWS Lambda function that will poll the S3 bucket and then insert the records into DynamoDB.
  4. D. Create a cron job that will run at a scheduled time and insert the records into DynamoDB.

Correct Answer: B
Amazon S3 is a service that provides highly scalable, durable, and secure object storage. Amazon DynamoDB is a fully managed NoSQL database service that
provides fast and consistent performance with seamless scalability. AWS Lambda is a service that lets developers run code without provisioning or managing servers. The developer can configure an S3 event to invoke a Lambda function that inserts records into DynamoDB whenever a new file is added to the S3 bucket. This solution will meet the requirement of inserting a record into DynamoDB as soon as a new file is added to S3. References:
✑ [Amazon Simple Storage Service (S3)]
✑ [Amazon DynamoDB]
✑ [What Is AWS Lambda? - AWS Lambda]
✑ [Using AWS Lambda with Amazon S3 - AWS Lambda]
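The option B wiring can be sketched as a Lambda handler for an S3 `ObjectCreated` event. The event-to-item mapping is split into its own function so it can be exercised without AWS; the attribute names (`pk`, `object_key`, `size`) and the table name in the comment are illustrative assumptions, not a required schema.

```python
import urllib.parse

def build_items(s3_event):
    """Translate an S3 ObjectCreated event notification into
    DynamoDB items. Kept separate from the handler so the mapping
    is testable without AWS."""
    items = []
    for record in s3_event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        items.append({
            "pk": f"{bucket}/{key}",          # illustrative key schema
            "object_key": key,
            "size": record["s3"]["object"].get("size", 0),
        })
    return items

def handler(event, context):
    # In the deployed function, a module-level boto3 Table resource
    # would persist each item, e.g.:
    #   table = boto3.resource("dynamodb").Table("uploads")
    #   for item in items: table.put_item(Item=item)
    return build_items(event)
```

With the S3 bucket's event notification pointed at this function, each upload triggers an invocation immediately, with no polling or scheduled job.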

QUESTION 40

A developer is writing an AWS Lambda function. The developer wants to log key events that occur while the Lambda function runs. The developer wants to include a unique identifier to associate the events with a specific function invocation. The developer adds the following code to the Lambda function:
(code exhibit omitted)
Which solution will meet this requirement?

  1. A. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to standard output.
  2. B. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to a file.
  3. C. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to standard output.
  4. D. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to a file.

Correct Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/nodejs-context.html https://docs.aws.amazon.com/lambda/latest/dg/nodejs-logging.html
There is no explicit information about the runtime; the code in the exhibit is written in Node.js.
AWS Lambda is a service that lets developers run code without provisioning or managing servers. The developer can use the AWS request ID field in the context object to obtain a unique identifier for each function invocation. The developer can configure the application to write logs to standard output, which will be captured by Amazon CloudWatch Logs. This solution will meet the requirement of logging key events with a unique identifier.
References:
✑ [What Is AWS Lambda? - AWS Lambda]
✑ [AWS Lambda Function Handler in Node.js - AWS Lambda]
✑ [Using Amazon CloudWatch - AWS Lambda]
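Answer A can be sketched as follows. The exhibit in the question is Node.js (where the field is `context.awsRequestId`); this sketch uses Python, where the equivalent is `context.aws_request_id`, since both runtimes expose the invocation's request ID on the context object and ship standard output to CloudWatch Logs.

```python
import logging
import sys

# The Lambda runtime captures anything written to stdout/stderr and
# ships it to CloudWatch Logs, so a plain stream handler suffices.
logger = logging.getLogger()
logger.setLevel(logging.INFO)
if not logger.handlers:  # the Lambda runtime usually pre-installs one
    logger.addHandler(logging.StreamHandler(sys.stdout))

def handler(event, context):
    # context.aws_request_id uniquely identifies this invocation
    # (context.awsRequestId in the Node.js runtime).
    request_id = context.aws_request_id
    logger.info("[%s] function started", request_id)
    # ... key events logged with the same request_id ...
    logger.info("[%s] function finished", request_id)
    return {"requestId": request_id}
```

Tagging every log line with the request ID lets the developer filter a CloudWatch Logs stream down to a single invocation.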

