AWS Flashcards
1. Which of the following describes a physical location around the world where AWS clusters data centers?
A. Endpoint
B. Collection
C. Fleet
D. Region
1. D. A region is a named set of AWS resources in the same geographical area. A region comprises at least two Availability Zones. Endpoint, Collection, and Fleet do not describe a physical location around the world where AWS clusters data centers.
2. Each AWS region is composed of two or more locations that offer organizations the ability to operate production systems that are more highly available, fault tolerant, and scalable than would be possible using a single data center. What are these locations called?
A. Availability Zones
B. Replication areas
C. Geographic districts
D. Compute centers
2. A. An Availability Zone is a distinct location within a region that is insulated from failures in other Availability Zones and provides inexpensive, low-latency network connectivity to other Availability Zones in the same region. Replication areas, geographic districts, and compute centers are not terms used to describe AWS data center locations.
3. What is the deployment term for an environment that extends an existing on-premises infrastructure into the cloud to connect cloud resources to internal systems?
A. All-in deployment
B. Hybrid deployment
C. On-premises deployment
D. Scatter deployment
3. B. A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. An all-in deployment refers to an environment that exclusively runs in the cloud. An on-premises deployment refers to an environment that runs exclusively in an organization's data center.
4. Which AWS Cloud service allows organizations to gain system-wide visibility into resource utilization, application performance, and operational health?
A. AWS Identity and Access Management (IAM)
B. Amazon Simple Notification Service (Amazon SNS)
C. Amazon CloudWatch
D. AWS CloudFormation
4. C. Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications organizations run on AWS. It allows organizations to collect and track metrics, collect and monitor log files, and set alarms. AWS IAM, Amazon SNS, and AWS CloudFormation do not provide visibility into resource utilization, application performance, and the operational health of your AWS resources.
5. Which of the following AWS Cloud services is a fully managed NoSQL database service?
A. Amazon Simple Queue Service (Amazon SQS)
B. Amazon DynamoDB
C. Amazon ElastiCache
D. Amazon Relational Database Service (Amazon RDS)
5. B. Amazon DynamoDB is a fully managed, fast, and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. Amazon SQS, Amazon ElastiCache, and Amazon RDS do not provide a NoSQL database service. Amazon SQS is a managed message queuing service. Amazon ElastiCache is a service that provides in-memory cache in the cloud. Finally, Amazon RDS provides managed relational databases.
6. Your company experiences fluctuations in traffic patterns to their e-commerce website based on flash sales. What service can help your company dynamically match the required compute capacity to the spike in traffic during flash sales?
A. Auto Scaling
B. Amazon Glacier
C. Amazon Simple Notification Service (Amazon SNS)
D. Amazon Virtual Private Cloud (Amazon VPC)
6. A. Auto Scaling helps maintain application availability and allows organizations to scale Amazon Elastic Compute Cloud (Amazon EC2) capacity up or down automatically according to conditions defined for the particular workload. Not only can it be used to help ensure that the desired number of Amazon EC2 instances are running, but it also allows resources to scale in and out to match the demands of dynamic workloads. Amazon Glacier, Amazon SNS, and Amazon VPC do not provide services to scale compute capacity automatically.
7. Your company provides an online photo sharing service. The development team is looking for ways to deliver image files with the lowest latency to end users so the website content is delivered with the best possible performance. What service can help speed up distribution of these image files to end users around the world?
A. Amazon Elastic Compute Cloud (Amazon EC2)
B. Amazon Route 53
C. AWS Storage Gateway
D. Amazon CloudFront
7. D. Amazon CloudFront is a web service that provides a CDN to speed up distribution of your static and dynamic web content (for example, .html, .css, .php, image, and media files) to end users. Amazon CloudFront delivers content through a worldwide network of edge locations. Amazon EC2, Amazon Route 53, and AWS Storage Gateway do not provide CDN services that are required to meet the needs for the photo sharing service.
8. Your company runs an Amazon Elastic Compute Cloud (Amazon EC2) instance periodically to perform a batch processing job on a large and growing filesystem. At the end of the batch job, you shut down the Amazon EC2 instance to save money but need to persist the filesystem on the Amazon EC2 instance from the previous batch runs. What AWS Cloud service can you leverage to meet these requirements?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon DynamoDB
C. Amazon Glacier
D. AWS CloudFormation
8. A. Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances on the AWS Cloud. Amazon DynamoDB, Amazon Glacier, and AWS CloudFormation do not provide persistent block-level storage for Amazon EC2 instances. Amazon DynamoDB provides managed NoSQL databases. Amazon Glacier provides low-cost archival storage. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources.
9. What AWS Cloud service provides a logically isolated section of the AWS Cloud where organizations can launch AWS resources in a virtual network that they define?
A. Amazon Simple Workflow Service (Amazon SWF)
B. Amazon Route 53
C. Amazon Virtual Private Cloud (Amazon VPC)
D. AWS CloudFormation
9. C. Amazon VPC lets organizations provision a logically isolated section of the AWS Cloud where they can launch AWS resources in a virtual network that they define. Amazon SWF, Amazon Route 53, and AWS CloudFormation do not provide a virtual network. Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. Amazon Route 53 provides a highly available and scalable cloud Domain Name System (DNS) web service. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources.
10. Your company provides a mobile voting application for a popular TV show, and 5 to 25 million viewers all vote in a 15-second timespan. What mechanism can you use to decouple the voting application from your back-end services that tally the votes?
A. AWS CloudTrail
B. Amazon Simple Queue Service (Amazon SQS)
C. Amazon Redshift
D. Amazon Simple Notification Service (Amazon SNS)
10. B. Amazon SQS is a fast, reliable, scalable, fully managed message queuing service that allows organizations to decouple the components of a cloud application. With Amazon SQS, organizations can transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. AWS CloudTrail records AWS API calls, and Amazon Redshift is a data warehouse, neither of which would be useful as an architecture component for decoupling components. Amazon SNS provides a messaging bus complement to Amazon SQS; however, it doesn't provide the decoupling of components necessary for this scenario.
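The decoupling that Amazon SQS provides can be illustrated with a minimal sketch using Python's standard-library queue (no AWS API calls; the vote payloads and function names are invented for this example). The frontend enqueues votes as fast as they arrive, and the tally backend drains the queue at its own pace:

```python
import queue
import threading

# Stand-in for an SQS queue: producers and consumers never talk to each
# other directly, only to the queue -- that is the decoupling.
vote_queue = queue.Queue()

def voting_frontend(votes):
    """Accept a burst of votes and enqueue them immediately."""
    for vote in votes:
        vote_queue.put(vote)

def tally_backend(tally):
    """Drain the queue and count votes, independent of arrival rate."""
    while True:
        try:
            vote = vote_queue.get(timeout=0.1)
        except queue.Empty:
            break  # queue drained
        tally[vote] = tally.get(vote, 0) + 1
        vote_queue.task_done()

tally = {}
voting_frontend(["A", "B", "A", "A", "B"])  # burst of 5 votes
worker = threading.Thread(target=tally_backend, args=(tally,))
worker.start()
worker.join()
print(tally)  # {'A': 3, 'B': 2}
```

With a real queue service, a backlog simply means the consumers catch up later; the producers are never blocked waiting on the tally service.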
1. In what ways does Amazon Simple Storage Service (Amazon S3) object storage differ from block and file storage? (Choose 2 answers)
A. Amazon S3 stores data in fixed size blocks.
B. Objects are identified by a numbered address.
C. Objects can be any size.
D. Objects contain both data and metadata.
E. Objects are stored in buckets.
1. D, E. Objects are stored in buckets, and objects contain both data and metadata.
2. Which of the following are not appropriate use cases for Amazon Simple Storage Service (Amazon S3)? (Choose 2 answers)
A. Storing web content
B. Storing a file system mounted to an Amazon Elastic Compute Cloud (Amazon EC2) instance
C. Storing backups for a relational database
D. Primary storage for a database
E. Storing logs for analytics
2. B, D. Amazon S3 cannot be mounted to an Amazon EC2 instance like a file system and should not serve as primary database storage.
3. What are some of the key characteristics of Amazon Simple Storage Service (Amazon S3)? (Choose 3 answers)
A. All objects have a URL.
B. Amazon S3 can store unlimited amounts of data.
C. Objects are world-readable by default.
D. Amazon S3 uses a REST (Representational State Transfer) Application Program Interface (API).
E. You must pre-allocate the storage in a bucket.
3. A, B, D. C and E are incorrect: objects are private by default, and storage in a bucket does not need to be pre-allocated.
4. Which features can be used to restrict access to Amazon Simple Storage Service (Amazon S3) data? (Choose 3 answers)
A. Enable static website hosting on the bucket.
B. Create a pre-signed URL for an object.
C. Use an Amazon S3 Access Control List (ACL) on a bucket or object.
D. Use a lifecycle policy.
E. Use an Amazon S3 bucket policy.
4. B, C, E. Static website hosting does not restrict data access, and neither does an Amazon S3 lifecycle policy.
5. Your application stores critical data in Amazon Simple Storage Service (Amazon S3), which must be protected against inadvertent or intentional deletion. How can this data be protected? (Choose 2 answers)
A. Use cross-region replication to copy data to another bucket automatically.
B. Set a vault lock.
C. Enable versioning on the bucket.
D. Use a lifecycle policy to migrate data to Amazon Glacier.
E. Enable MFA Delete on the bucket.
5. C, E. Versioning protects data against inadvertent or intentional deletion by storing all versions of the object, and MFA Delete requires a one-time code from a Multi-Factor Authentication (MFA) device to delete objects. Cross-region replication and migration to the Amazon Glacier storage class do not protect against deletion. Vault locks are a feature of Amazon Glacier, not a feature of Amazon S3.
6. Your company stores documents in Amazon Simple Storage Service (Amazon S3), but it wants to minimize cost. Most documents are used actively for only about a month, then much less frequently. However, all data needs to be available within minutes when requested. How can you meet these requirements?
A. Migrate the data to Amazon S3 Reduced Redundancy Storage (RRS) after 30 days.
B. Migrate the data to Amazon Glacier after 30 days.
C. Migrate the data to Amazon S3 Standard - Infrequent Access (IA) after 30 days.
D. Turn on versioning, then migrate the older version to Amazon Glacier.
6. C. Migrating the data to Amazon S3 Standard-IA after 30 days using a lifecycle policy is correct. Amazon S3 RRS should only be used for easily replicated data, not critical data. Migration to Amazon Glacier might minimize storage costs if retrievals are infrequent, but documents would not be available in minutes when needed.
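The 30-day transition described in this answer is expressed as an S3 lifecycle configuration. A minimal sketch of that configuration follows, built as the JSON document shape accepted by the S3 PutBucketLifecycleConfiguration API; the rule ID is invented, and applying it to a real bucket would require an API call (for example, boto3's put_bucket_lifecycle_configuration):

```python
# Lifecycle configuration that transitions every object to
# Standard-IA 30 days after creation. The rule ID is a hypothetical
# name chosen for this example.
lifecycle_config = {
    "Rules": [
        {
            "ID": "to-ia-after-active-month",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}
```

Because Standard-IA serves objects with the same millisecond-latency GET as Standard, the "available within minutes" requirement is met, unlike a Glacier transition.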
7. How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability?
A. Data is automatically replicated to other regions.
B. Data is automatically replicated within a region.
C. Data is replicated only if versioning is enabled on the bucket.
D. Data is automatically backed up on tape and restored if needed.
7. B. Data is automatically replicated within a region. Replication to other regions and versioning are optional. Amazon S3 data is not backed up to tape.
8. Based on the following Amazon Simple Storage Service (Amazon S3) URL, which one of the following statements is correct?
https://bucket1.abc.com.s3.amazonaws.com/folderx/myfile.doc
A. The object "myfile.doc" is stored in the folder "folderx" in the bucket "bucket1.abc.com."
B. The object "myfile.doc" is stored in the bucket "bucket1.abc.com."
C. The object "folderx/myfile.doc" is stored in the bucket "bucket1.abc.com."
D. The object "myfile.doc" is stored in the bucket "bucket1."
8. C. In a virtual-hosted-style URL, the bucket name is everything in the hostname before ".s3.amazonaws.com," and the object key is the entire path after the hostname. There is no true folder structure in Amazon S3; "folderx/" is simply part of the key name.
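The split between bucket and key can be sketched with a few lines of standard-library Python (the helper function name is ours; this handles only the virtual-hosted-style form shown in the question):

```python
from urllib.parse import urlparse

def parse_virtual_hosted_s3_url(url):
    """Split a virtual-hosted-style S3 URL into (bucket, key).

    The bucket is the hostname portion before '.s3.amazonaws.com';
    the key is the whole path, slashes included -- S3 has no real
    folders, so 'folderx/' is just part of the key.
    """
    parsed = urlparse(url)
    bucket = parsed.netloc.rsplit(".s3.amazonaws.com", 1)[0]
    key = parsed.path.lstrip("/")
    return bucket, key

bucket, key = parse_virtual_hosted_s3_url(
    "https://bucket1.abc.com.s3.amazonaws.com/folderx/myfile.doc"
)
print(bucket)  # bucket1.abc.com
print(key)     # folderx/myfile.doc
```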
9. To have a record of who accessed your Amazon Simple Storage Service (Amazon S3) data and from where, you should do what?
A. Enable versioning on the bucket.
B. Enable website hosting on the bucket.
C. Enable server access logs on the bucket.
D. Create an AWS Identity and Access Management (IAM) bucket policy.
E. Enable Amazon CloudWatch logs.
9. C. Amazon S3 server access logs record each request made against the objects in your bucket, including the requester and the requesting IP address.
10. What are some reasons to enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers)
A. You want a backup of your data in case of accidental deletion.
B. You have a set of users or customers who can access the second bucket with lower latency.
C. For compliance reasons, you need to store data in a location at least 300 miles away from the first region.
D. Your data needs at least five nines of durability.
10. B, C. Cross-region replication can help lower latency and satisfy compliance requirements on distance. Amazon S3 is designed for eleven nines of durability for objects in a single region, so a second region does not significantly increase durability. Cross-region replication does not protect against accidental deletion.
11. Your company requires that all data sent to external storage be encrypted before being sent. Which Amazon Simple Storage Service (Amazon S3) encryption solution will meet this requirement?
A. Server-Side Encryption (SSE) with AWS-managed keys (SSE-S3)
B. SSE with customer-provided keys (SSE-C)
C. Client-side encryption with customer-managed keys
D. Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
11. C. If data must be encrypted before being sent to Amazon S3, client-side encryption must be used.
12. You have a popular web application that accesses data stored in an Amazon Simple Storage Service (Amazon S3) bucket. You expect the access to be very read-intensive, with expected request rates of up to 500 GETs per second from many clients. How can you increase the performance and scalability of Amazon S3 in this case?
A. Turn on cross-region replication to ensure that data is served from multiple locations.
B. Ensure randomness in the namespace by including a hash prefix to key names.
C. Turn on server access logging.
D. Ensure that key names are sequential to enable pre-fetch.
12. B. Amazon S3 scales automatically, but for request rates over 100 GETs per second, it helps to ensure there is some randomness in the key space. Replication and logging will not affect performance or scalability. Using sequential key names could have a negative effect on performance or scalability.
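One way to introduce that key-space randomness is to prepend a short, deterministic hash of the original key. The sketch below is illustrative only; the helper name and the 4-character prefix length are our choices, not an AWS requirement:

```python
import hashlib

def hashed_key(original_key, prefix_len=4):
    """Prepend a short hash prefix to spread keys across partitions.

    The prefix is derived from the key itself, so the mapping is
    deterministic: the same original key always yields the same
    prefixed key, and no separate lookup table is needed.
    """
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

# Sequential date-based keys become scattered across the key space:
print(hashed_key("2015-06-01/photo-0001.jpg"))
print(hashed_key("2015-06-01/photo-0002.jpg"))
```

Note that this guidance reflects S3 behavior at the time these questions were written; S3 has since raised per-prefix request limits considerably.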
13. What is needed before you can enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers)
A. Enable versioning on the bucket.
B. Enable a lifecycle rule to migrate data to the second region.
C. Enable static website hosting.
D. Create an AWS Identity and Access Management (IAM) policy to allow Amazon S3 to replicate objects on your behalf.
13. A, D. You must enable versioning before you can enable cross-region replication, and Amazon S3 must have IAM permissions to perform the replication. Lifecycle rules migrate data from one storage class to another, not from one bucket to another. Static website hosting is not a prerequisite for replication.
14. Your company has 100TB of financial records that need to be stored for seven years by law. Experience has shown that any record more than one year old is unlikely to be accessed. Which of the following storage plans meets these needs in the most cost-efficient manner?
A. Store the data on Amazon Elastic Block Store (Amazon EBS) volumes attached to t2.micro instances.
B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year and delete the object after seven years.
C. Store the data in Amazon DynamoDB and run a daily script to delete data older than seven years.
D. Store the data in Amazon Elastic MapReduce (Amazon EMR).
14. B. Amazon S3 is the most cost-effective storage on AWS, and lifecycle policies are a simple and effective feature to address the business requirements.
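The retention plan from option B can be written as a single lifecycle rule. The sketch below uses the document shape of the S3 lifecycle API; the rule ID and key prefix are hypothetical, and the day counts (365 and 2555) are our approximations of "one year" and "seven years":

```python
# Lifecycle rule: transition financial records to Glacier after one
# year, expire (delete) them after seven. Prefix and ID are invented
# for illustration.
records_lifecycle = {
    "Rules": [
        {
            "ID": "financial-records-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "records/"},
            "Transitions": [
                {"Days": 365, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 2555},  # ~7 years
        }
    ]
}
```

A single rule can carry both the transition and the expiration, so no scripts or scheduled jobs are needed.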
15. Amazon Simple Storage Service (S3) bucket policies can restrict access to an Amazon S3 bucket and objects by which of the following? (Choose 3 answers)
A. Company name
B. IP address range
C. AWS account
D. Country of origin
E. Objects with a specific prefix
15. B, C, E. Amazon S3 bucket policies cannot specify a company name or a country of origin, but they can specify a request IP range, an AWS account, and a prefix for objects that can be accessed.
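A bucket policy combining the three permitted restrictions can be sketched as a policy document. Everything specific here (the account ID 123456789012, bucket name, prefix, and CIDR block) is invented for illustration; the structure follows the standard IAM JSON policy grammar:

```python
import json

# Policy allowing GETs on keys under "reports/" only to a specific AWS
# account and only from a specific IP range. All identifiers are
# hypothetical examples.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReportsFromOffice",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
            "Action": "s3:GetObject",
            # Key prefix restriction: only objects under reports/
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
            # IP range restriction via a policy condition
            "Condition": {
                "IpAddress": {"aws:SourceIp": "72.14.0.0/16"}
            },
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```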
16. Amazon Simple Storage Service (Amazon S3) is an eventually consistent storage system. For what kinds of operations is it possible to get stale data as a result of eventual consistency? (Choose 2 answers)
A. GET after PUT of a new object
B. GET or LIST after a DELETE
C. GET after overwrite PUT (PUT to an existing key)
D. DELETE after PUT of new object
16. B, C. Amazon S3 provides read-after-write consistency for PUTs to new objects (new key), but eventual consistency for GETs and DELETEs of existing objects (existing key).
17. What must be done to host a static website in an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 3 answers)
A. Configure the bucket for static hosting and specify an index and error document.
B. Create a bucket with the same name as the website.
C. Enable File Transfer Protocol (FTP) on the bucket.
D. Make the objects in the bucket world-readable.
E. Enable HTTP on the bucket.
17. A, B, D. A, B, and D are required, and normally you also set a friendly CNAME to the bucket URL. Amazon S3 does not support FTP transfers, and HTTP does not need to be enabled.
18. You have valuable media files hosted on AWS and want them to be served only to authenticated users of your web application. You are concerned that your content could be stolen and distributed for free. How can you protect your content?
A. Use static web hosting.
B. Generate pre-signed URLs for content in the web application.
C. Use AWS Identity and Access Management (IAM) policies to restrict access.
D. Use logging to track your content.
18. B. Pre-signed URLs allow you to grant time-limited permission to download objects from an Amazon Simple Storage Service (Amazon S3) bucket. Static web hosting generally requires world-read access to all content. AWS IAM policies do not know who the authenticated users of the web app are. Logging can help track content loss, but not prevent it.
19. Amazon Glacier is well-suited to data that is which of the following? (Choose 2 answers)
A. Is infrequently or rarely accessed
B. Must be immediately available when needed
C. Is available after a three- to five-hour restore period
D. Is frequently erased within 30 days
19. A, C. Amazon Glacier is optimized for long-term archival storage and is not suited to data that needs immediate access or short-lived data that is erased within 90 days.
20. Which statements about Amazon Glacier are true? (Choose 3 answers)
A. Amazon Glacier stores data in objects that live in archives.
B. Amazon Glacier archives are identified by user-specified key names.
C. Amazon Glacier archives take three to five hours to restore.
D. Amazon Glacier vaults can be locked.
E. Amazon Glacier can be used as a standalone service and as an Amazon S3 storage class.
20. C, D, E. Amazon Glacier stores data in archives, which are contained in vaults. Archives are identified by system-created archive IDs, not key names.
1. Your web application needs four instances to support steady traffic nearly all of the time. On the last day of each month, the traffic triples. What is a cost-effective way to handle this traffic pattern?
A. Run 12 Reserved Instances all of the time.
B. Run four On-Demand Instances constantly, then add eight more On-Demand Instances on the last day of each month.
C. Run four Reserved Instances constantly, then add eight On-Demand Instances on the last day of each month.
D. Run four On-Demand Instances constantly, then add eight Reserved Instances on the last day of each month.
1. C. Reserved Instances provide cost savings when you can commit to running instances full time, such as to handle the base traffic. On-Demand Instances provide the flexibility to handle traffic spikes, such as on the last day of the month.
2. Your order-processing application processes orders extracted from a queue with two Reserved Instances processing 10 orders/minute. If an order fails during processing, then it is returned to the queue without penalty. Due to a weekend sale, the queues have several hundred orders backed up. While the backup is not catastrophic, you would like to drain it so that customers get their confirmation emails faster. What is a cost-effective way to drain the queue for orders?
A. Create more queues.
B. Deploy additional Spot Instances to assist in processing the orders.
C. Deploy additional Reserved Instances to assist in processing the orders.
D. Deploy additional On-Demand Instances to assist in processing the orders.
2. B. Spot Instances are a very cost-effective way to address temporary compute needs that are not urgent and are tolerant of interruption. That's exactly the workload described here. Reserved Instances are inappropriate for temporary workloads. On-Demand Instances are good for temporary workloads, but don't offer the cost savings of Spot Instances. Adding more queues is a non-responsive answer as it would not address the problem.
3. Which of the following must be specified when launching a new Amazon Elastic Compute Cloud (Amazon EC2) Windows instance? (Choose 2 answers)
A. The Amazon EC2 instance ID
B. Password for the administrator account
C. Amazon EC2 instance type
D. Amazon Machine Image (AMI)
3. C, D. The Amazon EC2 instance ID will be assigned by AWS as part of the launch process. The administrator password is assigned by AWS and encrypted via the public key. The instance type defines the virtual hardware and the AMI defines the initial software state. You must specify both upon launch.
4. You have purchased an m3.xlarge Linux Reserved Instance in us-east-1a. In which ways can you modify this reservation? (Choose 2 answers)
A. Change it into two m3.large instances.
B. Change it to a Windows instance.
C. Move it to us-east-1b.
D. Change it to an m4.xlarge.
4. A, C. You can change the instance type only within the same instance type family, or you can change the Availability Zone. You cannot change the operating system or the instance type family.
5. Your instance is associated with two security groups. The first allows Remote Desktop Protocol (RDP) access over port 3389 from Classless Inter-Domain Routing (CIDR) block 72.14.0.0/16. The second allows HTTP access over port 80 from CIDR block 0.0.0.0/0. What traffic can reach your instance?
A. RDP and HTTP access from CIDR block 0.0.0.0/0
B. No traffic is allowed.
C. RDP and HTTP traffic from 72.14.0.0/16
D. RDP traffic over port 3389 from 72.14.0.0/16 and HTTP traffic over port 80 from 0.0.0.0/0
5. D. When there are multiple security groups associated with an instance, all the rules are aggregated.
6. Which of the following are features of enhanced networking? (Choose 3 answers)
A. More Packets Per Second (PPS)
B. Lower latency
C. Multiple network interfaces
D. Border Gateway Protocol (BGP) routing
E. Less jitter
6. A, B, E. These are the benefits of enhanced networking.
7. You are creating a High-Performance Computing (HPC) cluster and need very low latency and high bandwidth between instances. What combination of the following will allow this? (Choose 3 answers)
A. Use an instance type with 10 Gbps network performance.
B. Put the instances in a placement group.
C. Use Dedicated Instances.
D. Enable enhanced networking on the instances.
E. Use Reserved Instances.
7. A, B, D. The other answers have nothing to do with networking.
8. Which Amazon Elastic Compute Cloud (Amazon EC2) feature ensures that your instances will not share a physical host with instances from any other AWS customer?
A. Amazon Virtual Private Cloud (VPC)
B. Placement groups
C. Dedicated Instances
D. Reserved Instances
8. C. Dedicated Instances will not share hosts with other accounts.
9. Which of the following are true of instance stores? (Choose 2 answers)
A. Automatic backups
B. Data is lost when the instance stops.
C. Very high IOPS
D. Charge is based on the total amount of storage provisioned.
9. B, C. Instance stores are low-durability, high-IOPS storage that is included for free with the hourly cost of an instance.
10. Which of the following are features of Amazon Elastic Block Store (Amazon EBS)? (Choose 2 answers)
A. Data stored on Amazon EBS is automatically replicated within an Availability Zone.
B. Amazon EBS data is automatically backed up to tape.
C. Amazon EBS volumes can be encrypted transparently to workloads on the attached instance.
D. Data on an Amazon EBS volume is lost when the attached instance is stopped.
10. A, C. There are no tapes in the AWS infrastructure. Amazon EBS volumes persist when the instance is stopped. The data is automatically replicated within an Availability Zone. Amazon EBS volumes can be encrypted upon creation and used by an instance in the same manner as if they were not encrypted.
11. You need to take a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume. How long will the volume be unavailable?
A. It depends on the provisioned size of the volume.
B. The volume will be available immediately.
C. It depends on the amount of data stored on the volume.
D. It depends on whether the attached instance is an Amazon EBS-optimized instance.
11. B. There is no delay in processing when commencing a snapshot.
12. You are restoring an Amazon Elastic Block Store (Amazon EBS) volume from a snapshot. How long will it be before the data is available?
A. It depends on the provisioned size of the volume.
B. The data will be available immediately.
C. It depends on the amount of data stored on the volume.
D. It depends on whether the attached instance is an Amazon EBS-optimized instance.
12. B. The volume is created immediately but the data is loaded lazily. This means that the volume can be accessed upon creation, and if the data being requested has not yet been restored, it will be restored upon first request.
13. You have a workload that requires 15,000 consistent IOPS for data that must be durable. What combination of the following steps do you need? (Choose 2 answers)
A. Use an Amazon Elastic Block Store (Amazon EBS)-optimized instance.
B. Use an instance store.
C. Use a Provisioned IOPS SSD volume.
D. Use a magnetic volume.
13. A, C. B and D are incorrect because an instance store will not be durable and a magnetic volume offers an average of 100 IOPS. Amazon EBS-optimized instances reserve network bandwidth on the instance for IO, and Provisioned IOPS SSD volumes provide the highest consistent IOPS.
14. Which of the following can be accomplished through bootstrapping?
A. Install the most current security updates.
B. Install the current version of the application.
C. Configure Operating System (OS) services.
D. All of the above.
14. D. Bootstrapping runs the provided script, so anything you can accomplish in a script you can accomplish during bootstrapping.
15. How can you connect to a new Linux instance using SSH?
A. Decrypt the root password.
B. Using a certificate
C. Using the private half of the instance's key pair
D. Using Multi-Factor Authentication (MFA)
15. C. The public half of the key pair is stored on the instance, and the private half can then be used to connect via SSH.
16. VM Import/Export can import existing virtual machines as: (Choose 2 answers)
A. Amazon Elastic Block Store (Amazon EBS) volumes
B. Amazon Elastic Compute Cloud (Amazon EC2) instances
C. Amazon Machine Images (AMIs)
D. Security groups
16. B, C. These are the possible outputs of VM Import/Export.
17. Which of the following can be used to address an Amazon Elastic Compute Cloud (Amazon EC2) instance over the web? (Choose 2 answers)
A. Windows machine name
B. Public DNS name
C. Amazon EC2 instance ID
D. Elastic IP address
17. B, D. Neither the Windows machine name nor the Amazon EC2 instance ID can be resolved into an IP address to access the instance.
18. Using the correctly decrypted Administrator password and RDP, you cannot log in to a Windows instance you just launched. Which of the following is a possible reason?
A. There is no security group rule that allows RDP access over port 3389 from your IP address.
B. The instance is a Reserved Instance.
C. The instance is not using enhanced networking.
D. The instance is not an Amazon EBS-optimized instance.
18. A. None of the other options will have any effect on the ability to connect.
19. You have a workload that requires 1 TB of durable block storage at 1,500 IOPS during normal use. Every night there is an Extract, Transform, Load (ETL) task that requires 3,000 IOPS for 15 minutes. What is the most appropriate volume type for this workload?
A. Use a Provisioned IOPS SSD volume at 3,000 IOPS.
B. Use an instance store.
C. Use a general-purpose SSD volume.
D. Use a magnetic volume.
19. C. A short period of heavy traffic is exactly the use case for the bursting nature of general-purpose SSD volumes; the rest of the day is more than enough time to build up enough IOPS credits to handle the nightly task. Instance stores are not durable, magnetic volumes cannot provide enough IOPS, and to set up a Provisioned IOPS SSD volume to handle the peak would mean spending money for more IOPS than you need.
20. How are you billed for elastic IP addresses?
A. Hourly when they are associated with an instance
B. Hourly when they are not associated with an instance
C. Based on the data that flows through them
D. Based on the instance type to which they are attached
20. B. There is a very small hourly charge for allocated elastic IP addresses that are not associated with an instance.
1. What is the minimum size subnet that you can have in an Amazon VPC?
A. /24
B. /26
C. /28
D. /30
1. C. The minimum size subnet that you can have in an Amazon VPC is /28.
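The arithmetic behind this answer can be checked with the standard-library ipaddress module. A /28 holds 16 addresses; AWS reserves the first four addresses and the last address of every subnet, leaving 11 for instances (the example network 10.0.0.0/28 is arbitrary):

```python
import ipaddress

# A /28 network contains 2**(32-28) = 16 addresses.
subnet = ipaddress.ip_network("10.0.0.0/28")
total = subnet.num_addresses

# AWS reserves 5 addresses per subnet: the network address, the VPC
# router, the DNS server, one spare, and the broadcast address.
usable = total - 5
print(total, usable)  # 16 11
```

This is also why AWS disallows anything smaller than /28: a /29 would leave only 3 usable addresses.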
2. You are a solutions architect working for a large travel company that is migrating its existing server estate to AWS. You have recommended that they use a custom Amazon VPC, and they have agreed to proceed. They will need a public subnet for their web servers and a private subnet in which to place their databases. They also require that the web servers and database servers be highly available and that there be a minimum of two web servers and two database servers each. How many subnets should you have to maintain high availability?
A. 2
B. 3
C. 4
D. 1
2. C. You need two public subnets (one for each Availability Zone) and two private subnets (one for each Availability Zone). Therefore, you need four subnets.
3. Which of the following is an optional security control that can be applied at the subnet layer of a VPC?
A. Network ACL
B. Security Group
C. Firewall
D. Web application firewall
3. A. Network ACLs are associated with a VPC subnet to control traffic flow.
4. What is the maximum size IP address range that you can have in an Amazon VPC?
A. /16
B. /24
C. /28
D. /30
4. A. The maximum size IP address range (CIDR block) that you can have in an Amazon VPC is /16.
5. You create a new subnet and then add a route to your route table that routes traffic out from that subnet to the Internet using an IGW. What type of subnet have you created?
A. An internal subnet
B. A private subnet
C. An external subnet
D. A public subnet
5. D. By creating a route out to the Internet using an IGW, you have made this subnet public.
6. What happens when you create a new Amazon VPC?A. A main route table is created by default.B. Three subnets are created by default, one for each Availability Zone.C. Three subnets are created by default in one Availability Zone.D. An IGW is created by default.
6. A. When you create an Amazon VPC, a route table is created by default. You must manually create subnets and an IGW.
7. You create a new VPC in US-East-1 and provision three subnets inside this Amazon VPC. Which of the following statements is true?A. By default, these subnets will not be able to communicate with each other; you will need to create routes.B. All subnets are public by default.C. All subnets will be able to communicate with each other by default.D. Each subnet will have identical CIDR blocks.
7. C. When you provision an Amazon VPC, all subnets can communicate with each other by default.
8. How many IGWs can you attach to an Amazon VPC at any one time?A. 1B. 2C. 3D. 4
8. A. You may only have one IGW for each Amazon VPC.
9. What aspect of an Amazon VPC is stateful?A. Network ACLsB. Security groupsC. Amazon DynamoDBD. Amazon S3
9. B. Security groups are stateful, whereas network ACLs are stateless.
11. Which of the following will occur when an Amazon Elastic Block Store (Amazon EBS)-backed Amazon EC2 instance in an Amazon VPC with an associated EIP is stopped and started? (Choose 2 answers)A. The EIP will be dissociated from the instance.B. All data on instance-store devices will be lost.C. All data on Amazon EBS devices will be lost.D. The ENI is detached.E. The underlying host for the instance is changed.
11. B, E. In the EC2-Classic network, the EIP will be disassociated from the instance; in the EC2-VPC network, the EIP remains associated with the instance. Regardless of the underlying network, a stop/start of an Amazon EBS-backed Amazon EC2 instance always changes the host computer.
12. How many VPC Peering connections are required for four VPCs located within the same AWS region to be able to send traffic to each of the others?A. 3B. 4C. 5D. 6
12. D. Six VPC Peering connections are needed for a full mesh of four VPCs. Peering is not transitive, so each unordered pair of VPCs needs its own connection: n(n-1)/2 = 6 for n = 4.
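The full-mesh count can be verified with a quick combinatorial sketch; the VPC names below are placeholders.

```python
from itertools import combinations

vpcs = ["vpc-a", "vpc-b", "vpc-c", "vpc-d"]  # hypothetical VPC names

# VPC peering is not transitive, so a full mesh needs one connection
# per unordered pair of VPCs: n * (n - 1) / 2.
peerings = list(combinations(vpcs, 2))
print(len(peerings))  # 6
```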
13. Which of the following AWS resources would you use in order for an EC2-VPC instance to resolve DNS names outside of AWS?A. A VPC peering connectionB. A DHCP option setC. A routing ruleD. An IGW
13. B. A DHCP option set allows customers to define DNS servers for DNS name resolution, establish domain names for instances within an Amazon VPC, define NTP servers, and define the NetBIOS name servers.
14. Which of the following is the Amazon side of an Amazon VPN connection?A. An EIPB. A CGWC. An IGWD. A VPG
14. D. A CGW is the customer side of a VPN connection, and an IGW connects a network to the Internet. A VPG is the Amazon side of a VPN connection.
15. What is the default limit for the number of Amazon VPCs that a customer may have in a region?A. 5B. 6C. 7D. There is no default maximum number of VPCs within a region.
15. A. The default limit for the number of Amazon VPCs that a customer may have in a region is 5.
17. Which of the following is the security protocol supported by Amazon VPC?A. SSHB. Advanced Encryption Standard (AES)C. Point-to-Point Tunneling Protocol (PPTP)D. IPsec
17. D. IPsec is the security protocol supported by Amazon VPC.
18. Which of the following Amazon VPC resources would you use in order for EC2-VPC instances to send traffic directly to Amazon S3?A. Amazon S3 gatewayB. IGWC. CGWD. VPC endpoint
18. D. An Amazon VPC endpoint enables you to create a private connection between your Amazon VPC and another AWS service without requiring access over the Internet or through a NAT device, VPN connection, or AWS Direct Connect.
19. What properties of an Amazon VPC must be specified at the time of creation? (Choose 2 answers)A. The CIDR block representing the IP address rangeB. One or more subnets for the Amazon VPCC. The region for the Amazon VPCD. Amazon VPC Peering relationships
19. A, C. The CIDR block is specified upon creation and cannot be changed. An Amazon VPC is associated with exactly one region, which must be specified upon creation. You can add a subnet to an Amazon VPC any time after it has been created, provided its address range falls within the Amazon VPC CIDR block and does not overlap the address range of any existing subnet. You can set up peering relationships between Amazon VPCs after they have been created.
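The two subnet rules in this answer (the range must fall within the VPC CIDR block and must not overlap an existing subnet) can be sketched with the standard `ipaddress` module; the CIDR blocks below are hypothetical.

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")    # hypothetical VPC block
existing = [ipaddress.ip_network("10.0.0.0/24")]  # subnets already created

def subnet_is_valid(cidr: str) -> bool:
    """A new subnet must sit inside the VPC CIDR and overlap nothing."""
    candidate = ipaddress.ip_network(cidr)
    inside_vpc = candidate.subnet_of(vpc_cidr)
    no_overlap = not any(candidate.overlaps(s) for s in existing)
    return inside_vpc and no_overlap

print(subnet_is_valid("10.0.1.0/24"))    # True: inside the VPC, no overlap
print(subnet_is_valid("10.0.0.128/25"))  # False: overlaps 10.0.0.0/24
print(subnet_is_valid("10.1.0.0/24"))    # False: outside the VPC block
```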
20. Which Amazon VPC feature allows you to create a dual-homed instance? A. EIP addressB. ENIC. Security groupsD. CGW
20. B. Attaching an ENI associated with a different subnet to an instance can make the instance dual-homed.
1. Which of the following are required elements of an Auto Scaling group? (Choose 2 answers)A. Minimum sizeB. Health checksC. Desired capacityD. Launch configuration
1. A, D. An Auto Scaling group must have a minimum size and a launch configuration defined in order to be created. Health checks and a desired capacity are optional.
2. You have created an Elastic Load Balancing load balancer listening on port 80, and you registered it with a single Amazon Elastic Compute Cloud (Amazon EC2) instance also listening on port 80. A client makes a request to the load balancer with the correct protocol and port for the load balancer. In this scenario, how many connections does the balancer maintain?A. 1B. 2C. 3D. 4
2. B. The load balancer maintains two separate connections: one connection with the client and one connection with the Amazon EC2 instance.
3. How long does Amazon CloudWatch keep metric data?A. 1 dayB. 2 daysC. 1 weekD. 2 weeks
3. D. Amazon CloudWatch metric data is kept for 2 weeks.
4. Which of the following are the minimum required elements to create an Auto Scaling launch configuration?A. Launch configuration name, Amazon Machine Image (AMI), and instance typeB. Launch configuration name, AMI, instance type, and key pairC. Launch configuration name, AMI, instance type, key pair, and security groupD. Launch configuration name, AMI, instance type, key pair, security group, and block device mapping
4. A. Only the launch configuration name, AMI, and instance type are needed to create an Auto Scaling launch configuration. Identifying a key pair, security group, and a block device mapping are optional elements for an Auto Scaling launch configuration.
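As a sketch, the minimum parameter set can be expressed as a plain dictionary using the CreateLaunchConfiguration API's parameter names; the name, AMI ID, and instance type values below are placeholders, and the optional parameters are omitted.

```python
# The three required parameters for an Auto Scaling launch configuration.
required = {"LaunchConfigurationName", "ImageId", "InstanceType"}

# Hypothetical launch configuration; key pair, security groups, and
# block device mappings are optional and left out.
params = {
    "LaunchConfigurationName": "web-lc",  # placeholder name
    "ImageId": "ami-12345678",            # placeholder AMI ID
    "InstanceType": "t2.micro",
}

print(required.issubset(params))  # True: all required parameters present
```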
5. You are responsible for the application logging solution for your company's existing applications running on multiple Amazon EC2 instances. Which of the following is the best approach for aggregating the application logs within AWS?A. Amazon CloudWatch custom metricsB. Amazon CloudWatch Logs AgentC. An Elastic Load Balancing listenerD. An internal Elastic Load Balancing load balancer
5. B. You can use the Amazon CloudWatch Logs Agent installer on existing Amazon EC2 instances to install and configure the CloudWatch Logs Agent.
6. Which of the following must be configured on an Elastic Load Balancing load balancer to accept incoming traffic?A. A portB. A network interfaceC. A listenerD. An instance
6. C. You configure your load balancer to accept incoming traffic by specifying one or more listeners.
8. You want to host multiple Hypertext Transfer Protocol Secure (HTTPS) websites on a fleet of Amazon EC2 instances behind an Elastic Load Balancing load balancer with a single X.509 certificate. How must you configure the Secure Sockets Layer (SSL) certificate so that clients connecting to the load balancer are not presented with a warning when they connect? A. Create one SSL certificate with a Subject Alternative Name (SAN) value for each website name.B. Create one SSL certificate with the Server Name Indication (SNI) value checked.C. Create multiple SSL certificates with a SAN value for each website name.D. Create SSL certificates for each Availability Zone with a SAN value for each website name.
8. A. An SSL certificate must specify the name of the website in either the subject name or listed as a value in the SAN extension of the certificate in order for connecting clients to not receive a warning.
9. Your web application front end consists of multiple Amazon Elastic Compute Cloud (Amazon EC2) instances behind an Elastic Load Balancing load balancer. You have configured the load balancer to perform health checks on these Amazon EC2 instances. If an instance fails to pass health checks, which statement will be true?A. The instance is replaced automatically by the load balancer.B. The instance is terminated automatically by the load balancer.C. The load balancer stops sending traffic to the instance that failed its health check.D. The instance is quarantined by the load balancer for root cause analysis.
9. C. When Amazon EC2 instances fail the requisite number of consecutive health checks, the load balancer stops sending traffic to the Amazon EC2 instance.
10. In the basic monitoring package for Amazon Elastic Compute Cloud (Amazon EC2), what Amazon CloudWatch metrics are available?A. Web server visible metrics such as number of failed transaction requestsB. Operating system visible metrics such as memory utilizationC. Database visible metrics such as number of connectionsD. Hypervisor visible metrics such as CPU utilization
10. D. Amazon CloudWatch metrics provide hypervisor visible metrics.
12. For an application running in the ap-northeast-1 region with three Availability Zones (ap-northeast-1a, ap-northeast-1b, and ap-northeast-1c), which instance deployment provides high availability for the application that normally requires nine running Amazon Elastic Compute Cloud (Amazon EC2) instances but can run on a minimum of 65 percent capacity while Auto Scaling launches replacement instances in the remaining Availability Zones?A. Deploy the application on four servers in ap-northeast-1a and five servers in ap-northeast-1b, and keep five stopped instances in ap-northeast-1a as reserve.B. Deploy the application on three servers in ap-northeast-1a, three servers in ap-northeast-1b, and three servers in ap-northeast-1c.C. Deploy the application on six servers in ap-northeast-1b and three servers in ap-northeast-1c.D. Deploy the application on nine servers in ap-northeast-1b, and keep nine stopped instances in ap-northeast-1a as reserve.
12. B. Auto Scaling will provide high availability across three Availability Zones with three Amazon EC2 instances in each and keep capacity above the required minimum capacity, even in the event of an entire Availability Zone becoming unavailable.
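The capacity arithmetic behind this answer is easy to verify: losing any one of the three Availability Zones leaves six of nine instances running, which stays above the 65 percent floor.

```python
instances_per_az = 3
azs = 3
required_min = 0.65  # the application can run on 65 percent of capacity

total = instances_per_az * azs            # 9 instances running normally
after_az_loss = total - instances_per_az  # 6 remain while Auto Scaling recovers

print(after_az_loss / total)                  # 0.666... remaining capacity
print(after_az_loss / total >= required_min)  # True: still above the floor
```

With the six-and-three split in option C, losing the six-instance zone would leave only 3/9 (33 percent), which is why it fails the requirement.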
13. Which of the following are characteristics of the Auto Scaling service on AWS? (Choose 3 answers)A. Sends traffic to healthy instancesB. Responds to changing conditions by adding or terminating Amazon Elastic Compute Cloud (Amazon EC2) instancesC. Collects and tracks metrics and sets alarmsD. Delivers push notificationsE. Launches instances from a specified Amazon Machine Image (AMI)F. Enforces a minimum number of running Amazon EC2 instances
13. B, E, F. Auto Scaling responds to changing conditions by adding or terminating instances, launches instances from an AMI specified in the launch configuration associated with the Auto Scaling group, and enforces a minimum number of instances in the min-size parameter of the Auto Scaling group.
14. Why is the launch configuration referenced by the Auto Scaling group instead of being part of the Auto Scaling group?A. It allows you to change the Amazon Elastic Compute Cloud (Amazon EC2) instance type and Amazon Machine Image (AMI) without disrupting the Auto Scaling group.B. It facilitates rolling out a patch to an existing set of instances managed by an Auto Scaling group.C. It allows you to change security groups associated with the instances launched without having to make changes to the Auto Scaling group.D. All of the aboveE. None of the above
14. D. A, B, and C are all true statements about launch configurations being loosely coupled and referenced by the Auto Scaling group instead of being part of the Auto Scaling group.
15. An Auto Scaling group may use: (Choose 2 answers)A. On-Demand InstancesB. Stopped instancesC. Spot InstancesD. On-premises instancesE. Already running instances if they use the same Amazon Machine Image (AMI) as the Auto Scaling group's launch configuration and are not already part of another Auto Scaling group
15. A, C. An Auto Scaling group may use On-Demand and Spot Instances. An Auto Scaling group may not use already stopped instances, instances running someplace other than AWS, and already running instances not started by the Auto Scaling group itself.
16. Amazon CloudWatch supports which types of monitoring plans? (Choose 2 answers)A. Basic monitoring, which is freeB. Basic monitoring, which has an additional costC. Ad hoc monitoring, which is freeD. Ad hoc monitoring, which has an additional costE. Detailed monitoring, which is freeF. Detailed monitoring, which has an additional cost
16. A, F. Amazon CloudWatch has two plans: basic, which is free, and detailed, which has an additional cost. There is no ad hoc plan for Amazon CloudWatch.
17. Elastic Load Balancing health checks may be: (Choose 3 answers)A. A pingB. A key pair verificationC. A connection attemptD. A page requestE. An Amazon Elastic Compute Cloud (Amazon EC2) instance status check
17. A, C, D. An Elastic Load Balancing health check may be a ping, a connection attempt, or a page request.
18. When an Amazon Elastic Compute Cloud (Amazon EC2) instance registered with an Elastic Load Balancing load balancer using connection draining is deregistered or unhealthy, which of the following will happen? (Choose 2 answers)A. Immediately close all existing connections to that instance.B. Keep the connections open to that instance, and attempt to complete in-flight requests.C. Redirect the requests to a user-defined error page like "Oops this is embarrassing" or "Under Construction."D. Forcibly close all connections to that instance after a timeout period.E. Leave the connections open as long as the load balancer is running.
18. B, D. When connection draining is enabled, the load balancer stops sending requests to a deregistered or unhealthy instance and attempts to complete in-flight requests until the connection draining timeout period is reached, which is 300 seconds by default; any connections still open at that point are forcibly closed.
19. Elastic Load Balancing supports which of the following types of load balancers? (Choose 3 answers)A. Cross-regionB. Internet-facingC. InterimD. ItinerantE. InternalF. Hypertext Transfer Protocol Secure (HTTPS) using Secure Sockets Layer (SSL)
19. B, E, F. Elastic Load Balancing supports Internet-facing, internal, and HTTPS load balancers.
20. Auto Scaling supports which of the following plans for Auto Scaling groups? (Choose 3 answers)A. PredictiveB. ManualC. PreemptiveD. ScheduledE. DynamicF. End-user request drivenG. Optimistic
20. B, D, E. Auto Scaling supports four plans for Auto Scaling groups: maintain current instance levels, manual scaling, scheduled scaling, and dynamic scaling.
1. Which of the following methods will allow an application using an AWS SDK to be authenticated as a principal to access AWS Cloud services? (Choose 2 answers)A. Create an IAM user and store the user name and password for the user in the application's configuration.B. Create an IAM user and store both parts of the access key for the user in the application's configuration.C. Run the application on an Amazon EC2 instance with an assigned IAM role.D. Make all the API calls over an SSL connection.
1. B, C. Programmatic access is authenticated with an access key, not with user names/passwords. IAM roles provide a temporary security token to an application using an SDK.
2. Which of the following are found in an IAM policy? (Choose 2 answers)A. Service NameB. RegionC. ActionD. Password
2. A, C. IAM policies are independent of region, so no region is specified in the policy. IAM policies are about authorization for an already-authenticated principal, so no password is needed.
3. Your AWS account administrator left your company today. The administrator had access to the root user and a personal IAM administrator account. With these accounts, he generated other IAM accounts and keys. Which of the following should you do today to protect your AWS infrastructure? (Choose 4 answers)A. Change the password and add MFA to the root user.B. Put an IP restriction on the root user.C. Rotate keys and change passwords for IAM accounts.D. Delete all IAM accounts.E. Delete the administrator's personal IAM account.F. Relaunch all Amazon EC2 instances with new roles.
3. A, B, C, E. Locking down your root user and all accounts to which the administrator had access is the key here. Deleting all IAM accounts is not necessary, and it would cause great disruption to your operations. Amazon EC2 roles use temporary security tokens, so relaunching Amazon EC2 instances is not necessary.
4. Which of the following actions can be authorized by IAM? (Choose 2 answers)A. Installing ASP.NET on a Windows ServerB. Launching an Amazon Linux EC2 instanceC. Querying an Oracle databaseD. Adding a message to an Amazon Simple Queue Service (Amazon SQS) queue
4. B, D. IAM controls access to AWS resources only. Installing ASP.NET will require Windows operating system authorization, and querying an Oracle database will require Oracle authorization.
5. Which of the following are IAM security features? (Choose 2 answers)A. Password policiesB. Amazon DynamoDB global secondary indexesC. MFAD. Consolidated Billing
5. A, C. Amazon DynamoDB global secondary indexes are a performance feature of Amazon DynamoDB; Consolidated Billing is an accounting feature allowing all bills to roll up under a single account. While both are very valuable features, neither is a security feature.
6. Which of the following are benefits of using Amazon EC2 roles? (Choose 2 answers)A. No policies are required.B. Credentials do not need to be stored on the Amazon EC2 instance.C. Key rotation is not necessary.D. Integration with Active Directory is automatic.
6. B, C. Amazon EC2 roles must still be assigned a policy. Integration with Active Directory involves integration between Active Directory and IAM via SAML.
7. Which of the following are based on temporary security tokens? (Choose 2 answers)A. Amazon EC2 rolesB. MFAC. Root userD. Federation
7. A, D. Amazon EC2 roles provide a temporary token to applications running on the instance; federation maps policies to identities from other sources via temporary tokens.
9. You want to grant the individuals on your network team the ability to fully manipulate Amazon EC2 instances. Which of the following accomplish this goal? (Choose 2 answers)A. Create a new policy allowing EC2:* actions, and name the policy NetworkTeam.B. Assign the managed policy, EC2FullAccess, to a group named NetworkTeam, and assign all the team members' IAM user accounts to that group.C. Create a new policy that grants EC2:* actions on all resources, and assign that policy to each individual's IAM user account on the network team.D. Create a NetworkTeam IAM group, and have each team member log in to the AWS Management Console using the user name/password for the group.
9. B, C. Access requires an appropriate policy associated with a principal. Response A is merely a policy with no principal, and response D is not a principal as IAM groups do not have user names and passwords. Response B is the best solution; response C will also work but it is much harder to manage.
10. What is the format of an IAM policy?A. XMLB. Key/value pairsC. JSOND. Tab-delimited text
10. C. An IAM policy is a JSON document.
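A minimal illustrative policy document shows the JSON structure with its standard Version, Statement, Effect, Action, and Resource elements; the Sid, action, and resource values below are placeholders, not a recommended policy.

```python
import json

# Illustrative IAM policy allowing all EC2 actions on all resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEC2",       # placeholder statement ID
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
        }
    ],
}

# IAM policies are plain JSON documents, so they round-trip cleanly.
document = json.dumps(policy, indent=2)
parsed = json.loads(document)
print(parsed["Statement"][0]["Action"])  # ec2:*
```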
1. Which AWS database service is best suited for traditional Online Transaction Processing (OLTP)?A. Amazon RedshiftB. Amazon Relational Database Service (Amazon RDS)C. Amazon GlacierD. Elastic Database
1. B. Amazon RDS is best suited for traditional OLTP transactions. Amazon Redshift, on the other hand, is designed for OLAP workloads. Amazon Glacier is designed for cold archival storage.
2. Which AWS database service is best suited for non-relational databases?A. Amazon RedshiftB. Amazon Relational Database Service (Amazon RDS)C. Amazon GlacierD. Amazon DynamoDB
2. D. Amazon DynamoDB is best suited for non-relational databases. Amazon RDS and Amazon Redshift are both structured relational databases.
3. You are a solutions architect working for a media company that hosts its website on AWS. Currently, there is a single Amazon Elastic Compute Cloud (Amazon EC2) Instance on AWS with MySQL installed locally to that Amazon EC2 Instance. You have been asked to make the company's production environment more resilient and to increase performance. You suggest that the company split out the MySQL database onto an Amazon RDS Instance with Multi-AZ enabled. This addresses the company's increased resiliency requirements. Now you need to suggest how you can increase performance. Ninety-nine percent of the company's end users are magazine subscribers who will be reading additional articles on the website, so only one percent of end users will need to write data to the site. What should you suggest to increase performance?A. Alter the connection string so that if a user is going to write data, it is written to the secondary copy of the Multi-AZ database.B. Alter the connection string so that if a user is going to write data, it is written to the primary copy of the Multi-AZ database.C. Recommend that the company use read replicas, and distribute the traffic across multiple read replicas.D. Migrate the MySQL database to Amazon Redshift to take advantage of columnar storage and maximize performance.
3. C. In this scenario, the best idea is to use read replicas to scale out the database and thus maximize read performance. When using Multi-AZ, the secondary database is not accessible and all reads and writes must go to the primary or any read replicas.
4. Which AWS Cloud service is best suited for Online Analytics Processing (OLAP)?A. Amazon RedshiftB. Amazon Relational Database Service (Amazon RDS)C. Amazon GlacierD. Amazon DynamoDB
4. A. Amazon Redshift is best suited for OLAP workloads. While Amazon RDS can also be used for OLAP, Amazon Redshift is purpose-built as an OLAP data warehouse.
5. You have been using Amazon Relational Database Service (Amazon RDS) for the last year to run an important application with automated backups enabled. One of your team members is performing routine maintenance and accidentally drops an important table, causing an outage. How can you recover the missing data while minimizing the duration of the outage?A. Perform an undo operation and recover the table.B. Restore the database from a recent automated DB snapshot.C. Restore only the dropped table from the DB snapshot.D. The data cannot be recovered.
5. B. DB Snapshots can be used to restore a complete copy of the database at a specific point in time. Individual tables cannot be extracted from a snapshot.
6. Which Amazon Relational Database Service (Amazon RDS) database engines support Multi-AZ?A. All of themB. Microsoft SQL Server, MySQL, and OracleC. Oracle, Amazon Aurora, and PostgreSQLD. MySQL
6. A. All Amazon RDS database engines support Multi-AZ deployment.
7. Which Amazon Relational Database Service (Amazon RDS) database engines support read replicas?A. Microsoft SQL Server and OracleB. MySQL, MariaDB, PostgreSQL, and AuroraC. Aurora, Microsoft SQL Server, and OracleD. MySQL and PostgreSQL
7. B. Read replicas are supported by MySQL, MariaDB, PostgreSQL, and Aurora.
8. Your team is building an order processing system that will span multiple Availability Zones. During testing, the team wanted to test how the application will react to a database failover. How can you enable this type of test?A. Force a Multi-AZ failover from one Availability Zone to another by rebooting the primary instance using the Amazon RDS console.B. Terminate the DB instance, and create a new one. Update the connection string.C. Create a support case asking for a failover.D. It is not possible to test a failover.
8. A. You can force a failover from one Availability Zone to another by rebooting the primary instance in the AWS Management Console. This is often how people test a failover in the real world. There is no need to create a support case.
9. You are a system administrator whose company has moved its production database to AWS. Your company monitors its estate using Amazon CloudWatch, which sends alarms using Amazon Simple Notification Service (Amazon SNS) to your mobile phone. One night, you get an alert that your primary Amazon Relational Database Service (Amazon RDS) Instance has gone down. You have Multi-AZ enabled on this instance. What should you do to ensure the failover happens quickly?A. Update your Domain Name System (DNS) to point to the secondary instance's new IP address, forcing your application to fail over to the secondary instance.B. Connect to your server using Secure Shell (SSH) and update your connection strings so that your application can communicate to the secondary instance instead of the failed primary instance.C. Take a snapshot of the secondary instance and create a new instance using this snapshot, then update your connection string to point to the new instance.D. No action is necessary. Your connection string points to the database endpoint, and AWS automatically updates this endpoint to point to your secondary instance.
9. D. Monitor the environment while Amazon RDS attempts to recover automatically. AWS will update the DB endpoint to point to the secondary instance automatically.
10. You are working for a small organization without a dedicated database administrator on staff. You need to install Microsoft SQL Server Enterprise edition quickly to support an accounting back office application on Amazon Relational Database Service (Amazon RDS). What should you do?A. Launch an Amazon RDS DB Instance, and select Microsoft SQL Server Enterprise Edition under the Bring Your Own License (BYOL) model.B. Provision SQL Server Enterprise Edition using the License Included option from the Amazon RDS Console.C. SQL Server Enterprise edition is only available via the Command Line Interface (CLI). Install the command-line tools on your laptop, and then provision your new Amazon RDS Instance using the CLI.D. You cannot use SQL Server Enterprise edition on Amazon RDS. You should install this on to a dedicated Amazon Elastic Compute Cloud (Amazon EC2) Instance.
10. A. Amazon RDS supports Microsoft SQL Server Enterprise edition and the license is available only under the BYOL model.
11. You are building the database tier for an enterprise application that gets occasional activity throughout the day. Which storage type should you select as your default option?A. Magnetic storageB. General Purpose Solid State Drive (SSD)C. Provisioned IOPS (SSD)D. Storage Area Network (SAN)-attached
11. B. General Purpose (SSD) volumes are generally the right choice for databases that have bursts of activity.
12. You are designing an e-commerce web application that will scale to potentially hundreds of thousands of concurrent users. Which database technology is best suited to hold the session state for large numbers of concurrent users?A. Relational database using Amazon Relational Database Service (Amazon RDS)B. NoSQL database table using Amazon DynamoDBC. Data warehouse using Amazon RedshiftD. Amazon Simple Storage Service (Amazon S3)
12. B. NoSQL databases like Amazon DynamoDB excel at scaling to hundreds of thousands of requests with key/value access to user profile and session.
13. Which of the following techniques can you use to help you meet Recovery Point Objective (RPO) and Recovery Time Objective (RTO) requirements? (Choose 3 answers)A. DB snapshotsB. DB option groupsC. Read replicaD. Multi-AZ deployment
13. A, C, D. DB snapshots allow you to back up and recover your data, while read replicas and a Multi-AZ deployment allow you to replicate your data and reduce the time to failover.
14. When using Amazon Relational Database Service (Amazon RDS) Multi-AZ, how can you offload read requests from the primary? (Choose 2 answers)A. Configure the connection string of the clients to connect to the secondary node and perform reads while the primary is used for writes.B. Amazon RDS automatically sends writes to the primary and sends reads to the secondary.C. Add a read replica DB instance, and configure the client's application logic to use a read-replica.D. Create a caching environment using ElastiCache to cache frequently used data. Update the application logic to read/write from the cache.
14. C, D. Amazon RDS allows for the creation of one or more read replicas for many engines that can be used to handle reads. Another common pattern is to create a cache using Memcached and Amazon ElastiCache to store frequently used queries. The Multi-AZ standby instance is not accessible and cannot be used to offload queries.
15. You are building a large order processing system and are responsible for securing the database. Which actions will you take to protect the data? (Choose 3 answers)A. Adjust AWS Identity and Access Management (IAM) permissions for administrators.B. Configure security groups and network Access Control Lists (ACLs) to limit network access.C. Configure database users, and grant permissions to database objects.D. Install anti-virus software on the Amazon RDS DB Instance.
15. A, B, C. Protecting your database requires a multilayered approach that secures the infrastructure, the network, and the database itself. Amazon RDS is a managed service and direct access to the OS is not available.
16. Your team manages a popular website running Amazon Relational Database Service (Amazon RDS) MySQL back end. The Marketing department has just informed you about an upcoming television commercial that will drive thousands of new visitors to the website. How can you prepare your database to handle the load? (Choose 3 answers)A. Vertically scale the DB Instance by selecting a more powerful instance class.B. Create read replicas to offload read requests and update your application.C. Upgrade the storage from Magnetic volumes to General Purpose Solid State Drive (SSD) volumes.D. Upgrade to Amazon Redshift for faster columnar storage.
16. A, B, C. Vertically scaling up is one of the simpler options that can give you additional processing power without making any architectural changes. Read replicas require some application changes but let you scale processing power horizontally. Finally, busy databases are often I/O-bound, so upgrading storage to General Purpose (SSD) or Provisioned IOPS (SSD) can often allow for additional request processing.
17. You are building a photo management application that maintains metadata on millions of images in an Amazon DynamoDB table. When a photo is retrieved, you want to display the metadata next to the image. Which Amazon DynamoDB operation will you use to retrieve the metadata attributes from the table?A. Scan operationB. Search operationC. Query operationD. Find operation
17. C. Query is the most efficient operation to find a single item in a large table.
18. You are creating an Amazon DynamoDB table that will contain messages for a social chat application. This table will have the following attributes: Username (String), Timestamp (Number), Message (String). Which attribute should you use as the partition key? The sort key?A. Username, TimestampB. Username, MessageC. Timestamp, MessageD. Message, Timestamp
18. A. Using the Username as a partition key will evenly spread your users across the partitions. Messages are often filtered down by time range, so Timestamp makes sense as a sort key.
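As an illustration, the table's keys could be declared with DynamoDB's KeySchema and AttributeDefinitions structures; the attribute names mirror the question, and the types use DynamoDB's shorthand (S = string, N = number).

```python
# Key schema for the chat-messages table described above.
key_schema = [
    {"AttributeName": "Username", "KeyType": "HASH"},    # partition key
    {"AttributeName": "Timestamp", "KeyType": "RANGE"},  # sort key
]
attribute_definitions = [
    {"AttributeName": "Username", "AttributeType": "S"},
    {"AttributeName": "Timestamp", "AttributeType": "N"},
]

# Pull the key roles back out of the schema.
partition_key = next(k["AttributeName"] for k in key_schema if k["KeyType"] == "HASH")
sort_key = next(k["AttributeName"] for k in key_schema if k["KeyType"] == "RANGE")
print(partition_key, sort_key)  # Username Timestamp
```

Hashing on Username spreads writes across partitions, while the numeric Timestamp sort key lets a Query return one user's messages in time order.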
19. Which of the following statements about Amazon DynamoDB tables are true? (Choose 2 answers)A. Global secondary indexes can only be created when the table is being created.B. Local secondary indexes can only be created when the table is being created.C. You can only have one global secondary index.D. You can only have one local secondary index.
19. B, D. You can only have a single local secondary index, and it must be created at the same time the table is created. You can create many global secondary indexes after the table has been created.
20. Which of the following workloads are a good fit for running on Amazon Redshift? (Choose 2 answers)A. Transactional database supporting a busy e-commerce order processing websiteB. Reporting database supporting back-office analyticsC. Data warehouse used to aggregate multiple disparate data sourcesD. Manage session state and user profile data for thousands of concurrent users
20. B, C. Amazon Redshift is an Online Analytical Processing (OLAP) data warehouse designed for analytics, Extract, Transform, Load (ETL), and high-speed querying. It is not well suited for running transactional applications that require high volumes of small inserts or updates.
1. Which of the following is not a supported Amazon Simple Notification Service (Amazon SNS) protocol?A. HTTPSB. AWS LambdaC. Email-JSOND. Amazon DynamoDB
1. D. Amazon DynamoDB is not a supported Amazon SNS protocol.
2. When you create a new Amazon Simple Notification Service (Amazon SNS) topic, which of the following is created automatically?A. An Amazon Resource Name (ARN)B. A subscriberC. An Amazon Simple Queue Service (Amazon SQS) queue to deliver your Amazon SNS topicD. A message
2. A. When you create a new Amazon SNS topic, an Amazon ARN is created automatically.
3. Which of the following are features of Amazon Simple Notification Service (Amazon SNS)? (Choose 3 answers)A. PublishersB. ReadersC. SubscribersD. Topic
3. A, C, D. Publishers, subscribers, and topics are the correct answers. You have subscribers to an Amazon SNS topic, not readers.
4. What is the default time for an Amazon Simple Queue Service (Amazon SQS) visibility timeout?A. 30 secondsB. 60 secondsC. 1 hourD. 12 hours
4. A. The default time for an Amazon SQS visibility timeout is 30 seconds.
5. What is the longest time available for an Amazon Simple Queue Service (Amazon SQS) visibility timeout?A. 30 secondsB. 60 secondsC. 1 hourD. 12 hours
5. D. The maximum time for an Amazon SQS visibility timeout is 12 hours.
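The visibility-timeout behavior behind questions 4 and 5 can be sketched with a toy in-memory queue. This is not an AWS API, just a model of the rule: a received message becomes invisible for the timeout period (30 seconds by default, 12 hours at most) and reappears if the consumer never deletes it.

```python
class ToyQueue:
    """Toy in-memory model of the SQS visibility timeout (not an AWS API).

    A received message is hidden for `visibility_timeout` ticks, then
    becomes visible again unless it was deleted in the meantime.
    """
    DEFAULT_TIMEOUT = 30           # seconds: the SQS default
    MAX_TIMEOUT = 12 * 60 * 60     # 12 hours: the SQS maximum

    def __init__(self, visibility_timeout=DEFAULT_TIMEOUT):
        assert 0 <= visibility_timeout <= self.MAX_TIMEOUT
        self.visibility_timeout = visibility_timeout
        self.messages = {}         # body -> tick at which it is visible again
        self.clock = 0

    def send(self, body):
        self.messages[body] = 0    # visible immediately

    def receive(self):
        for body, visible_at in self.messages.items():
            if visible_at <= self.clock:
                # Hide the message until the visibility timeout expires.
                self.messages[body] = self.clock + self.visibility_timeout
                return body
        return None                # nothing currently visible

    def delete(self, body):
        self.messages.pop(body, None)

q = ToyQueue()
q.send("order-1")
first = q.receive()    # "order-1"; now in flight for 30 ticks
second = q.receive()   # None: the message is invisible
q.clock += 31
third = q.receive()    # "order-1" again, because it was never deleted
```

The model also shows why SQS is at-least-once delivery: a consumer that crashes before calling `delete` simply lets the timeout lapse, and another consumer picks the message up.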
6. Which of the following options are valid properties of an Amazon Simple Queue Service (Amazon SQS) message? (Choose 2 answers)A. DestinationB. Message IDC. TypeD. Body
6. B, D. The valid properties of an SQS message are Message ID and Body. Each message receives a system-assigned Message ID that Amazon SQS returns to you in the SendMessage response. The Message Body is composed of name/value pairs and the unstructured, uninterpreted content.
7. You are a solutions architect who is working for a mobile application company that wants to use Amazon Simple Workflow Service (Amazon SWF) for their new takeout ordering application. They will have multiple workflows that will need to interact. What should you advise them to do in structuring the design of their Amazon SWF environment?A. Use multiple domains, each containing a single workflow, and design the workflows to interact across the different domains.B. Use a single domain containing multiple workflows. In this manner, the workflows will be able to interact.C. Use a single domain with a single workflow and collapse all activities to within this single workflow.D. Workflows cannot interact with each other; they would be better off using Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) for their application.
7. B. Use a single domain with multiple workflows. Workflows within separate domains cannot interact.
8. In Amazon Simple Workflow Service (Amazon SWF), which of the following are actors? (Choose 3 answers)A. Activity workersB. Workflow startersC. DecidersD. Activity tasks
8. A, B, C. In Amazon SWF, actors can be activity workers, workflow starters, or deciders.
9. You are designing a new application, and you need to ensure that the components of your application are not tightly coupled. You are trying to decide between the different AWS Cloud services to use to achieve this goal. Your requirements are that messages between your application components may not be delivered more than once, tasks must be completed in either a synchronous or asynchronous fashion, and there must be some form of application logic that decides what to do when tasks have been completed. What application service should you use?A. Amazon Simple Queue Service (Amazon SQS)B. Amazon Simple Workflow Service (Amazon SWF)C. Amazon Simple Storage Service (Amazon S3)D. Amazon Simple Email Service (Amazon SES)
9. B. Amazon SWF would best serve your purpose in this scenario because it helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.
10. How does Amazon Simple Queue Service (Amazon SQS) deliver messages?A. Last In, First Out (LIFO)B. First In, First Out (FIFO)C. SequentiallyD. Amazon SQS doesn't guarantee delivery of your messages in any particular order.
10. D. Amazon SQS does not guarantee in what order your messages will be delivered.
11. Of the following options, what is an efficient way to fanout a single Amazon Simple Notification Service (Amazon SNS) message to multiple Amazon Simple Queue Service (Amazon SQS) queues?A. Create an Amazon SNS topic using Amazon SNS. Then create and subscribe multiple Amazon SQS queues to the Amazon SNS topic.B. Create one Amazon SQS queue that subscribes to multiple Amazon SNS topics.C. Amazon SNS allows exactly one subscriber to each topic, so fanout is not possible.D. Create an Amazon SNS topic using Amazon SNS. Create an application that subscribes to that topic and duplicates the message. Send copies to multiple Amazon SQS queues.
11. A. Multiple queues can subscribe to an Amazon SNS topic, which can enable parallel asynchronous processing.
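The fanout pattern amounts to one `sns.subscribe` call per queue. The sketch below builds those subscription parameters as plain dicts; the topic and queue ARNs are hypothetical placeholders, and each dict would be passed to boto3's `sns.subscribe(**kwargs)`.

```python
def fanout_subscriptions(topic_arn, queue_arns):
    """Build one boto3 sns.subscribe kwargs dict per SQS queue, so a
    single published message is delivered to every queue in parallel."""
    return [
        {"TopicArn": topic_arn, "Protocol": "sqs", "Endpoint": queue_arn}
        for queue_arn in queue_arns
    ]

# Hypothetical ARNs for illustration.
subs = fanout_subscriptions(
    "arn:aws:sns:us-east-1:123456789012:orders",
    [
        "arn:aws:sqs:us-east-1:123456789012:billing",
        "arn:aws:sqs:us-east-1:123456789012:shipping",
    ],
)
```

Each subscribed queue then has its own independent consumer fleet, which is what makes the processing asynchronous and parallel.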
12. Your application polls an Amazon Simple Queue Service (Amazon SQS) queue frequently and returns immediately, often with empty ReceiveMessageResponses. What is one thing that can be done to reduce Amazon SQS costs?A. Pricing on Amazon SQS does not include a cost for service requests; therefore, there is no concern.B. Increase the timeout value for short polling to wait for messages longer before returning a response.C. Change the message visibility value to a higher number.D. Use long polling by supplying a WaitTimeSeconds of greater than 0 seconds when calling ReceiveMessage.
12. D. Long polling allows your application to poll the queue and, if nothing is there, Amazon SQS waits for an amount of time you specify (between 1 and 20 seconds). If a message arrives in that time, it is delivered to your application as soon as possible. If a message does not arrive in that time, you need to execute the ReceiveMessage function again.
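In code, the switch to long polling is a single parameter. This sketch builds the kwargs for a boto3 `sqs.receive_message` call; the queue URL is a hypothetical placeholder.

```python
def long_poll_params(queue_url, wait_seconds=20):
    """kwargs for boto3 sqs.receive_message.

    WaitTimeSeconds > 0 enables long polling (20 s is the maximum),
    which drastically reduces empty responses and therefore request cost.
    """
    assert 1 <= wait_seconds <= 20
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,
        "WaitTimeSeconds": wait_seconds,
    }

# Hypothetical queue URL for illustration.
params = long_poll_params("https://sqs.us-east-1.amazonaws.com/123456789012/orders")
# Would be passed along as: boto3.client("sqs").receive_message(**params)
```

With `WaitTimeSeconds=0` (the short-polling default) the same call returns immediately, which is exactly the billed-but-empty response pattern the question describes.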
13. What is the longest time available for an Amazon Simple Queue Service (Amazon SQS) long polling timeout?A. 10 secondsB. 20 secondsC. 30 secondsD. 1 hour
13. B. The maximum time for an Amazon SQS long polling timeout is 20 seconds.
14. What is the longest configurable message retention period for Amazon Simple Queue Service (Amazon SQS)?A. 30 minutesB. 4 daysC. 30 secondsD. 14 days
14. D. The longest configurable message retention period for Amazon SQS is 14 days.
15. What is the default message retention period for Amazon Simple Queue Service (Amazon SQS)?A. 30 minutesB. 4 daysC. 30 secondsD. 14 days
15. B. The default message retention period in Amazon SQS is four days.
16. Amazon Simple Notification Service (Amazon SNS) is a push notification service that lets you send individual or multiple messages to large numbers of recipients. What types of clients are supported?A. Java and JavaScript clients that support publisher and subscriber typesB. Producers and consumers supported by C and C++ clientsC. Mobile and AMQP support for publisher and subscriber client typesD. Publisher and subscriber client types
16. D. With Amazon SNS, you send individual or multiple messages to large numbers of recipients using publisher and subscriber client types.
17. In Amazon Simple Workflow Service (Amazon SWF), a decider is responsible for what?A. Executing each step of the workB. Defining work coordination logic by specifying work sequencing, timing, and failure conditionsC. Executing your workflowD. Registering activities and workflow with Amazon SWF
17. B. The decider schedules the activity tasks and provides input data to the activity workers. The decider also processes events that arrive while the workflow is in progress and closes the workflow when the objective has been completed.
18. Can an Amazon Simple Notification Service (Amazon SNS) topic be recreated with a previously used topic name?A. Yes. The topic name should typically be available after 24 hours after the previous topic with the same name has been deleted.B. Yes. The topic name should typically be available after 1-3 hours after the previous topic with the same name has been deleted.C. Yes. The topic name should typically be available after 30-60 seconds after the previous topic with the same name has been deleted.D. At this time, this feature is not supported.
18. C. Topic names should typically be available for reuse approximately 30-60 seconds after the previous topic with the same name has been deleted. The exact time will depend on the number of subscriptions active on the topic; topics with a few subscribers will be available instantly for reuse, while topics with larger subscriber lists may take longer.
19. What should you do in order to grant a different AWS account permission to your Amazon Simple Queue Service (Amazon SQS) queue?A. Share credentials to your AWS account and have the other account's applications use your account's credentials to access the Amazon SQS queue.B. Create a user for that account in AWS Identity and Access Management (IAM) and establish an IAM policy that grants access to the queue.C. Create an Amazon SQS policy that grants the other account access.D. Amazon Virtual Private Cloud (Amazon VPC) peering must be used to achieve this.
19. C. The main difference between Amazon SQS policies and IAM policies is that an Amazon SQS policy enables you to grant a different AWS account permission to your Amazon SQS queues, but an IAM policy does not.
20. Can an Amazon Simple Notification Service (Amazon SNS) message be deleted after being published to a topic?A. Only if a subscriber(s) has/have not read the message yetB. Only if the Amazon SNS recall message parameter has been setC. No. After a message has been successfully published to a topic, it cannot be recalled.D. Yes. However it can be deleted only if the subscribers are Amazon SQS queues.
20. C. No. After a message has been successfully published to a topic, it cannot be recalled.
1. Which type of record is commonly used to route traffic to an IPv6 address?A. An A recordB. A CNAMEC. An AAAA recordD. An MX record
1. C. An AAAA record is used to route traffic to an IPv6 address, whereas an A record is used to route traffic to an IPv4 address.
2. Where do you register a domain name?A. With your local government authorityB. With a domain registrarC. With InterNIC directlyD. With the Internet Assigned Numbers Authority (IANA)
2. B. Domain names are registered with a domain registrar, which then registers the name to InterNIC.
3. You have an application that for legal reasons must be hosted in the United States when U.S. citizens access it. The application must be hosted in the European Union when citizens of the EU access it. For all other citizens of the world, the application must be hosted in Sydney. Which routing policy should you choose in order to achieve this?A. Latency-based routingB. Simple routingC. Geolocation routingD. Failover routing
3. C. You should route your traffic based on where your end users are located. The best routing policy to achieve this is geolocation routing.
4. Which type of DNS record should you use to resolve an IP address to a domain name?A. An A recordB. A CNAME recordC. An SPF recordD. A PTR record
4. D. A PTR record is used to resolve an IP address to a domain name, and it is commonly referred to as "reverse DNS."
5. You host a web application across multiple AWS regions in the world, and you need to configure your DNS so that your end users will get the fastest network performance possible. Which routing policy should you apply?A. Geolocation routingB. Latency-based routingC. Simple routingD. Weighted routing
5. B. You want your users to have the fastest network access possible. To do this, you would use latency-based routing. Geolocation routing would not achieve this as well as latency-based routing, which is specifically geared toward measuring latency and thus would direct you to the AWS region in which you would have the lowest latency.
6. Which DNS record should you use to configure the transmission of email to your intended mail server?A. SPF recordsB. A recordsC. MX recordsD. SOA record
6. C. You would use Mail eXchange (MX) records to define which inbound destination mail server should be used.
7. Which DNS records are commonly used to stop email spoofing and spam?A. MX recordsB. SPF recordsC. A recordsD. CNAME records
7. B. SPF records are used to verify authorized senders of mail from your domain.
8. You are rolling out A and B test versions of a web application to see which version results in the most sales. You need 10 percent of your traffic to go to version A, 10 percent to go to version B, and the rest to go to your current production version. Which routing policy should you choose to achieve this?A. Simple routingB. Weighted routingC. Geolocation routingD. Failover routing
8. B. Weighted routing would best achieve this objective because it allows you to specify which percentage of traffic is directed to each endpoint.
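A weighted policy is expressed as several record sets sharing a name but carrying different `SetIdentifier` and `Weight` values; each endpoint's share of traffic is its weight divided by the sum of all weights. The sketch below builds record sets in the shape Route 53's `ChangeResourceRecordSets` API expects; the domain and IP addresses are hypothetical.

```python
def weighted_records(record_name, endpoints):
    """Sketch of Route 53 weighted resource record sets.

    endpoints: list of (set_identifier, weight, ip) tuples. Traffic share
    for each endpoint = weight / sum(weights), so 10/10/80 sends 10% to
    each test version and 80% to production.
    """
    return [
        {
            "Name": record_name,
            "Type": "A",
            "SetIdentifier": ident,   # distinguishes the otherwise identical records
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        }
        for ident, weight, ip in endpoints
    ]

# Hypothetical domain and addresses for illustration.
records = weighted_records("www.example.com.", [
    ("version-a", 10, "203.0.113.10"),
    ("version-b", 10, "203.0.113.11"),
    ("production", 80, "203.0.113.12"),
])
total_weight = sum(r["Weight"] for r in records)
```

Shifting the experiment later is just an `UPSERT` of the same record sets with new weights; no application change is needed.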
9. Which DNS record must all zones have by default?A. SPFB. TXTC. MXD. SOA
9. D. The start of a zone is defined by the SOA; therefore, all zones must have an SOA record by default.
10. Your company has its primary production site in Western Europe and its DR site in the Asia Pacific. You need to configure DNS so that if your primary site becomes unavailable, you can fail DNS over to the secondary site. Which DNS routing policy would best achieve this?A. Weighted routingB. Geolocation routingC. Simple routingD. Failover routing
10. D. Failover-based routing would best achieve this objective.
11. Which type of DNS record should you use to resolve a domain name to another domain name?A. An A recordB. A CNAME recordC. An SPF recordD. A PTR record
11. B. The CNAME record maps a name to another name. It should be used only when there are no other records on that name.
12. Which is a function that Amazon Route 53 does not perform?A. Domain registrationB. DNS serviceC. Load balancingD. Health checks
12. C. Amazon Route 53 performs three main functions: domain registration, DNS service, and health checking.
13. Which DNS record can be used to store human-readable information about a server, network, and other accounting data with a host?A. A TXT recordB. An MX recordC. An SPF recordD. A PTR record
13. A. A TXT record is used to store arbitrary and unformatted text with a host.
14. Which resource record set would not be allowed for the hosted zone example.com?A. www.example.comB. www.aws.example.comC. www.example.caD. www.beta.example.com
14. C. The resource record sets contained in a hosted zone must share the same suffix.
15. Which port number is used to serve requests by DNS?A. 22B. 53C. 161D. 389
15. B. DNS uses port number 53 to serve requests.
16. Which protocol is primarily used by DNS to serve requests?A. Transmission Control Protocol (TCP)B. Hyper Text Transfer Protocol (HTTP)C. File Transfer Protocol (FTP)D. User Datagram Protocol (UDP)
16. D. DNS primarily uses UDP to serve requests.
17. Which protocol is used by DNS when response data size exceeds 512 bytes?A. Transmission Control Protocol (TCP)B. Hyper Text Transfer Protocol (HTTP)C. File Transfer Protocol (FTP)D. User Datagram Protocol (UDP)
17. A. The TCP protocol is used by DNS servers when the response data size exceeds 512 bytes or for tasks such as zone transfers.
18. What are the different hosted zones that can be created in Amazon Route 53? 1. Public hosted zone 2. Global hosted zone 3. Private hosted zoneA. 1 and 2B. 1 and 3C. 2 and 3D. 1, 2, and 3
18. B. Using Amazon Route 53, you can create two types of hosted zones: public hosted zones and private hosted zones.
19. Amazon Route 53 cannot route queries to which AWS resource?A. Amazon CloudFront distributionB. Elastic Load Balancing load balancerC. Amazon EC2D. AWS OpsWorks
19. D. Amazon Route 53 can route queries to a variety of AWS resources such as an Amazon CloudFront distribution, an Elastic Load Balancing load balancer, an Amazon EC2 instance, a website hosted in an Amazon S3 bucket, and an Amazon Relational Database Service (Amazon RDS) instance.
20. When configuring Amazon Route 53 as your DNS service for an existing domain, which is the first step that needs to be performed?A. Create hosted zones.B. Create resource record sets.C. Register a domain with Amazon Route 53.D. Transfer domain registration from current registrar to Amazon Route 53.
20. D. You must first transfer the existing domain registration from another registrar to Amazon Route 53 to configure it as your DNS service.
1. Which of the following objects are good candidates to store in a cache? (Choose 3 answers)A. Session stateB. Shopping cartC. Product catalogD. Bank account balance
1. A, B, C. Many types of objects are good candidates to cache because they have the potential to be accessed by numerous users repeatedly. Even the balance of a bank account could be cached for short periods of time if the back-end database query is slow to respond.
2. Which of the following cache engines are supported by Amazon ElastiCache? (Choose 2 answers)A. MySQLB. MemcachedC. RedisD. Couchbase
2. B, C. Amazon ElastiCache supports Memcached and Redis cache engines. MySQL is not a cache engine, and Couchbase is not supported.
3. How many nodes can you add to an Amazon ElastiCache cluster running Memcached?A. 1B. 5C. 20D. 100
3. C. The default limit is 20 nodes per cluster.
4. How many nodes can you add to an Amazon ElastiCache cluster running Redis?A. 1B. 5C. 20D. 100
4. A. Redis clusters can only contain a single node; however, you can group multiple clusters together into a replication group.
5. An application currently uses Memcached to cache frequently used database queries. Which steps are required to migrate the application to use Amazon ElastiCache with minimal changes? (Choose 2 answers)A. Recompile the application to use the Amazon ElastiCache libraries.B. Update the configuration file with the endpoint for the Amazon ElastiCache cluster.C. Configure a security group to allow access from the application servers.D. Connect to the Amazon ElastiCache nodes using Secure Shell (SSH) and install the latest version of Memcached.
5. B, C. Amazon ElastiCache is Application Programming Interface (API)-compatible with existing Memcached clients and does not require the application to be recompiled or linked against the libraries. Amazon ElastiCache manages the deployment of the Amazon ElastiCache binaries.
6. How can you back up data stored in Amazon ElastiCache running Redis? (Choose 2 answers)A. Create an image of the Amazon Elastic Compute Cloud (Amazon EC2) instance.B. Configure automatic snapshots to back up the cache environment every night.C. Create a snapshot manually.D. Redis clusters cannot be backed up.
6. B, C. Amazon ElastiCache with the Redis engine allows for both manual and automatic snapshots. Memcached does not have a backup function.
7. How can you secure an Amazon ElastiCache cluster? (Choose 3 answers)A. Change the Memcached root password.B. Restrict Application Programming Interface (API) actions using AWS Identity and Access Management (IAM) policies.C. Restrict network access using security groups.D. Restrict network access using a network Access Control List (ACL).
7. B, C, D. Limit access at the network level using security groups or network ACLs, and limit infrastructure changes using IAM.
8. You are working on a mobile gaming application and are building the leaderboard feature to track the top scores across millions of users. Which AWS service is best suited for this use case?A. Amazon RedshiftB. Amazon ElastiCache using MemcachedC. Amazon ElastiCache using RedisD. Amazon Simple Storage Service (S3)
8. C. Amazon ElastiCache with Redis provides native functions that simplify the development of leaderboards. With Memcached, it is more difficult to sort and rank large datasets. Amazon Redshift and Amazon S3 are not designed for high volumes of small reads and writes, typical of a mobile game.
9. You have built a large web application that uses Amazon ElastiCache using Memcached to store frequent query results. You plan to expand both the web fleet and the cache fleet multiple times over the next year to accommodate increased user traffic. How do you minimize the amount of changes required when a scaling event occurs?A. Configure AutoDiscovery on the client sideB. Configure AutoDiscovery on the server sideC. Update the configuration file each time a node is added or removedD. Use an Elastic Load Balancer to proxy the requests
9. A. When the clients are configured to use AutoDiscovery, they can discover new cache nodes as they are added or removed. AutoDiscovery must be configured on each client and is not active server side. Updating the configuration file each time will be very difficult to manage. Using an Elastic Load Balancer is not recommended for this scenario.
10. Which cache engines does Amazon ElastiCache support? (Choose 2 answers)A. MemcachedB. RedisC. MembaseD. Couchbase
10. A, B. Amazon ElastiCache supports both Memcached and Redis. You can run self-managed installations of Membase and Couchbase using Amazon Elastic Compute Cloud (Amazon EC2).
1. What origin servers are supported by Amazon CloudFront? (Choose 3 answers)A. An Amazon Route 53 Hosted ZoneB. An Amazon Simple Storage Service (Amazon S3) bucketC. An HTTP server running on Amazon Elastic Compute Cloud (Amazon EC2)D. An Amazon EC2 Auto Scaling GroupE. An HTTP server running on-premises
1. B, C, E. Amazon CloudFront can use an Amazon S3 bucket or any HTTP server, whether or not it is running in Amazon EC2. A Route 53 Hosted Zone is a set of DNS resource records, while an Auto Scaling Group launches or terminates Amazon EC2 instances automatically. Neither can be specified as an origin server for a distribution.
2. Which of the following are good use cases for Amazon CloudFront? (Choose 2 answers)A. A popular software download site that supports users around the world, with dynamic content that changes rapidlyB. A corporate website that serves training videos to employees. Most employees are located in two corporate campuses in the same city.C. A heavily used video and music streaming service that requires content to be delivered only to paid subscribersD. A corporate HR website that supports a global workforce. Because the site contains sensitive data, all users must connect through a corporate Virtual Private Network (VPN).
2. A, C. The site in A is "popular" and supports "users around the world," key indicators that CloudFront is appropriate. Similarly, the site in C is "heavily used," and requires private content, which is supported by Amazon CloudFront. Both B and D are corporate use cases where the requests come from a single geographic location or appear to come from one (because of the VPN). These use cases will generally not see benefit from Amazon CloudFront.
3. You have a web application that contains both static content in an Amazon Simple Storage Service (Amazon S3) bucket (primarily images and CSS files) and also dynamic content currently served by a PHP web app running on Amazon Elastic Compute Cloud (Amazon EC2). What features of Amazon CloudFront can be used to support this application with a single Amazon CloudFront distribution? (Choose 2 answers)A. Multiple Origin Access IdentifiersB. Multiple signed URLsC. Multiple originsD. Multiple edge locationsE. Multiple cache behaviors
3. C, E. Using multiple origins and setting multiple cache behaviors allow you to serve static and dynamic content from the same distribution. Origin Access Identifiers and signed URLs support serving private content from Amazon CloudFront, while multiple edge locations are simply how Amazon CloudFront serves any content.
4. You are building a media-sharing web application that serves video files to end users on both PCs and mobile devices. The media files are stored as objects in an Amazon Simple Storage Service (Amazon S3) bucket, but are to be delivered through Amazon CloudFront. What is the simplest way to ensure that only Amazon CloudFront has access to the objects in the Amazon S3 bucket?A. Create Signed URLs for each Amazon S3 object.B. Use an Amazon CloudFront Origin Access Identifier (OAI).C. Use public and private keys with signed cookies.D. Use an AWS Identity and Access Management (IAM) bucket policy.
4. B. Amazon CloudFront OAI is a special identity that can be used to restrict access to an Amazon S3 bucket only to an Amazon CloudFront distribution. Signed URLs, signed cookies, and IAM bucket policies can help to protect content served through Amazon CloudFront, but OAIs are the simplest way to ensure that only Amazon CloudFront has access to a bucket.
5. Your company data center is completely full, but the sales group has determined a need to store 200TB of product video. The videos were created over the last several years, with the most recent being accessed by sales the most often. The data must be accessed locally, but there is no space in the data center to install local storage devices to store this data. What AWS cloud service will meet sales' requirements?A. AWS Storage Gateway Gateway-Stored volumesB. Amazon Elastic Compute Cloud (Amazon EC2) instances with attached Amazon EBS VolumesC. AWS Storage Gateway Gateway-Cached volumesD. AWS Import/Export Disk
5. C. AWS Storage Gateway allows you to access data in Amazon S3 locally, with the Gateway-Cached volume configuration allowing you to expand a relatively small amount of local storage into Amazon S3.
6. Your company wants to extend their existing Microsoft Active Directory capability into an Amazon Virtual Private Cloud (Amazon VPC) without establishing a trust relationship with the existing on-premises Active Directory. Which of the following is the best approach to achieve this goal?A. Create and connect an AWS Directory Service AD Connector.B. Create and connect an AWS Directory Service Simple AD.C. Create and connect an AWS Directory Service for Microsoft Active Directory (Enterprise Edition).D. None of the above
6. B. Simple AD is a Microsoft Active Directory-compatible directory that is powered by Samba 4. Simple AD supports commonly used Active Directory features such as user accounts, group memberships, domain-joining Amazon Elastic Compute Cloud (Amazon EC2) instances running Linux and Microsoft Windows, Kerberos-based Single Sign-On (SSO), and group policies.
7. Which of the following are AWS Key Management Service (AWS KMS) keys that will never exit AWS unencrypted?A. AWS KMS data keysB. Envelope encryption keysC. AWS KMS Customer Master Keys (CMKs)D. A and C
7. C. AWS KMS CMKs are the fundamental resources that AWS KMS manages. CMKs can never leave AWS KMS unencrypted, but data keys can.
8. Which cryptographic method is used by AWS Key Management Service (AWS KMS) to encrypt data?A. Password-based encryptionB. AsymmetricC. Shared secretD. Envelope encryption
8. D. AWS KMS uses envelope encryption to protect data. AWS KMS creates a data key, encrypts it under a Customer Master Key (CMK), and returns plaintext and encrypted versions of the data key to you. You use the plaintext key to encrypt data and store the encrypted key alongside the encrypted data. You can retrieve a plaintext data key only if you have the encrypted data key and you have permission to use the corresponding master key.
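The envelope pattern is easy to demonstrate with a toy cipher. The sketch below uses a SHA-256-keystream XOR purely to make the example self-contained; it is deliberately not real cryptography, and AWS KMS uses authenticated encryption instead. What it does mirror is the flow: generate a data key (like `GenerateDataKey`), encrypt the data with it, encrypt the data key under the master key, and store the wrapped key alongside the ciphertext.

```python
import hashlib
import os

def _xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 keystream.

    Illustrative only; it exists so the envelope flow below is runnable.
    """
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    data_key = os.urandom(32)                        # like KMS GenerateDataKey
    ciphertext = _xor_cipher(data_key, plaintext)    # encrypt data with data key
    wrapped_key = _xor_cipher(master_key, data_key)  # encrypt data key under CMK
    return wrapped_key, ciphertext                   # stored together

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = _xor_cipher(master_key, wrapped_key)  # unwrap data key with CMK
    return _xor_cipher(data_key, ciphertext)

master = os.urandom(32)
wrapped, ct = envelope_encrypt(master, b"quarterly results")
pt = envelope_decrypt(master, wrapped, ct)
```

The payoff of the pattern is that the master key encrypts only small data keys, never bulk data, so it can stay inside the key service while arbitrarily large payloads are encrypted locally.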
9. Which AWS service records Application Program Interface (API) calls made on your account and delivers log files to your Amazon Simple Storage Service (Amazon S3) bucket?A. AWS CloudTrailB. Amazon CloudWatchC. Amazon KinesisD. AWS Data Pipeline
9. A. AWS CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS Cloud service.
10. You are trying to decrypt ciphertext with AWS KMS and the decryption operation is failing. Which of the following are possible causes? (Choose 2 answers)A. The private key does not match the public key in the ciphertext.B. The plaintext was encrypted along with an encryption context, and you are not providing the identical encryption context when calling the Decrypt API.C. The ciphertext you are trying to decrypt is not valid.D. You are not providing the correct symmetric key to the Decrypt API.
10. B, C. Encryption context is a set of key/value pairs that you can pass to AWS KMS when you call the Encrypt, Decrypt, ReEncrypt, GenerateDataKey, and GenerateDataKeyWithoutPlaintext APIs. Although the encryption context is not included in the ciphertext, it is cryptographically bound to the ciphertext during encryption and must be passed again when you call the Decrypt (or ReEncrypt) API. Invalid ciphertext for decryption is plaintext that has been encrypted in a different AWS account or ciphertext that has been altered since it was originally encrypted.
11. Your company has 30 years of financial records that take up 15TB of on-premises storage. Regulations require that you maintain these records, but in the year you have worked for the company no one has ever requested any of this data. Given that the company data center is already filling the bandwidth of its Internet connection, what is an alternative way to store the data on the most appropriate cloud storage?A. AWS Import/Export to Amazon Simple Storage Service (Amazon S3)B. AWS Import/Export to Amazon GlacierC. Amazon KinesisD. Amazon Elastic MapReduce (AWS EMR)
11. B. Because the Internet connection is full, the best solution will be based on using AWS Import/Export to ship the data. The most appropriate storage location for data that must be stored, but is very rarely accessed, is Amazon Glacier.
12. Your company collects information from the point of sale registers at all of its franchise locations. Each month these processes collect 200TB of information stored in Amazon Simple Storage Service (Amazon S3). Analytics jobs taking 24 hours are performed to gather knowledge from this data. Which of the following will allow you to perform these analytics in a cost-effective way?A. Copy the data to a persistent Amazon Elastic MapReduce (Amazon EMR) cluster, and run the MapReduce jobs.B. Create an application that reads the information from the Amazon S3 bucket and runs it through an Amazon Kinesis stream.C. Run a transient Amazon EMR cluster, and run the MapReduce jobs against the data directly in Amazon S3.D. Launch a d2.8xlarge (32 vCPU, 244GB RAM) Amazon Elastic Compute Cloud (Amazon EC2) instance, and run an application to read and process each object sequentially.
12. C. Because the job is run monthly, a persistent cluster will incur unnecessary compute costs during the rest of the month. Amazon Kinesis is not appropriate because the company is running analytics as a batch job and not on a stream. A single large instance does not scale out to accommodate the large compute needs.
13. Which service allows you to process nearly limitless streams of data in flight?A. Amazon Kinesis FirehoseB. Amazon Elastic MapReduce (Amazon EMR)C. Amazon RedshiftD. Amazon Kinesis Streams
13. D. The Amazon Kinesis services enable you to work with large data streams. Within the Amazon Kinesis family of services, Amazon Kinesis Firehose saves streams to AWS storage services, while Amazon Kinesis Streams provide the ability to process the data in the stream.
14. What combination of services enables you to copy 50TB of data daily to Amazon storage, process the data in Hadoop, and store the results in a large data warehouse?A. Amazon Kinesis, AWS Data Pipeline, Amazon Elastic MapReduce (Amazon EMR), and Amazon Elastic Compute Cloud (Amazon EC2)B. Amazon Elastic Block Store (Amazon EBS), AWS Data Pipeline, Amazon EMR, and Amazon RedshiftC. Amazon Simple Storage Service (Amazon S3), AWS Data Pipeline, Amazon EMR, and Amazon RedshiftD. Amazon S3, Amazon Simple Workflow, Amazon EMR, and Amazon DynamoDB
14. C. AWS Data Pipeline allows you to run regular Extract, Transform, Load (ETL) jobs on Amazon and on-premises data sources. The best storage for large data is Amazon S3, and Amazon Redshift is a large-scale data warehouse service.
15. Your company has 50,000 weather stations around the country that send updates every 2 seconds. What service will enable you to ingest this stream of data and store it to Amazon Simple Storage Service (Amazon S3) for future processing?A. Amazon Simple Queue Service (Amazon SQS)B. Amazon Kinesis FirehoseC. Amazon Elastic Compute Cloud (Amazon EC2)D. Amazon Data Pipeline
15. B. Amazon Kinesis Firehose allows you to ingest massive streams of data and store the data on Amazon S3 (as well as Amazon Redshift and Amazon Elasticsearch).
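Before records reach Firehose they must respect the service's batching limits; the `PutRecordBatch` API accepts at most 500 records per call. A minimal sketch of a batching helper for the weather-station readings (the stream name in the trailing comment is a hypothetical example):

```python
# Batch readings for Firehose's PutRecordBatch API, which accepts at most
# 500 records per call (a documented service limit).
MAX_BATCH_RECORDS = 500

def batch_records(readings):
    """Yield lists of Firehose record dicts, each within the batch limit."""
    batch = []
    for reading in readings:
        batch.append({"Data": (reading + "\n").encode("utf-8")})
        if len(batch) == MAX_BATCH_RECORDS:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Each yielded batch could then be sent with a boto3 client, e.g.:
#   firehose.put_record_batch(DeliveryStreamName="weather-stream", Records=batch)
```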
16. Your organization uses Chef heavily for its deployment automation. What AWS cloud service provides integration with Chef recipes to start new application server instances, configure application server software, and deploy applications?A. AWS Elastic BeanstalkB. Amazon KinesisC. AWS OpsWorksD. AWS CloudFormation
16. C. AWS OpsWorks uses Chef recipes to start new app server instances, configure application server software, and deploy applications. Organizations can leverage Chef recipes to automate operations like software configurations, package installations, database setups, server scaling, and code deployment.
17. A firm is moving its testing platform to AWS to provide developers with instant access to clean test and development environments. The primary requirement for the firm is to make environments easily reproducible and fungible. What service will help the firm meet its requirements?A. AWS CloudFormationB. AWS ConfigC. Amazon RedshiftD. AWS Trusted Advisor
17. A. With AWS CloudFormation, you can reuse your template to set up your resources consistently and repeatedly. Just describe your resources once and then provision the same resources over and over in multiple stacks.
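The "describe once, provision repeatedly" idea can be sketched as one template reused across stacks that differ only in name and parameters. The template body, stack names, and parameter names below are hypothetical illustrations, not from the exam text:

```python
# One template, many stacks: each developer environment is provisioned from
# the same body with different parameters.
TEMPLATE_BODY = """\
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  EnvName: {Type: String}
Resources:
  TestBucket:
    Type: AWS::S3::Bucket
"""

def stack_request(env_name):
    """Build the arguments for one cloudformation.create_stack() call."""
    return {
        "StackName": f"test-env-{env_name}",
        "TemplateBody": TEMPLATE_BODY,
        "Parameters": [{"ParameterKey": "EnvName", "ParameterValue": env_name}],
    }

# Reproducible environments from the same description:
#   for env in ("dev-alice", "dev-bob"):
#       cloudformation.create_stack(**stack_request(env))
```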
18. Your company's IT management team is looking for an online tool to provide recommendations to save money, improve system availability and performance, and to help close security gaps. What can help the management team?A. Cloud-initB. AWS Trusted AdvisorC. AWS ConfigD. Configuration Recorder
18. B. AWS Trusted Advisor inspects your AWS environment and makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. AWS Trusted Advisor draws upon best practices learned from the aggregated operational history of serving hundreds of thousands of AWS customers.
19. Your company works with data that requires frequent audits of your AWS environment to ensure compliance with internal policies and best practices. In order to perform these audits, you need access to historical configurations of your resources to evaluate relevant configuration changes. Which service will provide the necessary information for your audits?A. AWS ConfigB. AWS Key Management Service (AWS KMS)C. AWS CloudTrailD. AWS OpsWorks
19. A. AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, you can discover existing and deleted AWS resources, determine your overall compliance against rules, and dive into configuration details of a resource at any point in time. These capabilities enable compliance auditing.
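An audit against AWS Config history boils down to comparing two point-in-time configurations of a resource. A minimal sketch of that comparison, with the snapshot dicts simplified from what AWS Config actually returns:

```python
def config_diff(old, new):
    """Return {key: (old_value, new_value)} for every changed, added,
    or removed key between two configuration snapshots."""
    changes = {}
    for key in set(old) | set(new):
        if old.get(key) != new.get(key):
            changes[key] = (old.get(key), new.get(key))
    return changes
```

In practice the snapshots would come from an AWS Config query for the resource at two points in time; the diff then highlights exactly which settings changed between audits.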
20. All of the website deployments are currently done by your company's development team. With a surge in website popularity, the company is looking for ways to be more agile with deployments. What AWS cloud service can help the developers focus more on writing code instead of spending time managing and configuring servers, databases, load balancers, firewalls, and networks?A. AWS ConfigB. AWS Trusted AdvisorC. Amazon KinesisD. AWS Elastic Beanstalk
20. D. AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their application code, and the service automatically handles all the details such as resource provisioning, load balancing, Auto Scaling, and monitoring.
1. Which is an operational process performed by AWS for data security?A. Advanced Encryption Standard (AES)-256 encryption of data stored on any shared storage deviceB. Decommissioning of storage devices using industry-standard practicesC. Background virus scans of Amazon Elastic Block Store (Amazon EBS) volumes and Amazon EBS snapshotsD. Replication of data across multiple AWS regionsE. Secure wiping of Amazon EBS data when an Amazon EBS volume is unmounted
1. B. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.
2. You have launched a Windows Amazon Elastic Compute Cloud (Amazon EC2) instance and specified an Amazon EC2 key pair for the instance at launch. Which of the following accurately describes how to log in to the instance?A. Use the Amazon EC2 key pair to securely connect to the instance via Secure Shell (SSH).B. Use your AWS Identity and Access Management (IAM) user X.509 certificate to log in to the instance.C. Use the Amazon EC2 key pair to decrypt the administrator password and then securely connect to the instance via Remote Desktop Protocol (RDP) as the administrator.D. A key pair is not needed. Securely connect to the instance via RDP.
2. C. The administrator password is encrypted with the public key of the key pair, and you provide the private key to decrypt the password. Then log in to the instance as the administrator with the decrypted password.
3. A database (DB) security group controls network access to a DB instance that is inside a Virtual Private Cloud (VPC). Which of the following describes the access it allows by default?A. Access from any IP address for the standard ports that the database uses is provided by default.B. Access from any IP address for any port is provided by default in the DB security group.C. No access is provided by default, and any access must be explicitly added with a rule to the DB security group.D. Access for the database connection string is provided by default in the DB security group.
3. C. By default, network access is turned off to a DB Instance. You can specify rules in a security group that allows access from an IP address range, port, or Amazon Elastic Compute Cloud (Amazon EC2) security group.
4. Which encryption algorithm is used by Amazon Simple Storage Service (Amazon S3) to encrypt data at rest with Server-Side Encryption (SSE)?A. Advanced Encryption Standard (AES)-256B. RSA 1024C. RSA 2048D. AES-128
4. A. Amazon S3 SSE uses one of the strongest block ciphers available, 256-bit AES.
5. How many access keys may an AWS Identity and Access Management (IAM) user have active at one time?A. 0B. 1C. 2D. 3
5. C. IAM permits users to have no more than two active access keys at one time.
6. Which of the following is the name of the security model employed by AWS with its customers?A. The shared secret modelB. The shared responsibility modelC. The shared secret key modelD. The secret key responsibility model
6. B. The shared responsibility model is the name of the model employed by AWS with its customers.
7. Which of the following describes the scheme used by an Amazon Redshift cluster leveraging AWS Key Management Service (AWS KMS) to encrypt data-at-rest?A. Amazon Redshift uses a one-tier, key-based architecture for encryption.B. Amazon Redshift uses a two-tier, key-based architecture for encryption.C. Amazon Redshift uses a three-tier, key-based architecture for encryption.D. Amazon Redshift uses a four-tier, key-based architecture for encryption.
7. D. When you choose AWS KMS for key management with Amazon Redshift, there is a four-tier hierarchy of encryption keys. These keys are the master key, a cluster key, a database key, and data encryption keys.
8. Which of the following Elastic Load Balancing options ensures that the load balancer determines which cipher is used for a Secure Sockets Layer (SSL) connection?A. Client Server Cipher SuiteB. Server Cipher OnlyC. First Server CipherD. Server Order Preference
8. D. Elastic Load Balancing supports the Server Order Preference option for negotiating connections between a client and a load balancer. During the SSL connection negotiation process, the client and the load balancer present a list of ciphers and protocols that they each support, in order of preference. By default, the first cipher on the client's list that matches any one of the load balancer's ciphers is selected for the SSL connection. If the load balancer is configured to support Server Order Preference, then the load balancer selects the first cipher in its list that is in the client's list of ciphers. This ensures that the load balancer determines which cipher is used for SSL connection. If you do not enable Server Order Preference, the order of ciphers presented by the client is used to negotiate connections between the client and the load balancer.
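The negotiation described in the explanation can be expressed as a small function. The cipher names in the test are illustrative; in reality the lists come from the TLS handshake:

```python
def negotiate_cipher(client_ciphers, server_ciphers, server_order_preference):
    """Pick the cipher for an SSL connection.

    Without Server Order Preference, the client's ordering wins: take the
    first client cipher the server also supports. With it, the server's
    ordering wins: take the first server cipher the client also supports.
    """
    preferred, supported = (
        (server_ciphers, client_ciphers) if server_order_preference
        else (client_ciphers, server_ciphers)
    )
    for cipher in preferred:
        if cipher in supported:
            return cipher
    return None  # no common cipher: the handshake fails
```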
9. Which technology does Amazon WorkSpaces use to provide data security?A. Secure Sockets Layer (SSL)/Transport Layer Security (TLS)B. Advanced Encryption Standard (AES)-256C. PC-over-IP (PCoIP)D. AES-128
9. C. Amazon WorkSpaces uses PCoIP, which provides an interactive video stream without transmitting actual data.
10. As a Solutions Architect, how should you architect systems on AWS?A. You should architect for least cost.B. You should architect your AWS usage to take advantage of Amazon Simple Storage Service's (Amazon S3) durability.C. You should architect your AWS usage to take advantage of multiple regions and Availability Zones.D. You should architect with Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling to ensure capacity is available when needed.
10. C. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.
11. Which security scheme is used by the AWS Multi-Factor Authentication (AWS MFA) token?A. Time-Based One-Time Password (TOTP)B. Perfect Forward Secrecy (PFC)C. Ephemeral Diffie Hellman (EDH)D. Split-Key Encryption (SKE)
11. A. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the TOTP standard, as described in RFC 6238.
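The TOTP scheme from the answer can be sketched with the Python standard library alone. This follows RFC 6238, which builds on RFC 4226's HOTP: HMAC-SHA-1 over a moving counter, dynamically truncated to six decimal digits:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA-1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, step=30, now=None):
    """RFC 6238 TOTP: HOTP with the counter derived from a 30-second window."""
    counter = int((time.time() if now is None else now) // step)
    return hotp(secret, counter)
```

The virtual MFA device and AWS both derive the same six-digit code from the shared secret and the current time, which is why the codes match without any network round trip.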
12. DynamoDB tables may contain sensitive data that needs to be protected. Which of the following is a way for you to protect DynamoDB table content? (Choose 2 answers)A. DynamoDB encrypts all data server-side by default so nothing is required.B. DynamoDB can store data encrypted with a client-side encryption library solution before storing the data in DynamoDB.C. DynamoDB obfuscates all data stored so encryption is not required.D. DynamoDB can be used with the AWS Key Management Service to encrypt the data before storing the data in DynamoDB.E. DynamoDB should not be used to store sensitive information requiring protection.
12. B, D. Amazon DynamoDB does not have a server-side feature to encrypt items within a table. You need to use a solution outside of DynamoDB such as a client-side library to encrypt items before storing them, or a key management service like AWS Key Management Service to manage keys that are used to encrypt items before storing them in DynamoDB.
13. You have launched an Amazon Linux Elastic Compute Cloud (Amazon EC2) instance into EC2-Classic, and the instance has successfully passed the System Status Check and Instance Status Check. You attempt to securely connect to the instance via Secure Shell (SSH) and receive the response, "WARNING: UNPROTECTED PRIVATE KEY FILE," after which the login fails. Which of the following is the cause of the failed login?A. You are using the wrong private key.B. The permissions for the private key are too insecure for the key to be trusted.C. A security group rule is blocking the connection.D. A security group rule has not been associated with the private key.
13. B. If your private key can be read or written to by anyone but you, then SSH ignores your key.
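The check SSH effectively performs can be sketched as: reject any private key file that is readable or writable by group or others. A mode of 0o600 (or 0o400) passes:

```python
import os
import stat

def key_file_is_protected(path):
    """True if no group/other permission bits are set on the key file."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

# The usual fix for the "UNPROTECTED PRIVATE KEY FILE" warning:
#   chmod 600 my-key.pem
```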
14. Which of the following public identity providers are supported by Amazon Cognito Identity?A. AmazonB. GoogleC. FacebookD. All of the above
14. D. Amazon Cognito Identity supports the public identity providers Amazon, Facebook, and Google, as well as unauthenticated identities.
15. Which feature of AWS is designed to permit calls to the platform from an Amazon Elastic Compute Cloud (Amazon EC2) instance without needing access keys placed on the instance?A. AWS Identity and Access Management (IAM) instance profileB. IAM groupsC. IAM rolesD. Amazon EC2 key pairs
15. A. An instance profile is a container for an IAM role that you can use to pass role information to an Amazon EC2 instance when the instance starts.
16. Which of the following Amazon Virtual Private Cloud (Amazon VPC) elements acts as a stateless firewall?A. Security groupB. Network Access Control List (ACL)C. Network Address Translation (NAT) instanceD. An Amazon VPC endpoint
16. B. A network ACL is an optional layer of security for your Amazon VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your Amazon VPC.
17. Which of the following is the most recent version of the AWS digital signature calculation process?A. Signature Version 1B. Signature Version 2C. Signature Version 3D. Signature Version 4
17. D. The Signature Version 4 signing process describes how to add authentication information to AWS requests. For security, most requests to AWS must be signed with an access key (Access Key ID [AKI] and Secret Access Key [SAK]). If you use the AWS Command Line Interface (AWS CLI) or one of the AWS Software Development Kits (SDKs), those tools automatically sign requests for you based on credentials that you specify when you configure the tools. However, if you make direct HTTP or HTTPS calls to AWS, you must sign the requests yourself.
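The heart of the Signature Version 4 process is deriving a signing key from the Secret Access Key through a chain of HMAC-SHA256 operations over the date, region, and service; that key then signs the canonical request. A sketch of the key derivation (the date, region, and service values in the comments are illustrative):

```python
import hashlib
import hmac

def _sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the Signature Version 4 signing key for one date/region/service."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)  # e.g. "20150830"
    k_region = _sign(k_date, region)      # e.g. "us-east-1"
    k_service = _sign(k_region, service)  # e.g. "iam"
    return _sign(k_service, "aws4_request")
```

Because the derived key is scoped to a single date, region, and service, a leaked signature cannot be replayed elsewhere, which is part of why direct HTTP callers must re-derive it per request.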
18. Which of the following is the name of the feature within Amazon Virtual Private Cloud (Amazon VPC) that allows you to launch Amazon Elastic Compute Cloud (Amazon EC2) instances on hardware dedicated to a single customer?A. Amazon VPC-based tenancyB. Dedicated tenancyC. Default tenancyD. Host-based tenancy
18. B. Dedicated instances are physically isolated at the host hardware level from your instances that aren't dedicated instances and from instances that belong to other AWS accounts.
19. Which of the following describes how Amazon Elastic MapReduce (Amazon EMR) protects access to the cluster?A. The master node and the slave nodes are launched into an Amazon Virtual Private Cloud (Amazon VPC).B. The master node supports a Virtual Private Network (VPN) connection from the key specified at cluster launch.C. The master node is launched into a security group that allows Secure Shell (SSH) and service access, while the slave nodes are launched into a separate security group that only permits communication with the master node.D. The master node and slave nodes are launched into a security group that allows SSH and service access.
19. C. Amazon EMR starts your instances in two Amazon Elastic Compute Cloud (Amazon EC2) security groups, one for the master and another for the slaves. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to securely connect to the instances via SSH using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to prevent access from external sources, including Amazon EC2 instances belonging to other customers. Because these are security groups in your account, you can reconfigure them using the standard Amazon EC2 tools or dashboard.
20. To help prevent data loss due to the failure of any single hardware component, Amazon Elastic Block Storage (Amazon EBS) automatically replicates EBS volume data to which of the following?A. Amazon EBS replicates EBS volume data within the same Availability Zone in a region.B. Amazon EBS replicates EBS volume data across other Availability Zones within the same region.C. Amazon EBS replicates EBS volume data across Availability Zones in the same region and in Availability Zones in one other region.D. Amazon EBS replicates EBS volume data across Availability Zones in the same region and in Availability Zones in every other region.
20. A. When you create an Amazon EBS volume in an Availability Zone, it is automatically replicated within that Availability Zone to prevent data loss due to failure of any single hardware component. An EBS Snapshot creates a copy of an EBS volume to Amazon S3 so that copies of the volume can reside in different Availability Zones within a region.
1. AWS communicates with customers regarding its security and control environment through a variety of different mechanisms. Which of the following are valid mechanisms? (Choose 3 answers)A. Obtaining industry certifications and independent third-party attestationsB. Publishing information about security and AWS control practices via the website, whitepapers, and blogsC. Directly providing customers with certificates, reports, and other documentation (under NDA in some cases)D. Allowing customers' auditors direct access to AWS data centers, infrastructure, and senior staff
1. A, B, C. Answers A through C describe valid mechanisms that AWS uses to communicate with customers regarding its security and control environment. AWS does not allow customers' auditors direct access to AWS data centers, infrastructure, or staff.
2. Which of the following statements is true when it comes to the AWS shared responsibility model?A. The shared responsibility model is limited to security considerations only; it does not extend to IT controls.B. The shared responsibility model is only applicable for customers who want to be compliant with SOC 1 Type II.C. The shared responsibility model is not just limited to security considerations; it also extends to IT controls.D. The shared responsibility model is only applicable for customers who want to be compliant with ISO 27001.
2. C. The shared responsibility model can include IT controls, and it is not just limited to security considerations. Therefore, answer C is correct.
3. AWS provides IT control information to customers in which of the following ways?A. By using specific control definitions or through general control standard complianceB. By using specific control definitions or through SAS 70C. By using general control standard compliance and by complying with ISO 27001D. By complying with ISO 27001 and SOC 1 Type II
3. A. AWS provides IT control information to customers through either specific control definitions or general control standard compliance.
4. Which of the following is a valid report, certification, or third-party attestation for AWS? (Choose 3 answers)A. SOC 1B. PCI DSS Level 1C. SOC 4D. ISO 27001
4. A, B, D. SOC 1, PCI DSS Level 1, and ISO 27001 are all valid reports, certifications, or third-party attestations for AWS. There is no such thing as a SOC 4 report, therefore answer C is incorrect.
6. Which of the following statements is true when it comes to the risk and compliance advantages of the AWS environment?A. Workloads must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations.B. The critical components of a workload must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations, but the non-critical components do not.C. The non-critical components of a workload must be moved entirely into the AWS Cloud in order to be compliant with various certifications and third-party attestations, but the critical components do not.D. Few, many, or all components of a workload can be moved to the AWS Cloud, but it is the customer's responsibility to ensure that their entire workload remains compliant with various certifications and third-party attestations.
6. D. Any number of components of a workload can be moved into AWS, but it is the customer's responsibility to ensure that the entire workload remains compliant with various certifications and third-party attestations.
7. Which of the following statements best describes an Availability Zone?A. Each Availability Zone consists of a single discrete data center with redundant power and networking/connectivity.B. Each Availability Zone consists of multiple discrete data centers with redundant power and networking/connectivity.C. Each Availability Zone consists of multiple discrete regions, each with a single data center with redundant power and networking/connectivity.D. Each Availability Zone consists of multiple discrete data centers with shared power and redundant networking/connectivity.
7. B. An Availability Zone consists of multiple discrete data centers, each with their own redundant power and networking/connectivity, therefore answer B is correct.
8. With regard to vulnerability scans and threat assessments of the AWS platform, which of the following statements are true? (Choose 2 answers)A. AWS regularly performs scans of public-facing endpoint IP addresses for vulnerabilities.B. Scans performed by AWS include customer instances.C. AWS security notifies the appropriate parties to remediate any identified vulnerabilities.D. Customers can perform their own scans at any time without advance notice.
8. A, C. AWS regularly scans public-facing, non-customer endpoint IP addresses and notifies appropriate parties. AWS does not scan customer instances, and customers must request the ability to perform their own scans in advance, therefore answers A and C are correct.
9. Which of the following best describes the risk and compliance communication responsibilities of customers to AWS?A. AWS and customers both communicate their security and control environment information to each other at all times.B. AWS publishes information about the AWS security and control practices online, and directly to customers under NDA. Customers do not need to communicate their use and configurations to AWS.C. Customers communicate their use and configurations to AWS at all times. AWS does not communicate AWS security and control practices to customers for security reasons.D. Both customers and AWS keep their security and control practices entirely confidential and do not share them in order to ensure the greatest security for all parties.
9. B. AWS publishes information publicly online and directly to customers under NDA, but customers are not required to share their use and configuration information with AWS, therefore answer B is correct.
10. When it comes to risk management, which of the following is true?A. AWS does not develop a strategic business plan; risk management and mitigation is entirely the responsibility of the customer.B. AWS has developed a strategic business plan to identify any risks and implemented controls to mitigate or manage those risks. Customers do not need to develop and maintain their own risk management plans.C. AWS has developed a strategic business plan to identify any risks and has implemented controls to mitigate or manage those risks. Customers should also develop and maintain their own risk management plans to ensure they are compliant with any relevant controls and certifications.D. Neither AWS nor the customer needs to worry about risk management, so no plan is needed from either party.
10. C. AWS has developed a strategic business plan, and customers should also develop and maintain their own risk management plans, therefore answer C is correct.
11. The AWS control environment is in place for the secure delivery of AWS Cloud service offerings. Which of the following does the collective control environment NOT explicitly include?A. PeopleB. EnergyC. TechnologyD. Processes
11. B. The collective control environment includes people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of AWS control framework. Energy is not a discretely identified part of the control environment, therefore B is the correct answer.
12. Who is responsible for the configuration of security groups in an AWS environment?A. The customer and AWS are both jointly responsible for ensuring that security groups are correctly and securely configured.B. AWS is responsible for ensuring that all security groups are correctly and securely configured. Customers do not need to worry about security group configuration.C. Neither AWS nor the customer is responsible for the configuration of security groups; security groups are intelligently and automatically configured using traffic heuristics.D. AWS provides the security group functionality as a service, but the customer is responsible for correctly and securely configuring their own security groups.
12. D. Customers are responsible for ensuring all of their security group configurations are appropriate for their own applications, therefore answer D is correct.
13. Which of the following is NOT a recommended approach for customers trying to achieve strong compliance and governance over an entire IT control environment?A. Take a holistic approach: review information available from AWS together with all other information, and document all compliance requirements.B. Verify that all control objectives are met and all key controls are designed and operating effectively.C. Implement generic control objectives that are not specifically designed to meet their organization's compliance requirements.D. Identify and document controls owned by all third parties.
13. C. Customers should ensure that they implement control objectives that are designed to meet their organization's own unique compliance requirements, therefore answer C is correct.
1. When designing a loosely coupled system, which AWS services provide an intermediate durable storage layer between components? (Choose 2 answers)A. Amazon CloudFrontB. Amazon KinesisC. Amazon Route 53D. AWS CloudFormationE. Amazon Simple Queue Service (Amazon SQS)
1. B, E. Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data. Amazon SQS is a fast, reliable, scalable, and fully managed message queuing service. Amazon SQS makes it simple and cost-effective to decouple the components of a cloud application.
2. Which of the following options will help increase the availability of a web server farm? (Choose 2 answers)A. Use Amazon CloudFront to deliver content to the end users with low latency and high data transfer speeds.B. Launch the web server instances across multiple Availability Zones.C. Leverage Auto Scaling to recover from failed instances.D. Deploy the instances in an Amazon Virtual Private Cloud (Amazon VPC).E. Add more CPU and RAM to each instance.
2. B, C. Launching instances across multiple Availability Zones helps ensure the application is isolated from failures in a single Availability Zone, allowing the application to achieve higher availability. Whether you are running one Amazon EC2 instance or thousands, you can use Auto Scaling to detect impaired Amazon EC2 instances and unhealthy applications and replace the instances without your intervention. This ensures that your application is getting the compute capacity that you expect, thereby maintaining your availability.
3. Which of the following AWS Cloud services are designed according to the Multi-AZ principle? (Choose 2 answers)A. Amazon DynamoDBB. Amazon ElastiCacheC. Elastic Load BalancingD. Amazon Virtual Private Cloud (Amazon VPC)E. Amazon Simple Storage Service (Amazon S3)
3. A, E. Amazon DynamoDB runs across AWS proven, high-availability data centers. The service replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities and multiple devices in each facility. While Elastic Load Balancing and Amazon ElastiCache can be deployed across multiple Availability Zones, you must explicitly take such steps when creating them.
4. Your e-commerce site was designed to be stateless and currently runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. In an effort to control cost and increase availability, you have a requirement to scale the fleet based on CPU and network utilization to match the demand curve for your site. What services do you need to meet this requirement? (Choose 2 answers)A. Amazon CloudWatchB. Amazon DynamoDBC. Elastic Load BalancingD. Auto ScalingE. Amazon Simple Storage Service (Amazon S3)
4. A, D. Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to provision Amazon EC2 capacity manually in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average CPU and network utilization of your Amazon EC2 fleet, monitored in Amazon CloudWatch, is high; similarly, you can set a condition to remove instances in the same increments when CPU and network utilization are low.
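The scaling condition described above can be sketched as a pure decision function: scale out when average CPU is high, scale in when it is low, clamped to the group's bounds. The thresholds, step size, and group limits here are illustrative assumptions, not AWS defaults:

```python
def desired_capacity(current, avg_cpu, min_size=2, max_size=10,
                     high=70.0, low=30.0, step=2):
    """Return the new instance count given the CloudWatch-reported average CPU."""
    if avg_cpu > high:
        return min(current + step, max_size)  # scale out, capped at max_size
    if avg_cpu < low:
        return max(current - step, min_size)  # scale in, floored at min_size
    return current  # within the comfort band: no change
```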
5. Your compliance department has mandated a new requirement that all data on Amazon Elastic Block Storage (Amazon EBS) volumes must be encrypted. Which of the following steps would you follow for your existing Amazon EBS volumes to comply with the new requirement? (Choose 3 answers)A. Move the existing Amazon EBS volume into an Amazon Virtual Private Cloud (Amazon VPC).B. Create a new Amazon EBS volume with encryption enabled.C. Modify the existing Amazon EBS volume properties to enable encryption.D. Attach an Amazon EBS volume with encryption enabled to the instance that hosts the data, then migrate the data to the encryption-enabled Amazon EBS volume.E. Copy the data from the unencrypted Amazon EBS volume to the Amazon EBS volume with encryption enabled.
5. B, D, E. There is no direct way to encrypt an existing unencrypted volume. However, you can migrate data between encrypted and unencrypted volumes.
6. When building a Distributed Denial of Service (DDoS)-resilient architecture, how does Amazon Virtual Private Cloud (Amazon VPC) help minimize the attack surface area? (Choose 3 answers)A. Reduces the number of necessary Internet entry pointsB. Combines end user traffic with management trafficC. Obfuscates necessary Internet entry points to the level that untrusted end users cannot access themD. Eliminates non-critical Internet entry points from the architectureE. Scales the network to absorb DDoS attacks
6. A, C, D. The attack surface is composed of the different Internet entry points that allow access to your application. The strategy to minimize the attack surface area is to (a) reduce the number of necessary Internet entry points, (b) eliminate non-critical Internet entry points, (c) separate end user traffic from management traffic, (d) obfuscate necessary Internet entry points to the level that untrusted end users cannot access them, and (e) decouple Internet entry points to minimize the effects of attacks. This strategy can be accomplished with Amazon VPC.
7. Your e-commerce application provides daily and ad hoc reporting to various business units on customer purchases. This is resulting in an extremely high level of read traffic to your MySQL Amazon Relational Database Service (Amazon RDS) instance. What can you do to scale up read traffic without impacting your database's performance?A. Increase the allocated storage for the Amazon RDS instance.B. Modify the Amazon RDS instance to be a Multi-AZ deployment.C. Create a read replica for an Amazon RDS instance.D. Change the Amazon RDS instance DB engine version.
7. C. Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS instances. This replication feature makes it easy to scale out elastically beyond the capacity constraints of a single Amazon RDS instance for read-heavy database workloads. You can create one or more replicas of a given source Amazon RDS instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
8. Your website is hosted on a fleet of web servers that are load balanced across multiple Availability Zones using an Elastic Load Balancer (ELB). What type of record set in Amazon Route 53 can be used to point myawesomeapp.com to your website?A. Type A Alias resource record setB. MX record setC. TXT record setD. CNAME record set
8. A. An alias resource record set can point to an ELB. You cannot create a CNAME record at the top node of a Domain Name Service (DNS) namespace, also known as the zone apex, as is the case in this example. Alias resource record sets can save you time because Amazon Route 53 automatically recognizes changes in the resource record sets to which the alias resource record set refers.
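The alias record described above takes the shape of a Route 53 change batch. Below is a hedged sketch of that request body; the hosted zone ID and ELB DNS name are placeholders, not real values.

```python
import json

# Alias A record at the zone apex pointing to an ELB. Unlike a CNAME,
# an alias record is allowed at the apex (myawesomeapp.com itself).
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "myawesomeapp.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder ELB hosted zone ID
                "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
print(json.dumps(change_batch, indent=2))
```

Note that the record `Type` is `A` even though it points at a DNS name; the alias mechanism resolves the target on Route 53's side.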
9. You need a secure way to distribute your AWS credentials to an application running on Amazon Elastic Compute Cloud (Amazon EC2) instances in order to access supplementary AWS Cloud services. What approach provides your application access to use short-term credentials for signing requests while protecting those credentials from other users?A. Add your credentials to the UserData parameter of each Amazon EC2 instance.B. Use a configuration file to store your access and secret keys on the Amazon EC2 instances.C. Specify your access and secret keys directly in your application.D. Provision the Amazon EC2 instances with an instance profile that has the appropriate privileges.
9. D. An instance profile is a container for an AWS Identity and Access Management (IAM) role that you can use to pass role information to an Amazon EC2 instance when the instance starts. The IAM role should have a policy attached that only allows access to the AWS Cloud services necessary to perform its function.
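The "policy attached that only allows access to the AWS Cloud services necessary" can be sketched as a least-privilege IAM policy document. This is an illustrative sketch; the bucket name is hypothetical.

```python
import json

# Least-privilege policy for the role inside the instance profile:
# the application may only read objects from one (hypothetical) bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}
document = json.dumps(policy)
```

With this role attached via the instance profile, the application retrieves automatically rotated short-term credentials from the instance metadata rather than storing long-term keys anywhere on the instance.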
10. You are running a suite of microservices on AWS Lambda that provide the business logic and access to data stored in Amazon DynamoDB for your task management system. You need to create well-defined RESTful Application Program Interfaces (APIs) for these microservices that will scale with traffic to support a new mobile application. What AWS Cloud service can you use to create the necessary RESTful APIs?A. Amazon KinesisB. Amazon API GatewayC. Amazon CognitoD. Amazon Elastic Compute Cloud (Amazon EC2) Container Registry
10. B. Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your code running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.
11. Your WordPress website is hosted on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances that leverage Auto Scaling to provide high availability. To ensure that the content of the WordPress site is sustained through scale up and scale down events, you need a common file system that is shared between more than one Amazon EC2 instance. Which AWS Cloud service can meet this requirement?A. Amazon CloudFrontB. Amazon ElastiCacheC. Amazon Elastic File System (Amazon EFS)D. Amazon Elastic Beanstalk
11. C. Amazon EFS is a file storage service for Amazon EC2 instances. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for the content of the WordPress site running on more than one instance.
12. You are changing your application to move session state information off the individual Amazon Elastic Compute Cloud (Amazon EC2) instances to take advantage of the elasticity and cost benefits provided by Auto Scaling. Which of the following AWS Cloud services is best suited as an alternative for storing session state information?A. Amazon DynamoDBB. Amazon RedshiftC. Amazon Storage GatewayD. Amazon Kinesis
12. A. Amazon DynamoDB is a NoSQL data store that is a great choice as an alternative due to its scalability, high availability, and durability characteristics. Many platforms provide open-source, drop-in replacement libraries that allow you to store native sessions in Amazon DynamoDB. Amazon DynamoDB is a great candidate for a session storage solution in a shared-nothing, distributed architecture.
13. A media sharing application is producing a very high volume of data in a very short period of time. Your back-end services are unable to manage the large volume of transactions. What option provides a way to manage the flow of transactions to your back-end services?A. Store the inbound transactions in an Amazon Relational Database Service (Amazon RDS) instance so that your back-end services can retrieve them as time permits.B. Use an Amazon Simple Queue Service (Amazon SQS) queue to buffer the inbound transactions.C. Use an Amazon Simple Notification Service (Amazon SNS) topic to buffer the inbound transactions.D. Store the inbound transactions in an Amazon Elastic MapReduce (Amazon EMR) cluster so that your back-end services can retrieve them as time permits.
13. B. Amazon SQS is a fast, reliable, scalable, and fully managed message queuing service. Amazon SQS should be used to decouple the large volume of inbound transactions, allowing the back-end services to manage the level of throughput without losing messages.
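Amazon SQS itself requires an AWS account, but the buffering pattern it provides can be illustrated locally with a plain in-memory queue as a stand-in: producers enqueue a burst of transactions, and the back-end consumer drains them in small batches at its own pace without dropping any.

```python
from queue import Queue

# Stand-in for the SQS queue (illustrative only).
buffer = Queue()

# Producer side: a sudden burst of 1,000 inbound transactions.
for i in range(1000):
    buffer.put({"txn_id": i})

# Consumer side: the back-end service processes small batches, much
# like an SQS ReceiveMessage call with MaxNumberOfMessages=10.
processed = []
while not buffer.empty():
    batch = [buffer.get() for _ in range(min(10, buffer.qsize()))]
    processed.extend(batch)

assert len(processed) == 1000  # nothing lost despite the burst
```

The key property is that producer and consumer throughput are decoupled: the queue absorbs the spike, and the back end works through the backlog without losing messages.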
14. Which of the following are best practices for managing AWS Identity and Access Management (IAM) user access keys? (Choose 3 answers)A. Embed access keys directly into application code.B. Use different access keys for different applications.C. Rotate access keys periodically.D. Keep unused access keys for an indefinite period of time.E. Configure Multi-Factor Authentication (MFA) for your most sensitive operations.
14. B, C, E. You should protect AWS user access keys like you would your credit card numbers or any other sensitive secret. Use different access keys for different applications so that you can isolate the permissions and revoke the access keys for individual applications if an access key is exposed. Remember to change access keys on a regular basis. For increased security, it is recommended to configure MFA for any sensitive operations. Remember to remove any IAM users that are no longer needed so that their access to your resources is removed. Avoid embedding access keys directly in application code.
15. You need to implement a service to scan Application Program Interface (API) calls and related events' history in your AWS account. This service will detect things like unused permissions, overuse of privileged accounts, and anomalous logins. Which of the following AWS Cloud services can be leveraged to implement this service? (Choose 3 answers)A. AWS CloudTrailB. Amazon Simple Storage Service (Amazon S3)C. Amazon Route 53D. Auto ScalingE. AWS Lambda
15. A, B, E. You can enable AWS CloudTrail in your AWS account to get logs of API calls and related events' history in your account. AWS CloudTrail records all of the API access events as objects in an Amazon S3 bucket that you specify at the time you enable AWS CloudTrail. You can take advantage of Amazon S3's bucket notification feature by directing Amazon S3 to publish object-created events to AWS Lambda. Whenever AWS CloudTrail writes logs to your Amazon S3 bucket, Amazon S3 can then invoke your AWS Lambda function by passing the Amazon S3 object-created event as a parameter. The AWS Lambda function code can read the log object and process the access records logged by AWS CloudTrail.
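The Lambda side of the pipeline described above can be sketched as a handler that receives the S3 object-created event and extracts the bucket and key of the CloudTrail log object. The event shape follows the standard S3 notification format; the bucket name and key below are hypothetical.

```python
# Hedged sketch of the Lambda handler: S3 invokes it with an
# object-created event whenever CloudTrail writes a new log object.
def handler(event, context=None):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # A real function would now fetch the object (e.g. with boto3)
    # and scan the CloudTrail records for anomalous activity.
    return {"bucket": bucket, "key": key}

# Abbreviated sample S3 object-created event (hypothetical names).
sample_event = {
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "my-cloudtrail-logs"},
            "object": {"key": "AWSLogs/123456789012/CloudTrail/log.json.gz"},
        },
    }]
}
```

Running `handler(sample_event)` returns the bucket/key pair the function would go on to analyze.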
16. Government regulations require that your company maintain all correspondence for a period of seven years for compliance reasons. What is the best storage mechanism to keep this data secure in a cost-effective manner?A. Amazon S3B. Amazon GlacierC. Amazon EBSD. Amazon EFS
16. B. Amazon Glacier enables businesses and organizations to retain data for months, years, or decades, easily and cost effectively. With Amazon Glacier, customers can retain more of their data for future analysis or reference, and they can focus on their business instead of operating and maintaining their storage infrastructure. Customers can also use Amazon Glacier Vault Lock to meet regulatory and compliance archiving requirements.
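The Vault Lock mechanism mentioned above is driven by a lock policy document. Below is a hedged sketch of a policy that denies archive deletion until an archive is at least seven years (2,555 days) old; the region, account ID, and vault name are placeholders.

```python
# Vault Lock policy sketch: once locked, even administrators cannot
# delete archives younger than seven years.
vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-delete-for-seven-years",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/compliance-vault",
        "Condition": {
            # 7 years x 365 days = 2,555 days
            "NumericLessThan": {"glacier:ArchiveAgeInDays": "2555"}
        },
    }],
}
```

Unlike an ordinary vault access policy, a Vault Lock policy becomes immutable once the lock completes, which is what makes it suitable for regulatory retention requirements.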
17. Your company provides media content via the Internet to customers through a paid subscription model. You leverage Amazon CloudFront to distribute content to your customers with low latency. What approach can you use to serve this private content securely to your paid subscribers?A. Provide signed Amazon CloudFront URLs to authenticated users to access the paid content.B. Use HTTPS requests to ensure that your objects are encrypted when Amazon CloudFront serves them to viewers.C. Configure Amazon CloudFront to compress the media files automatically for paid subscribers.D. Use the Amazon CloudFront geo restriction feature to restrict access to all of the paid subscription media at the country level.
17. A. Many companies that distribute content via the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, such as users who have paid a fee. To serve this private content securely using Amazon CloudFront, you can require that users access your private content by using special Amazon CloudFront-signed URLs or signed cookies.
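The signed URL mechanism rests on a policy statement that is serialized, signed with your CloudFront key pair's RSA private key (signing omitted here), and appended to the URL. This sketch shows the custom-policy statement and the URL-safe base64 substitutions CloudFront uses; the distribution domain, object path, and expiry timestamp are hypothetical.

```python
import base64
import json

# Custom policy: grant access to one object until a fixed expiry time.
policy = {
    "Statement": [{
        "Resource": "https://d111111abcdef8.cloudfront.net/private/video.mp4",
        "Condition": {
            "DateLessThan": {"AWS:EpochTime": 1767225600}  # expiry (epoch seconds)
        },
    }]
}

# CloudFront requires base64 with '+' -> '-', '=' -> '_', '/' -> '~'
# so the encoded policy is safe to place in a URL query string.
encoded = base64.b64encode(json.dumps(policy).encode())
cloudfront_safe = (encoded.decode()
                   .replace("+", "-").replace("=", "_").replace("/", "~"))
```

The resulting string becomes the `Policy` query parameter of the signed URL, alongside the `Signature` and `Key-Pair-Id` parameters.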
18. Your company provides transcoding services for amateur producers to format their short films to a variety of video formats. Which service provides the best option for storing the videos?A. Amazon GlacierB. Amazon Simple Storage Service (Amazon S3)C. Amazon Relational Database Service (Amazon RDS)D. AWS Storage Gateway
18. B. Amazon S3 provides highly durable and available storage for a variety of content. Amazon S3 can be used as a big data object store for all of the videos. Amazon S3's low cost combined with its design for durability of 99.999999999% and for up to 99.99% availability makes it a great storage choice for transcoding services.
19. A week before Cyber Monday last year, your corporate data center experienced a failed air conditioning unit that caused flooding into the server racks. The resulting outage cost your company significant revenue. Your CIO mandated a move to the cloud, but he is still concerned about catastrophic failures in a data center. What can you do to alleviate his concerns?A. Distribute the architecture across multiple Availability Zones.B. Use an Amazon Virtual Private Cloud (Amazon VPC) with subnets.C. Launch the compute for the processing services in a placement group.D. Purchase Reserved Instances for the processing services instances.
19. A. An Availability Zone consists of one or more physical data centers. Availability Zones within a region provide inexpensive, low-latency network connectivity to other zones in the same region. This allows you to distribute your application across data centers. In the event of a catastrophic failure in a data center, the application will continue to handle requests.
20. Your Amazon Virtual Private Cloud (Amazon VPC) includes multiple private subnets. The instances in these private subnets must access third-party payment Application Program Interfaces (APIs) over the Internet. Which option will provide highly available Internet access to the instances in the private subnets?A. Create an AWS Storage Gateway in each Availability Zone and configure your routing to ensure that resources use the AWS Storage Gateway in the same Availability Zone.B. Create a customer gateway in each Availability Zone and configure your routing to ensure that resources use the customer gateway in the same Availability Zone.C. Create a Network Address Translation (NAT) gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.D. Create a NAT gateway in one Availability Zone and configure your routing to ensure that resources use that NAT gateway in all the Availability Zones.
20. C. You can use a NAT gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. If you have resources in multiple Availability Zones and they share one NAT gateway, resources in the other Availability Zones lose Internet access in the event that the NAT gateway's Availability Zone is down. To create an Availability Zone independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
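The Availability Zone independent routing described above can be sketched as a mapping from each AZ to its own NAT gateway, with every private subnet's default route pointing at the gateway in the same AZ. All IDs and AZ names here are placeholders.

```python
# One NAT gateway per Availability Zone (placeholder IDs).
nat_gateways = {"us-east-1a": "nat-0aaa", "us-east-1b": "nat-0bbb"}

def default_route(az):
    """Default (0.0.0.0/0) route entry for a private subnet in the given AZ."""
    return {"DestinationCidrBlock": "0.0.0.0/0",
            "NatGatewayId": nat_gateways[az]}

# Each AZ's subnets route through their local gateway, so a NAT gateway
# failure in us-east-1a does not affect Internet access from us-east-1b.
route_a = default_route("us-east-1a")
route_b = default_route("us-east-1b")
```

The design choice is redundancy by isolation: rather than sharing one gateway (answer D), each AZ fails independently, which is what makes the architecture highly available.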