AWS Certified Solutions Architect Professional (+ Review) Flashcards

IAM: Enables AWS customers to manage users and permissions

AWS Identity and Access Management (IAM)

IAM: Groups can't

be nested at all

IAM: AWS API access credentials

Console passwords, Access keys & Signing certificates

IAM: The organization wants each user to be able to change their password but not their access keys.

Root account owner can set the policy from the IAM console under the password policy screen

IAM: A new policy which will change the access of an IAM user

Use the IAM groups and add users as per their role to different groups and apply policy to group

IAM: Either use frequently rotated passwords or one-time access credentials in addition to username/password

Configure MFA on the root account and for privileged IAM users, Assign IAM users and groups configured with policies granting least privilege access

IAM: preparing for a security assessment of your use of AWS

Configure MFA on the root account and for privileged IAM users, Assign IAM users and groups configured with policies granting least privilege access

IAM: company wants their EC2 instances in the new region to have the same privileges

Assign the existing IAM role to the Amazon EC2 instances in the new region

IAM: limitations

100 groups per AWS account, 5000 IAM users, 250 roles

IAM: a group is

a collection of users

IAM: set up AWS access for each department

Create IAM groups based on the permission and assign IAM users to the groups

IAM: EC2 instances best practices

latest patch of OS, Disable the password-based login (use keys), revoke the access rights of the individual user when they are not required to connect to EC2.

IAM: access to various AWS services

Assign an IAM role to the Amazon EC2 instance

IAM: Admin control of resources

Enable IAM cross-account access for all corporate IT administrators in each child account, Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account

IAM: an application deployed on an EC2 instance to write data to a DynamoDB table

Create an IAM Role that allows write access to the DynamoDB table, Add an IAM Role to a running EC2 instance

IAM:

...

IAM: CloudFormation to deploy a three tier web application, DynamoDB for storage, allow the application instance access to the DynamoDB tables without exposing API credentials

Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance

IAM: Before generating the URL the application should verify the existence of the file in S3

Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 Instance metadata

IAM: third-party SaaS application needs access

Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application

IAM: security team would like to be able to delegate user authorization to the individual development teams but independently apply restrictions to the users' permissions based on factors such as the user's device and location

Add additional IAM policies to the application IAM roles that deny user privileges based on the information security policy (deny rules based on location, device, etc.; the more restrictive explicit deny wins)

IAM: an Auto Scaling group whose Instances need to insert a custom metric into CloudWatch

Create an IAM role with the PutMetricData permission and modify the Auto Scaling launch configuration to launch instances with that role
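
A minimal sketch of what an instance can then do (namespace and metric name are assumptions; boto3 picks up the role credentials from instance metadata, so no keys are stored on the box):

    import boto3

    # The instance role must allow cloudwatch:PutMetricData.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_data(
        Namespace="MyApp",                  # hypothetical namespace
        MetricData=[{
            "MetricName": "QueueDepth",     # hypothetical custom metric
            "Value": 42.0,
            "Unit": "Count",
        }],
    )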

IAM: each of 10 IAM users, all in the same group, should have access to a separate DynamoDB table

Create a DynamoDB table with the same name as the IAM user name and define the policy rule which grants access based on the DynamoDB ARN using a variable
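
A hedged sketch of such a policy (account ID, region, and group name are placeholders); the ${aws:username} variable lets one group policy grant each user access only to the table that bears their username:

    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/${aws:username}",
        }],
    }

    iam = boto3.client("iam")
    iam.put_group_policy(
        GroupName="dynamodb-users",              # hypothetical group
        PolicyName="per-user-table-access",
        PolicyDocument=json.dumps(policy),
    )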

IAM: each IAM user accesses the IAM console only within the organization and not from outside

Create an IAM policy with a condition which denies access when the IP address range is not from the organization
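
A sketch of that condition (the CIDR below is a placeholder for the organization's public range); the NotIpAddress test on aws:SourceIp denies any request that does not come from the corporate network:

    # Deny everything when the request does not originate from the office range.
    deny_outside_org = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}
            },
        }],
    }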

IAM: policy evaluation logic

By default, all requests are denied, An explicit allow overrides default deny

IAM: Move FTP servers with 250 customers to AWS, to upload and download large graphic files; scalable, low cost, maintain customer privacy

Use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM Users in a Group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' Policy variable.

IAM: two permission types used by AWS

User-based and Resource-based

IAM: policy used for cross account access

Trust policy, Permissions Policy

VPC: VPC consisting of an Elastic Load Balancer (ELB), web servers, application servers and a database, only accept traffic from predefined customer IP addresses

Configure your web servers to filter traffic based on the ELB's "X-Forwarded-For" header, Configure ELB security groups to allow traffic from your customers' IPs and implicitly deny all other traffic (note: a deny-all rule doesn't work for stateless NACLs)

VPC: public and private subnets using the VPC wizard

The VPC wizard associates the main route table with the private subnet and a custom route table with the public subnet

VPC: user wants to change the size of the VPC

Old Answer - It is not possible to change the size of the VPC once it has been created (NOTE - You can now increase the VPC size)

VPC: user created a VPC with public and private subnets; the private subnet uses CIDR 20.0.1.0/24 and the public subnet uses CIDR 20.0.0.0/24, to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306), with a NAT instance; for NAT security use

For Inbound allow Source: 20.0.1.0/24 on port 80, For Outbound allow Destination: 0.0.0.0/0 on port 80, For Outbound allow Destination: 0.0.0.0/0 on port 443 (not Inbound allow Source: 20.0.0.0/24 on port 80)

VPC: VPC with CIDR 20.0.0.0/16. created one subnet with CIDR 20.0.0.0/16 by mistake, trying to create another subnet of CIDR 20.0.0.1/24

It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created

VPC: A user has created a public subnet with VPC and launched an EC2 instance within it

It will not allow the user to delete the subnet until the instances are terminated

VPC: web tier will use an Auto Scaling group across multiple Availability Zones (AZs). The database will use Multi-AZ RDS MySQL and should not be publicly accessible.

4 subnets required (2 public subnets for web instances in multiple AZs and 2 private subnets for RDS Multi-AZ)

VPC: In regards to VPC

You can associate multiple subnets with the same Route Table

VPC: for business travelers who must be able to connect to it from their hotel rooms, cafes, public Wi-Fi hotspots, and elsewhere on the Internet, without exposing the application to the Internet

Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it

VPC: development environment needs a source code repository, a project management system with a MySQL database, resources for performing the builds, and a storage location for QA to pick up builds from; concerns are cost, security and how to integrate with existing on-premises applications such as their LDAP and email servers, which cannot move off-premises, and the goal is to transition to a continuous integration model of development on AWS with multiple builds triggered within the same day

A VPC with a VPN Gateway back to their on-premises servers, Amazon EC2 for the source-code repository with attached Amazon EBS volumes, Amazon EC2 and Amazon RDS MySQL for the project management system, EIPs for the source code repository and project management system, SQS for a build queue, An Auto Scaling group of EC2 instances for performing builds and S3 for the build output (VPN gateway is required for secure connectivity. SQS for build queue and EC2 for builds)

VPC: CIDR 20.0.0.0/16 (public and private), to host a web server in the public subnet on port 80 (CIDR 20.0.0.0/24) and a DB server in the private subnet on port 3306 (CIDR 20.0.1.0/24); to configure a security group of the NAT instance...

For Outbound allow Destination: 0.0.0.0/0 on port 80 & 443, For Inbound allow Source: 20.0.1.0/24 on port 80 (allow inbound HTTP traffic from servers in the private subnet)

VPC: numerous port scans coming in from a specific IP address block

Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block

VPC: in a VPC, establish separate security zones and enforce network traffic rules across different zones to limit instance communications...

Configure instances to use pre-set IP addresses with an IP address range for every security zone. Configure NACLs to explicitly allow or deny communication between the different IP address ranges, as required for interzone communication

VPC: in a VPC, can you configure the security groups for these instances to only allow the ICMP ping to pass from the monitoring instance to the application instance and nothing else...

Yes. The security group for the monitoring instance needs to allow outbound ICMP and the application instance's security group needs to allow inbound ICMP (security groups are stateful, so just allow outbound ICMP from the monitoring instance and inbound ICMP on the monitored instance)

VPC: configure instances in the same subnet to communicate with each other

Configure the security group itself as the source and allow traffic on all the protocols and ports

VPC: highly available bastion host ...

Configure the bastion instance in an Auto Scaling group. Specify the Auto Scaling group to include multiple AZs but have a min-size of 1 and max-size of 1

EC2: instance types are available as Amazon EBS-backed

General purpose (lowest cost) T2, Compute-optimized C4

EC2: A t2.medium EC2 instance type must be launched with what type of AMI

An Amazon EBS-backed Hardware Virtual Machine AMI

EC2: write throughput to the database needs to be increased

Use an array of EBS volumes (Striping to increase throughput)

EC2: database requires random read IO disk performance of up to 100,000 IOPS at a 4KB block size per node

High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage

EC2: some machines are failing to successfully download some software updates, but not all of their updates, within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration and you are able to access them manually using a web browser on the instances

You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time, or you are running the proxy on a sufficiently sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance

EC2: application requires disk performance of at least 100,000 IOPS; in addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3TB...

Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous block-level replication to an identically configured instance in us-east-1b (i.e., another AZ)

EC2: user is running one instance for only 3 hours every day. The user wants to save some cost with the instance...

The user should not use RI; instead only go with the on-demand pricing (seems question before the introduction of the Scheduled Reserved instances in Jan 2016, which can be used in this case)

EC2: an internal audit and has been determined to require dedicated hardware for one instance (move this instance to single-tenant hardware)...

Stop the instance, create an AMI, launch a new instance with tenancy=dedicated, and terminate the old instance

EC2: regular analytics reports from your company's log files, log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse.

Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (Combination of the Spot and reserved with guarantee performance and help reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability, as opposed to just spot for EMR)

EC2: web front-end utilizes an Elastic Load Balancer and Auto scaling across 3 availability zones. During peak load web servers operate at 90% utilization and leverage a combination of heavy utilization reserved instances for steady state load and on-demand and spot instances for peak load, create a cost effective architecture to allow the application to recover quickly in the event that an availability zone is unavailable during peak load

Increase Auto Scaling capacity and scaling thresholds to allow the web front-end to cost-effectively scale across all availability zones to lower aggregate utilization levels, so that an availability zone can fail during peak load without affecting the application's availability. (Ideal for HA to reduce and distribute load)

EC2: software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing, minimize cost

Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. (Because the instance will always be online during the day in a predictable manner, and the batch jobs can be run at any time, we should run the batch jobs when the accounting software is off. We can achieve Heavy Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a "Full" utilization purchase on EC2.)

EC2: How can software determine the public and private IP addresses of the Amazon EC2 instance that it is running on?

Query the local instance metadata. The base URI for all requests for instance metadata is http://169.254.169.254/latest/
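
A minimal sketch of that lookup using the IMDSv1 metadata paths for the two addresses:

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/"

    def metadata(path):
        # Only reachable from the instance itself.
        with urllib.request.urlopen(BASE + path, timeout=2) as resp:
            return resp.read().decode()

    private_ip = metadata("local-ipv4")
    public_ip = metadata("public-ipv4")   # present only if the instance has a public IP
    print(private_ip, public_ip)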

EC2: to achieve 10 gigabit network throughput on EC2? You already selected cluster-compute, 10 gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.

Use a placement group for your instances so the instances are physically near each other in the same Availability Zone. (You are not guaranteed 10 gigabit performance, except within a placement group. Using placement groups enables applications to participate in a low-latency, 10 Gbps network)

...

...

EC2: You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group.

Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios. (For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case)

EBS: encryption of sensitive data at rest.

Implement third party volume encryption tools, Encrypt data inside your applications before storing it on EBS, Encrypt data using native data encryption drivers at the file system level

EBS: How can you secure data at rest on an EBS volume?

Use an encrypted file system on top of the EBS volume

EBS: Which EBS volume type is best for high performance NoSQL cluster deployments

io1 (io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for: Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, like large database workloads, such as MongoDB.)

EBS: lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data

Maintain a single snapshot; the latest snapshot is both incremental and complete

EBS: How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?

Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ
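
A sketch of that flow with boto3 (volume ID and AZs are placeholders); the waiters keep the steps in order:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="migrate volume to another AZ")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                                AvailabilityZone="us-east-1b")
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])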

EBS: Why are more frequent snapshots of EBS Volumes faster?

The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot.

EBS: There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue?

S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes. (EBS volume snapshots are stored in S3. If S3 is unavailable, snapshots are unavailable)

EBS: moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately this app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured?

An AWS Direct Connect link between the VPC and the network housing the internal services (VPN or a DX for communication), An IP address space that does not conflict with the one on-premises (IP addresses cannot conflict), A VM Import of the current virtual machine (VM Import to copy the VM to AWS; as there is no documentation, it can't be configured from scratch)

EC2: to ensure the highest network performance (packets per second), lowest latency, and lowest jitter

Enhanced networking (provides higher network performance), Amazon VPC (enhanced networking works only in a VPC)

EC2: host a web server as well as an app server on a single EC2 instance, which is a part of the public subnet of a VPC, setup to have two separate public IPs and separate security groups for both the application as well as the web server

Launch a VPC instance with two network interfaces. Assign a separate security group and elastic IP to them (AWS cannot assign public IPs to an instance with multiple ENIs)

EC2: organization wants to implement two separate SSLs for the separate modules although it is already using VPC

Create a VPC instance, which will have multiple network interfaces with multiple elastic IP addresses

EC2: You launch an Amazon EC2 instance without an assigned AWS identity and Access Management (IAM) role. Later, you decide that the instance should be running with an IAM role

Create a new IAM role with the same permissions as an existing IAM role, and assign it to the running instance. (As per AWS latest enhancement, this is possible now)

EC2: items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table...

Create an IAM Role that allows write access to the DynamoDB table, Add an IAM Role to a running EC2 instance. (As per AWS latest enhancement, this is possible now)

EC2: an application, which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data

The user should attach an IAM role with DynamoDB access to the EC2 instance

EC2: is leveraging IAM Roles for EC2 for accessing objects stored in S3. Which two of the following IAM policies control access to your S3 objects

An IAM trust policy allows the EC2 instance to assume an EC2 instance role, An IAM access policy allows the EC2 role to access S3 objects

EC2: an application running on an EC2 instance, which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL the application should verify the existence of the file in S3

Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 Instance metadata
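
A sketch of the check-then-sign flow under those assumptions (bucket and key names are placeholders); boto3 uses the role credentials from instance metadata implicitly:

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "private-downloads", "reports/2016-01.pdf"

    # List access is enough to confirm the object exists before signing.
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=key, MaxKeys=1)
    exists = any(obj["Key"] == key for obj in resp.get("Contents", []))
    if not exists:
        raise FileNotFoundError(f"s3://{bucket}/{key} does not exist")

    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,                      # link valid for one hour
    )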

S3: A user wants to upload a complete folder to AWS S3 using the S3 Management console

Use the Enable Enhanced Uploader option from the S3 console while uploading objects (NOTE - it's no longer supported by AWS)

S3: While testing the new web fonts, Company ABCD recognized the web fonts are being blocked by the browser

Configure the abcdfonts bucket to allow cross-origin requests by creating a CORS configuration

S3: Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse, optimize costs...

Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift (Combination of the Spot and reserved with guarantee performance and help reduce cost. Also, RRS would reduce cost and guarantee data integrity, which is different from data durability )

S3: features that help prevent and recover from accidental data loss

Enable versioning on your S3 Buckets, Configure your S3 Buckets with MFA delete

S3: object is stored in the Standard S3 storage class and you want to move it to Glacier

Create a lifecycle policy that will migrate it after a minimum of 30 days. (Any object uploaded to S3 must first be placed into either the Standard, Reduced Redundancy, or Infrequent Access storage class. Once in S3 the only way to move the object to glacier is through a lifecycle policy)
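
A sketch of such a lifecycle rule (bucket name is a placeholder): objects transition from the Standard class to Glacier 30 days after creation:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-archive-bucket",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "to-glacier-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},                        # whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }],
        },
    )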

S3: a large amount of aerial image data to S3, have used a dedicated group of servers to often process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite, minimize cost

Set up auto-scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier (Glacier is a suitable replacement for tape backup)

S3: restrict access to data

Set an S3 ACL on the bucket or the object, Set an S3 bucket policy

S3: prevent an IP address block from accessing public objects in an S3 bucket

Create a bucket policy and apply it to the bucket

S3: user wants to make the objects public

Set the AWS bucket policy which marks all objects as public

S3: serve static assets for your public-facing web application

Set permissions on the object to public read during upload, Configure the bucket policy to set all objects to public read

S3: best suitable to allow access to the log bucket

Provide ACL for the logging group

S3: data is encrypted at rest

Use Amazon S3 server-side encryption with AWS Key Management Service managed keys, Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key

S3: enabled server side encryption with S3

S3 manages encryption and decryption automatically

S3: 150 PUT requests per second

Add a random prefix to the key names.
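
A sketch of the idea: deriving a short hash prefix from each key spreads keys across S3 partitions instead of concentrating sequential names (dates, counters) on one key range:

    import hashlib

    def prefixed_key(original_key: str) -> str:
        # First four hex characters of an MD5 hash as a pseudo-random prefix.
        prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
        return f"{prefix}/{original_key}"

    print(prefixed_key("2016/01/01/photo-0001.jpg"))   # prefix varies with the key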

S3: protection against accidental loss of data stored in Amazon S3

Set bucket policies to restrict deletes, and also enable versioning

S3: an aggressive marketing plan, and expect to double their current installation base every six months. Due to the nature of their business, they are expecting sudden and large increases to traffic to and from S3

You must find out the total number of requests per second at peak usage (the number of customers doesn't matter; size does not relate to the key namespace design, but the request count does; S3 provides unlimited storage, so the key namespace design depends on the request rate)

S3: business model to support both free tier and premium tier users. The premium tier users will be allowed to store up to 200GB of data and free tier customers will be allowed to store only 5GB. The customer expects that billions of files will be stored. All users need to be alerted when approaching 75 percent quota utilization and again at 90 percent quota use.

Utilize an Amazon Simple Workflow Service activity worker that updates the user's data counter in Amazon DynamoDB. The activity worker will use Simple Email Service to send an email if the counter increases above the appropriate thresholds. (List operations on S3 are not feasible at this scale, and RDS is not suited to billions of objects)

S3: the web application allows users to upload large files while resuming and pausing the upload as needed. Files are uploaded to your PHP front end backed by Elastic Load Balancing and an Auto Scaling fleet of Amazon Elastic Compute Cloud (EC2) instances that scale upon the average of bytes received (NetworkIn). After a file has been uploaded, it is copied to Amazon Simple Storage Service (S3). Amazon EC2 instances use an AWS Identity and Access Management (IAM) role that allows Amazon S3 uploads. Scale has increased significantly, forcing you to increase the Auto Scaling group's Max parameter; optimize

Re-architect your ingest pattern: have the app authenticate against your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix. (Multipart upload allows one to start uploading directly to S3 before the actual size is known or the complete data is available)

RDS:

...

RDS: promote one of them, what happens to the rest of the Read Replicas

The remaining Read Replicas will still replicate from the older master DB Instance

RDS: operating at 10% writes and 90% reads, based on your logging

Create read replicas for RDS since the load is mostly reads

RDS: When should I choose Provisioned IOPS over Standard RDS storage

If you use production online transaction processing (OLTP) workloads

RDS:

...

RDS:

...

RDS:

...

RDS: user will not use the DB for the next 3 months

Create a snapshot of RDS to launch in the future and terminate the instance now

RDS: in a public subnet

Security risk...Making RDS accessible to the public internet in a public subnet poses a security risk, by making your database directly addressable and spammable. DB instances deployed within a VPC can be configured to be accessible from the Internet or from EC2 instances outside the VPC. If a VPC security group specifies a port access such as TCP port 22, you would not be able to access the DB instance because the firewall for the DB instance provides access only via the IP addresses specified by the DB security groups the instance is a member of and the port defined when the DB instance was created.

RDS: three CloudWatch RDS metrics will allow you to identify if the database is the bottleneck

The number of outstanding IOs waiting to access the disk, The amount of write latency, The amount of time a Read Replica DB Instance lags behind the source DB Instance

SQS: Files submitted by your premium customers must be transformed with the highest priority

Use two SQS queues, one for high priority messages, and the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue

SQS: consumer of the queue is down for 3 days and then becomes available

Yes, since SQS by default stores messages for 4 days

SQS: queue named "queue2" in US-East region with AWS SQS

http://sqs.us-east-1.amazonaws.com/123456789012/queue2

SQS: How do you configure SQS to support longer message retention

Set the MessageRetentionPeriod attribute using the SetQueueAttributes method
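
A sketch of the boto3 equivalent (queue URL is a placeholder): raise retention from the 4-day default to the 14-day maximum:

    import boto3

    sqs = boto3.client("sqs")
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/queue2",
        Attributes={"MessageRetentionPeriod": "1209600"},   # seconds (14 days); default is 345600
    )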

SQS: You need to process long-running jobs once and only once

Use an SQS queue and set the visibility timeout to long enough for jobs to process

SQS: an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase.

Some of the new jobs coming in are malformed and unprocessable. (As other options would cause the job to stop processing completely, the only reasonable option seems that some of the recent messages must be malformed and unprocessable)

Enable route propagation to the Virtual Private Gateway (VGW), Modify the instance's VPC subnet route table by adding a route back to the customer's on-premises environment

DX:

...

DX:

...

DX: from dynamically routed VPN, you provisioned a Direct Connect connection and would like to start using the new connection. After configuring Direct Connect settings in the AWS Console

Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection

Configure a single routing table with a default route via the internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.

Provision a VPN connection between a VPC and existing on-premises equipment, submit a Direct Connect partner request to provision cross connects between your data center and the Direct Connect location, then cut over from the VPN connection to one or more Direct Connect connections as needed.

DX: app requires access to a number of on-premises services and no one who configured the app still works for your company. Even worse there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured

An AWS Direct Connect link between the VPC and the network housing the internal services (VPN or a DX for communication), An IP address space that does not conflict with the one on-premises (IP address cannot conflict), A VM Import of the current virtual machine (VM Import to copy the VM to AWS as there is no documentation it can't be configured from scratch)

SG: What does the AWS Storage Gateway provide

It allows you to integrate on-premises IT environments with Cloud Storage

SG: running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place.

Use Storage Gateway and configure it to use Gateway Stored volumes (data is hosted on the on-premises server as well; the 140TB requirement is for the on-premises file server, mentioned only to confuse, and is not stored in AWS; only a backup solution is needed, hence stored instead of cached volumes)

Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot, Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance, Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot

ELB's behavior when sticky sessions are enabled causes the ELB to send requests in the same session to the same back end; the web application uses long polling such as Comet or WebSockets, thereby keeping a connection open to a web server for a long time

ELB: a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets and smart phones. Separate sticky session and SSL certificate setups are required for different platform types.

Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs. (Session stickiness requires an HTTPS listener with SSL termination on the ELB, and an ELB does not support multiple SSL certificates, so one ELB is required for each certificate)

ELB:You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure

Configure ELB with TCP listeners on TCP/443. And place the Web servers behind it. (terminate SSL on the instance using client-side certificate), Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set and configure health checks against all Web servers. (Remove ELB and use Web Servers directly with Route 53)

ELB: App contains protected health information, must use encryption at rest and in transit, a three-tier architecture where data flows through the load balancer and is stored on Amazon EBS volumes for processing, and the results are stored in Amazon S3 using the AWS SDK,

Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes, and Amazon S3 with server-side encryption, Use SSL termination on the load balancer, an SSL listener on the Amazon EC2 instances, Amazon EBS encryption on EBS volumes containing PHI, and Amazon S3 with server-side encryption.

ELB: ensure that load-testing HTTP requests are evenly distributed across the four web servers

Re-configure the load-testing software to re-resolve DNS for each web request, Use a third-party load-testing service which offers globally distributed test clients

ELB: A user is configuring the HTTPS protocol on a front end ELB and the SSL protocol for the back-end listener in ELB

It will not allow you to create this configuration (Will give error "Load Balancer protocol is an application layer protocol, but instance protocol is not. Both the Load Balancer protocol and the instance protocol should be at the same layer. Please fix.")

53: true about Amazon Route 53 resource records

An Alias record can map one DNS name to another Amazon Route 53 DNS name, an Amazon Route 53 CNAME record can point to any DNS record hosted anywhere

Consolidated Billing: A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions.

Enable IAM cross-account access for all corporate IT administrators in each child account (provides IT governance), Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account (provides cost oversight)

53: Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?

Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.

CW: You have been asked to make sure your AWS Infrastructure costs do not exceed the budget set per project for each month.

Set up CloudWatch billing alerts for all AWS resources used by each account, with email notifications when it hits 50%, 80% and 90% of its budgeted monthly spend

CW: You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account

CloudWatch Events Rules, which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis. (CloudWatch Events allow subscription to AWS API calls, and direction of these events into Kinesis Streams. This allows a unified, near real-time stream for all API calls, which can be analyzed with any tool(s))

CW: You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services.

Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (ELK - Elasticsearch, Kibana stack is designed specifically for real-time, ad-hoc log analysis and aggregation)

CW: you also need to watch the watcher - the monitoring instance itself - and be notified if it becomes unhealthy.

Set a CloudWatch alarm based on EC2 system and instance status checks and have the alarm notify your operations team of any detected problem with the monitoring instance.

EBS vs Instance Store: provides the fastest storage medium

SSD Instance (ephemeral) store (SSD instance storage provides 100,000 IOPS on some instance types, much faster than any network-attached storage)

IDS/IPS: implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC.

Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection

IDS/IPS:

Implement IDS/IPS agents on each Instance running In VPC, Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server

ELB & AS: leverage to enable an elastic and scalable web tier

Elastic Load Balancing, Amazon EC2, and Auto Scaling

ELB & AS: You are responsible for a web application that consists of an Elastic Load Balancing (ELB) load balancer in front of an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances. For a recent deployment of a new version of the application, a new Amazon Machine Image (AMI) was created, and the Auto Scaling group was updated with a new launch configuration that refers to this new AMI. During the deployment, you received complaints from users that the website was responding with errors. All instances passed the ELB health checks.

Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail, Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration.

CF: a large burst in web traffic, quickly improve your infrastructure

Offload traffic from the on-premises environment: set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Customize your object cache behavior, and select a TTL for how long objects should exist in the cache.

CF: to distribute confidential training videos to employees

Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

CF: to deliver high-definition raw video for preproduction and dubbing to customers all around the world, and they require the ability to limit downloads per customer and video file to a configurable number

Enable URL parameter forwarding, let the authentication backend count the number of downloads per customer in RDS, and return the content S3 URL unless the download limit is reached, Configure a list of trusted signers, let the authentication backend count the number of download requests per customer in RDS, and return a dynamically signed URL unless the download limit is reached.

CF: a video on-demand streaming platform

Store the video contents to Amazon S3 as an origin server. Configure the Amazon CloudFront distribution with a download option to stream the video contents

CF: Anywhere in the world, your users can see local news on topics they choose.

Use an Amazon CloudFront distribution for uploading the content to a central Amazon Simple Storage Service (S3) bucket and for content delivery.

CF: To enable end-to-end HTTPS connections from the user's browser to the origin via CloudFront, which of the following options are valid

Use 3rd-party CA certificate in the origin and CloudFront default certificate in CloudFront, Use 3rd-party CA certificate in both origin and CloudFront (Origin cannot be self signed, CloudFront cert cannot be applied to origin)

CF: serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally often on the move and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video

Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with lifecycle management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3

CF: You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spikey, geographically distributed, high-scale, and unpredictable

Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as query string GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3. (CloudFront is a Gigabit-Scale HTTP(S) global request distribution service and works fine with peaks higher than 10 Gbps or 15,000 RPS. It can handle scale, geo-spread, spikes, and unpredictability. Access logs will contain the GET data and work just fine for batch analysis and email using EMR. Other streaming options are expensive and not required, as the need is to batch analyze)

CF: Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time.

Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests, which can be served late. (CloudFront can serve request from cache and multiple cache behavior can be defined based on rules for a given URL pattern based on file extensions, file names, or any portion of a URL. Each cache behavior can include the CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.)

EB: QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis

Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself. (Security group allows instances to access the RDS with new instances launched without any changes)

EB: migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location.

Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging in the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use Docker configuration with awslogs and EB with Docker), Use VM Import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from the AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create the AMI and the CloudWatch Logs agent to log)

EB: What AWS products and features can be deployed by Elastic Beanstalk

Auto scaling groups, Elastic Load Balancers, RDS Instances

AS: highest on Thursday and Friday between 8 AM to 6 PM

Schedule Auto Scaling to scale up by 8 AM Thursday and scale down after 6 PM on Friday

AS: your application is scaling up and down multiple times in the same hour

Modify the Amazon CloudWatch alarm period that triggers your auto scaling scale down policy, Modify the Auto scaling group cool down timers

AS: a launch configuration for Auto Scaling where CloudWatch detailed monitoring is disabled

Create a new launch configuration with detailed monitoring enabled and update the Auto Scaling group

AS: Auto Scaling with ELB on the EC2 instances. The user wants to configure that whenever the CPU utilization is below 10%, Auto Scaling should remove one instance

Configure CloudWatch to send a notification to the Auto Scaling group when the CPU Utilization is less than 10% and configure the Auto Scaling policy to remove the instance
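
A sketch of that wiring (group and alarm names are placeholders): a simple scaling policy that removes one instance, and a CloudWatch alarm on low average CPU that invokes it:

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-in-one",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=-1,                    # remove one instance
    )

    cloudwatch.put_metric_alarm(
        AlarmName="web-asg-low-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=10.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=[policy["PolicyARN"]],      # alarm triggers the scale-in policy
    )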

SWF: use cases for which Simple Workflow Service (SWF) and Amazon EC2 are appropriate solutions

Managing a multi-step and multi-decision checkout process of an e-commerce website, Orchestrating the execution of distributed and auditable business processes

SWF: appropriate use cases for SWF with Amazon EC2

(1) Video encoding using Amazon S3 and Amazon EC2, large videos are uploaded to Amazon S3 in chunks. Application is built as a workflow where each video file is handled as one workflow execution, and (2) Processing large product catalogs using Amazon Mechanical Turk. While validating data in large catalogs, the products in the catalog are processed in batches. Different batches can be processed concurrently (orchestrating batching)

Services: services provide root access

Elastic Beanstalk, EC2, OpsWorks

CF: You need to create a Route53 record automatically in CloudFormation during all launches of a template, but only when not running in production.

Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when environment is not production. (Best way to do this is with one template and a Condition on the resource; Route53 does not allow null strings)
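
A template fragment for that pattern, written here as a JSON-shaped Python dict (zone and record names are placeholders); the record is created only when the environment parameter is not production:

    template = {
        "Parameters": {
            "environment": {"Type": "String",
                            "AllowedValues": ["production", "staging", "dev"]},
        },
        "Conditions": {
            "NotProduction": {"Fn::Not": [
                {"Fn::Equals": [{"Ref": "environment"}, "production"]}]},
        },
        "Resources": {
            "DnsRecord": {
                "Type": "AWS::Route53::RecordSet",
                "Condition": "NotProduction",        # skipped entirely in production
                "Properties": {
                    "HostedZoneName": "example.com.",
                    "Name": "app.example.com.",
                    "Type": "CNAME",
                    "TTL": "300",
                    "ResourceRecords": ["placeholder-elb.us-east-1.elb.amazonaws.com"],
                },
            },
        },
    }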

CF: Intrinsic Functions

Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select

CF: a failure state in AWS CloudFormation

ROLLBACK_IN_PROGRESS means a stack operation failed and CloudFormation is in the process of rolling the stack back to its last known valid state

CF: outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass, using Docker to get high consistency between Staging and Production for the application environment on your EC2 instances, there are many service components you may use beyond EC2 virtual machines

Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. (Only CloudFormation's JSON Templates allow declarative version control of repeatedly deployable models of entire AWS clouds),

CF: automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations

Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. (CloudFormation allows source controlled, declarative templates as the basis for stack automation and Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control all layers at once when needed)

CF: circular dependency in AWS CloudFormation

When Resources form a DependsOn loop. (To resolve a dependency error, add a DependsOn attribute to resources that depend on other resources in the template. In some cases, e.g. an EIP and a VPC with an IGW where the EIP depends on the IGW, an explicit declaration is needed for the resources to be created in the correct order)

CF: deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model, but is unsupported by CloudFormation

Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda

CF: instantiate new tracking systems in any region without any manual intervention and therefore adopted AWS CloudFormation

Use the built-in function of AWS CloudFormation to set the AvailabilityZone attribute of the ELB resource, Use the built-in Mappings and FindInMap functions of AWS CloudFormation to refer to the AMI ID set in the ImageId attribute of the Auto Scaling::LaunchConfiguration resource.

CF: The user wants the stack creation of ELB and AutoScaling to wait until the EC2 instance is launched and configured properly

The user can use the WaitCondition resource to hold the creation of the other dependent resources

CF: Both IAM groups are attached to IAM policies that grant rights to perform the necessary task of each group as well as the creation, update and deletion of CloudFormation stacks

Network stack updates will fail upon attempts to delete a subnet with EC2 instances (subnets cannot be deleted with instances in them), Restricting the launch of EC2 instances into VPCs requires resource-level permissions in the IAM policy of the application group (IAM permissions need to be given explicitly to launch instances)

Tags: An organization has launched 5 instances: 2 for production and 3 for testing. The organization wants that one particular group of IAM users should only access the test instances and not the production ones

Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags (possible using ResourceTag condition)

Tags: find the separate cost for the production and development instances

User should use Cost Allocation Tags and AWS billing reports

Tags: administrator mistakenly terminated several production EC2 instances

Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources. (Identify production resources using tags and add explicit deny)

Kinesis: consolidate their log streams (access logs, application logs, security logs etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics

Send all the log events to Amazon Kinesis develop a client process to apply heuristics on the logs (Can perform real time analysis and stores data for 24 hours which can be extended to 7 days)
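
A sketch of the producer side (stream name is a placeholder); a separate consumer process would apply the real-time heuristics:

    import json
    import boto3

    kinesis = boto3.client("kinesis")
    event = {"source": "web-01", "type": "access", "message": "GET /index.html 200"}

    kinesis.put_record(
        StreamName="log-consolidation",
        Data=json.dumps(event).encode(),
        PartitionKey=event["source"],    # keeps one host's events ordered on one shard
    )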

Kinesis: replicate API calls across two systems in real time

AWS Kinesis (AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems)

Kinesis: perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first

Kinesis Firehose + RedShift (Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily)

DDB: use cases for Amazon DynamoDB

Managing web sessions, Storing JSON documents, Storing metadata for Amazon S3 objects, massive amount of "hot" data and require very low latency, a rapid ingestion of clickstream in order to collect data about user behavior

DDB: ProvisionedThroughputExceededException

You're exceeding your capacity on a particular Hash Key (Hash key determines the partition and hence the performance)

DDB: store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game.

GameID as the hash key, HighestScore as the range key. (The hash (partition) key should be GameID, and HighestScore should be the range key for ordering)
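
A sketch of that key schema and the lookup (table and game names are placeholders): querying the partition key in descending order with a limit of 1 returns the highest score:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="GameScores",
        AttributeDefinitions=[
            {"AttributeName": "GameID", "AttributeType": "S"},
            {"AttributeName": "HighestScore", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "GameID", "KeyType": "HASH"},
            {"AttributeName": "HighestScore", "KeyType": "RANGE"},
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )
    dynamodb.get_waiter("table_exists").wait(TableName="GameScores")

    top = dynamodb.query(
        TableName="GameScores",
        KeyConditionExpression="GameID = :g",
        ExpressionAttributeValues={":g": {"S": "asteroids"}},
        ScanIndexForward=False,   # descending by the range key (HighestScore)
        Limit=1,                  # just the top score
    )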

DDB: What is the data model of DynamoDB

"Table", a collection of Items; "Items", with Keys and one or more Attribute; and "Attribute", with Name and Value.

DDB: Global secondary index

An index with a hash and range key that can be different from those on the table

OpsWorks: describe how to add a backend database server to an OpsWorks stack

Add a new database layer and then add recipes to the deploy actions of the database and App Server layers, Set up the connection between the app server and the RDS layer by using a custom recipe. The recipe configures the app server as required, typically by creating a configuration file. The recipe gets the connection data such as the host and database name from a set of attributes in the stack configuration and deployment JSON that AWS OpsWorks installs on every instance, the variables that characterize the RDS database connection—host, user, and so on—are set using the corresponding values from the deploy JSON's [:deploy][:app_name][:database] attributes

OpsWorks: with an Elastic Load Balancer, an Auto-Scaling group of Java/Tomcat application-servers, and DynamoDB as data store, a new chat feature has been implemented in node.js and waits to be integrated in the architecture

Create one AWS OpsWorks stack, create two AWS OpsWorks layers, and create one custom recipe (a single-environment stack; two layers for the Java and Node.js applications using built-in recipes; a custom recipe only for DynamoDB connectivity, as the rest of the configuration is built in)

OpsWorks: You decide to write a script to be run as soon as a new Amazon Linux AMI is released.

Create a new stack and layers with identical configuration, add instances with the latest Amazon Linux AMI specified as a custom AMI to the new layer, switch DNS to the new stack, and tear down the old stack. (Blue-Green Deployment), Add new instances with the latest Amazon Linux AMI specified as a custom AMI to all AWS OpsWorks layers of your stack, and terminate the old ones.

OpsWorks: OpsWorks, which of the following is not an instance type you can allocate in a stack layer

Spot instances (Does not support spot instance directly but can be used with auto scaling)

WAF: questionable log entries and suspect someone is attempting to gain unauthorized access

Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. Web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group

EC: suitable for storing session state data

RDS, DynamoDB, ElastiCache

EC: Which statement best describes ElastiCache

Offload the read traffic from your database in order to reduce latency caused by read-heavy workload

EC: application currently uses multicast to share session state between web servers

Store session state in Amazon ElastiCache for Redis (scalable and makes the web applications stateless)

EC: to support a 24-hour flash sale, which one of the following methods best describes a strategy to lower the latency while keeping up with unusually heavy traffic

Use ElastiCache as in-memory storage on top of DynamoDB to store user sessions (scalable, faster reads/writes, and in-memory storage)
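
One common way to put ElastiCache "on top of" DynamoDB is the cache-aside pattern; a sketch with hypothetical table, key, and endpoint names:

    import json
    import boto3
    import redis

    cache = redis.Redis(host="my-cache.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)
    table = boto3.resource("dynamodb").Table("UserSessions")   # hypothetical table

    def get_user_session(session_id):
        # Cache-aside: serve hot reads from memory, fall back to DynamoDB on a miss.
        cached = cache.get(f"session:{session_id}")
        if cached is not None:
            return json.loads(cached)
        item = table.get_item(Key={"SessionId": session_id}).get("Item")
        if item is not None:
            cache.setex(f"session:{session_id}", 300, json.dumps(item, default=str))
        return item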

EC: read only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically.

Stateless instances for the web and application tier, synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, plus RDS with read replicas.

EMR: creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure of this system to lower cost.

Use Reduced Redundancy Storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (The combination of Spot and Reserved Instances guarantees performance and helps reduce cost. Also, RRS reduces cost and still guarantees data integrity, which is different from data durability)

DS: heavily dependent on low latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company's existing application user management processes.

Establish a VPN connection between your data center and AWS, create an LDAP replica on AWS, and configure your application to use the LDAP replica for authentication. (Read-only replicas (RODCs) give low latency with minimal setup changes)

AD: they want to make their internal Microsoft Active Directory available to any applications running on AWS

Using VPC, they could create an extension to their data center and make use of resilient hardware IPsec tunnels; they could then have two domain controller instances that are joined to their existing domain and reside within different subnets, in different Availability Zones (highly available with two AZs, secure with a VPN connection, and minimal changes)

AD: needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company's requirements

Virtual Private Network connection, AWS Directory Service, and Amazon WorkSpaces (WorkSpaces for virtual desktops, and AWS Directory Service to authenticate against the existing on-premises AD through the VPN)

CT: a reliable and durable logging solution to track changes made to your EC2, IAM and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend

Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (A single new bucket with the global services option captures IAM events; MFA Delete protects integrity and confidentiality)
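
A hedged boto3 sketch of the trail setup (names are hypothetical); enabling MFA Delete on the bucket is a separate, root-credential step:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Global service events capture IAM activity; the S3 bucket must already exist
    # with a bucket policy that allows CloudTrail to write to it.
    cloudtrail.create_trail(
        Name="org-audit-trail",
        S3BucketName="my-cloudtrail-logs",
        IncludeGlobalServiceEvents=True,
    )
    cloudtrail.start_logging(Name="org-audit-trail")

    # MFA Delete is enabled on the log bucket by the root user, e.g. via
    # put_bucket_versioning(..., VersioningConfiguration={"Status": "Enabled",
    # "MFADelete": "Enabled"}, MFA="<device-serial> <code>").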

ET: Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally often on the move and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video.

Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos, with lifecycle management to archive original files to Glacier after a few days. CloudFront to serve HLS-transcoded videos from S3.
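
The lifecycle part of that answer can be expressed as a short boto3 sketch; the bucket, prefix, and 7-day window are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Archive the original high-resolution uploads to Glacier a few days after ingest.
    s3.put_bucket_lifecycle_configuration(
        Bucket="training-videos",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-originals",
                "Filter": {"Prefix": "originals/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }]
        },
    )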

Migration: You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location.

Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging in the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Use the Docker logging configuration with AWS logs and Elastic Beanstalk with Docker), Use VM Import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from the AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI. (Use VM Import/Export to create the AMI and the CloudWatch Logs agent to ship logs)

Migration: allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured

An AWS Direct Connect link between the VPC and the network housing the internal services, An IP address space that does not conflict with the one on-premises, A VM Import of the current virtual machine

KMS: regulatory requirements that all data needs to be encrypted before being uploaded to the cloud.

Manage encryption keys in AWS Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier. (With CSE-KMS the encryption happens client-side before the object is uploaded to S3, and KMS is cost-effective as well)
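
A rough envelope-encryption sketch of client-side encryption with a KMS key (a manual illustration, not the official S3 encryption client; the CMK alias, bucket, and file names are hypothetical):

    import base64
    import boto3
    from cryptography.fernet import Fernet

    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    # Generate a data key under the CMK, encrypt locally, then upload only ciphertext.
    key = kms.generate_data_key(KeyId="alias/backup-key", KeySpec="AES_256")
    f = Fernet(base64.urlsafe_b64encode(key["Plaintext"]))

    ciphertext = f.encrypt(open("report.csv", "rb").read())
    s3.put_object(
        Bucket="encrypted-backups",
        Key="report.csv.enc",
        Body=ciphertext,
        # Store the KMS-wrapped data key so the object can be decrypted later.
        Metadata={"wrapped-key": base64.b64encode(key["CiphertextBlob"]).decode()},
    )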

Glacier: Each drug trial test may generate up to several thousands of files, with compressed file sizes ranging from 1 byte to 100MB. Once archived, data rarely needs to be restored, and on the rare occasion when restoration is needed, the company has 24 hours to restore specific files that match certain metadata. Searches must be possible by numeric file ID, drug name, participant names, date ranges, and other metadata. Which is the most cost-effective architectural approach that can meet the requirements

First, compress and then concatenate all files for a completed drug trial test into a single Amazon Glacier archive. Store the associated byte ranges for the compressed files along with other search metadata in an Amazon RDS database with regular snapshotting. When restoring data, query the database for files that match the search criteria, and create restored files from the retrieved byte ranges.

Glacier:

...

ES: She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services.

Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service domain running Kibana 4 and perform log analysis on a search cluster. (The AWS Elasticsearch Service with Kibana stack is designed specifically for real-time, ad hoc log analysis and aggregation)
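
A sketch of wiring one service's Log Group into that pipeline, assuming a hypothetical Lambda function that forwards events to the Elasticsearch domain:

    import boto3

    logs = boto3.client("logs")

    # One subscription filter per service's Log Group; the Lambda must already
    # grant CloudWatch Logs permission to invoke it.
    logs.put_subscription_filter(
        logGroupName="/my-service/app",
        filterName="to-elasticsearch",
        filterPattern="",   # empty pattern forwards every log event
        destinationArn="arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch",
    )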

Lambda: Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates increased dramatically. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?

You did not request a limit increase on concurrent Lambda function executions. (AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.)
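
The arithmetic behind that answer, as a small worked example (numbers taken from the question and the default limit quoted above):

    # Little's Law style estimate: concurrency = request rate x average duration.
    request_rate = 400        # sustained requests per second
    avg_duration = 0.5        # seconds per invocation (500 ms)

    required_concurrency = request_rate * avg_duration      # 200 concurrent executions
    default_concurrency_limit = 100                         # default Lambda throttle per the answer

    max_supported_rps = default_concurrency_limit / avg_duration   # 200 requests per second, below the 400 needed
    print(required_concurrency, max_supported_rps)                 # 200.0 200.0 -> request a limit increase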