30 Cards in this Set

Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements?

a Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
b Configure Amazon CloudTrail to receive custom logs; use EMR to apply heuristics to the logs.
c Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs.
d Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs.
c Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs.

Use Amazon Kinesis Streams to collect and process large streams of data records in real time. You'll create data-processing applications, known as Amazon Kinesis Streams applications. A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services.
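The heuristic step of such a consumer application can be sketched as below. This shows only the client-side filtering logic; reading records from the actual Kinesis stream (via the Kinesis Client Library or boto3) is omitted, and the example heuristic and record fields are made up.

```python
# Sketch of the consumer-side heuristic step only; record ingestion from
# the real Kinesis stream is omitted, and the heuristics are illustrative.
def apply_heuristics(records, heuristics):
    """Yield records that trip any heuristic (each heuristic is a predicate)."""
    for record in records:
        if any(h(record) for h in heuristics):
            yield record

# Hypothetical heuristic: flag failed-login events.
failed_login = lambda r: r.get("event") == "login_failure"

alerts = list(apply_heuristics(
    [{"event": "login_failure", "src": "10.0.0.5"},
     {"event": "page_view", "src": "10.0.0.9"}],
    [failed_login],
))
```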
What is an isolated database environment running in the cloud (Amazon RDS) called?

a DB Unit
b DB Server
c DB Volume
d DB Instance
d DB Instance

A DB instance is an isolated database environment running in the cloud. It is the basic building block of Amazon RDS. A DB instance can contain multiple user-created databases, and can be accessed using the same client tools and applications you might use to access a stand-alone database instance. DB instances are simple to create and modify with the AWS command line tools, Amazon RDS APIs, or the Amazon RDS console.
IAM provides several policy templates you can use to automatically assign permissions to the groups you create. The _____ policy template gives the Admin group permission to access all account resources, except your AWS account information

a Read Only Access
b Read Only Access
c Power User Access
d Administrator Access
c Power User Access

The power user role provides an AWS Directory Service user or group with full access to AWS services and resources, but does not allow management of IAM users and groups. The following is the policy for this role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*"
    }
  ]
}
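The effect of the NotAction element can be illustrated with a minimal sketch. This is not AWS's real policy evaluation engine, just a demonstration that an Allow with NotAction "iam:*" covers every action except those in the iam namespace:

```python
from fnmatch import fnmatch

# Minimal illustration of the power-user statement: Allow with
# NotAction "iam:*" matches every action EXCEPT iam actions.
# Not AWS's actual policy evaluator.
def power_user_allows(action, not_action="iam:*"):
    return not fnmatch(action, not_action)
```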
While performing the volume status checks, if the status is insufficient-data, what does it mean?

a the checks may still be in progress on the volume
b the check has not yet started
c the check has passed
d the check has failed
a the checks may still be in progress on the volume

Volume status checks are automated tests that run every 5 minutes and return a pass or fail status. If all checks pass, the status of the volume is ok. If a check fails, the status of the volume is impaired. If the status is insufficient-data, the checks may still be in progress on the volume. You can view the results of volume status checks to identify any impaired volumes and take any necessary actions.
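The relationship between check results and the three statuses can be sketched as follows (the function and its inputs are illustrative, not the EC2 API's actual field names):

```python
# Sketch relating check results to the three volume statuses.
# An empty result set models checks that are still in progress.
def volume_status(check_results):
    if not check_results:  # no results yet: checks may still be in progress
        return "insufficient-data"
    return "ok" if all(check_results) else "impaired"
```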
Which of the below policies provides full access only to Amazon S3 services and resources?

a { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*:s3", "Resource": "*" } ] }
b { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] }
c { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "*" } ] }
d { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*:s3:*", "Resource": "*" } ] }
c { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "*" } ] }

Refer: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html
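Why option c is the only correct pattern can be seen from the action format: IAM actions are written "service:Operation". The sketch below checks the wildcard patterns from the options with simple glob matching (an illustration, not AWS's evaluator):

```python
from fnmatch import fnmatch

# IAM actions take the form "service:Operation", so full S3-only access
# needs the pattern "s3:*" (option c). Illustration only.
def action_matches(action, pattern):
    return fnmatch(action, pattern)
```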
You have an EC2 Security Group with several running EC2 instances. You change the Security Group rules to allow inbound traffic on a new port and protocol, and launch several new instances in the same Security Group. The new rules apply:

a To all instances, but it may take several minutes for old instances to see the changes.
b Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
c Immediately to the new instances only.
d Immediately to all instances in the security group.
d Immediately to all instances in the security group.

When you add a rule to a security group, the new rule is automatically applied to any instances associated with the security group. You can assign a security group to an instance when you launch the instance. When you add or remove rules, those changes are automatically applied to all instances to which you've assigned the security group.
When automatic failover occurs, Amazon RDS will emit a DB Instance event to inform you that automatic failover occurred. You can use the _____ to return information about events related to your DB Instance

a DescribeFailure
b DescribeEvents
c FetchFailure
d FetchEvents
b DescribeEvents

Amazon RDS will emit a DB Instance event to inform you that automatic failover occurred. You can use the DescribeEvents to return information about events related to your DB Instance, or click the "DB Events" section of the AWS Management Console
Which of the below are true about S3 Cross-Region Replication? (Select 2 answers)

a By activating cross-region replication, Amazon S3 will replicate newly created objects, object updates, and object deletions from a source bucket into a destination bucket in a different region
b Cross-region replication requires that versioning is enabled only in the source bucket and is not needed in the destination bucket
c Cross-region replication is the automatic, asynchronous copying of objects across buckets in different AWS regions
d By activating cross-region replication, actions performed by lifecycle configuration will be also replicated
e Amazon S3 console allows you to delete cross-region replication
a By activating cross-region replication, Amazon S3 will replicate newly created objects, object updates, and object deletions from a source bucket into a destination bucket in a different region
c Cross-region replication is the automatic, asynchronous copying of objects across buckets in different AWS regions

Refer: http://docs.aws.amazon.com/AmazonS3/latest/UG/cross-region-replication.html
Select the correct set of steps for exposing the snapshot only to specific AWS accounts

a Select Public for all the accounts, check mark those accounts with whom you want to share the snapshots, and click Save.
b Select Public, enter the IDs of those AWS accounts, and click Save.
c Select Private, enter the IDs of those AWS accounts, and click Save.
d Select Public, mark the IDs of those AWS accounts as private, and click Save.
c Select Private, enter the IDs of those AWS accounts, and click Save.

To expose the snapshot to only specific AWS accounts, choose Private, enter the ID of the AWS account (without hyphens) in the AWS Account Number field, and choose Add Permission. Repeat until you've added all the required AWS accounts.
What does the "Server Side Encryption" option on Amazon S3 provide?

a It provides an encrypted virtual disk in the Cloud.
b It encrypts the files that you send to Amazon S3, on the server side.
c It doesn't exist for Amazon S3, but only for Amazon EC2.
d It allows you to upload files using an SSL endpoint for a secure transfer.
b It encrypts the files that you send to Amazon S3, on the server side.

Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. Amazon S3 supports bucket policies that you can use if you require server-side encryption for all objects that are stored in your bucket.
A company is building a two-tier web application to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier?

a Elastic Load Balancing, Amazon EC2, and Auto Scaling
b Elastic Load Balancing, Amazon EC2, and Amazon RDS
c Amazon EC2, Amazon DynamoDB, and Amazon S3
d Amazon RDS with Multi-AZ and Auto Scaling
a Elastic Load Balancing, Amazon EC2, and Auto Scaling

Elastic and Scalable are the keywords.

Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to route application traffic.
What does AWS CloudFormation provide?

a Create templates for the service or application architectures
b The ability to set up Auto Scaling for Amazon EC2 instances
c A template to map network resources for Amazon Web Services
d A container for Amazon Services
a Create templates for the service or application architectures

AWS CloudFormation simplifies provisioning and management on AWS. You can create templates for the service or application architectures you want and have AWS CloudFormation use those templates for quick and reliable provisioning of the services or applications (called "stacks"). You can also easily update or replicate the stacks as needed. This collection of sample templates will help you get started with AWS CloudFormation and quickly build your own templates.
How are the EBS snapshots saved on Amazon S3?

a Incrementally
b Exponentially
c Are expressly prohibited under all circumstances.
d EBS snapshots are not stored in Amazon S3
e Decrementally
a Incrementally

You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed. Active snapshots contain all of the information needed to restore your data (from the time the snapshot was taken) to a new EBS volume.
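The incremental behavior described above can be modeled with a toy sketch: each snapshot stores only the blocks that changed since the previous one. The block layout and contents here are illustrative, not EBS internals.

```python
# Toy model of incremental snapshots: each snapshot stores only the
# blocks that changed since the previous snapshot.
def take_snapshot(volume_blocks, previous_blocks):
    return {blk: data for blk, data in volume_blocks.items()
            if previous_blocks.get(blk) != data}

v1 = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = take_snapshot(v1, {})   # the first snapshot captures every block
v2 = dict(v1)
v2[1] = "BBB"                   # only block 1 changes afterwards
snap2 = take_snapshot(v2, v1)   # incremental: just the changed block
```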
When creation of an EBS snapshot is initiated, but not completed, the EBS volume:

a Can be used but there should be a delay in IO operations
b Cannot be detached or attached to an EC2 instance until the snapshot completes
c Can be used while the snapshot is in progress
d Can be used in read-only mode while the snapshot is in progress
c Can be used while the snapshot is in progress

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
To help you manage your Amazon EC2 instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of _____________________

a special filters
b tags
c functions
d wildcards
b tags

Tagging Your Amazon EC2 Resources

To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type.
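A quick sketch of why tags are useful for categorizing resources, filtering by tag key and value (the instance IDs and tag keys here are made up):

```python
# Sketch of tag-based filtering; the instance IDs and tag keys are made up.
resources = [
    {"id": "i-0a1", "tags": {"environment": "prod", "owner": "web-team"}},
    {"id": "i-0b2", "tags": {"environment": "dev"}},
]

def ids_with_tag(resources, key, value):
    return [r["id"] for r in resources if r["tags"].get(key) == value]
```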
Regarding attaching an ENI to an instance, what does 'warm attach' refer to?

a Attaching an ENI to an instance when it is starting
b Attaching an ENI to an instance when it is stopped.
c Attaching an ENI to an instance during the launch process
d Attaching an ENI to an instance when it is running
b Attaching an ENI to an instance when it is stopped.

You can attach an elastic network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach).
You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers

a Configure the bucket ACL to set all objects to public read
b Configure the bucket policy to set all objects to public read
c Set permissions on the object to public read during upload
d Amazon S3 objects default to public read, so no action is needed
e Use AWS Identity and Access Management roles to set the bucket to public read
b Configure the bucket policy to set all objects to public read
c Set permissions on the object to public read during upload

You can use ACLs to grant permissions to individual AWS accounts; however, it is strongly recommended that you do not grant public access to your bucket using an ACL. So the recommended approach is to create a bucket policy, not an ACL. You must grant read permission on the specific objects to make them publicly accessible so that your users can view them on your website. You make objects publicly readable by using either the object ACL or by writing a bucket policy.
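As an illustration of the bucket-policy approach, a policy along these lines grants public read on every object in the bucket (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```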
You can use _____ and _____ to help secure the instances in your VPC.

a security groups and biometric authentication
b NACLs and 2-factor authentication
c security groups and multi-factor authentication
d security groups and network ACLs
d security groups and network ACLs

Security groups — Act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level
Network access control lists (ACLs) — Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level
Every user you create in the IAM system starts with __________

a Full permissions
b Partial permissions
c No permissions
d Power user permissions
c No permissions

Permissions let you specify who has access to AWS resources, and what actions they can perform on those resources. Every IAM user starts with no permissions. In other words, by default, users can do nothing, not even view their own access keys. To give a user permission to do something, you can add the permission to the user (that is, attach a policy to the user) or add the user to a group that has the desired permission.
In an RDS instance, you must increase the storage size in increments of at least __________%

a 30
b 20
c 10
d 15
c 10

AllocatedStorage: The new storage capacity of the RDS instance. Changing this setting does not result in an outage and the change is applied during the next maintenance window unless ApplyImmediately is set to true for this request.
Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.
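The rounding rule can be sketched as a small function. This is an illustration of the constraint as stated, not the actual RDS implementation; integer math is used to avoid floating-point rounding surprises:

```python
# Sketch of the 10% rule: a requested value below 110% of the current
# size is rounded up to exactly 10% greater (illustrative only; see the
# RDS ModifyDBInstance documentation for exact behavior).
def effective_allocated_storage(current_gb, requested_gb):
    minimum = (current_gb * 11 + 9) // 10  # ceil(current_gb * 1.1)
    return max(requested_gb, minimum)
```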
What does the AWS Storage Gateway provide?

a It allows you to integrate on-premises IT environments with Cloud Storage.
b It's a backup solution that provides an on-premises Cloud storage
c A direct encrypted connection to Amazon S3
d It provides an encrypted SSL endpoint for backups in the Cloud.
a It allows you to integrate on-premises IT environments with Cloud Storage.

AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the Amazon Web Services (AWS) storage infrastructure. You can use the service to store data in the AWS cloud for scalable and cost-effective storage that helps maintain data security
Resources that are created in AWS are identified by a unique identifier called an _______________________

a Amazon Resource Name
b Amazon Resource Name tag
c Amazon Resource Number
d Amazon Resource Namespace
a Amazon Resource Name

Amazon Resource Names (ARNs) uniquely identify AWS resources. We require an ARN when you need to specify a resource unambiguously across all of AWS, such as in IAM policies, Amazon Relational Database Service (Amazon RDS) tags, and API calls.
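ARNs follow the general form "arn:partition:service:region:account-id:resource". The sketch below splits one apart (the example ARN is made up):

```python
# Split an ARN into its named parts; the trailing resource segment may
# itself contain colons, so we limit the split to five separators.
def parse_arn(arn):
    keys = ["prefix", "partition", "service", "region", "account", "resource"]
    return dict(zip(keys, arn.split(":", 5)))

parsed = parse_arn("arn:aws:rds:us-east-1:123456789012:db:mydb")
```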
What does Amazon EC2 provide?

a A platform to run code (Java, PHP, Python), paying on an hourly basis.
b Computer Clusters in the Cloud
c Virtual servers in the Cloud.
d Physical servers, remotely managed by the customer.
c Virtual servers in the Cloud.

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Which services allow the customer to retain full administrative privileges of the underlying EC2 instances? Choose 2 answers

a Amazon Elastic MapReduce
b Amazon DynamoDB
c Amazon ElastiCache
d AWS Elastic Beanstalk
e Amazon Relational Database Service
a Amazon Elastic MapReduce
d AWS Elastic Beanstalk

AWS provides root or system privileges only for a limited set of services, which includes:
Elastic Compute Cloud (EC2)
Elastic MapReduce (EMR)
Elastic Beanstalk
OpsWorks
AWS does not provide root privileges for managed services like RDS, DynamoDB, S3, Glacier, etc.
What type of block cipher does Amazon S3 offer for server side encryption?

a Blowfish
b Advanced Encryption Standard
c Triple DES
d RC5
b Advanced Encryption Standard

Server-side encryption is about protecting data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
You manually launch a NAT AMI in a public subnet. The network is properly configured. Security groups and network access control lists are properly configured. Instances in a private subnet can access the NAT. The NAT can access the Internet. However, private instances cannot access the Internet. What additional step is required to allow access from the private instances?

a Enable Source/Destination Check on the NAT instance
b Enable Source/Destination Check on the private Instances.
c Disable Source/Destination Check on the private instances.
d Disable Source/Destination Check on the NAT instance.
d Disable Source/Destination Check on the NAT instance.

Each EC2 instance performs source/destination checks by default. This means that the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, you must disable source/destination checks on the NAT instance. You can disable the SrcDestCheck attribute for a NAT instance that's either running or stopped using the console or the command line.

To disable source/destination checking using the console
1. Open the Amazon EC2 console.
2. In the navigation pane, choose Instances.
3. Select the NAT instance, choose Actions, select Networking, and then select Change Source/Dest. Check.
4. For the NAT instance, verify that this attribute is disabled. Otherwise, choose Yes, Disable.
A t2.medium EC2 instance type must be launched with what type of Amazon Machine Image (AMI)?

a An Amazon EBS-backed Paravirtual AMI
b An Instance store Paravirtual AMI
c An Instance store Hardware Virtual Machine AMI
d An Amazon EBS-backed Hardware Virtual Machine AMI
d An Amazon EBS-backed Hardware Virtual Machine AMI

Refer: https://aws.amazon.com/amazon-linux-ami/instance-type-matrix/
Select the correct statements for Amazon Redshift vs Amazon EMR (choose 2 answers)

a Amazon EMR is ideal for large volumes of structured data that you want to persist
b Amazon Redshift is ideal for large volumes of structured data that you want to persist
c Amazon EMR is ideal for processing and transforming unstructured or semi-structured data to bring into Amazon Redshift
d Amazon Redshift is ideal for processing and transforming unstructured or semi-structured data to bring into Amazon EMR
e Amazon Redshift is a much better option for data sets that are relatively transitory, not stored for long-term use
b Amazon Redshift is ideal for large volumes of structured data that you want to persist
c Amazon EMR is ideal for processing and transforming unstructured or semi-structured data to bring into Amazon Redshift

Amazon Redshift is ideal for large volumes of structured data that you want to persist and query using standard SQL and your existing BI tools. Amazon EMR is ideal for processing and transforming unstructured or semi-structured data to bring into Amazon Redshift, and is also a much better option for data sets that are relatively transitory, not stored for long-term use.
A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?

a Gateway-Stored volumes with snapshots scheduled to Amazon S3
b Gateway-Virtual Tape Library with snapshots to Amazon Glacier
c Gateway-Virtual Tape Library with snapshots to Amazon S3
d Gateway-Cached volumes with snapshots scheduled to Amazon S3
d Gateway-Cached volumes with snapshots scheduled to Amazon S3

Gateway-cached volumes allow you to utilize Amazon S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data. These volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create up to 32 volumes up to 32 TB in size each, for a total of 1 PB of data capacity per gateway, and mount them as iSCSI devices from your on-premises application servers. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.
In Amazon CloudWatch, which metric should you check to ensure that your DB Instance has enough free storage space?

a FreeStorage
b FreeStorageSpace
c FreeDBStorageSpace
d FreeStorageVolume
b FreeStorageSpace

Amazon Relational Database Service sends metrics to CloudWatch for each active database instance every minute. Detailed monitoring is enabled by default.
FreeStorageSpace: The amount of available storage space.

Units: Bytes