30 Cards in this Set

Which set of Amazon S3 features helps to prevent and recover from accidental data loss?

a Object lifecycle and service access logging
b Object versioning and Multi-factor authentication
c Access controls and server-side encryption
d Website hosting and Amazon S3 policies
b Object versioning and Multi-factor authentication

It's a version control feature for S3 that enables you to revert to older versions of an S3 object, which helps provide protection against accidental or malicious deletion. Versioning keeps multiple versions of an object in the same bucket. When you enable it on a bucket, Amazon S3 automatically adds a unique version ID to every object stored in the bucket. At that point, a simple DELETE action does not permanently delete an object version; it merely associates a delete marker with the object. If you want to permanently delete an object version, you must specify its version ID in your DELETE request.
AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.
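
As a rough illustration, versioning takes a single API call to enable; this is a minimal boto3 sketch with a placeholder bucket name (MFA Delete additionally requires the root account's MFA device serial and a current code):

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning on an existing bucket (bucket name is a placeholder).
    s3.put_bucket_versioning(
        Bucket="example-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # A DELETE without a version ID now only adds a delete marker;
    # the older versions remain recoverable.
    s3.delete_object(Bucket="example-bucket", Key="report.csv")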
A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? (Choose 2 answers)

a Amazon CloudWatch
b Amazon Route 53
c Amazon Simple Notification Service
d Amazon Simple Queue Service
e Amazon Simple Email Service
a Amazon CloudWatch
c Amazon Simple Notification Service

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.
Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users and email recipients, or even to send messages to other distributed services.
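
To make the two services' roles concrete, here is a hedged boto3 sketch (topic name, email address, DB identifier, and threshold are placeholder assumptions) that wires an SNS topic to a CloudWatch alarm on the ReadIOPS metric of an RDS instance:

    import boto3

    sns = boto3.client("sns")
    cloudwatch = boto3.client("cloudwatch")

    # Create a topic and subscribe the operations team (placeholder address).
    topic_arn = sns.create_topic(Name="rds-iops-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

    # Alarm on the RDS ReadIOPS metric; a matching alarm on WriteIOPS
    # would be created the same way.
    cloudwatch.put_metric_alarm(
        AlarmName="mysql-read-iops-high",
        Namespace="AWS/RDS",
        MetricName="ReadIOPS",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-mysql-db"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=1,
        Threshold=1000.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )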
A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers

a AWS Directory Service Simple AD
b AWS Directory Service AD Connector
c AWS Identity and Access Management groups
d AWS Identity and Access Management roles
e AWS Identity and Access Management users
b AWS Directory Service AD Connector
d AWS Identity and Access Management roles

1. AD Connector lets you connect your existing on-premises Active Directory to AWS.
2. An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have any credentials (password or access keys) associated with it. Instead, when a user assumes a role, temporary credentials are created dynamically and provided to the user.
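
For illustration, once a federated user is mapped to a role, the temporary credentials come from STS; a minimal sketch with a placeholder role ARN and session name:

    import boto3

    sts = boto3.client("sts")

    # Exchange the caller's identity for short-lived role credentials
    # (role ARN is a placeholder).
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/Developers",
        RoleSessionName="console-dev-session",
    )

    creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
    session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )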
The Trusted Advisor service provides insight regarding which four categories of an AWS account?

a Security, fault tolerance, high availability, and connectivity
b Performance, cost optimization, security, and fault tolerance
c Security, access control, high availability, and performance
d Performance, cost optimization, access control, and connectivity
b Performance, cost optimization, security, and fault tolerance

AWS Trusted Advisor provides best practices (or checks) in four categories: cost optimization, security, fault tolerance, and performance improvement.
You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?

a Amazon Kinesis
b Amazon AppStream
c AWS Data Pipeline
d Amazon Simple Queue Service
a Amazon Kinesis

Use Amazon Kinesis Streams to collect and process large streams of data records in real time. You'll create data-processing applications, known as Amazon Kinesis Streams applications. A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services.
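
A minimal producer sketch, assuming boto3 and a pre-created stream named "gps-coordinates" (all names and fields are placeholders); using each truck's ID as the partition key keeps one truck's readings ordered within a shard:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # One record per GPS reading, sent every three seconds per truck.
    reading = {"truck_id": "truck-42", "lat": 38.89, "lon": -77.03, "ts": 1700000000}
    kinesis.put_record(
        StreamName="gps-coordinates",
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=reading["truck_id"],
    )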
A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?

a SAML-based Identity Federation
b Cross-Account Access
c Web Identity Federation
d AWS Identity and Access Management roles
c Web Identity Federation

Web Identity Federation (WIF) allows a developer to federate their application from Facebook, Google, or Amazon with their AWS account, allowing their end users to authenticate with one of these Identity Providers (IdPs) and receive temporary AWS credentials. In combination with Policy Variables, WIF allows the developer to restrict end users' access to a subset of AWS resources within their account.
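
A sketch, assuming the app has already obtained an OIDC token from the provider (token, role ARN, and bucket name are placeholders); note the call is made without any AWS credentials:

    import boto3

    # AssumeRoleWithWebIdentity is an unsigned call: no AWS credentials needed.
    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/PhotoAppUsers",
        RoleSessionName="app-user-session",
        WebIdentityToken="<OIDC token from the identity provider>",
    )

    creds = resp["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.list_objects_v2(Bucket="example-photo-bucket")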
You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application?

a Use Amazon S3 multipart upload
b Enable enhanced networking
c Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance
d Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.
a Use Amazon S3 multipart upload

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation, which improves throughput.
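
With boto3, the high-level transfer manager handles the part splitting and parallelism; a sketch with placeholder file, bucket, and key names, using multipart for anything over 100 MB:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Split uploads over 100 MB into 25 MB parts, sent on 10 parallel threads.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=25 * 1024 * 1024,
        max_concurrency=10,
    )

    s3.upload_file("video.mp4", "example-video-bucket", "uploads/video.mp4",
                   Config=config)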
You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?

a Amazon S3 will automatically manage performance at this scale.
b Use multi-part upload.
c Use a predictable naming scheme, such as sequential numbers or date-time sequences, in the key names.
d Add a random prefix to the key names.
d Add a random prefix to the key names.

If you anticipate that your workload will consistently exceed 100 requests per second, you should avoid sequential key names. If you must use sequential numbers or date and time patterns in key names, add a random prefix to the key name. The randomness of the prefix more evenly distributes key names across multiple index partitions; a sketch of one approach follows.
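
One common way to introduce that randomness (a sketch; any short, stable hash would do) is to prefix each key with a few characters of a hash of the object name:

    import hashlib

    def randomized_key(name: str) -> str:
        # First 4 hex chars of an MD5 hash spread keys across index partitions.
        prefix = hashlib.md5(name.encode("utf-8")).hexdigest()[:4]
        return f"{prefix}-{name}"

    print(randomized_key("2015-07-15-10-00-00-cust1234.jpg"))
    # something like "8c3f-2015-07-15-10-00-00-cust1234.jpg"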
Which of the following instance types are available as Amazon EBS-backed only? Choose 2 answers

a General purpose M3
b Compute-optimized C3
c General purpose T2
d Compute-optimized C4
e Storage-optimized I2
c General purpose T2
d Compute-optimized C4

Refer: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html
You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

a Single Amazon S3 bucket
b Multiple Amazon EBS volumes with snapshots
c Single Amazon Glacier vault
d Multiple instance stores
a Single Amazon S3 bucket

Amazon S3 is storage for the Internet. It's a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low cost.
You need to pass a custom script to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?

a IAM roles
b User data
c AWS Config
d EC2Config service
b User data

You can access the user data that you supplied when launching your instance. For example, you can specify parameters for configuring your instance, or attach a simple script. You can also use this data to build more generic AMIs that can be modified by configuration files supplied at launch time. For example, if you run web servers for various small businesses, they can all use the same AMI and retrieve their content from the Amazon S3 bucket you specify in the user data at launch. To add a new customer at any time, simply create a bucket for the customer, add their content, and launch your AMI. If you launch more than one instance at the same time, the user data is available to all instances in that reservation.
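
For example, a shell script can be passed as user data at launch; a boto3 sketch with placeholder AMI and bucket names (in an Auto Scaling group the same script would go into the launch configuration):

    import boto3

    ec2 = boto3.client("ec2")

    # Runs as root on first boot of an Amazon Linux instance.
    user_data = """#!/bin/bash
    yum update -y
    aws s3 cp s3://example-config-bucket/site-content /var/www/html --recursive
    service httpd start
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,  # boto3 base64-encodes this automatically
    )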
Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers

a Snapshots are automatically encrypted
b Available to all instance types
c Shared volumes can be encrypted
d Supported on all Amazon EBS volume types
e Existing volumes can be encrypted
a Snapshots are automatically encrypted
d Supported on all Amazon EBS volume types

Refer: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
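
Since existing volumes cannot be encrypted in place, the usual workaround is to snapshot the volume and copy the snapshot with encryption enabled; a boto3 sketch with placeholder IDs, region, and Availability Zone:

    import boto3

    ec2 = boto3.client("ec2")

    # New volumes can simply be created encrypted.
    ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, Encrypted=True)

    # For an existing unencrypted volume: snapshot it, copy the snapshot
    # with encryption, then create a new volume from the encrypted copy.
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    enc = ec2.copy_snapshot(
        SourceRegion="us-east-1",
        SourceSnapshotId=snap["SnapshotId"],
        Encrypted=True,
    )
    ec2.create_volume(AvailabilityZone="us-east-1a", SnapshotId=enc["SnapshotId"])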
A US-based company is expanding their web presence into Europe. The company wants to extend their AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents?

a Use a public-facing load balancer per region to load-balance web traffic, and enable sticky sessions.
b Use a public-facing load balancer per region to load-balance web traffic, and enable HTTP health checks.
c Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions.
d Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions.
c Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions.

Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location from which DNS queries originate. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights. Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same endpoint.
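
A sketch of the corresponding record sets (hosted zone ID, domain, and IP addresses are placeholders): two A records with the same name, one pinned to Europe and one as the default for everyone else:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE",  # placeholder
        ChangeBatch={
            "Changes": [
                {
                    "Action": "CREATE",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "europe",
                        "GeoLocation": {"ContinentCode": "EU"},
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],  # eu-west-1
                    },
                },
                {
                    "Action": "CREATE",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "default",
                        "GeoLocation": {"CountryCode": "*"},  # all other locations
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "198.51.100.10"}],  # us-east-1
                    },
                },
            ]
        },
    )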
What is the minimum time interval for the data that Amazon CloudWatch receives and aggregates?

a One second
b Five seconds
c Three minutes
d One minute
e Five minutes
d One minute

Amazon CloudWatch metrics provide statistical results at a frequency of up to one minute. This includes custom metrics. You can send custom metrics to Amazon CloudWatch as frequently as you like, but statistics will only be available at one-minute granularity. You can also request statistics at a lower frequency, for example five minutes, one hour, or one day.
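
For instance, a custom metric can be pushed at any frequency, but CloudWatch will aggregate it into one-minute datapoints; a small boto3 sketch with a placeholder namespace and metric:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # You may call this every second, but statistics aggregate per minute.
    cloudwatch.put_metric_data(
        Namespace="MyApp",  # placeholder
        MetricData=[{"MetricName": "QueueDepth", "Value": 42.0, "Unit": "Count"}],
    )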
Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health-trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; and persist the results of the analytic processing for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?

a Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.
b Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
c Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
d Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.
b Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.

Amazon Kinesis greatly simplifies the process of working with real-time streaming data in the AWS Cloud. Instead of setting up and running your own processing and short-term storage infrastructure, you simply create a Kinesis Stream or Kinesis Firehose, arrange to pump data in to it, and then build an application to process or analyze it.
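
On the consumer side (complementing the producer sketch earlier), a bare-bones shard reader might look like this; the stream name is a placeholder, and a real application would use the Kinesis Client Library to handle multiple shards and checkpointing:

    import boto3

    kinesis = boto3.client("kinesis")

    # Read the first shard of the stream (placeholder name).
    shard_id = kinesis.describe_stream(StreamName="pet-biometrics")[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName="pet-biometrics",
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    while iterator:
        batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in batch["Records"]:
            print(record["Data"])  # 30 KB JSON payload from a collar
        iterator = batch["NextShardIterator"]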
You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?

a Spot Instances
b Dedicated instances
c Reserved instances
d On-Demand instances
a Spot Instances

Spot instances enable you to bid on unused EC2 instances, which can lower your Amazon EC2 costs significantly. The hourly price for a Spot instance (of each instance type in each Availability Zone) is set by Amazon EC2, and fluctuates depending on the supply of and demand for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price.

Spot instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
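
A sketch of a classic Spot request from the era when Spot used bidding (AMI, instance type, and bid price are all placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Request one Spot instance; it runs while the bid exceeds the market price.
    ec2.request_spot_instances(
        SpotPrice="0.05",  # placeholder bid, in USD per hour
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # placeholder
            "InstanceType": "m3.large",
        },
    )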
Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?

a The ELB stops sending traffic to the instance that failed its health check.
b The instance is replaced automatically by the ELB.
c The instance gets quarantined by the ELB for root cause analysis.
d The instance gets terminated automatically by the ELB.
a The ELB stops sending traffic to the instance that failed its health check.

ELBs are designed to dynamically forward traffic to the eth0 interface of a set of EC2 instances in one or more Availability Zones of a single region. When monitoring is set up, the ELB will see that the instance is not responding and stop sending traffic to the failed instance.
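
A sketch of configuring that health check on a Classic Load Balancer (the load balancer name and check path are placeholders):

    import boto3

    elb = boto3.client("elb")  # Classic Load Balancer API

    # Mark an instance unhealthy after 2 failed HTTP checks, healthy after 3 passes.
    elb.configure_health_check(
        LoadBalancerName="web-frontend",  # placeholder
        HealthCheck={
            "Target": "HTTP:80/health",   # placeholder path
            "Interval": 30,
            "Timeout": 5,
            "UnhealthyThreshold": 2,
            "HealthyThreshold": 3,
        },
    )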
What is the maximum write throughput I can provision for a single DynamoDB table?

a DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first.
b 1,000 write capacity units
c 100,000 write capacity units
d 10,000 write capacity units
a DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first.

DynamoDB is designed to scale without limits. However, if you wish to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must first contact AWS. If you wish to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account, you must also contact AWS first.
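
A sketch of provisioning a table at that per-table ceiling (table name and key schema are placeholders):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # 10,000 write capacity units is the default per-table limit;
    # anything above it requires a limit-increase request to AWS.
    dynamodb.create_table(
        TableName="events",  # placeholder
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        ProvisionedThroughput={
            "ReadCapacityUnits": 1000,
            "WriteCapacityUnits": 10000,
        },
    )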
When using consolidated billing there are two account types. What are they?

a Paying account and Child account
b Parent account and Linked account
c Paying account and Linked account
d Master account and Linked account
c Paying account and Linked account

Consolidated Billing Process: You sign up for Consolidated Billing in the AWS Billing and Cost Management console, and designate your account as a payer account. Now your account can pay the charges of the other accounts, which are called linked accounts. The payer account and the accounts linked to it are called a Consolidated Billing account family.
Which Amazon service can I use to define a virtual network that closely resembles a traditional data center?

a Amazon Service Bus
b Amazon EMR
c Amazon Kinesis
d Amazon VPC
d Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) enables you to define a virtual network in your own logically isolated area within the AWS cloud, known as a virtual private cloud (VPC). You can launch your AWS resources, such as instances, into your VPC. Your VPC closely resembles a traditional network that you might operate in your own data center, with the benefits of using AWS's scalable infrastructure. You can configure your VPC; you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings.
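
A minimal sketch of carving out such a network with boto3 (the CIDR ranges are placeholders): a /16 VPC with one public subnet routed to the internet:

    import boto3

    ec2 = boto3.client("ec2")

    # Create the VPC and a subnet inside it.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

    # Attach an internet gateway and route the subnet's traffic through it.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)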
A _____ is a document that provides a formal statement of one or more permissions.

a Permission
b Role
c User
d Policy
d Policy

An IAM policy is a document that formally states one or more permissions.
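
For example, a policy granting read-only access to one bucket (the bucket name is a placeholder) is just a JSON document of permission statements:

    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # placeholder bucket
                "arn:aws:s3:::example-bucket/*",
            ],
        }],
    }

    iam.create_policy(PolicyName="ExampleS3ReadOnly",
                      PolicyDocument=json.dumps(policy))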
The _____ service is targeted at organizations with multiple users or systems that use AWS products such as Amazon EC2, Amazon SimpleDB, and the AWS Management Console.

a AWS Identity and Access Management
b AWS Integrity Management
c Amazon RDS
d AWS EMR
a AWS Identity and Access Management

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).
What will be the status of the snapshot until the snapshot is complete?

a Running
b Pending
c Working
d Progressing
b Pending

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
Before I delete an EBS volume, what can I do if I want to recreate the volume later?

a Create a copy of the EBS volume (not a snapshot)
b Store a snapshot of the volume
c Download the content to an EC2 instance
d Back up the data into a physical disk
b Store a snapshot of the volume

After writing data to an EBS volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.
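
A sketch of that workflow (volume ID and Availability Zone are placeholders): snapshot the volume, wait for completion, delete it, and recreate it later:

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot the volume and wait until all modified blocks reach S3.
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="pre-delete backup")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Safe to delete now; the volume can be recreated from the snapshot later.
    ec2.delete_volume(VolumeId="vol-0123456789abcdef0")
    restored = ec2.create_volume(AvailabilityZone="us-east-1a",
                                 SnapshotId=snap["SnapshotId"])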
What is the maximum response time for a Business-level Premium Support case for a critical issue?

a 12 hours
b 1 hour
c 10 minutes
d 30 minutes
b 1 hour

Refer: https://aws.amazon.com/premiumsupport/features/
A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials (i.e., Access Key ID/Secret Access Key combination) are not compromised?

a Assign an IAM role to the Amazon EC2 instance.
b Store the AWS Access Key ID/Secret Access Key combination in software comments.
c Enable Multi-Factor Authentication for your AWS root account.
d Assign an IAM user to the Amazon EC2 instance.
a Assign an IAM role to the Amazon EC2 instance.

Use roles for applications that run on Amazon EC2 instances. Applications that run on an Amazon EC2 instance need credentials in order to access other AWS services. To provide credentials to the application in a secure way, use IAM roles. A role is an entity that has its own set of permissions, but that isn't a user or group. Roles also don't have their own permanent set of credentials the way IAM users do. In the case of Amazon EC2, IAM dynamically provides temporary credentials to the EC2 instance, and these credentials are automatically rotated for you.
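
The practical effect is that application code on the instance never handles keys; a sketch assuming the instance was launched with an instance profile attached and that the role grants S3 read access (bucket name is a placeholder):

    import boto3

    # No access keys anywhere: on an EC2 instance with an IAM role attached,
    # boto3 fetches temporary credentials from the instance metadata service
    # and they are rotated automatically.
    s3 = boto3.client("s3")
    for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
        print(obj["Key"])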
You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: the VM's single 10 GB VMDK is almost full; the virtual network interface still uses the 10 Mbps driver, which leaves your 100 Mbps WAN connection completely underutilized; and it is currently running as a highly customized Windows VM within a VMware environment for which you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?

a Use S3 to create a backup of the VM and restore the data into EC2.
b Use the EC2 VM Import Connector for vCenter to import the VM into EC2.
c Use Import/Export to import the VM as an EBS snapshot and attach to EC2.
d Use the ec2-bundle-instance API to import an image of the VM into EC2.
b Use the EC2 VM Import Connector for vCenter to import the VM into EC2.

The Amazon EC2 VM Import Connector extends the capabilities of VMware vCenter to provide a familiar graphical user interface you can use to import your pre-existing virtual machines (VMs) to Amazon EC2. Using the Connector, importing a virtual machine is as simple as selecting a virtual machine (VM) from your vSphere infrastructure, and specifying the AWS Region, Availability Zone, operating system, instance size, security group, and VPC details (if desired) into which the VM should be imported. Once the VM has been imported, you can launch it as an instance from the AWS Management Console, and immediately take advantage of all the features of Amazon EC2.
You are building a solution for a customer to extend their on-premises data center to AWS. The customer requires a 50-Mbps dedicated and private connection to their VPC. Which AWS product or feature satisfies this requirement?

a AWS Direct Connect
b Amazon VPC peering
c Elastic IP Addresses
d Amazon VPC virtual private gateway
a AWS Direct Connect

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customer's objects replicated?

a A single facility in eu-west-1 and a single facility in eu-central-1
b A single facility in eu-west-1 and a single facility in us-east-1
c Multiple facilities in eu-west-1
d A single facility in eu-west-1
c Multiple facilities in eu-west-1

Objects stored in a region never leave the region unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it. For more information, see the Amazon S3 regions and endpoints documentation.
You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?

a Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic Map Reduce
b Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon
c Push web clicks by session to Amazon Kinesis and analyze behaviour using Kinesis workers
d Write click events directly to Amazon Redshift and then analyze with SQL
c Push web clicks by session to Amazon Kinesis and analyze behaviour using Kinesis workers

Amazon Kinesis greatly simplifies the process of working with real-time streaming data in the AWS Cloud. Instead of setting up and running your own processing and short-term storage infrastructure, you simply create a Kinesis stream, arrange to pump data into it, and then build an application to process or analyze it.