2026 New SAP-C01 Exam Dumps with PDF and VCE Free: https://www.2passeasy.com/dumps/SAP-C01/

It is faster and easier to pass the Amazon-Web-Services SAP-C01 exam by using Certified Amazon-Web-Services AWS Certified Solutions Architect - Professional questions and answers. Get immediate access to the most recent SAP-C01 exam, find the same core-area SAP-C01 questions with professionally verified answers, and pass your exam with a high score.

Amazon-Web-Services SAP-C01 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
A company runs a dynamic mission-critical web application that has an SLA of 99.99%. Global application users access the application 24/7. The application is currently hosted on premises and routinely fails to meet its SLA, especially when millions of users access the application concurrently. Remote users complain of latency.
How should this application be redesigned to be scalable and allow for automatic failover at the lowest cost?

  • A. Use Amazon Route 53 failover routing with geolocation-based routing.
  • B. Host the website on automatically scaled Amazon EC2 instances behind an Application Load Balancer with an additional Application Load Balancer and EC2 instances for the application layer in each region.
  • C. Use a Multi-AZ deployment with MySQL as the data layer.
  • D. Use Amazon Route 53 round robin routing to distribute the load evenly to several regions with health checks.
  • E. Host the website on automatically scaled Amazon ECS with AWS Fargate technology containers behind a Network Load Balancer, with an additional Network Load Balancer and Fargate containers for the application layer in each region.
  • F. Use Amazon Aurora replicas for the data layer.
  • G. Use Amazon Route 53 latency-based routing to route to the nearest region with health checks.
  • H. Host the website in Amazon S3 in each region and use Amazon API Gateway with AWS Lambda for the application layer.
  • I. Use Amazon DynamoDB global tables as the data layer with Amazon DynamoDB Accelerator (DAX) for caching.
  • J. Use Amazon Route 53 geolocation-based routing.
  • K. Host the website on automatically scaled AWS Fargate containers behind a Network Load Balancer with an additional Network Load Balancer and Fargate containers for the application layer in each region.
  • L. Use Amazon Aurora Multi-Master for Aurora MySQL as the data layer.

Answer: C

Explanation:
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-co
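As a back-of-the-envelope check on the scenario above: a 99.99% SLA leaves a downtime budget of only about 52 minutes per year, which is why the design needs automatic cross-region failover rather than a single on-premises site. A quick sketch (plain Python, no AWS dependencies):

```python
# Downtime budget implied by an availability SLA.
def downtime_minutes_per_year(sla_percent: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - sla_percent / 100)

print(round(downtime_minutes_per_year(99.99), 1))  # ~52.6 minutes/year
```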

NEW QUESTION 2
An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic.
Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?

  • A. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
  • B. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
  • C. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
  • D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

Answer: A

Explanation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html

NEW QUESTION 3
A company is running a web application with On-Demand Amazon EC2 instances in Auto Scaling groups that scale dynamically based on custom metrics. After extensive testing, the company determines that the m5.2xlarge instance size is optimal for the workload. Application data is stored in db.r4.4xlarge Amazon RDS instances that are confirmed to be optimal. The traffic to the web application spikes randomly during the day.
What other cost-optimization methods should the company implement to further reduce costs without impacting the reliability of the application?

  • A. Double the instance count in the Auto Scaling groups and reduce the instance size to m5.large.
  • B. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
  • C. Reduce the RDS instance size to db.r4.xlarge and add five equivalently sized read replicas to provide reliability.
  • D. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.

Answer: B
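Option B reflects a standard pattern: pay the discounted Reserved rate for the always-on floor of instances, and let Auto Scaling cover random spikes at On-Demand rates. A minimal sketch of the arithmetic, with hypothetical hourly rates (the actual discount varies by term, payment option, and instance type):

```python
def monthly_cost(baseline, spike_avg_extra, od_rate, ri_rate, hours=730):
    """Illustrative cost model: `baseline` instances always running at a
    reserved rate, plus the average spike overhead billed On-Demand.
    Rates are hypothetical $/instance-hour, not real AWS pricing."""
    reserved = baseline * ri_rate * hours
    on_demand = spike_avg_extra * od_rate * hours
    return reserved + on_demand

all_on_demand = monthly_cost(4, 2, od_rate=0.40, ri_rate=0.40)
mixed = monthly_cost(4, 2, od_rate=0.40, ri_rate=0.25)
print(mixed < all_on_demand)  # reserving the steady floor is cheaper
```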

NEW QUESTION 4
A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?

  • A. Store forecast locations in an Amazon ES cluster.
  • B. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin.
  • C. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
  • D. Store forecast locations in an Amazon EFS volume.
  • E. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume.
  • F. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
  • G. Store forecast locations in an Amazon ES cluster.
  • H. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin.
  • I. Create a Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
  • J. Store forecast locations in Amazon S3 as individual objects.
  • K. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 objects.
  • L. Set the cache-control timeout for 15 minutes in the CloudFront distribution.

Answer: C

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
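The numbers in the question stem are internally consistent and worth sanity-checking: 1 billion positions at 20 bytes each is 20 GB per forecast, and 5 million requests per hour is roughly 1,400 requests per second, rising to roughly 14,000 during weather events. This is the load the edge caching layer has to absorb:

```python
# Sanity check of the scale stated in the question.
positions = 1_000_000_000
bytes_per_position = 20
forecast_gb = positions * bytes_per_position / 1e9  # size of one forecast

requests_per_hour = 5_000_000
steady_rps = requests_per_hour / 3600   # steady request rate
peak_rps = steady_rps * 10              # up to 10x during weather events

print(forecast_gb, round(steady_rps), round(peak_rps))
```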

NEW QUESTION 5
A company is having issues with a newly deployed serverless infrastructure that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.
In a steady state, the application performs as expected. However, during peak load, tens of thousands of simultaneous invocations are needed and user requests fail multiple times before succeeding. The company has checked the logs for each component, focusing specifically on Amazon CloudWatch Logs for Lambda. There are no errors logged by the services or applications.
What might cause this problem?

  • A. Lambda has very low memory assigned, which causes the function to fail at peak load.
  • B. Lambda is in a subnet that uses a NAT gateway to reach out to the internet, and the function instance does not have sufficient Amazon EC2 resources in the VPC to scale with the load.
  • C. The throttle limit set on API Gateway is very low; during peak load, the additional requests are not making their way through to Lambda.
  • D. DynamoDB is set up in an auto scaling mode.
  • E. During peak load, DynamoDB adjusts capacity and throughput successfully.

Answer: A

NEW QUESTION 6
A company has a High Performance Computing (HPC) cluster in its on-premises data center, which runs thousands of jobs in parallel for one week every month, processing petabytes of images. The images are stored on a network file server, which is replicated to a disaster recovery site. The on-premises data center has reached capacity and has started to spread the jobs out over the course of the month in order to better utilize the cluster, causing a delay in job completion.
The company has asked its Solutions Architect to design a cost-effective solution on AWS to scale beyond the current capacity of 5,000 cores and 10 petabytes of data. The solution must require the least amount of management overhead and maintain the current level of durability.
Which solution will meet the company’s requirements?

  • A. Create a container in the Amazon Elastic Container Registry with the executable file for the job.
  • B. Use Amazon ECS with Spot Fleet in Auto Scaling groups.
  • C. Store the raw data in Amazon EBS SC1 volumes and write the output to Amazon S3.
  • D. Create an Amazon EMR cluster with a combination of On-Demand and Reserved Instance task nodes that will use Spark to pull data from Amazon S3. Use Amazon DynamoDB to maintain a list of jobs that need to be processed by the Amazon EMR cluster.
  • E. Store the raw data in Amazon S3, and use AWS Batch with Managed Compute Environments to create Spot Fleets.
  • F. Submit jobs to AWS Batch Job Queues to pull down objects from Amazon S3 onto Amazon EBS volumes for temporary storage to be processed, and then write the results back to Amazon S3.
  • G. Submit the list of jobs to be processed to an Amazon SQS queue. Create a diversified cluster of Amazon EC2 worker instances using Spot Fleet that will automatically scale based on the queue depth.
  • H. Use Amazon EFS to store all the data, sharing it across all instances in the cluster.

Answer: B

NEW QUESTION 7
An enterprise company is using a multi-account AWS strategy. There are separate accounts for development, staging, and production workloads. To control costs and improve governance, the following requirements have been defined:
• The company must be able to calculate the AWS costs for each project
• The company must be able to calculate the AWS costs for each environment: development, staging, and production
• Commonly deployed IT services must be centrally managed
• Business units can deploy pre-approved IT services only
• Usage of AWS resources in the development account must be limited
Which combination of actions should be taken to meet these requirements? (Select THREE.)

  • A. Apply environment, cost center, and application name tags to all taggable resources
  • B. Configure custom budgets and define thresholds using Cost Explorer
  • C. Configure AWS Trusted Advisor to obtain weekly emails with cost-saving estimates
  • D. Create a portfolio for each business unit and add products to the portfolios using AWS CloudFormation in AWS Service Catalog
  • E. Configure a billing alarm in Amazon CloudWatch.
  • F. Configure SCPs in AWS Organizations to control which AWS services are available for use

Answer: CEF
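The per-project and per-environment cost requirements are typically met with cost allocation tags applied consistently across accounts. A minimal sketch of a tag-completeness check; the tag keys here are illustrative choices, not an AWS-mandated schema:

```python
# Verify a resource carries the cost-allocation tags the requirements
# call for (environment, cost center, application name).
REQUIRED_TAG_KEYS = {"environment", "cost-center", "application"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tag set."""
    return REQUIRED_TAG_KEYS - set(resource_tags)

tags = {"environment": "development", "application": "billing-portal"}
print(sorted(missing_tags(tags)))  # ['cost-center']
```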

NEW QUESTION 8
A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.
Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)

  • A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
  • B. Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality.
  • C. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
  • D. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality.
  • E. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
  • F. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
  • G. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.

Answer: BE

NEW QUESTION 9
A company with multiple accounts is currently using a configuration that does not meet the following security governance policies:
• Prevent ingress from port 22 to any Amazon EC2 instance
• Require billing and application tags for resources
• Encrypt all Amazon EBS volumes
A Solutions Architect wants to provide preventive and detective controls including notifications about a specific resource, if there are policy deviations.
Which solution should the Solutions Architect implement?

  • A. Create an AWS CodeCommit repository containing policy-compliant AWS CloudFormation templates. Create an AWS Service Catalog portfolio. Import the CloudFormation templates by attaching the CodeCommit repository to the portfolio. Restrict users across all accounts to items from the AWS Service Catalog portfolio. Use AWS Config managed rules to detect deviations from the policies.
  • B. Configure an Amazon CloudWatch Events rule for deviations, and associate a CloudWatch alarm to send notifications when the TriggeredRules metric is greater than zero.
  • C. Use AWS Service Catalog to build a portfolio with products that are in compliance with the governance policies in a central account. Restrict users across all accounts to AWS Service Catalog products. Share a compliant portfolio to other accounts. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule to send a notification when a deviation occurs.
  • D. Implement policy-compliant AWS CloudFormation templates for each account and ensure that all provisioning is completed by CloudFormation. Configure Amazon Inspector to perform regular checks against resources. Perform policy validation and write the assessment output to Amazon CloudWatch Logs.
  • E. Create a CloudWatch Logs metric filter to increment a metric when a deviation occurs. Configure a CloudWatch alarm to send notifications when the configured metric is greater than zero.
  • F. Restrict users and enforce least privilege access using AWS IAM.
  • G. Consolidate all AWS CloudTrail logs into a single account. Send the CloudTrail logs to Amazon Elasticsearch Service (Amazon ES). Implement monitoring, alerting, and reporting using the Kibana dashboard in Amazon ES and with Amazon SNS.

Answer: C

NEW QUESTION 10
A company is running an email application across multiple AWS Regions. The company uses Ohio (us-east-2) as the primary Region and Northern Virginia (us-east-1) as the Disaster Recovery (DR) Region. The data is continuously replicated from the primary Region to the DR Region by a single instance on the public subnet in both Regions. The replication messages between the Regions have a significant backlog during certain times of the day. The backlog clears on its own after a short time, but it affects the application’s RPO.
Which of the following solutions should help remediate this performance problem? (Select TWO.)

  • A. Increase the size of the instances.
  • B. Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the DR Region poll from this queue.
  • C. Use multiple instances on the primary and DR Regions to send and receive the replication data.
  • D. Change the DR Region to Oregon (us-west-2) instead of the current DR Region.
  • E. Attach an additional elastic network interface to each of the instances in both Regions and set up load balancing between the network interfaces.

Answer: AC

NEW QUESTION 11
A company has a 24 TB MySQL database in its on-premises data center that grows at the rate of 10 GB per day. The data center is connected to the company’s AWS infrastructure with a 50 Mbps VPN connection.
The company is migrating the application and workload to AWS. The application code is already installed and tested on Amazon EC2. The company now needs to migrate the database and wants to go live on AWS within 3 weeks.
Which of the following approaches meets the schedule with LEAST downtime?

  • A. 1. Use the VM Import/Export service to import a snapshot of the on-premises database into AWS. 2. Launch a new EC2 instance from the snapshot. 3. Set up ongoing database replication from on premises to the EC2 database over the VPN. 4. Change the DNS entry to point to the EC2 database. 5. Stop the replication.
  • B. 1. Launch an AWS DMS instance. 2. Launch an Amazon RDS Aurora MySQL DB instance. 3. Configure the AWS DMS instance with on-premises and Amazon RDS database information. 4. Start the replication task within AWS DMS over the VPN. 5. Change the DNS entry to point to the Amazon RDS MySQL database. 6. Stop the replication.
  • C. 1. Create a database export locally using database-native tools. 2. Import that into AWS using AWS Snowball. 3. Launch an Amazon RDS Aurora DB instance. 4. Load the data in the RDS Aurora DB instance from the export. 5. Set up database replication from the on-premises database to the RDS Aurora DB instance over the VPN. 6. Change the DNS entry to point to the RDS Aurora DB instance. 7. Stop the replication.
  • D. 1. Take the on-premises application offline. 2. Create a database export locally using database-native tools. 3. Import that into AWS using AWS Snowball. 4. Launch an Amazon RDS Aurora DB instance. 5. Load the data in the RDS Aurora DB instance from the export. 6. Change the DNS entry to point to the Amazon RDS Aurora DB instance. 7. Put the Amazon EC2 hosted application online.

Answer: C
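The answer turns on simple bandwidth arithmetic: pushing the full 24 TB over a 50 Mbps VPN would take roughly 44 days, well past the 3-week deadline, which rules out a pure network migration; the daily 10 GB change delta, by contrast, fits comfortably over the VPN for ongoing replication after a Snowball bulk load. A sketch of the calculation (decimal TB/GB, link assumed fully utilized):

```python
def transfer_days(terabytes: float, link_mbps: float) -> float:
    """Days to move `terabytes` (decimal TB) over a link of `link_mbps`,
    assuming full utilization of the link."""
    bits = terabytes * 1e12 * 8
    seconds = bits / (link_mbps * 1e6)
    return seconds / 86400

print(round(transfer_days(24, 50), 1))        # full 24 TB dump: ~44.4 days
print(round(transfer_days(0.01, 50) * 24, 1)) # daily 10 GB delta: ~0.4 hours
```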

NEW QUESTION 12
A company is finalizing the architecture for its backup solution for applications running on AWS. All of the applications run on AWS and use at least two Availability Zones in each tier.
Company policy requires IT to durably store nightly backups for all its data in at least two locations: production and disaster recovery. The locations must be in different geographic regions. The company also needs the backup to be available to restore immediately at the production data center, and within 24 hours at the disaster recovery location. All backup processes must be fully automated.
What is the MOST cost-effective backup solution that will meet all requirements?

  • A. Back up all the data to a large Amazon EBS volume attached to the backup media server in the production region.
  • B. Run automated scripts to snapshot these volumes nightly, and copy these snapshots to the disaster recovery region.
  • C. Back up all the data to Amazon S3 in the disaster recovery region.
  • D. Use a lifecycle policy to move this data to Amazon Glacier in the production region immediately.
  • E. Only the data is replicated; remove the data from the S3 bucket in the disaster recovery region.
  • F. Back up all the data to Amazon Glacier in the production region.
  • G. Set up cross-region replication of this data to Amazon Glacier in the disaster recovery region.
  • H. Set up a lifecycle policy to delete any data older than 60 days.
  • I. Back up all the data to Amazon S3 in the production region.
  • J. Set up cross-region replication of this S3 bucket to another region and set up a lifecycle policy in the second region to immediately move this data to Amazon Glacier.

Answer: D

NEW QUESTION 13
A company manages more than 200 separate internet-facing web applications. All of the applications are deployed to AWS in a single AWS Region The fully qualified domain names (FQDNs) of all of the applications are made available through HTTPS using Application Load Balancers (ALBs). The ALBs are configured to use public SSL/TLS certificates.
A Solutions Architect needs to migrate the web applications to a multi-region architecture. All HTTPS services should continue to work without interruption.
Which approach meets these requirements?

  • A. Request a certificate for each FQDN using AWS KMS.
  • B. Associate the certificates with the ALBs in the primary AWS Region.
  • C. Enable cross-region availability in AWS KMS for the certificates and associate the certificates with the ALBs in the secondary AWS Region.
  • D. Generate the key pairs and certificate requests for each FQDN using AWS KMS.
  • E. Associate the certificates with the ALBs in both the primary and secondary AWS Regions.
  • F. Request a certificate for each FQDN using AWS Certificate Manager.
  • G. Associate the certificates with the ALBs in both the primary and secondary AWS Regions.
  • H. Request certificates for each FQDN in both the primary and secondary AWS Regions using AWS Certificate Manager.
  • I. Associate the certificates with the corresponding ALBs in each AWS Region.

Answer: D

Explanation:
https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html
Certificates in ACM are regional resources. To use a certificate with Elastic Load Balancing for the same fully qualified domain name (FQDN) or set of FQDNs in more than one AWS region, you must request or import a certificate for each region. For certificates provided by ACM, this means you must revalidate each domain name in the certificate for each region. You cannot copy a certificate between regions.
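In practical terms, the explanation above means the migration plan reduces to one certificate request (and one validation) per Region-FQDN pair. A tiny illustrative helper showing the shape of that plan (plain Python, not an AWS API call):

```python
def certificate_requests(fqdns, regions):
    """ACM certificates are regional resources, so a multi-Region setup
    needs a separate request for every FQDN in every Region."""
    return [(region, fqdn) for region in regions for fqdn in fqdns]

plan = certificate_requests(
    ["app1.example.com", "app2.example.com"],
    ["us-east-1", "us-west-2"],
)
print(len(plan))  # 4 requests: every FQDN in every Region
```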

NEW QUESTION 14
A company that is new to AWS reports it has exhausted its service limits across several accounts that are on the Basic Support plan. The company would like to prevent this from happening in the future.
What is the MOST efficient way of monitoring and managing all service limits in the company’s accounts?

  • A. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, provide notifications using Amazon SNS if the limits are close to exceeding the threshold.
  • B. Reach out to AWS Support to proactively increase the limits across all accounts.
  • C. That way, the customer avoids creating and managing infrastructure just to raise the service limits.
  • D. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, programmatically increase the limits that are close to exceeding the threshold.
  • E. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and use Amazon SNS for notifications if a limit is close to exceeding the threshold.
  • F. Ensure that the accounts are using the AWS Business Support plan at a minimum.

Answer: D

Explanation:
https://github.com/awslabs/aws-limit-monitor https://aws.amazon.com/solutions/limit-monitor/
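Whichever variant is chosen, the Lambda function's core logic reduces to comparing usage figures against limits and flagging anything near the threshold. A minimal offline sketch; the 80% threshold and the check tuples mimic what a Trusted Advisor service-limit check reports, and are illustrative values only:

```python
def limits_near_threshold(checks, threshold=0.8):
    """Return the names of checks whose usage/limit ratio meets or
    exceeds `threshold`. Each check is a (service, usage, limit) tuple,
    mirroring Trusted Advisor's service-limit figures."""
    return [svc for svc, usage, limit in checks if usage / limit >= threshold]

checks = [
    ("EC2 On-Demand instances", 19, 20),
    ("VPCs per Region", 2, 5),
    ("Elastic IP addresses", 4, 5),
]
print(limits_near_threshold(checks))  # ['EC2 On-Demand instances', 'Elastic IP addresses']
```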

NEW QUESTION 15
A company runs a video processing platform. Files are uploaded by users who connect to a web server, which stores them on an Amazon EFS share. This web server is running on a single Amazon EC2 instance. A different group of instances, running in an Auto Scaling group, scans the EFS share directory structure for new files to process and generates new videos (thumbnails, different resolution, compression, etc.) according to the instructions file, which is uploaded along with the video files. A different application running on a group of instances managed by an Auto Scaling group processes the video files and then deletes them from the EFS share. The results are stored in an S3 bucket. Links to the processed video files are emailed to the customer.
The company has recently discovered that as it adds more instances to the Auto Scaling group, many files are processed twice, so image processing speed is not improved. The maximum size of these video files is 2 GB.
What should the Solutions Architect do to improve reliability and reduce the redundant processing of video files?

  • A. Modify the web application to upload the video files directly to Amazon S3. Use Amazon CloudWatch Events to trigger an AWS Lambda function every time a file is uploaded, and have this Lambda function put a message into an Amazon SQS queue.
  • B. Modify the video processing application to read from the SQS queue for new files and use the queue depth metric to scale instances in the video processing Auto Scaling group.
  • C. Set up a cron job on the web server instance to synchronize the contents of the EFS share into Amazon S3. Trigger an AWS Lambda function every time a file is uploaded to process the video file and store the results in Amazon S3. Using Amazon CloudWatch Events, trigger an Amazon SES job to send an email to the customer containing the link to the processed file.
  • D. Rewrite the web application to run directly from Amazon S3 and use Amazon API Gateway to upload the video files to an S3 bucket.
  • E. Use an S3 trigger to run an AWS Lambda function each time a file is uploaded to process and store new video files in a different bucket.
  • F. Using CloudWatch Events, trigger an SES job to send an email to the customer containing the link to the processed file.
  • G. Rewrite the web application to run from Amazon S3 and upload the video files to an S3 bucket.
  • H. Each time a new file is uploaded, trigger an AWS Lambda function to put a message in an SQS queue containing the link and the instructions.
  • I. Modify the video processing application to read from the SQS queue and the S3 bucket.
  • J. Use the queue depth metric to adjust the size of the Auto Scaling group for video processing instances.

Answer: A
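The Lambda step in the chosen design maps each S3 upload notification to exactly one SQS message, which is what guarantees each video is processed once (each worker consumes a distinct message instead of scanning a shared directory). A sketch of that mapping as a pure function; the event dict follows the standard S3 notification shape, and the actual `sqs.send_message` call is omitted so the logic stays testable offline:

```python
import json

def s3_event_to_messages(event: dict) -> list:
    """Turn an S3 put-notification event into SQS message bodies,
    one per uploaded object."""
    messages = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        messages.append(json.dumps({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
        }))
    return messages

event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "videos/cat.mp4"}}}]}
print(s3_event_to_messages(event))
```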

NEW QUESTION 16
A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts for production and testing, and users are allowed to create additional application users for team members or services, as needed. The Security team has asked the Operations team for better isolation between production and testing with centralized controls on security credentials and improved management of permissions between environments.
Which of the following options would MOST securely accomplish this goal?

  • A. Create a new AWS account to hold user and service accounts, such as an identity account.
  • B. Create users and groups in the identity account.
  • C. Create roles with appropriate permissions in the production and testing accounts.
  • D. Add the identity account to the trust policies for the roles.
  • E. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team.
  • F. Set a strong IAM password policy on each account.
  • G. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job function.
  • H. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.
  • I. Create all user accounts in the production account.
  • J. Create roles for access in the production account and testing account.
  • K. Grant cross-account access from the production account to the testing account.

Answer: A

Explanation:
https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-develop

NEW QUESTION 17
A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to short but significant spikes in the system load. The service must be fault tolerant across multiple AWS Regions.
Which actions should be taken to meet these requirements?

  • A. Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized Amazon API Gateway and AWS Lambda. Assign the company's domain as an alternate domain for the distribution,
  • B. and configure Amazon Route 53 with an alias to the CloudFront distribution.
  • C. Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company's domain as an alternate domain for both distributions, and configure Amazon Route 53 with a failover routing policy between them.
  • D. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs.

Answer: A

NEW QUESTION 18
A company has an Amazon VPC that is divided into a public subnet and a private subnet. A web application runs in the Amazon VPC, and each subnet has its own NACL. The public subnet has a CIDR of 10.0.0.0/24. An Application Load Balancer is deployed to the public subnet. The private subnet has a CIDR of 10.0.1.0/24. Amazon EC2 instances that run a web server on port 80 are launched into the private subnet.
Only network traffic that is required for the Application Load Balancer to access the web application can be allowed to travel between the public and private subnets.
What collection of rules should be written to ensure that the private subnet's NACL meets the requirement? (Select TWO.)

  • A. An inbound rule for port 80 from source 0.0.0.0/0
  • B. An inbound rule for port 80 from source 10.0.0.0/24
  • C. An outbound rule for port 80 to destination 0.0.0.0/0
  • D. An outbound rule for port 80 to destination 10.0.0.0/24
  • E. An outbound rule for ports 1024 through 65535 to destination 10.0.0.0/24

Answer: BE
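The key to B and E is that NACLs are stateless: the inbound rule admits the ALB's requests on port 80, and because responses from the web servers go back to the ALB on ephemeral ports (1024-65535), a separate outbound rule is needed for that range. A simplified sketch of the evaluation (real NACLs also have numbered rules and explicit denies, which are omitted here):

```python
import ipaddress

# Stateless NACL model for the private subnet: inbound HTTP from the
# public subnet, outbound ephemeral ports back to it.
INBOUND = [("10.0.0.0/24", range(80, 81))]
OUTBOUND = [("10.0.0.0/24", range(1024, 65536))]

def allowed(rules, addr, port):
    """True if any rule's CIDR contains `addr` and its range covers `port`."""
    return any(ipaddress.ip_address(addr) in ipaddress.ip_network(cidr)
               and port in ports
               for cidr, ports in rules)

print(allowed(INBOUND, "10.0.0.15", 80))      # ALB request reaches the web server
print(allowed(OUTBOUND, "10.0.0.15", 40000))  # response on an ephemeral port
print(allowed(INBOUND, "203.0.113.9", 80))    # internet traffic bypassing the ALB
```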

NEW QUESTION 19
A company currently uses a single 1 Gbps AWS Direct Connect connection to establish connectivity between an AWS Region and its data center. The company has five Amazon VPCs, all of which are connected to the data center using the same Direct Connect connection. The Network team is worried about the single point of failure and is interested in improving the redundancy of the connections to AWS while keeping costs to a minimum.
Which solution would improve the redundancy of the connection to AWS while meeting the cost requirements?

  • A. Provision another 1 Gbps Direct Connect connection and create new VIFs to each of the VPCs. Configure the VIFs in a load balancing fashion using BGP.
  • B. Set up VPN tunnels from the data center to each VPC.
  • C. Terminate each VPN tunnel at the virtual private gateway (VGW) of the respective VPC and set up BGP for route management.
  • D. Set up a new point-to-point Multiprotocol Label Switching (MPLS) connection to the AWS Region that's being used.
  • E. Configure BGP to use this new circuit as passive, so that no traffic flows through this unless the AWS Direct Connect fails.
  • F. Create a public VIF on the Direct Connect connection and set up a VPN tunnel which will terminate on the virtual private gateway (VGW) of the respective VPC using the public VIF.
  • G. Use BGP to handle the failover to the VPN connection.

Answer: B

NEW QUESTION 20
A company is planning to migrate an application from on-premises to AWS. The application currently uses an Oracle database and the company can tolerate a brief downtime of 1 hour when performing the switch to the new infrastructure. As part of the migration, the database engine will be changed to MySQL. A Solutions Architect needs to determine which AWS services can be used to perform the migration while minimizing the amount of work and time required.
Which of the following will meet the requirements?

  • A. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to analyse the current schema and provide a recommendation for the optimal database engine. Then, use AWS DMS to migrate to the recommended engine. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
  • B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
  • C. Use AWS DMS to help identify the best target deployment between installing the database engine on Amazon EC2 directly or moving to Amazon RDS. Then, use AWS DMS to migrate to the platform. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.
  • D. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.

Answer: B
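The "initial copy, then keep in sync until cutover" behaviour in answer B corresponds to a DMS replication task of type full-load-and-cdc. A sketch of the parameter dict for `dms.create_replication_task` (the ARNs are hypothetical placeholders):

```python
replication_task = {
    "ReplicationTaskIdentifier": "oracle-to-mysql-cutover",
    "SourceEndpointArn": "arn:aws:dms:region:acct:endpoint:source",    # hypothetical
    "TargetEndpointArn": "arn:aws:dms:region:acct:endpoint:target",    # hypothetical
    "ReplicationInstanceArn": "arn:aws:dms:region:acct:rep:instance",  # hypothetical
    # full load first, then change data capture keeps source and target
    # in sync until the application is cut over:
    "MigrationType": "full-load-and-cdc",
    # Select every table in every schema (JSON table-mapping rules):
    "TableMappings": (
        '{"rules": [{"rule-type": "selection", "rule-id": "1", '
        '"rule-name": "1", "object-locator": '
        '{"schema-name": "%", "table-name": "%"}, '
        '"rule-action": "include"}]}'
    ),
}
```

AWS SCT handles the Oracle-to-MySQL schema and embedded-SQL conversion; DMS only moves and replicates the data.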

NEW QUESTION 21
A company has asked a Solutions Architect to design a secure content management solution that can be accessed by API calls by external customer applications. The company requires that a customer administrator must be able to submit an API call and roll back changes to existing files sent to the content management solution, as needed.
What is the MOST secure deployment design that meets all solution requirements?

  • A. Use Amazon S3 for object storage with versioning and bucket access logging enabled, and an IAM role and access policy for each customer application. Encrypt objects using SSE-KMS. Develop the content management application to use a separate AWS KMS key for each customer.
  • B. Use Amazon WorkDocs for object storage. Leverage WorkDocs encryption, user access management, and version control. Use AWS CloudTrail to log all SDK actions and create reports of hourly access by using the Amazon CloudWatch dashboard. Enable a revert function in the SDK based on a static Amazon S3 webpage that shows the output of the CloudWatch dashboard.
  • C. Use Amazon EFS for object storage, using encryption at rest for the Amazon EFS volume and a customer managed key stored in AWS KMS. Use IAM roles and Amazon EFS access policies to specify separate encryption keys for each customer application. Deploy the content management application to store all new versions as new files in Amazon EFS and use a control API to revert a specific file to a previous version.
  • D. Use Amazon S3 for object storage with versioning and enable S3 bucket access logging. Use an IAM role and access policy for each customer application. Encrypt objects using client-side encryption, and distribute an encryption key to all customers when accessing the content management application.

Answer: A
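A sketch of the S3 building blocks behind answer A (bucket and key names are hypothetical): versioning enables rollback, each upload uses the customer's own KMS key via SSE-KMS, and rolling back is done by copying an older version of the object on top of itself, which makes it the current version again.

```python
# For s3.put_bucket_versioning(Bucket=..., VersioningConfiguration=versioning):
versioning = {"Status": "Enabled"}

def put_params(customer_kms_key_arn, body):
    """Params for s3.put_object, encrypting with a customer-specific KMS key."""
    return {
        "Bucket": "content-mgmt-bucket",   # hypothetical
        "Key": "customer-a/report.pdf",    # hypothetical
        "Body": body,
        "ServerSideEncryption": "aws:kms",
        "SSEKMSKeyId": customer_kms_key_arn,
    }

def rollback_params(version_id):
    """Params for s3.copy_object: copying an old version makes it current."""
    return {
        "Bucket": "content-mgmt-bucket",
        "Key": "customer-a/report.pdf",
        "CopySource": {
            "Bucket": "content-mgmt-bucket",
            "Key": "customer-a/report.pdf",
            "VersionId": version_id,       # the version to restore
        },
    }
```

The per-customer IAM role and access policy (not shown) would restrict each customer application to its own key prefix and KMS key.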

NEW QUESTION 22
A company operating a website on AWS requires high levels of scalability, availability and performance. The company is running a Ruby on Rails application on Amazon EC2. It has a data tier on MySQL 5.6 on Amazon EC2 using 16 TB of Amazon EBS storage. Amazon CloudFront is used to cache application content. The Operations team is reporting continuous and unexpected growth of EBS volumes assigned to the MySQL database. The Solutions Architect has been asked to design a highly scalable, highly available, and high-performing solution.
Which solution is the MOST cost-effective at scale?

  • A. Implement Multi-AZ and Auto Scaling for all EC2 instances in the current configuration. Ensure that all EC2 instances are purchased as Reserved Instances. Implement new elastic Amazon EBS volumes for the data tier.
  • B. Design and implement a Docker-based containerized solution for the application using Amazon ECS. Migrate to an Amazon Aurora MySQL Multi-AZ cluster. Implement storage checks for Aurora MySQL storage utilization and an AWS Lambda function to grow the Aurora MySQL storage, as necessary. Ensure that Multi-AZ architectures are implemented.
  • C. Ensure that EC2 instances are right-sized and behind an Elastic Load Balancing load balancer. Implement Auto Scaling with EC2 instances. Ensure that Reserved Instances are purchased for fixed capacity and that Auto Scaling instances run on demand. Migrate to an Amazon Aurora MySQL Multi-AZ cluster. Ensure that Multi-AZ architectures are implemented.
  • D. Ensure that EC2 instances are right-sized and behind an Elastic Load Balancer. Implement Auto Scaling with EC2 instances. Ensure that Reserved Instances are purchased for fixed capacity and that Auto Scaling instances run on demand. Migrate to an Amazon Aurora MySQL Multi-AZ cluster. Implement storage checks for Aurora MySQL storage utilization and an AWS Lambda function to grow Aurora MySQL storage, as necessary. Ensure Multi-AZ architectures are implemented.

Answer: C
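A sketch of the RDS parameter dicts for the Aurora MySQL move in answer C (identifiers and instance class are hypothetical). Note that Aurora storage grows automatically up to the engine limit, which is why the Lambda-based storage-growth step in the other options is unnecessary work:

```python
# Parameters for rds.create_db_cluster (boto3):
cluster = {
    "DBClusterIdentifier": "web-data-tier",  # hypothetical
    "Engine": "aurora-mysql",
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",      # placeholder, never hard-code
}

# Parameters for rds.create_db_instance; a second instance in another AZ
# gives the cluster Multi-AZ failover:
writer = {
    "DBInstanceIdentifier": "web-data-tier-1",
    "DBClusterIdentifier": "web-data-tier",
    "DBInstanceClass": "db.r5.large",        # hypothetical sizing
    "Engine": "aurora-mysql",
}
reader = dict(writer, DBInstanceIdentifier="web-data-tier-2")
```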

NEW QUESTION 23
A company has released a new version of a website to target an audience in Asia and South America. The website’s media assets are hosted on Amazon S3 and have an Amazon CloudFront distribution to improve end-user performance. However, users are having a poor login experience because the authentication service is only available in the us-east-1 AWS Region.
How can the Solutions Architect improve the login experience and maintain high security and performance with minimal management overhead?

  • A. Replicate the setup in each new geography and use Amazon Route 53 geolocation-based routing to route traffic to the AWS Region closest to the users.
  • B. Use an Amazon Route 53 weighted routing policy to route traffic to the CloudFront distribution. Use CloudFront cached HTTP methods to improve the user login experience.
  • C. Use Lambda@Edge attached to the CloudFront viewer-request trigger to authenticate and authorize users by maintaining a secure cookie token with a session expiry to improve the user experience in multiple geographies.
  • D. Replicate the setup in each geography and use Network Load Balancers to route traffic to the authentication service running in the Region closest to the users.

Answer: C

Explanation:
There are several benefits to using Lambda@Edge for authorization operations. First, performance is improved by running the authorization function using Lambda@Edge closest to the viewer, reducing latency and response time to the viewer request. The load on your origin servers is also reduced by offloading CPU-intensive operations such as verification of JSON Web Token (JWT) signatures. Finally, there are security benefits such as filtering out unauthorized requests before they reach your origin infrastructure.
https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-to-use-lambdaedge-and-
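A minimal sketch of a Lambda@Edge viewer-request handler in the spirit of the explanation above: it reads a session cookie from the CloudFront event and rejects unauthenticated requests at the edge. Token validation is stubbed here; a real deployment would verify a signed JWT and its expiry as the blog post describes.

```python
def is_valid_token(token):
    # Placeholder: a real handler would verify a JWT signature and expiry.
    return token == "valid-session-token"

def handler(event, context):
    """Lambda@Edge viewer-request trigger: pass through or reject at the edge."""
    request = event["Records"][0]["cf"]["request"]
    cookies = request.get("headers", {}).get("cookie", [])
    for header in cookies:
        for part in header["value"].split(";"):
            name, _, value = part.strip().partition("=")
            if name == "session" and is_valid_token(value):
                return request  # authorized: forward toward cache/origin
    # Unauthorized: generate the response at the edge; the origin never
    # sees the request, which is the offloading benefit described above.
    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "headers": {
            "www-authenticate": [{"key": "WWW-Authenticate", "value": "Bearer"}]
        },
    }
```

Returning the (possibly modified) request object forwards it; returning a response dict short-circuits the request at the edge location closest to the viewer.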

NEW QUESTION 24
......

Thanks for reading the newest SAP-C01 exam dumps! We recommend you to try the PREMIUM Simply pass SAP-C01 dumps in VCE and PDF here: https://www.simply-pass.com/Amazon-Web-Services-exam/SAP-C01-dumps.html (179 Q&As Dumps)