Propel your SAP-C02 exam prep with the freshest VCE and PDF study aids

Begin your certification saga, empowered by the profound insights offered by the SAP-C02 dumps. Crafted with finesse to mirror the vast expanse of the curriculum, the SAP-C02 dumps offer a panorama of practice questions, anchoring a deep-rooted understanding. Be it the intuitive design of PDFs that appeals or the engrossing narrative of the VCE format that captivates, the SAP-C02 dumps are unparalleled. A pivotal study guide, the heart and soul of the SAP-C02 dumps, elucidates intricate concepts, ensuring unerring clarity. Confident in the transformative potential of these tools, we unhesitatingly endorse our 100% Pass Guarantee.

[New Update] Maximize your exam potential with the free SAP-C02 PDF and Exam Questions, backed by our 100% pass commitment

Question 1:

A company is migrating a legacy application from an on-premises data center to AWS. The application uses MongoDB as a key-value database. According to the company's technical guidelines, all Amazon EC2 instances must be hosted in a private subnet without an internet connection. In addition, all connectivity between applications and databases must be encrypted. The database must be able to scale based on demand.

Which solution will meet these requirements?

A. Create new Amazon DocumentDB (with MongoDB compatibility) tables for the application with Provisioned IOPS volumes. Use the instance endpoint to connect to Amazon DocumentDB.

B. Create new Amazon DynamoDB tables for the application with on-demand capacity. Use a gateway VPC endpoint for DynamoDB to connect to the DynamoDB tables.

C. Create new Amazon DynamoDB tables for the application with on-demand capacity. Use an interface VPC endpoint for DynamoDB to connect to the DynamoDB tables.

D. Create new Amazon DocumentDB (with MongoDB compatibility) tables for the application with Provisioned IOPS volumes. Use the cluster endpoint to connect to Amazon DocumentDB.

Correct Answer: B
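Why B works: a gateway VPC endpoint keeps DynamoDB traffic on the AWS network, so the private subnets need no internet connection, and DynamoDB connections are encrypted with TLS. As a rough illustration, here is a minimal boto3 sketch of the endpoint setup (the VPC and route table IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint so instances in private subnets reach DynamoDB
# without an internet gateway or NAT (IDs are hypothetical).
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```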


Question 2:

An AWS partner company is building a service in AWS Organizations using its organization named org1. This service requires the partner company to have access to AWS resources in a customer account, which is in a separate organization named org2.

The company must establish least privilege security access using an API or command line tool to the customer account.

What is the MOST secure way to allow org1 to access resources in org2?

A. The customer should provide the partner company with their AWS account access keys to log in and perform the required tasks.

B. The customer should create an IAM user and assign the required permissions to the IAM user. The customer should then provide the credentials to the partner company to log in and perform the required tasks.

C. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role's Amazon Resource Name (ARN) when requesting access to perform the required tasks.

D. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the required tasks.

Correct Answer: D

https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html
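The external ID in the role's trust policy mitigates the confused-deputy problem described in the link above. A minimal sketch of how the partner would assume the customer's role (all ARNs and IDs are hypothetical):

```python
import boto3

# The partner (org1) assumes the role the customer created in org2; the
# ExternalId must match the value in the role's trust policy.
sts = boto3.client("sts")
session = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/partner-access-role",
    RoleSessionName="partner-task",
    ExternalId="org1-unique-external-id",
)
# session["Credentials"] now holds temporary, least-privilege credentials.
```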


Question 3:

A company uses AWS Organizations to manage its AWS accounts. The company needs a list of all its Amazon EC2 instances that have underutilized CPU or memory usage. The company also needs recommendations for how to downsize these underutilized instances.

Which solution will meet these requirements with the LEAST effort?

A. Install a CPU and memory monitoring tool from AWS Marketplace on all the EC2 instances. Store the findings in Amazon S3. Implement a Python script to identify underutilized instances. Reference EC2 instance pricing information for recommendations about downsizing options.

B. Install the Amazon CloudWatch agent on all the EC2 instances by using AWS Systems Manager. Retrieve the resource optimization recommendations from AWS Cost Explorer in the organization's management account. Use the recommendations to downsize underutilized instances in all accounts of the organization.

C. Install the Amazon CloudWatch agent on all the EC2 instances by using AWS Systems Manager. Retrieve the resource optimization recommendations from AWS Cost Explorer in each account of the organization. Use the recommendations to downsize underutilized instances in all accounts of the organization.

D. Install the Amazon CloudWatch agent on all the EC2 instances by using AWS Systems Manager. Create an AWS Lambda function to extract CPU and memory usage from all the EC2 instances. Store the findings as files in Amazon S3. Use Amazon Athena to find underutilized instances. Reference EC2 instance pricing information for recommendations about downsizing options.

Correct Answer: B
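Option B takes the least effort because Cost Explorer's rightsizing recommendations are organization-wide when retrieved from the management account, and the CloudWatch agent supplies the memory metrics that EC2 does not emit by default. A hedged boto3 sketch of pulling the recommendations:

```python
import boto3

# Run from the organization's management account; returns downsizing
# recommendations for underutilized EC2 instances across member accounts.
ce = boto3.client("ce")
resp = ce.get_rightsizing_recommendation(
    Service="AmazonEC2",
    Configuration={
        "RecommendationTarget": "SAME_INSTANCE_FAMILY",
        "BenefitsConsidered": True,
    },
)
for rec in resp["RightsizingRecommendations"]:
    # RightsizingType is MODIFY (downsize) or TERMINATE.
    print(rec["AccountId"], rec["RightsizingType"])
```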


Question 4:

A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region.

Which solution will meet these requirements?

A. Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables.

B. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.

C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.

D. Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.

Correct Answer: C

https://docs.aws.amazon.com/apigateway/latest/developerguide/dns-failover.html
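A Route 53 failover routing policy with health checks sends traffic to the secondary Region's API only when the primary is unhealthy. A minimal sketch of the PRIMARY record (the hosted zone ID, health check ID, and API Gateway domain are all hypothetical; a matching SECONDARY record points at the other Region):

```python
import boto3

r53 = boto3.client("route53")

# Failover PRIMARY record for weather.example.com pointing at the
# Regional API endpoint; IDs below are placeholders.
r53.change_resource_record_sets(
    HostedZoneId="Z123456ABCDEFG",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "weather.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "hc-primary-id",
                    "AliasTarget": {
                        # API Gateway's hosted zone for the Region (example value).
                        "HostedZoneId": "Z1UJRXOUMOOFQ8",
                        "DNSName": "d-abc123.execute-api.us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```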


Question 5:

A company's solutions architect is designing a disaster recovery (DR) solution for an application that runs on AWS. The application uses PostgreSQL 11.7 as its database. The company has an RPO of 30 seconds. The solutions architect must design a DR solution with the primary database in the us-east-1 Region and the DR database in the us-west-2 Region.

What should the solutions architect do to meet these requirements with minimum application change?

A. Migrate the database to Amazon RDS for PostgreSQL in us-east-1. Set up a read replica in us-west-2. Set the managed RPO for the RDS database to 30 seconds.

B. Migrate the database to Amazon RDS for PostgreSQL in us-east-1. Set up a standby replica in an Availability Zone in us-west-2. Set the managed RPO for the RDS database to 30 seconds.

C. Migrate the database to an Amazon Aurora PostgreSQL global database with the primary Region as us-east-1 and the secondary Region as us-west-2. Set the managed RPO for the Aurora database to 30 seconds.

D. Migrate the database to Amazon DynamoDB in us-east-1. Set up global tables with replica tables that are created in us-west-2.

Correct Answer: C

Managed RPO is a feature of Amazon Aurora Global Database: for an Aurora PostgreSQL global database you set the aurora_global_db_rpo cluster parameter (here, 30 seconds) and Aurora manages cross-Region replication to meet it. RDS for PostgreSQL read replicas have no managed RPO setting, so options A and B cannot satisfy the requirement.
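A minimal boto3 sketch of option C; the cluster, parameter group, and account identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing regional cluster into a global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="app-global-db",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:app-primary",
)

# Add the secondary cluster in us-west-2.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.create_db_cluster(
    DBClusterIdentifier="app-secondary",
    Engine="aurora-postgresql",
    GlobalClusterIdentifier="app-global-db",
)

# Managed RPO for Aurora PostgreSQL global databases is set through the
# aurora_global_db_rpo cluster parameter (value in seconds).
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="app-primary-params",
    Parameters=[{
        "ParameterName": "aurora_global_db_rpo",
        "ParameterValue": "30",
        "ApplyMethod": "immediate",
    }],
)
```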


Question 6:

A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances.

To meet regulatory and business requirements, the company must make the following changes for data backups:

1. Backups must be retained based on custom daily, weekly, and monthly requirements.

2. Backups must be replicated to at least one other AWS Region immediately after capture.

3. The backup solution must provide a single source of backup status across the AWS environment.

4. The backup solution must send immediate notifications upon failure of any resource backup.

Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Select THREE.)

A. Create an AWS Backup plan with a backup rule for each of the retention requirements.

B. Configure an AWS Backup plan to copy backups to another Region.

C. Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.

D. Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.

E. Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.

F. Set up RDS snapshots on each database.

Correct Answer: ABD

AWS Backup covers EC2, Amazon EFS, and Amazon RDS from a single console: a backup rule per retention requirement (A), a copy action to a second Region in the plan (B), and SNS notifications for any job that finishes with a status other than completed (D). Amazon DLM manages only EBS snapshots and EBS-backed AMIs, so it cannot back up the EFS file systems or RDS DB instances.
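A sketch of answers A, B, and D in boto3; vault names, ARNs, and schedules are hypothetical, and only the daily rule is shown:

```python
import boto3

backup = boto3.client("backup")

# One rule per retention requirement; each rule copies to a second Region
# immediately after the backup completes.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "org-backup-plan",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:123456789012:"
                            "backup-vault:dr-vault"
                        ),
                        "Lifecycle": {"DeleteAfterDays": 35},
                    }
                ],
            },
            # ...weekly and monthly rules follow the same shape...
        ],
    }
)

# BACKUP_JOB_COMPLETED fires for finished jobs regardless of outcome; the
# SNS message carries the job status, so a subscription filter on anything
# other than COMPLETED surfaces failures immediately.
backup.put_backup_vault_notifications(
    BackupVaultName="primary-vault",
    SNSTopicArn="arn:aws:sns:us-east-1:123456789012:backup-alerts",
    BackupVaultEvents=["BACKUP_JOB_COMPLETED"],
)
```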


Question 7:

A company is moving a business-critical multi-tier application to AWS. The architecture consists of a desktop client application and server infrastructure. The server infrastructure resides in an on-premises data center that frequently fails to maintain the application uptime SLA of 99.95%. A solutions architect must re-architect the application to ensure that it can meet or exceed the SLA.

The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation layers are load balanced between multiple virtual machines. Remote users complain about slow load times while using this latency-sensitive application.

Which of the following will meet the availability requirements with little change to the application while improving user experience and minimizing costs?

A. Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces workspace for each end user to improve the user experience.

B. Migrate the database to an Amazon RDS Aurora PostgreSQL configuration. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.

C. Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.

D. Migrate the database to an Amazon Redshift cluster with at least two nodes. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.

Correct Answer: B

Aurora improves availability by replicating data six ways across multiple Availability Zones. An Auto Scaling group behind an Application Load Balancer improves the performance and resilience of the application tier. Amazon AppStream 2.0, similar to Citrix, streams hosted applications to users, which improves the experience for remote users of this latency-sensitive desktop client.


Question 8:

A company is using multiple AWS accounts. The company has a shared services account and several other accounts for different projects.

A team has a VPC in a project account. The team wants to connect this VPC to a corporate network through an AWS Direct Connect gateway that exists in the shared services account. The team wants to automatically perform a virtual private gateway association with the Direct Connect gateway by using an already-tested AWS Lambda function while deploying its VPC networking stack. The Lambda function code can assume a role by using AWS Security Token Service (AWS STS). The team is using AWS CloudFormation to deploy its infrastructure.

Which combination of steps will meet these requirements? (Select THREE.)

A. Deploy the Lambda function to the project account. Update the Lambda function's IAM role with the directconnect:* permission.

B. Create a cross-account IAM role in the shared services account that grants the Lambda function the directconnect:* permission. Add the sts:AssumeRole permission to the IAM role that is associated with the Lambda function in the shared services account.

C. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the project account.

D. Deploy the Lambda function that is performing the association to the shared services account. Update the Lambda function's IAM role with the directconnect:* permission.

E. Create a cross-account IAM role in the shared services account that grants the sts:AssumeRole permission to the Lambda function with the directconnect:* permission acting as a resource. Add the sts:AssumeRole permission with this cross-account IAM role as a resource to the IAM role that belongs to the Lambda function in the project account.

F. Add a custom resource to the CloudFormation networking stack that references the Lambda function in the shared services account.

Correct Answer: BCE
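A rough sketch of the custom-resource Lambda in the project account assuming the cross-account role (answers C and E). Note that, depending on which account owns the virtual private gateway, a cross-account association may instead require the Direct Connect association-proposal flow; all identifiers here are hypothetical:

```python
import boto3

def handler(event, context):
    # Assume the cross-account role in the shared services account.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111111111111:role/dxgw-association-role",
        RoleSessionName="vpc-stack-dxgw",
    )["Credentials"]

    dx = boto3.client(
        "directconnect",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Associate the virtual private gateway with the Direct Connect
    # gateway owned by the shared services account.
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=event["ResourceProperties"]["DxGatewayId"],
        virtualGatewayId=event["ResourceProperties"]["VgwId"],
    )
```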


Question 9:

A company is deploying a new API to AWS. The API uses Amazon API Gateway with a Regional API endpoint and an AWS Lambda function for hosting. The API retrieves data from an external vendor API, stores data in an Amazon DynamoDB global table, and retrieves data from the DynamoDB global table. The API key for the vendor's API is stored in AWS Secrets Manager and is encrypted with a customer managed key in AWS Key Management Service (AWS KMS).

The company has deployed its own API into a single AWS Region.

A solutions architect needs to change the API components of the company's API to ensure that the components can run across multiple Regions in an active-active configuration.

Which combination of changes will meet this requirement with the LEAST operational overhead? (Choose three.)

A. Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue answer routing policy.

B. Create a new KMS multi-Region customer managed key. Create a new KMS customer managed replica key in each in-scope Region.

C. Replicate the existing Secrets Manager secret to other Regions. For each in-scope Region's replicated secret, select the appropriate KMS key.

D. Create a new AWS managed KMS key in each in-scope Region. Convert an existing key to a multi-Region key. Use the multi-Region key in other Regions.

E. Create a new Secrets Manager secret in each in-scope Region. Copy the secret value from the existing Region to the new secret in each in-scope Region.

F. Modify the deployment process for the Lambda function to repeat the deployment across in-scope Regions. Turn on the multi-Region option for the existing API. Select the Lambda function that is deployed in each Region as the backend for the multi-Region API.

Correct Answer: ABC

References:

1: Creating a regional API endpoint – Amazon API Gateway

2: Multivalue answer routing policy – Amazon Route 53

3: Multi-Region keys in AWS KMS – AWS Key Management Service

4: Creating multi-Region keys – AWS Key Management Service

5: Replicate an AWS Secrets Manager secret to other AWS Regions

6: How to replicate secrets in AWS Secrets Manager to multiple Regions | AWS Security Blog
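A sketch of answers B and C in boto3; the key description, secret name, and Regions are illustrative assumptions:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a multi-Region primary key, then replicate it (answer B).
key = kms.create_key(MultiRegion=True, Description="vendor-api-key-cmk")
key_id = key["KeyMetadata"]["KeyId"]
kms.replicate_key(KeyId=key_id, ReplicaRegion="us-west-2")

# Replicate the Secrets Manager secret, encrypting the replica with the
# replica key in its Region (answer C).
sm = boto3.client("secretsmanager", region_name="us-east-1")
sm.replicate_secret_to_regions(
    SecretId="vendor/api-key",
    AddReplicaRegions=[
        {"Region": "us-west-2", "KmsKeyId": key_id},
    ],
)
```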


Question 10:

A company wants to migrate its on-premises application to AWS. The database for the application stores structured product data and temporary user session data. The company needs to decouple the product data from the user session data. The company also needs to implement replication in another AWS Region for disaster recovery.

Which solution will meet these requirements with the HIGHEST performance?

A. Create an Amazon RDS DB instance with separate schemas to host the product data and the user session data. Configure a read replica for the DB instance in another Region.

B. Create an Amazon RDS DB instance to host the product data. Configure a read replica for the DB instance in another Region. Create a global datastore in Amazon ElastiCache for Memcached to host the user session data.

C. Create two Amazon DynamoDB global tables. Use one global table to host the product data. Use the other global table to host the user session data. Use DynamoDB Accelerator (DAX) for caching.

D. Create an Amazon RDS DB instance to host the product data. Configure a read replica for the DB instance in another Region. Create an Amazon DynamoDB global table to host the user session data.

Correct Answer: B


Question 11:

A software development company has multiple engineers who are working remotely. The company is running Active Directory Domain Services (AD DS) on an Amazon EC2 instance. The company's security policy states that all internal, nonpublic services that are deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA) must be used for access to the VPN.

What should a solutions architect do to meet these requirements?

A. Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.

B. Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.

C. Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.

D. Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.

Correct Answer: B
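A sketch of the Client VPN endpoint from answer B; MFA is enabled on the AD Connector directory itself, and every identifier below is hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Client VPN endpoint that authenticates users against AD DS through an
# AD Connector directory.
ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",
    ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/abc",
    AuthenticationOptions=[
        {
            "Type": "directory-service-authentication",
            "ActiveDirectory": {"DirectoryId": "d-1234567890"},
        }
    ],
    ConnectionLogOptions={"Enabled": False},
)
```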


Question 12:

A company is planning to migrate its on-premises VMware cluster of 120 VMs to AWS. The VMs have many different operating systems and many custom software packages installed. The company also has an on-premises NFS server that is 10 TB in size. The company has set up a 10 Gbps AWS Direct Connect connection to AWS for the migration.

Which solution will complete the migration to AWS in the LEAST amount of time?

A. Export the on-premises VMs and copy them to an Amazon S3 bucket. Use VM Import/Export to create AMIs from the VM images that are stored in Amazon S3. Order an AWS Snowball Edge device. Copy the NFS server data to the device. Restore the NFS server data to an Amazon EC2 instance that has NFS configured.

B. Configure AWS Application Migration Service with a connection to the VMware cluster. Create a replication job for the VMs. Create an Amazon Elastic File System (Amazon EFS) file system. Configure AWS DataSync to copy the NFS server data to the EFS file system over the Direct Connect connection.

C. Recreate the VMs on AWS as Amazon EC2 instances. Install all the required software packages. Create an Amazon FSx for Lustre file system. Configure AWS DataSync to copy the NFS server data to the FSx for Lustre file system over the Direct Connect connection.

D. Order two AWS Snowball Edge devices. Copy the VMs and the NFS server data to the devices. Run VM Import/Export after the data from the devices is loaded to an Amazon S3 bucket. Create an Amazon Elastic File System (Amazon EFS) file system. Copy the NFS server data from Amazon S3 to the EFS file system.

Correct Answer: B

This option completes the migration in the least amount of time because it combines two services that are designed to simplify and accelerate data transfers and migrations.

AWS Application Migration Service (AWS MGN) is a highly automated lift-and-shift solution that helps you migrate applications from any source infrastructure that runs supported operating systems to AWS. It replicates your source servers into your AWS account and automatically converts and launches them on AWS so you can quickly benefit from the cloud. You can use AWS MGN to migrate the on-premises VMware VMs by configuring a connection to the VMware cluster and creating a replication job for the VMs, which avoids the time-intensive, error-prone manual process of exporting and importing VM images.

AWS DataSync is an online data movement and discovery service that simplifies and accelerates data migrations to AWS and helps you move data quickly and securely between on-premises storage, edge locations, other cloud providers, and AWS Storage. It can transfer data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File System (HDFS), self-managed object storage, AWS Snowcone, Amazon S3 buckets, Amazon EFS file systems, and the Amazon FSx family of file systems. Copying the on-premises NFS server data to an Amazon EFS file system over the Direct Connect connection takes advantage of the high bandwidth and low latency of Direct Connect along with the encryption and data-integrity validation of DataSync.
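A sketch of the DataSync portion of answer B; hostnames, ARNs, and paths are hypothetical:

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS server, reached through a DataSync agent
# that routes over the Direct Connect connection.
src = datasync.create_location_nfs(
    ServerHostname="nfs.corp.example.com",
    Subdirectory="/export/data",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-1"]
    },
)

# Destination: the EFS file system.
dst = datasync.create_location_efs(
    EfsFilesystemArn=(
        "arn:aws:elasticfilesystem:us-east-1:123456789012:"
        "file-system/fs-12345678"
    ),
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-1",
        "SecurityGroupArns": [
            "arn:aws:ec2:us-east-1:123456789012:security-group/sg-1"
        ],
    },
)

# The task copies the 10 TB of NFS data once the agent is online.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="nfs-to-efs",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```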


Question 13:

An external audit of a company's serverless application reveals IAM policies that grant too many permissions. These policies are attached to the company's AWS Lambda execution roles. Hundreds of the company's Lambda functions have broad access permissions, such as full access to Amazon S3 buckets and Amazon DynamoDB tables. The company wants each function to have only the minimum permissions that the function needs to complete its task.

A solutions architect must determine which permissions each Lambda function needs.

What should the solutions architect do to meet this requirement with the LEAST amount of effort?

A. Set up Amazon CodeGuru to profile the Lambda functions and search for AWS API calls. Create an inventory of the required API calls and resources for each Lambda function. Create new IAM access policies for each Lambda function. Review the new policies to ensure that they meet the company's business requirements.

B. Turn on AWS CloudTrail logging for the AWS account. Use AWS Identity and Access Management Access Analyzer to generate IAM access policies based on the activity recorded in the CloudTrail log. Review the generated policies to ensure that they meet the company's business requirements.

C. Turn on AWS CloudTrail logging for the AWS account. Create a script to parse the CloudTrail log, search for AWS API calls by Lambda execution role, and create a summary report. Review the report. Create IAM access policies that provide more restrictive permissions for each Lambda function.

D. Turn on AWS CloudTrail logging for the AWS account. Export the CloudTrail logs to Amazon S3. Use Amazon EMR to process the CloudTrail logs in Amazon S3 and produce a report of API calls and resources used by each execution role. Create a new IAM access policy for each role. Export the generated roles to an S3 bucket. Review the generated policies to ensure that they meet the company's business requirements.

Correct Answer: B

IAM Access Analyzer can generate fine-grained IAM policies based on the access activity recorded in AWS CloudTrail, which makes it the least-effort way to right-size permissions for each Lambda execution role. It also identifies resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity, which helps you spot unintended access to your resources and data. https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html
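A sketch of the policy-generation workflow behind answer B; the role and trail ARNs are hypothetical:

```python
from datetime import datetime, timezone

import boto3

analyzer = boto3.client("accessanalyzer")

# Ask IAM Access Analyzer to generate a least-privilege policy from the
# CloudTrail activity of one Lambda execution role.
job = analyzer.start_policy_generation(
    policyGenerationDetails={
        "principalArn": "arn:aws:iam::123456789012:role/my-function-role"
    },
    cloudTrailDetails={
        "trails": [
            {
                "cloudTrailArn": (
                    "arn:aws:cloudtrail:us-east-1:123456789012:trail/main"
                ),
                "allRegions": True,
            }
        ],
        "accessRole": "arn:aws:iam::123456789012:role/access-analyzer-role",
        "startTime": datetime(2024, 1, 1, tzinfo=timezone.utc),
    },
)

# Poll for the generated policy, then review it before attaching.
result = analyzer.get_generated_policy(jobId=job["jobId"])
```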


Question 14:

A solutions architect is reviewing an application's resilience before launch. The application runs on an Amazon EC2 instance that is deployed in a private subnet of a VPC.

The EC2 instance is provisioned by an Auto Scaling group that has a minimum capacity of 1 and a maximum capacity of 1. The application stores data on an Amazon RDS for MySQL DB instance. The VPC has subnets configured in three Availability Zones and is configured with a single NAT gateway.

The solutions architect needs to recommend a solution to ensure that the application will operate across multiple Availability Zones.

Which solution will meet this requirement?

A. Deploy an additional NAT gateway in the other Availability Zones. Update the route tables with appropriate routes. Modify the RDS for MySQL DB instance to a Multi-AZ configuration. Configure the Auto Scaling group to launch instances across Availability Zones. Set the minimum capacity and maximum capacity of the Auto Scaling group to 3.

B. Replace the NAT gateway with a virtual private gateway. Replace the RDS for MySQL DB instance with an Amazon Aurora MySQL DB cluster. Configure the Auto Scaling group to launch instances across all subnets in the VPC. Set the minimum capacity and maximum capacity of the Auto Scaling group to 3.

C. Replace the NAT gateway with a NAT instance. Migrate the RDS for MySQL DB instance to an RDS for PostgreSQL DB instance. Launch a new EC2 instance in the other Availability Zones.

D. Deploy an additional NAT gateway in the other Availability Zones. Update the route tables with appropriate routes. Modify the RDS for MySQL DB instance to turn on automatic backups and retain the backups for 7 days. Configure the Auto Scaling group to launch instances across all subnets in the VPC. Keep the minimum capacity and the maximum capacity of the Auto Scaling group at 1.

Correct Answer: A
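A sketch of answer A's two main database and scaling changes with boto3; identifiers and subnet IDs are hypothetical:

```python
import boto3

# Convert the DB instance to a Multi-AZ deployment.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Spread the Auto Scaling group across subnets in three Availability
# Zones and raise the capacity from 1 to 3.
autoscaling = boto3.client("autoscaling")
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    VPCZoneIdentifier="subnet-a,subnet-b,subnet-c",
    MinSize=3,
    MaxSize=3,
)
```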


Question 15:

An enterprise company is building an infrastructure services platform for its users. The company has the following requirements:

1. Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services.

2. Use a central account to manage the creation of infrastructure services. Provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations.

3. Provide the ability to enforce tags on any infrastructure that is started by users.

Which combination of actions using AWS services will meet these requirements? (Choose three.)

A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket, and add the IAM roles or users that require access to the S3 bucket policy.

B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.

C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.

D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.

E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service Catalog products or portfolios.

F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.

Correct Answer: BDE

Developing infrastructure services as AWS CloudFormation templates and uploading them as AWS Service Catalog products to portfolios created in a central AWS account enables the company to centrally manage the creation of infrastructure services and control who can use them. AWS Service Catalog allows you to create and manage catalogs of IT services that are approved for use on AWS. You can organize products into portfolios, which are collections of products along with configuration information, and you can share portfolios with other accounts in your organization using AWS Organizations.

Allowing user IAM roles to have only ServiceCatalogEndUserAccess permissions, and using an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints, provides least privilege access to users when launching AWS infrastructure services. ServiceCatalogEndUserAccess is a managed IAM policy that grants users permission to list and view products and launch product instances. The automation script imports the shared portfolios from the central account to the local accounts, copies the TagOptions from the central account, assigns users access to the portfolios, and applies launch constraints that specify which IAM role or user can provision a product.

Using the AWS Service Catalog TagOption Library to maintain a list of tags required by the company, and applying the TagOptions to AWS Service Catalog products or portfolios, enforces tags on any infrastructure that is started by users. TagOptions are key-value pairs that you can use to classify your AWS Service Catalog resources. You can create a TagOption Library that contains all the tags that you want to use across your organization. You can apply TagOptions to products or portfolios, and they will be automatically applied to any provisioned product instances.

References:

Creating a product from an existing CloudFormation template

What is AWS Service Catalog?

Working with portfolios

Sharing a portfolio with AWS Organizations

Providing least privilege access for users

AWS managed policies for job functions

Importing shared portfolios

Enforcing tag policies

Working with TagOptions

Creating a TagOption Library

Applying TagOptions
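A sketch of the TagOption piece of the solution (answer E); the portfolio ID and tag values are hypothetical:

```python
import boto3

sc = boto3.client("servicecatalog")

# Create a required tag in the TagOption library and attach it to a
# portfolio so every provisioned product inherits it.
tag_option = sc.create_tag_option(Key="cost-center", Value="platform-eng")
sc.associate_tag_option_with_resource(
    ResourceId="port-abcd1234",
    TagOptionId=tag_option["TagOptionDetail"]["Id"],
)
```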

