Expert-designed SAP-C02 exam questions can help you ace your test

Chart your path to certification with the SAP-C02 dumps. Curated to reflect the full breadth of the exam curriculum, the SAP-C02 dumps offer a wide range of practice questions in both the PDF and VCE formats, so you can study in whichever form suits you. A detailed study guide at the heart of the SAP-C02 dumps explains complex topics to ensure thorough comprehension. We stand behind these resources with our 100% Pass Guarantee.

[Just Updated] Ensure your exam success by downloading the SAP-C02 PDF and Exam Questions for free

Question 1:

An enterprise company is building an infrastructure services platform for its users. The company has the following requirements:

1. Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services.

2. Use a central account to manage the creation of infrastructure services. Provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations.

3. Provide the ability to enforce tags on any infrastructure that is started by users.

Which combination of actions using AWS services will meet these requirements? (Choose three.)

A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM roles or users that require access to the S3 bucket policy.

B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.

C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.

D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.

E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service Catalog products or portfolios.

F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.

Correct Answer: BDE

Developing infrastructure services using AWS CloudFormation templates and uploading them as AWS Service Catalog products to portfolios created in a central AWS account will enable the company to centrally manage the creation of infrastructure services and control who can use them [1]. AWS Service Catalog allows you to create and manage catalogs of IT services that are approved for use on AWS [2]. You can organize products into portfolios, which are collections of products along with configuration information [3]. You can share portfolios with other accounts in your organization using AWS Organizations [4].

Allowing user IAM roles to have ServiceCatalogEndUserAccess permissions only, and using an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints, will enable the company to provide least privilege access to users when launching AWS infrastructure services. ServiceCatalogEndUserAccess is a managed IAM policy that grants users permission to list and view products and launch product instances. An automation script can import the shared portfolios from the central account to the local accounts, copy the TagOptions from the central account, assign users access to the portfolios, and apply launch constraints that specify which IAM role or user can provision a product.

Using the AWS Service Catalog TagOption Library to maintain a list of tags required by the company, and applying the TagOptions to AWS Service Catalog products or portfolios, will enable the company to enforce tags on any infrastructure that is started by users. TagOptions are key-value pairs that you can use to classify your AWS Service Catalog resources. You can create a TagOption Library that contains all the tags that you want to use across your organization. You can apply TagOptions to products or portfolios, and they will be automatically applied to any provisioned product instances.
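To make the tag-enforcement step concrete, here is a minimal boto3 sketch that creates a TagOption and associates it with a portfolio. The portfolio ID, tag key, and tag value are placeholder assumptions, not values from the question.

```python
import boto3

servicecatalog = boto3.client("servicecatalog")

# Placeholder portfolio ID; substitute the real central portfolio.
PORTFOLIO_ID = "port-examplePortfolioId"

# Create a TagOption (a key-value pair) in the TagOption Library.
tag_option = servicecatalog.create_tag_option(
    Key="CostCenter",
    Value="Marketing",
)["TagOptionDetail"]

# Associate the TagOption with the portfolio. Products launched from
# this portfolio then carry the tag on their provisioned resources.
servicecatalog.associate_tag_option_with_resource(
    ResourceId=PORTFOLIO_ID,
    TagOptionId=tag_option["Id"],
)
```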

References:

Creating a product from an existing CloudFormation template
What is AWS Service Catalog?

Working with portfolios

Sharing a portfolio with AWS Organizations

[Providing least privilege access for users]

[AWS managed policies for job functions]

[Importing shared portfolios]

[Enforcing tag policies]

[Working with TagOptions]

[Creating a TagOption Library]

[Applying TagOptions]


Question 2:

An application is using an Amazon RDS for MySQL Multi-AZ DB instance in the us-east-1 Region. After a failover test, the application lost the connections to the database and could not re-establish the connections. After a restart of the application, the application re-established the connections.

A solutions architect must implement a solution so that the application can re-establish connections to the database without requiring a restart.

Which solution will meet these requirements?

A. Create an Amazon Aurora MySQL Serverless v1 DB instance. Migrate the RDS DB instance to the Aurora Serverless v1 DB instance. Update the connection settings in the application to point to the Aurora reader endpoint.

B. Create an RDS proxy. Configure the existing RDS endpoint as a target. Update the connection settings in the application to point to the RDS proxy endpoint.

C. Create a two-node Amazon Aurora MySQL DB cluster. Migrate the RDS DB instance to the Aurora DB cluster. Create an RDS proxy. Configure the existing RDS endpoint as a target. Update the connection settings in the application to point to the RDS proxy endpoint.

D. Create an Amazon S3 bucket. Export the database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Configure Amazon Athena to use the S3 bucket as a data store. Install the latest Open Database Connectivity (ODBC) driver for the application. Update the connection settings in the application to point to the Athena endpoint

Correct Answer: B

Creating an RDS proxy, configuring the existing RDS endpoint as a target, and updating the connection settings in the application to point to the RDS proxy endpoint will allow the application to re-establish connections to the database without requiring a restart.

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. With RDS Proxy, applications can pool and share connections to RDS databases, reducing the number of connections each RDS instance needs to handle, which improves the performance and scalability of the application. In the event of a failover or interruption, RDS Proxy automatically redirects connections to the new primary instance, so the application can continue to function without interruption.

To set up an RDS proxy with an existing RDS instance:

1. Create an RDS proxy in the AWS Management Console, and configure it to use the existing RDS instance as a target.

2. Update the connection settings in the application to use the RDS proxy endpoint instead of the RDS instance endpoint.
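The same setup can be scripted with boto3, as in this minimal sketch; the proxy name, secret ARN, role ARN, subnet IDs, and DB instance identifier are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the proxy. Auth uses the Secrets Manager secret that holds
# the database credentials; ARNs and subnet IDs are placeholders.
rds.create_db_proxy(
    DBProxyName="app-mysql-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    RequireTLS=True,
)

# Register the existing Multi-AZ DB instance as the proxy target
# (in practice, wait until the proxy is available first).
rds.register_db_proxy_targets(
    DBProxyName="app-mysql-proxy",
    DBInstanceIdentifiers=["app-mysql-instance"],
)
```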

Reference:

https://aws.amazon.com/rds/proxy/

https://aws.amazon.com/blogs/database/using-amazon-rds-proxy-with-amazon-rds-for-mysql-and-amazon-aurora-mysql-to-improve-app-scalability-and-availability/


Question 3:

A solutions architect is designing a solution to connect a company's on-premises network with all the company's current and future VPCs on AWS. The company is running VPCs in five different AWS Regions and has at least 15 VPCs in each Region.

The company's AWS usage is constantly increasing and will continue to grow. Additionally, all the VPCs throughout all five Regions must be able to communicate with each other.

The solution must maximize scalability and ease of management.

Which solution meets these requirements?

A. Set up a transit gateway in each Region. Establish a redundant AWS Site-to-Site VPN connection between the on-premises firewalls and the transit gateway in the Region that is closest to the on-premises network. Peer all the transit gateways with each other. Connect all the VPCs to the transit gateway in their Region.

B. Create an AWS CloudFormation template for a redundant AWS Site-to-Site VPN tunnel to the on-premises network. Deploy the CloudFormation template for each VPC. Set up VPC peering between all the VPCs for VPC-to-VPC communication.

C. Set up a transit gateway in each Region. Establish a redundant AWS Site-to-Site VPN connection between the on-premises firewalls and each transit gateway. Route traffic between the different Regions through the company's on-premises firewalls. Connect all the VPCs to the transit gateway in their Region.

D. Create an AWS CloudFormation template for a redundant AWS Site-to-Site VPN tunnel to the on-premises network. Deploy the CloudFormation template for each VPC. Route traffic between the different Regions through the company's on-premises firewalls.

Correct Answer: A
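Transit gateways with inter-Region peering scale to thousands of VPC attachments and keep routing centrally managed, whereas a full mesh of VPC peerings (B) or hairpinning inter-Region traffic through on-premises firewalls (C, D) does not scale. A minimal boto3 sketch of the hub-and-peering steps follows; all IDs, account numbers, and Regions are placeholder assumptions.

```python
import boto3

# Transit gateway hub in us-east-1, peered with one in eu-west-1.
ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(
    Description="us-east-1 hub",
    Options={"AutoAcceptSharedAttachments": "enable"},
)["TransitGateway"]

# Peer with the transit gateway already created in another Region.
# Repeat for each Region pair so all five hubs are fully meshed;
# the peer side must accept the attachment.
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    PeerTransitGatewayId="tgw-0peerExample",   # placeholder
    PeerAccountId="111122223333",              # placeholder
    PeerRegion="eu-west-1",
)

# Attach each VPC in the Region to its local transit gateway
# (in practice, wait for the gateway to become available first).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",             # placeholder
    SubnetIds=["subnet-aaaa1111"],             # placeholder
)
```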


Question 4:

A company has an environment that has a single AWS account. A solutions architect is reviewing the environment to recommend what the company could improve, specifically in terms of access to the AWS Management Console. The company's IT support workers currently access the console for administrative tasks, authenticating with named IAM users that have been mapped to their job role.

The IT support workers no longer want to maintain both their Active Directory and IAM user accounts. They want to be able to access the console by using their existing Active Directory credentials. The solutions architect is using AWS Single Sign-On (AWS SSO) to implement this functionality.

Which solution will meet these requirements MOST cost-effectively?

A. Create an organization in AWS Organizations. Turn on the AWS SSO feature in Organizations. Create and configure a directory in AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) with a two-way trust to the company's on-premises Active Directory. Configure AWS SSO and set the AWS Managed Microsoft AD directory as the identity source. Create permission sets and map them to the existing groups within the AWS Managed Microsoft AD directory.

B. Create an organization in AWS Organizations. Turn on the AWS SSO feature in Organizations. Create and configure an AD Connector to connect to the company's on-premises Active Directory. Configure AWS SSO and select the AD Connector as the identity source. Create permission sets and map them to the existing groups within the company's Active Directory.

C. Create an organization in AWS Organizations. Turn on all features for the organization. Create and configure a directory in AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) with a two-way trust to the company's on-premises Active Directory. Configure AWS SSO and select the AWS Managed Microsoft AD directory as the identity source. Create permission sets and map them to the existing groups within the AWS Managed Microsoft AD directory.

D. Create an organization in AWS Organizations. Turn on all features for the organization. Create and configure an AD Connector to connect to the company's on-premises Active Directory. Configure AWS SSO and select the AD Connector as the identity source. Create permission sets and map them to the existing groups within the company's Active Directory.

Correct Answer: D

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html
https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-prereqs-considerations.html
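Once the AD Connector is configured as the identity source, permission sets replace the named IAM users. A minimal boto3 sketch of that last step follows; the SSO instance ARN, account ID, managed policy choice, and directory group GUID are placeholder assumptions.

```python
import boto3

sso_admin = boto3.client("sso-admin")
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-example"  # placeholder

# Create a permission set that mirrors the IT support job role.
permission_set = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="ITSupport",
    SessionDuration="PT8H",
)["PermissionSet"]

# Attach an AWS managed policy appropriate for the role (assumed here).
sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=permission_set["PermissionSetArn"],
    ManagedPolicyArn="arn:aws:iam::aws:policy/job-function/SupportUser",
)

# Map an existing Active Directory group to the permission set.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId="111122223333",                  # placeholder account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-example-group-id",  # placeholder group GUID
)
```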


Question 5:

A company wants to use Amazon S3 to back up its on-premises file storage solution. The company's on-premises file storage solution supports NFS, and the company wants its new solution to support NFS. The company wants to archive the backup files after 5 days. If the company needs archived files for disaster recovery, the company is willing to wait a few days for the retrieval of those files.

Which solution meets these requirements MOST cost-effectively?

A. Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.

B. Deploy an AWS Storage Gateway volume gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the volume gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.

C. Deploy an AWS Storage Gateway tape gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the tape gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.

D. Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.

E. Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.

Correct Answer: E

A file gateway supports the NFS protocol, while a volume gateway supports the iSCSI protocol. S3 Glacier Deep Archive is the lowest-cost storage class, and it fits here because the company is willing to wait a few days for retrieval.
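The lifecycle transition itself is a one-call setup. A minimal boto3 sketch, with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Transition every backup object to S3 Glacier Deep Archive after 5 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-5-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [{
                "Days": 5,
                "StorageClass": "DEEP_ARCHIVE",
            }],
        }]
    },
)
```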


Question 6:

A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure automatically.

What should the company do to meet this new requirement with the LEAST effort?

A. Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.

B. Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.

C. Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.

D. Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.

Correct Answer: C

AWS CloudFormation can import existing resources into a stack: the solutions architect authors a template that strictly matches the existing VPC (with DeletionPolicy: Retain on each imported resource) and then creates a stack by importing those resources. Stack sets (B) and the AWS SAM CLI (D) cannot import existing resources, and writing a new AWS CDK stack (A) involves more effort than a direct CloudFormation import.
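A minimal boto3 sketch of the import flow; the stack name, template, and VPC ID are placeholder assumptions.

```python
import boto3

cfn = boto3.client("cloudformation")

# Template that strictly describes the existing VPC (placeholder values).
TEMPLATE = """
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    DeletionPolicy: Retain
    Properties:
      CidrBlock: 10.0.0.0/16
"""

# An IMPORT change set brings the manually created VPC under stack control.
cfn.create_change_set(
    StackName="network-stack",
    ChangeSetName="import-vpc",
    ChangeSetType="IMPORT",
    TemplateBody=TEMPLATE,
    ResourcesToImport=[{
        "ResourceType": "AWS::EC2::VPC",
        "LogicalResourceId": "Vpc",
        "ResourceIdentifier": {"VpcId": "vpc-0123456789abcdef0"},  # placeholder
    }],
)

# After the change set reaches CREATE_COMPLETE and has been reviewed,
# execute it to complete the import.
cfn.execute_change_set(ChangeSetName="import-vpc", StackName="network-stack")
```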


Question 7:

A company is running a critical stateful web application on two Linux Amazon EC2 instances behind an Application Load Balancer (ALB) with an Amazon RDS for MySQL database. The company hosts the DNS records for the application in Amazon Route 53. A solutions architect must recommend a solution to improve the resiliency of the application.

The solution must meet the following objectives:

1. Application tier: RPO of 2 minutes, RTO of 30 minutes.

2. Database tier: RPO of 5 minutes, RTO of 30 minutes.

The company does not want to make significant changes to the existing application architecture. The company must ensure optimal latency after a failover.

Which solution will meet these requirements?

A. Configure the EC2 instances to use AWS Elastic Disaster Recovery. Create a cross-Region read replica for the RDS DB instance. Create an ALB in a second AWS Region. Create an AWS Global Accelerator endpoint and associate the endpoint with the ALBs. Update DNS records to point to the Global Accelerator endpoint.

B. Configure the EC2 instances to use Amazon Data Lifecycle Manager (Amazon DLM) to take snapshots of the EBS volumes. Configure RDS automated backups. Configure backup replication to a second AWS Region. Create an ALB in the second Region. Create an AWS Global Accelerator endpoint, and associate the endpoint with the ALBs. Update DNS records to point to the Global Accelerator endpoint.

C. Create a backup plan in AWS Backup for the EC2 instances and RDS DB instance. Configure backup replication to a second AWS Region. Create an ALB in the second Region. Configure an Amazon CloudFront distribution in front of the ALB. Update DNS records to point to CloudFront.

D. Configure the EC2 instances to use Amazon Data Lifecycle Manager (Amazon DLM) to take snapshots of the EBS volumes. Create a cross-Region read replica for the RDS DB instance. Create an ALB in a second AWS Region. Create an AWS Global Accelerator endpoint and associate the endpoint with the ALBs.

Correct Answer: A

AWS Elastic Disaster Recovery continuously replicates the application servers at the block level, meeting the 2-minute application-tier RPO; snapshot-based approaches with Amazon DLM (options B and D) cannot, because snapshot policies run at most hourly. A cross-Region read replica keeps the database within the 5-minute RPO, and promoting the replica fits within the 30-minute RTO. AWS Global Accelerator in front of the ALBs in both Regions ensures optimal latency after a failover, and a single DNS update to the Global Accelerator endpoint avoids significant changes to the existing application architecture.
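The database leg of this design can be scripted as in the sketch below, run against the destination (DR) Region; the identifiers, ARNs, instance class, and KMS key are placeholder assumptions.

```python
import boto3

# Run in the DR Region; a cross-Region source is referenced by ARN.
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-mysql-replica",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:db:app-mysql-instance"  # placeholder
    ),
    DBInstanceClass="db.r6g.large",
    # Required when the source is encrypted: a KMS key in the DR Region.
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/example",      # placeholder
)

# During a failover, promote the replica to a standalone primary:
# rds.promote_read_replica(DBInstanceIdentifier="app-mysql-replica")
```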


Question 8:

A digital marketing company has multiple AWS accounts that belong to various teams. The creative team uses an Amazon S3 bucket in its AWS account to securely store images and media files that are used as content for the company's marketing campaigns. The creative team wants to share the S3 bucket with the strategy team so that the strategy team can view the objects.

A solutions architect has created an IAM role that is named strategy_reviewer in the Strategy account. The solutions architect also has set up a custom AWS Key Management Service (AWS KMS) key in the Creative account and has associated the key with the S3 bucket. However, when users from the Strategy account assume the IAM role and try to access objects in the S3 bucket, they receive an Access Denied error.

The solutions architect must ensure that users in the Strategy account can access the S3 bucket. The solution must provide these users with only the minimum permissions that they need. Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

A. Create a bucket policy that includes read permissions for the S3 bucket. Set the principal of the bucket policy to the account ID of the Strategy account

B. Update the strategy_reviewer IAM role to grant full permissions for the S3 bucket and to grant decrypt permissions for the custom KMS key.

C. Update the custom KMS key policy in the Creative account to grant decrypt permissions to the strategy_reviewer IAM role.

D. Create a bucket policy that includes read permissions for the S3 bucket. Set the principal of the bucket policy to an anonymous user.

E. Update the custom KMS key policy in the Creative account to grant encrypt permissions to the strategy_reviewer IAM role.

F. Update the strategy_reviewer IAM role to grant read permissions for the S3 bucket and to grant decrypt permissions for the custom KMS key

Correct Answer: ACF

https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-denied-error-s3/
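A minimal sketch of the cross-account KMS grant (step C), run in the Creative account; the key ID, account ID, and role name are placeholder assumptions, and the bucket policy (step A) and role permissions (step F) are written analogously.

```python
import json
import boto3

kms = boto3.client("kms")  # Creative account

# Read the current key policy so existing statements are preserved.
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder
policy = json.loads(
    kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
)

# Allow the Strategy account's role to decrypt objects encrypted with this key.
policy["Statement"].append({
    "Sid": "AllowStrategyReviewerDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:role/strategy_reviewer"},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
})

kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```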


Question 9:

A company recently acquired several other companies. Each company has a separate AWS account with a different billing and reporting method. The acquiring company has consolidated all the accounts into one organization in AWS Organizations. However, the acquiring company has found it difficult to generate a cost report that contains meaningful groups for all the teams.

The acquiring company's finance team needs a solution to report on costs for all the companies through a self-managed application.

Which solution will meet these requirements?

A. Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a table in Amazon Athena. Create an Amazon QuickSight dataset based on the Athena table. Share the dataset with the finance team.

B. Create an AWS Cost and Usage Report for the organization. Define tags and cost categories in the report. Create a specialized template in AWS Cost Explorer that the finance department will use to build reports.

C. Create an Amazon QuickSight dataset that receives spending information from the AWS Price List Query API. Share the dataset with the finance team.

D. Use the AWS Price List Query API to collect account spending information. Create a specialized template in AWS Cost Explorer that the finance department will use to build reports.

Correct Answer: A

Creating an AWS Cost and Usage Report for the organization and defining tags and cost categories in the report will allow for detailed cost reporting for the different companies that have been consolidated into one organization. By creating a table in Amazon Athena and an Amazon QuickSight dataset based on the Athena table, the finance team will be able to easily query and generate reports on the costs for all the companies. The dataset can then be shared with the finance team for them to use for their reporting needs.

Option B is not correct because AWS Cost Explorer does not offer specialized templates and cannot feed the finance team's self-managed reporting application.

Option C is not correct because it only provides spending information from the AWS Price List Query API and does not provide detailed cost reporting for the different companies.

Option D is not correct because it only uses the AWS Price List Query API and does not provide a way to query and generate reports on the costs for all the companies.
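Once the Cost and Usage Report is delivered to S3 and a table is defined over it in Athena, the finance application can query it directly, as in this minimal boto3 sketch; the database, table, column, and output-location names are placeholder assumptions.

```python
import boto3

athena = boto3.client("athena")

# Monthly unblended cost per cost-category value (placeholder names).
QUERY = """
SELECT cost_category_team, SUM(line_item_unblended_cost) AS monthly_cost
FROM cur_database.cur_table
WHERE year = '2024' AND month = '4'
GROUP BY cost_category_team
ORDER BY monthly_cost DESC
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "cur_database"},        # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Poll get_query_execution / get_query_results with
# execution["QueryExecutionId"] to feed the QuickSight dataset
# or the finance team's application.
```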


Question 10:

A company is using AWS Organizations to manage multiple accounts. Due to regulatory requirements, the company wants to restrict specific member accounts to certain AWS Regions, where they are permitted to deploy resources. The resources in the accounts must be tagged according to a group standard and must be centrally managed with minimal configuration.

What should a solutions architect do to meet these requirements?

A. Create an AWS Config rule in the specific member accounts to limit Regions and apply a tag policy.

B. From the AWS Billing and Cost Management console, in the master account, disable Regions for the specific member accounts and apply a tag policy on the root.

C. Associate the specific member accounts with the root. Apply a tag policy and an SCP using conditions to limit Regions.

D. Associate the specific member accounts with a new OU. Apply a tag policy and an SCP using conditions to limit Regions.

Correct Answer: D

https://aws.amazon.com/es/blogs/mt/implement-aws-resource-tagging-strategy-using-aws-tag-policies-and-service-control-policies-scps/
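A minimal sketch of the Region-limiting SCP attached to the new OU; the allowed Regions and OU ID are placeholder assumptions, and a production policy usually exempts global services via NotAction.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny actions outside the approved Regions (placeholder list).
SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
            }
        },
    }],
}

policy = org.create_policy(
    Name="restrict-regions",
    Description="Limit member accounts to approved Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(SCP),
)["Policy"]["PolicySummary"]

# Attach to the OU that holds the regulated member accounts (placeholder ID).
org.attach_policy(PolicyId=policy["Id"], TargetId="ou-exampleouid")
```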


Question 11:

A new startup is running a serverless application using AWS Lambda as the primary source of compute. New versions of the application must be made available to a subset of users before deploying changes to all users. Developers should also have the ability to stop the deployment and have access to an easy rollback mechanism. A solutions architect decides to use AWS CodeDeploy to deploy changes when a new version is available.

Which CodeDeploy configuration should the solutions architect use?

A. A blue/green deployment

B. A linear deployment

C. A canary deployment

D. An all-at-once deployment

Correct Answer: C

A canary deployment shifts traffic to the new Lambda version in two increments: a small percentage first, and the remainder after a configurable interval. This exposes the new version to a subset of users first, and CodeDeploy can stop the deployment and roll back to the previous version, either automatically on alarms or manually.
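A minimal boto3 sketch of wiring a Lambda application to one of CodeDeploy's predefined canary configurations; the application name, group name, and role ARN are placeholder assumptions.

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_application(
    applicationName="startup-app",  # placeholder
    computePlatform="Lambda",
)

# Shift 10% of traffic to the new version, then the rest after 5 minutes.
codedeploy.create_deployment_group(
    applicationName="startup-app",
    deploymentGroupName="startup-app-canary",
    serviceRoleArn="arn:aws:iam::111122223333:role/codedeploy-lambda",  # placeholder
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
)
```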


Question 12:

An international delivery company hosts a delivery management system on AWS. Drivers use the system to upload confirmation of delivery. Confirmation includes the recipient's signature or a photo of the package with the recipient. The driver's handheld device uploads signatures and photos through FTP to a single Amazon EC2 instance. Each handheld device saves a file in a directory based on the signed-in user, and the file name matches the delivery number. The EC2 instance then adds metadata to the file after querying a central database to pull delivery information. The file is then placed in Amazon S3 for archiving.

As the company expands, drivers report that the system is rejecting connections. The FTP server is having problems because of dropped connections and memory issues. In response to these problems, a system engineer schedules a cron task to reboot the EC2 instance every 30 minutes. The billing team reports that files are not always in the archive and that the central system is not always updated.

A solutions architect needs to design a solution that maximizes scalability to ensure that the archive always receives the files and that systems are always updated. The handheld devices cannot be modified, so the company cannot deploy a new application.

Which solution will meet these requirements?

A. Create an AMI of the existing EC2 instance. Create an Auto Scaling group of EC2 instances behind an Application Load Balancer. Configure the Auto Scaling group to have a minimum of three instances.

B. Use AWS Transfer Family to create an FTP server that places the files in Amazon Elastic File System (Amazon EFS). Mount the EFS volume to the existing EC2 instance. Point the EC2 instance to the new path for file processing.

C. Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.

D. Update the handheld devices to place the files directly in Amazon S3. Use an S3 event notification through Amazon Simple Queue Service (Amazon SQS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.

Correct Answer: C

The files are then stored in an S3 bucket, which eliminates the EC2 instance and FTP server that are the sources of the scalability problems; AWS Transfer Family keeps the FTP interface the unmodifiable handheld devices require. An S3 event notification through Amazon Simple Notification Service (Amazon SNS) invokes an AWS Lambda function, which adds the metadata and updates the central delivery system. This approach ensures that the archive always receives the files and that the central system is always updated.
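A minimal sketch of the Lambda function that handles the SNS-wrapped S3 event; the lookup against the central delivery database is represented by a hypothetical helper, and the key-to-delivery-number parsing is an assumption based on the question's file-naming convention.

```python
import json
import boto3

s3 = boto3.client("s3")

def lookup_delivery(delivery_number):
    """Hypothetical helper: query the central database for delivery info."""
    return {"recipient": "unknown", "delivery_number": delivery_number}

def handler(event, context):
    # S3 event notifications arrive wrapped in the SNS message body.
    for sns_record in event["Records"]:
        s3_event = json.loads(sns_record["Sns"]["Message"])
        for record in s3_event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Per the question, the file name matches the delivery number.
            delivery_number = key.rsplit("/", 1)[-1].rsplit(".", 1)[0]
            metadata = lookup_delivery(delivery_number)
            # Rewrite the object in place with the delivery metadata attached.
            s3.copy_object(
                Bucket=bucket,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key},
                Metadata={k: str(v) for k, v in metadata.items()},
                MetadataDirective="REPLACE",
            )
```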


Question 13:

A new application is running on Amazon Elastic Container Service (Amazon ECS) with AWS Fargate. The application uses an Amazon Aurora MySQL database. The application and the database run in the same subnets of a VPC with distinct security groups that are configured.

The password for the database is stored in AWS Secrets Manager and is passed to the application through the DB_PASSWORD environment variable. The hostname of the database is passed to the application through the DB_HOST environment variable. The application is failing to access the database.

Which combination of actions should a solutions architect take to resolve this error? (Select THREE.)

A. Ensure that the container has the environment variable with name “DB_PASSWORD” specified with a “ValueFrom” and the ARN of the secret.

B. Ensure that the container has the environment variable with name “DB_PASSWORD” specified with a “ValueFrom” and the secret name of the secret.

C. Ensure that the Fargate service security group allows inbound network traffic from the Aurora MySQL database on the MySQL TCP port 3306.

D. Ensure that the Aurora MySQL database security group allows inbound network traffic from the Fargate service on the MySQL TCP port 3306.

E. Ensure that the container has the environment variable with name “DB_HOST” specified with the hostname of a DB instance endpoint.

F. Ensure that the container has the environment variable with name “DB_HOST” specified with the hostname of the DB cluster endpoint.

Correct Answer: ADF

Secrets Manager values are injected through a “ValueFrom” that references the secret's ARN (A), the database security group must allow inbound MySQL traffic on TCP port 3306 from the Fargate service (D), and the application should connect to the Aurora DB cluster endpoint rather than an individual DB instance endpoint so that connections are always directed to the current writer (F).
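A minimal boto3 sketch of the corrected task definition: the secret is injected by ARN via valueFrom and DB_HOST carries the cluster endpoint. The family name, image, role ARN, secret ARN, and endpoint are placeholder assumptions.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # The execution role needs secretsmanager:GetSecretValue for the secret.
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "environment": [{
            # Cluster endpoint, not an instance endpoint (placeholder value).
            "name": "DB_HOST",
            "value": "app-db.cluster-abc123.us-east-1.rds.amazonaws.com",
        }],
        "secrets": [{
            # Injected at startup from Secrets Manager, referenced by ARN.
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-pass",
        }],
    }],
)
```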


Question 14:

A company processes environmental data. The company has set up sensors to provide a continuous stream of data from different areas in a city. The data is available in JSON format.

The company wants to use an AWS solution to send the data to a database that does not require fixed schemas for storage. The data must be sent in real time.
Which solution will meet these requirements?

A. Use Amazon Kinesis Data Firehose to send the data to Amazon Redshift.

B. Use Amazon Kinesis Data Streams to send the data to Amazon DynamoDB.

C. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to send the data to Amazon Aurora.

D. Use Amazon Kinesis Data Firehose to send the data to Amazon Keyspaces (for Apache Cassandra).

Correct Answer: B

Amazon Kinesis Data Streams is a service that enables real-time data ingestion and processing. Amazon DynamoDB is a NoSQL database that does not require fixed schemas for storage. By using Kinesis Data Streams and DynamoDB, the company can send the JSON data to a database that can handle schemaless data in real time.

References:

https://docs.aws.amazon.com/streams/latest/dev/introduction.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
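A minimal sketch of both ends of the pipeline: a producer putting JSON readings on the stream, and a consumer Lambda writing them to DynamoDB. The stream name, table name, and the sensor_id field (assumed to be the table's partition key) are placeholder assumptions.

```python
import base64
import json
import boto3

kinesis = boto3.client("kinesis")
table = boto3.resource("dynamodb").Table("SensorReadings")  # placeholder

def publish(reading: dict):
    # Producer: send one JSON sensor reading in real time.
    kinesis.put_record(
        StreamName="sensor-stream",                # placeholder
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=reading["sensor_id"],         # assumed field
    )

def handler(event, context):
    # Consumer Lambda attached to the stream; records arrive base64-encoded.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # DynamoDB stores the item without requiring a fixed schema.
        table.put_item(Item=payload)
```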


Question 15:

A company wants to deploy an AWS WAF solution to manage AWS WAF rules across multiple AWS accounts. The accounts are managed under different OUs in AWS Organizations.

Administrators must be able to add or remove accounts or OUs from managed AWS WAF rule sets as needed. Administrators also must have the ability to automatically update and remediate noncompliant AWS WAF rules in all accounts.

Which solution meets these requirements with the LEAST amount of operational overhead?

A. Use AWS Firewall Manager to manage AWS WAF rules across accounts in the organization. Use an AWS Systems Manager Parameter Store parameter to store the account numbers and OUs to manage. Update the parameter as needed to add or remove accounts or OUs. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to identify any changes to the parameter and to invoke an AWS Lambda function to update the security policy in the Firewall Manager administrative account.

B. Deploy an organization-wide AWS Config rule that requires all resources in the selected OUs to associate the AWS WAF rules. Deploy automated remediation actions by using AWS Lambda to fix noncompliant resources. Deploy AWS WAF rules by using an AWS CloudFormation stack set to target the same OUs where the AWS Config rule is applied.

C. Create AWS WAF rules in the management account of the organization. Use AWS Lambda environment variables to store the account numbers and OUs to manage. Update environment variables as needed to add or remove accounts or OUs. Create cross-account IAM roles in member accounts. Assume the roles by using AWS Security Token Service (AWS STS) in the Lambda function to create and update AWS WAF rules in the member accounts.

D. Use AWS Control Tower to manage AWS WAF rules across accounts in the organization. Use AWS Key Management Service (AWS KMS) to store the account numbers and OUs to manage. Update AWS KMS as needed to add or remove accounts or OUs. Create IAM users in member accounts. Allow AWS Control Tower in the management account to use the access key and secret access key to create and update AWS WAF rules in the member accounts.

Correct Answer: A

In this solution, AWS Firewall Manager is used to manage AWS WAF rules across accounts in the organization. An AWS Systems Manager Parameter Store parameter is used to store account numbers and OUs to manage. This parameter can be updated as needed to add or remove accounts or OUs. An Amazon EventBridge rule is used to identify any changes to the parameter and to invoke an AWS Lambda function to update the security policy in the Firewall Manager administrative account. This solution allows for easy management of AWS WAF rules across multiple accounts with minimal operational overhead.
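A minimal sketch of the Lambda function the EventBridge rule invokes: it reads the managed targets from Parameter Store and updates the Firewall Manager policy scope. The parameter name, policy details, and resource type are placeholder assumptions, and ManagedServiceData is abbreviated; the real WAFv2 policy document is considerably more detailed.

```python
import json
import boto3

ssm = boto3.client("ssm")
fms = boto3.client("fms")  # run in the Firewall Manager administrator account

def handler(event, context):
    # Parameter holds the scope, e.g. {"ORGUNIT": ["ou-ab12-example"]}.
    value = ssm.get_parameter(Name="/waf/managed-targets")["Parameter"]["Value"]
    include_map = json.loads(value)

    # Create or update the policy with the new account/OU scope.
    fms.put_policy(Policy={
        "PolicyName": "org-waf-baseline",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "defaultAction": {"type": "ALLOW"},
                "preProcessRuleGroups": [],   # abbreviated
                "postProcessRuleGroups": [],  # abbreviated
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # auto-remediate noncompliant resources
        "IncludeMap": include_map,
    })
```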

