# Complete Study Guide for the AWS Certified Security - Specialty (SCS-C02) Exam
The AWS Certified Security - Specialty (SCS-C02) is one of the most prestigious and challenging certifications in the Amazon Web Services ecosystem. It validates your expertise in securing AWS environments and implementing security controls. Whether you are a security architect, a security engineer, or a systems administrator, this certification proves you can handle the complexities of cloud security.
## Why Pursue the AWS Security Specialty Certification?
In an era of increasing cyber threats, security is at the heart of any successful organization. Earning the AWS Security Specialty badge demonstrates that you can:
- Design, implement, and automate security controls and governance processes.
- Implement security solutions for hybrid and multi-cloud environments.
- Define and implement logging, monitoring, and incident response procedures.
- Ensure compliance with regulatory requirements and industry standards.
- Design and implement infrastructure security and data protection solutions.
## Exam Overview
The SCS-C02 exam consists of 65 multiple-choice and multiple-response questions. You are given 170 minutes to complete the exam, and the passing score is typically 750 out of 1000.
### Key Domains Covered:
1. **Threat Detection and Incident Response (14%):** This domain focuses on your ability to detect and respond to security threats. You'll need to understand AWS services like Amazon GuardDuty, AWS Security Hub, and Amazon EventBridge.
2. **Security Logging and Monitoring (18%):** Here, the focus is on monitoring your AWS environments. You must be proficient with Amazon CloudWatch, AWS CloudTrail, and AWS Config to monitor and log security-related events.
3. **Infrastructure Security (20%):** This section covers your ability to secure your AWS infrastructure. You'll need to understand VPC security, network ACLs, security groups, and how to use AWS WAF and AWS Shield.
4. **Identity and Access Management (16%):** This domain tests your knowledge of AWS IAM and how to implement the principle of least privilege. You’ll need to understand IAM roles, policies, and multi-factor authentication.
5. **Data Protection (18%):** This section covers your ability to protect data at rest and in transit. You must be familiar with AWS KMS, AWS CloudHSM, and AWS Secrets Manager.
6. **Management and Security Governance (14%):** This domain covers the automation of security controls and governance processes. You'll need to understand AWS Organizations, AWS Trusted Advisor, and AWS Artifact.
## Top Resources for SCS-C02 Preparation
Successfully passing the SCS-C02 requires a mix of theoretical knowledge and hands-on experience. Here are some of the best resources:
- **Official AWS Training:** AWS offers specialized digital and classroom training specifically for the Security Specialty.
- **AWS Whitepapers and Documentation:** Dive deep into the AWS Security Best Practices and whitepapers on incident response and data protection.
- **Hands-on Practice:** There is no substitute for building. Set up complex security architectures, experiment with GuardDuty findings, and implement automated remediation.
- **Practice Exams:** High-quality practice questions are essential for understanding the specialty-level exam format. Many candidates recommend using resources like [notjustexam.com](https://notjustexam.com) for their realistic and challenging exam simulations.
## Critical Topics to Master
To excel in the SCS-C02, you should focus your studies on these high-impact areas:
- **AWS KMS:** Understand how to manage encryption keys, create and rotate customer managed keys (CMKs), and implement envelope encryption.
- **Amazon GuardDuty:** Master the nuances of threat detection, including how to analyze GuardDuty findings and automate responses.
- **AWS WAF and Shield:** Be able to protect your applications from common web exploits and DDoS attacks.
- **AWS IAM:** Know how to create and manage complex IAM policies and roles, including cross-account access and identity federation.
- **VPC Security:** Master the differences between security groups and network ACLs and how to use VPC endpoints for secure connectivity.
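The envelope encryption pattern from the KMS bullet above is worth internalizing. The toy sketch below mimics the flow (generate a data key, encrypt data locally with it, store only the wrapped key next to the ciphertext). The `ToyKms` class and the XOR "cipher" are stand-ins for illustration only; they are not real cryptography and not the real KMS API:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only -- NOT cryptographically secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyKms:
    """Stand-in for AWS KMS: holds a master key that never leaves the 'service'."""
    def __init__(self):
        self._master_key = os.urandom(32)

    def generate_data_key(self):
        # Mirrors KMS GenerateDataKey: returns a plaintext data key plus
        # the same key wrapped (encrypted) under the master key.
        plaintext_key = os.urandom(32)
        wrapped_key = xor_bytes(plaintext_key, self._master_key)
        return plaintext_key, wrapped_key

    def decrypt_data_key(self, wrapped_key: bytes) -> bytes:
        # Mirrors KMS Decrypt: unwraps the data key under the master key.
        return xor_bytes(wrapped_key, self._master_key)

kms = ToyKms()
data_key, wrapped_key = kms.generate_data_key()
ciphertext = xor_bytes(b"quarterly payroll report", data_key)
del data_key  # discard the plaintext key; persist ciphertext + wrapped_key

recovered_key = kms.decrypt_data_key(wrapped_key)
plaintext = xor_bytes(ciphertext, recovered_key)
```

The key point for the exam: the master key never leaves KMS; only wrapped data keys travel with the data.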
## Exam Day Strategy
1. **Time Management:** With 170 minutes for 65 questions, you have roughly 2.5 minutes per question. If a question is too complex, flag it and move on.
2. **Read the Scenarios Carefully:** Specialty-level questions are often scenario-based. Pay attention to keywords like "most secure," "least operational overhead," and "most cost-effective."
3. **Eliminate Obviously Wrong Choices:** Even if you aren't sure of the right choice, eliminating the wrong ones significantly increases your chances.
## Conclusion
The AWS Certified Security - Specialty (SCS-C02) is a significant investment in your career. It requires dedication and a deep understanding of security principles and AWS services. By following a structured study plan, leveraging high-quality practice exams from [notjustexam.com](https://notjustexam.com), and gaining hands-on experience, you can master the complexities of AWS security and join the elite group of certified security specialists.
## Free [Amazon] SCS-C02 - Security Specialty Practice Questions Preview
### Question 1
A company has an AWS Lambda function that creates image thumbnails from larger images. The Lambda function needs read and write access to an Amazon S3 bucket in the same AWS account.
Which solutions will provide the Lambda function this access? (Choose two.)
- A. Create an IAM user that has only programmatic access. Create a new access key pair. Add environmental variables to the Lambda function with the access key ID and secret access key. Modify the Lambda function to use the environmental variables at run time during communication with Amazon S3.
- B. Generate an Amazon EC2 key pair. Store the private key in AWS Secrets Manager. Modify the Lambda function to retrieve the private key from Secrets Manager and to use the private key during communication with Amazon S3.
- C. Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket.
- D. Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function's IAM role as the principal.
- E. Create a security group. Attach the security group to the Lambda function. Attach a bucket policy that allows access to the S3 bucket through the security group ID.
Correct Answer: CD
Explanation:
I agree with the suggested answer of CD. The recommended solutions to grant an AWS Lambda function read and write access to an Amazon S3 bucket are:
- **C: Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket.** This is the standard and most secure way to grant permissions to Lambda functions. By assigning an IAM role (the function's execution role), the Lambda function assumes the permissions defined in the attached policy when it runs. This avoids hardcoding credentials.
- **D: Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function's IAM role as the principal.** This is also a valid approach. A bucket policy can grant permissions to specific IAM roles. By specifying the Lambda function's IAM role as the principal in the bucket policy, you grant the function access to the bucket. This approach provides centralized control over bucket access.
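As an illustration of option D, a bucket policy naming the function's execution role as the principal might look like the following sketch. The account ID, role name, and bucket name are made up for illustration:

```python
import json

# Hypothetical account ID, execution-role name, and bucket name.
LAMBDA_ROLE_ARN = "arn:aws:iam::111122223333:role/thumbnail-lambda-role"
BUCKET = "example-thumbnail-bucket"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowThumbnailLambdaReadWrite",
            "Effect": "Allow",
            # The Lambda function's execution role is the principal.
            "Principal": {"AWS": LAMBDA_ROLE_ARN},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

policy_json = json.dumps(bucket_policy, indent=2)
# With AWS credentials configured, this could be applied with boto3:
# boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_json)
```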
Reasons for not choosing the other answers:
- **A: Create an IAM user that has only programmatic access. Create a new access key pair. Add environmental variables to the Lambda function with the access key ID and secret access key. Modify the Lambda function to use the environmental variables at run time during communication with Amazon S3.** This is highly discouraged. Storing access keys in environment variables is a security risk: if the Lambda function is compromised, the keys could be exposed, and long-term keys must be rotated manually. IAM roles are the preferred method.
- **B: Generate an Amazon EC2 key pair. Store the private key in AWS Secrets Manager. Modify the Lambda function to retrieve the private key from Secrets Manager and to use the private key during communication with Amazon S3.** EC2 key pairs are used for SSH access to EC2 instances, not for granting Lambda functions access to S3 buckets. This is the wrong tool for the job.
- **E: Create a security group. Attach the security group to the Lambda function. Attach a bucket policy that allows access to the S3 bucket through the security group ID.** Security groups control network traffic, not access to S3 buckets. Security groups are associated with ENIs in a VPC, and a Lambda function uses them only when it is configured to run inside a VPC. Even then, security groups would control network access, not S3 authorization, and bucket policies cannot reference security group IDs.
In summary, using IAM roles (options C and D) is the recommended security best practice for granting Lambda functions access to AWS resources.
Citations:
- IAM roles for Lambda functions, https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html
- S3 bucket policies, https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html
### Question 2
A security engineer is configuring a new website that is named example.com. The security engineer wants to secure communications with the website by requiring users to connect to example.com through HTTPS.
Which of the following is a valid option for storing SSL/TLS certificates?
- A. Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS)
- B. Default SSL certificate that is stored in Amazon CloudFront
- C. Custom SSL certificate that is stored in AWS Certificate Manager (ACM)
- D. Default SSL certificate that is stored in Amazon S3
Correct Answer: C
Explanation:
I agree with the suggested answer. The correct answer is C: a custom SSL certificate that is stored in AWS Certificate Manager (ACM). ACM is the preferred AWS service for provisioning, managing, and deploying SSL/TLS certificates for use with AWS services and internal connected resources.
Reasoning:
- ACM is specifically designed to manage SSL/TLS certificates, making it the ideal choice for storing and managing certificates for HTTPS communication.
- ACM simplifies obtaining, renewing, and deploying certificates, and it integrates seamlessly with other AWS services such as Elastic Load Balancing, CloudFront, and API Gateway.
Reasons for eliminating other options:
- **A: Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS).** KMS manages encryption keys; it is not designed for storing SSL/TLS certificates. ACM is the service specifically built for this purpose.
- **B: Default SSL certificate that is stored in Amazon CloudFront.** CloudFront can serve traffic with its default *.cloudfront.net certificate, but that certificate cannot secure a custom domain such as example.com. Certificates for custom domains used by CloudFront are stored and managed in ACM (or imported into IAM).
- **D: Default SSL certificate that is stored in Amazon S3.** S3 is an object storage service and is not designed to store SSL/TLS certificates. Storing certificates in S3 would not provide the management features, such as automatic renewal, offered by ACM.
In Summary: ACM is the best option for managing SSL/TLS certificates within the AWS ecosystem due to its dedicated functionality and integration with other AWS services.
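As a quick illustration, requesting a certificate for the site from ACM with DNS validation could look like the sketch below. The parameter names follow the `request_certificate` API; the `www` alternative name is an assumption added for illustration, and the actual call is commented out because it needs AWS credentials:

```python
# Keyword arguments for acm_client.request_certificate(**request).
request = {
    "DomainName": "example.com",
    "SubjectAlternativeNames": ["www.example.com"],  # illustrative extra name
    "ValidationMethod": "DNS",  # DNS validation enables automatic renewal
}

# With credentials configured:
# import boto3
# cert_arn = boto3.client("acm").request_certificate(**request)["CertificateArn"]
```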
Citations:
- AWS Certificate Manager (ACM), https://aws.amazon.com/certificate-manager/
### Question 3
A security engineer needs to develop a process to investigate and respond to potential security events on a company's Amazon EC2 instances. All the EC2 instances are backed by Amazon Elastic Block Store (Amazon EBS). The company uses AWS Systems Manager to manage all the EC2 instances and has installed Systems Manager Agent (SSM Agent) on all the EC2 instances.
The process that the security engineer is developing must comply with AWS security best practices and must meet the following requirements:
- A compromised EC2 instance's volatile memory and non-volatile memory must be preserved for forensic purposes.
- A compromised EC2 instance's metadata must be updated with corresponding incident ticket information.
- A compromised EC2 instance must remain online during the investigation but must be isolated to prevent the spread of malware.
- Any investigative activity during the collection of volatile data must be captured as part of the process.
Which combination of steps should the security engineer take to meet these requirements with the LEAST operational overhead? (Choose three.)
- A. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
- B. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Move the instance to an isolation subnet that denies all source and destination traffic. Associate the instance with the subnet to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
- C. Use Systems Manager Run Command to invoke scripts that collect volatile data.
- D. Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data.
- E. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.
- F. Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance. Tag the instance with any relevant metadata and incident ticket information.
Correct Answer: ACE
Explanation:
I agree with the suggested answer of ACE. Here's a breakdown of why these options are correct and why the others are not, along with supporting documentation.
- **A: Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.** This is a crucial first step. Gathering metadata provides context. Enabling termination protection prevents accidental termination. Isolating the instance via security groups is a standard practice to prevent lateral movement. Removing the instance from Auto Scaling groups and ELB resources prevents it from being replaced or receiving new traffic.
- **C: Use Systems Manager Run Command to invoke scripts that collect volatile data.** Run Command is the correct way to collect volatile data. It avoids interactive SSH or RDP sessions (option D), which can alter the state of the instance and potentially expose credentials. Systems Manager provides a secure and auditable way to execute commands remotely, which satisfies the requirement that investigative activity be captured.
- **E: Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.** An EBS snapshot preserves the non-volatile data (the volume contents) for forensic analysis, and tagging the instance with metadata and incident ticket information provides context and traceability.
Why other options are incorrect:
- B: Move the instance to an isolation subnet that denies all source and destination traffic. Moving a running EC2 instance to a different subnet is not a supported operation in AWS. An instance can only be launched into a specific subnet.
- D: Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data. Establishing an SSH or RDP session directly to a compromised instance is risky. It can alter the state of the instance, expose credentials, and is not as auditable as using Systems Manager Run Command.
- F: Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance. Tag the instance with any relevant metadata and incident ticket information. While technically feasible to use State Manager, it's more suited for configuration management and not ideal for a one-time forensic snapshot. State Manager is also not the best approach for immediately tagging the instance *during* the incident response process; manual tagging or a Run Command script triggered during the incident is more appropriate.
In summary, the combination of A, C, and E provides the most effective and secure way to investigate and respond to security events on EC2 instances while minimizing operational overhead and adhering to AWS security best practices.
Citations:
- AWS Systems Manager, https://aws.amazon.com/systems-manager/
- Amazon EC2 Security Groups, https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
- Amazon EBS Snapshots, https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
### Question 4
A company has an organization in AWS Organizations. The company wants to use AWS CloudFormation StackSets in the organization to deploy various AWS design patterns into environments. These patterns consist of Amazon EC2 instances, Elastic Load Balancing (ELB) load balancers, Amazon RDS databases, and Amazon Elastic Kubernetes Service (Amazon EKS) clusters or Amazon Elastic Container Service (Amazon ECS) clusters.
Currently, the company’s developers can create their own CloudFormation stacks to increase the overall speed of delivery. A centralized CI/CD pipeline in a shared services AWS account deploys each CloudFormation stack.
The company's security team has already provided requirements for each service in accordance with internal standards. If there are any resources that do not comply with the internal standards, the security team must receive notification to take appropriate action. The security team must implement a notification solution that gives developers the ability to maintain the same overall delivery speed that they currently have.
Which solution will meet these requirements in the MOST operationally efficient way?
- A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create a custom AWS Lambda function that will run the aws cloudformation validate-template AWS CLI command on all CloudFormation templates before the build stage in the CI/CD pipeline. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
- B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create custom rules in CloudFormation Guard for each resource configuration. In the CI/CD pipeline, before the build stage, configure a Docker image to run the cfn-guard command on the CloudFormation template. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
- C. Create an Amazon Simple Notification Service (Amazon SNS) topic and an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the security team's email addresses to the SNS topic. Create an Amazon S3 bucket in the shared services AWS account. Include an event notification to publish to the SQS queue when new objects are added to the S3 bucket. Require the developers to put their CloudFormation templates in the S3 bucket. Launch EC2 instances that automatically scale based on the SQS queue depth. Configure the EC2 instances to use CloudFormation Guard to scan the templates and deploy the templates if there are no issues. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
- D. Create a centralized CloudFormation stack set that includes a standard set of resources that the developers can deploy in each AWS account. Configure each CloudFormation template to meet the security requirements. For any new resources or configurations, update the CloudFormation template and send the template to the security team for review. When the review is completed, add the new CloudFormation stack to the repository for the developers to use.
Correct Answer: B
Explanation:
Based on the question and discussion, I agree with the suggested answer B.
Reasoning:
The most operationally efficient solution involves automating the security validation process within the existing CI/CD pipeline to maintain the current delivery speed. Option B leverages `cfn-guard`, which is specifically designed to validate CloudFormation templates against custom rules, allowing the security team to define and enforce compliance standards. By integrating `cfn-guard` as a Docker image within the CI/CD pipeline, the company can automatically check for compliance issues before deployment and notify the security team via SNS if any issues are found. This approach provides a streamlined and automated way to enforce security standards without significantly impacting the developers' workflow.
Reasons for not choosing other options:
- **Option A** is incorrect because `aws cloudformation validate-template` only validates the syntax and structure of the CloudFormation template, not its compliance with security requirements. Therefore, it does not meet the security team's requirements.
- **Option C** is incorrect because it introduces unnecessary complexity by requiring developers to put their CloudFormation templates in an S3 bucket and launching EC2 instances to scan the templates. This approach is less efficient and more difficult to manage than option B.
- **Option D** is incorrect because it centralizes the CloudFormation stack sets and requires the security team to review all changes. This approach slows down delivery and is not operationally efficient.
The discussion consensus supports the usage of cfn-guard for its ability to define compliance rules, which aligns with the requirement for validating CloudFormation templates against internal security standards. The proposed solution offers the best balance between security enforcement and operational efficiency by automating the validation process within the CI/CD pipeline.
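Real cfn-guard rules are written in Guard's own DSL, but the kind of check they encode is easy to picture. The Python sketch below imitates one hypothetical rule, that every RDS instance in a parsed template must set `StorageEncrypted: true`, purely to illustrate what the pre-build stage validates:

```python
def find_unencrypted_rds(template: dict) -> list:
    """Return logical IDs of RDS instances without StorageEncrypted: true."""
    violations = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") == "AWS::RDS::DBInstance":
            props = resource.get("Properties", {})
            if props.get("StorageEncrypted") is not True:
                violations.append(logical_id)
    return violations

# A minimal parsed template: one compliant and one non-compliant database.
template = {
    "Resources": {
        "GoodDb": {"Type": "AWS::RDS::DBInstance",
                   "Properties": {"StorageEncrypted": True}},
        "BadDb": {"Type": "AWS::RDS::DBInstance",
                  "Properties": {"AllocatedStorage": 20}},
    }
}
violations = find_unencrypted_rds(template)
```

In the actual pipeline, a non-empty result would fail the build stage and trigger the SNS notification to the security team.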
Citations:
- AWS CloudFormation Guard, https://github.com/aws-cloudformation/cloudformation-guard
### Question 5
A company is migrating one of its legacy systems from an on-premises data center to AWS. The application server will run on AWS, but the database must remain in the on-premises data center for compliance reasons. The database is sensitive to network latency. Additionally, the data that travels between the on-premises data center and AWS must have IPsec encryption.
Which combination of AWS solutions will meet these requirements? (Choose two.)
- A. AWS Site-to-Site VPN
- B. AWS Direct Connect
- C. AWS VPN CloudHub
- D. VPC peering
- E. NAT gateway
Correct Answer: AB
Explanation:
I agree with the suggested answer of AB. The question requires a solution that provides both low latency and IPsec encryption for data transfer between AWS and an on-premises data center.
- **B: AWS Direct Connect** establishes a dedicated network connection from on premises to AWS, offering lower latency and more predictable network performance than internet-based connections. This addresses the latency requirement.
- **A: AWS Site-to-Site VPN** creates an IPsec-encrypted tunnel, which can run over the Direct Connect link (or over the internet as a backup if Direct Connect is not feasible). This addresses the encryption requirement.
Why other options are incorrect:
- **C: AWS VPN CloudHub** securely connects multiple sites to each other in a hub-and-spoke model. While it uses VPNs, it does not address the need for a low-latency connection to a specific on-premises data center; it focuses on interconnecting multiple VPN connections.
- **D: VPC peering** connects two VPCs. It cannot be used to connect to an on-premises data center.
- **E: A NAT gateway** allows instances in a private subnet to reach the internet or other AWS services, but it does not provide a secure, low-latency connection to an on-premises data center.
Citations:
- AWS Direct Connect Encryption: https://docs.aws.amazon.com/directconnect/latest/UserGuide/encryption-in-transit.html
### Question 6
A company has an application that uses dozens of Amazon DynamoDB tables to store data. Auditors find that the tables do not comply with the company's data protection policy.
The company's retention policy states that all data must be backed up twice each month: once at midnight on the 15th day of the month and again at midnight on the 25th day of the month. The company must retain the backups for 3 months.
Which combination of steps should a security engineer take to meet these requirements? (Choose two.)
- A. Use the DynamoDB on-demand backup capability to create a backup plan. Configure a lifecycle policy to expire backups after 3 months.
- B. Use AWS DataSync to create a backup plan. Add a backup rule that includes a retention period of 3 months.
- C. Use AWS Backup to create a backup plan. Add a backup rule that includes a retention period of 3 months.
- D. Set the backup frequency by using a cron schedule expression. Assign each DynamoDB table to the backup plan.
- E. Set the backup frequency by using a rate schedule expression. Assign each DynamoDB table to the backup plan.
Correct Answer: CD
Explanation:
Based on the question's requirements for scheduled DynamoDB backups and retention policies, and the discussion, the best combination of steps is C and D. I agree with the suggested answer.
Reasoning: the question specifies two key requirements:
1. Scheduled backups twice a month (at midnight on the 15th and on the 25th).
2. A retention period of 3 months.
AWS Backup is the correct service because it lets you create backup plans with specific schedules and retention policies, and you assign each DynamoDB table to the plan as a protected resource.
Detailed explanation:
- **C: Use AWS Backup to create a backup plan. Add a backup rule that includes a retention period of 3 months.** AWS Backup is designed for centralized backup management. You can define backup plans with specific schedules (cron expressions) and retention periods, which directly addresses the requirement to retain backups for 3 months.
- **D: Set the backup frequency by using a cron schedule expression. Assign each DynamoDB table to the backup plan.** AWS Backup uses cron expressions to define backup schedules, so the backups can be set to run at midnight on the 15th and 25th of each month. Assigning each DynamoDB table to the backup plan ensures that all tables are backed up according to the schedule and retention policy defined in the plan.
Reasons for excluding other options:
- **A: Use the DynamoDB on-demand backup capability to create a backup plan. Configure a lifecycle policy to expire backups after 3 months.** On-demand backups are manual and provide no built-in scheduling, so they cannot satisfy the twice-monthly schedule on their own.
- **B: Use AWS DataSync to create a backup plan. Add a backup rule that includes a retention period of 3 months.** AWS DataSync is a data transfer and synchronization service, not a backup service. It is not the appropriate tool for scheduled DynamoDB backups with retention policies.
- **E: Set the backup frequency by using a rate schedule expression. Assign each DynamoDB table to the backup plan.** A rate expression only defines how often a rule runs (for example, every 12 hours); it cannot target specific calendar dates and times such as midnight on the 15th and 25th of each month. That requires a cron expression.
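The schedule and retention from options C and D map directly onto an AWS Backup plan. Below is a sketch of the input for `create_backup_plan`; the plan name is made up, the `Default` vault is an assumption, and 90 days stands in for the 3-month retention:

```python
backup_plan = {
    "BackupPlanName": "dynamodb-bimonthly-plan",  # hypothetical name
    "Rules": [
        {
            "RuleName": "midnight-15th-and-25th",
            "TargetBackupVaultName": "Default",  # assumed vault
            # cron(minutes hours day-of-month month day-of-week year), UTC:
            # midnight on the 15th and 25th of every month.
            "ScheduleExpression": "cron(0 0 15,25 * ? *)",
            "Lifecycle": {"DeleteAfterDays": 90},  # ~3-month retention
        }
    ],
}

# With credentials configured:
# import boto3
# boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
# ...then create a backup selection that assigns the DynamoDB table ARNs.
```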
### Question 7
A company needs a security engineer to implement a scalable solution for multi-account authentication and authorization. The solution should not introduce additional user-managed architectural components. Native AWS features should be used as much as possible. The security engineer has set up AWS Organizations with all features activated and AWS IAM Identity Center (AWS Single Sign-On) enabled.
Which additional steps should the security engineer take to complete the task?
- A. Use AD Connector to create users and groups for all employees that require access to AWS accounts. Assign AD Connector groups to AWS accounts and link to the IAM roles in accordance with the employees’ job functions and access requirements. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
- B. Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Assign groups to AWS accounts and link to permission sets in accordance with the employees’ job functions and access requirements. Instruct employees to access AWS accounts by using the IAM Identity Center user portal.
- C. Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Link IAM Identity Center groups to the IAM users present in all accounts to inherit existing permissions. Instruct employees to access AWS accounts by using the IAM Identity Center user portal.
- D. Use AWS Directory Service for Microsoft Active Directory to create users and groups for all employees that require access to AWS accounts. Enable AWS Management Console access in the created directory and specify IAM Identity Center as a source of information for integrated accounts and permission sets. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
Correct Answer: B
Explanation:
Based on the question and discussion, I agree with the suggested answer B.
Reasoning:
The question explicitly states the need for a scalable, multi-account authentication and authorization solution that leverages native AWS features and avoids additional user-managed components. AWS IAM Identity Center (successor to AWS Single Sign-On) is already enabled. Option B suggests using the IAM Identity Center's default directory for managing users and groups, assigning groups to AWS accounts, and linking them to permission sets. This approach aligns perfectly with the requirements as it:
- Utilizes a native AWS feature (IAM Identity Center).
- Provides centralized user and group management.
- Offers scalable multi-account access control through permission sets.
- Avoids introducing additional architectural components.
Why other options are not suitable:
- Option A: Using AD Connector implies integrating with an on-premises Active Directory. While possible, the question doesn't mention or suggest the presence of an on-premises directory, and using AD Connector introduces an unnecessary dependency. Also, using the AWS Directory Service user portal is not the intended access method with IAM Identity Center.
- Option C: Linking IAM Identity Center groups to IAM users present in all accounts is an anti-pattern. It bypasses the centralized permission management provided by IAM Identity Center's permission sets and could lead to inconsistent and difficult-to-manage permissions across accounts.
- Option D: Similar to Option A, using AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) is not necessary unless there's a specific requirement to use a managed Active Directory service. The question emphasizes using native AWS features, and the IAM Identity Center default directory is a simpler and more appropriate solution in this scenario. Also, directing users to the AWS Directory Service portal is incorrect; IAM Identity Center has its own dedicated user portal.
Therefore, Option B is the most suitable choice for implementing a scalable, multi-account authentication and authorization solution using native AWS features and IAM Identity Center.
Answer: B
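As a sketch of what Option B looks like in practice, the snippet below builds the request parameters for the IAM Identity Center (sso-admin) CreateAccountAssignment API, which links a group from the default directory to an AWS account through a permission set. All ARNs and IDs are hypothetical placeholders, and the dict is only constructed, not sent.

```python
# Sketch of the Option B assignment: a group from the IAM Identity Center
# default directory is assigned to an AWS account via a permission set.
# All ARNs/IDs below are hypothetical placeholders.

def build_account_assignment(instance_arn: str, account_id: str,
                             permission_set_arn: str, group_id: str) -> dict:
    """Build the request parameters for the sso-admin
    CreateAccountAssignment API call."""
    return {
        "InstanceArn": instance_arn,
        "TargetId": account_id,
        "TargetType": "AWS_ACCOUNT",      # the assignment target is an account
        "PermissionSetArn": permission_set_arn,
        "PrincipalType": "GROUP",         # assign a group, not individual users
        "PrincipalId": group_id,
    }

params = build_account_assignment(
    instance_arn="arn:aws:sso:::instance/ssoins-EXAMPLE",
    account_id="111122223333",
    permission_set_arn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    group_id="0123abcd-ef45-6789-abcd-0123456789ab",  # group in the default directory
)
print(params["PrincipalType"])  # prints GROUP
```

Assigning at the group level is what makes the design scale: new hires only need to be added to a group, and the permission sets take care of access in every linked account.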
-
Question 8
A company has deployed Amazon GuardDuty and now wants to implement automation for potential threats. The company has decided to start with RDP brute force attacks that come from Amazon EC2 instances in the company's AWS environment. A security engineer needs to implement a solution that blocks the detected communication from a suspicious instance until investigation and potential remediation can occur.
Which solution will meet these requirements?
- A. Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event with an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.
- B. Configure GuardDuty to send the event to Amazon EventBridge. Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS) and adds a web ACL rule to block traffic to and from the suspicious instance.
- C. Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge. Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance.
- D. Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
Correct Answer:
C
Explanation:
Based on the question's requirements and the discussion, I agree with the suggested answer C.
Reasoning:
- The question requires blocking RDP brute force attacks originating from EC2 instances. AWS Network Firewall is the most suitable service for this, as it operates at the network layer and can block traffic based on IP addresses, ports, and protocols.
- Option C effectively implements the required automation:
- GuardDuty detects the threat.
- Security Hub centralizes findings and sends the event to EventBridge.
- A Lambda function triggered by EventBridge configures AWS Network Firewall to block the traffic.
Reasons for not choosing other options:
- Option A: Network ACLs are stateless, limited to a small number of rules per ACL, and harder to manage than Network Firewall for this purpose. Kinesis Data Analytics also adds unnecessary complexity.
- Option B: AWS WAF is designed to protect web applications from HTTP/HTTPS attacks. RDP runs over TCP port 3389, not HTTP, so WAF cannot inspect or block this traffic.
- Option D: Security groups support only allow rules, so they cannot explicitly deny the suspicious traffic, and swapping in a security group that allows no connections would also cut off legitimate access needed during the investigation. Additionally, routing Security Hub findings through a Kinesis data stream adds unnecessary indirection; EventBridge is the native integration point for Security Hub findings.
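To make the Option C flow concrete, the sketch below shows an EventBridge event pattern that matches the GuardDuty RDP brute force finding type, plus a helper that a Lambda function might use to build a Suricata-compatible drop rule for an AWS Network Firewall stateful rule group (e.g., pushed via the UpdateRuleGroup API). The IP address, SID, and rule message are hypothetical placeholders; the pattern shown matches GuardDuty's native event format, while with Security Hub in the path the source and detail-type would reflect Security Hub's finding format instead.

```python
# Sketch of the Option C automation, with hypothetical placeholder values.

# EventBridge event pattern matching the RDP brute force finding type
# as emitted natively by GuardDuty.
EVENT_PATTERN = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": ["UnauthorizedAccess:EC2/RDPBruteForce"]},
}

def build_block_rule(suspicious_ip: str, sid: int) -> str:
    """Suricata-compatible stateful rule that drops all traffic to and
    from the suspicious instance; a Lambda function could add this to a
    Network Firewall rule group to quarantine the instance."""
    return (f'drop ip {suspicious_ip} any <> any any '
            f'(msg:"GuardDuty RDP brute force block"; sid:{sid}; rev:1;)')

# Example: block a (hypothetical) suspicious instance's private IP.
rule = build_block_rule("203.0.113.5", 1000001)
print(rule)
```

The `<>` direction operator makes the rule bidirectional, which satisfies the requirement to block traffic both to and from the suspicious instance.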
Citations:
- AWS Network Firewall, https://aws.amazon.com/network-firewall/
- AWS Security Hub, https://aws.amazon.com/security-hub/
- Amazon GuardDuty, https://aws.amazon.com/guardduty/
-
Question 9
A company has an AWS account that hosts a production application. The company receives an email notification that Amazon GuardDuty has detected an Impact:IAMUser/AnomalousBehavior finding in the account. A security engineer needs to run the investigation playbook for this security incident and must collect and analyze the information without affecting the application.
Which solution will meet these requirements MOST quickly?
- A. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
- B. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use Amazon Detective to review the API calls in context.
- C. Log in to the AWS account by using administrator credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
- D. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use AWS CloudTrail Insights and AWS CloudTrail Lake to review the API calls in context.
Correct Answer:
B
Explanation:
I agree with the suggested answer, which is option B.
Reasoning: The scenario requires a quick investigation without affecting the production application. Option B suggests using read-only credentials and Amazon Detective. Read-only credentials ensure that the investigation doesn't inadvertently modify or disrupt the application. Amazon Detective is specifically designed for security investigations and integrates well with GuardDuty, providing a contextual view of API calls related to the finding, thus speeding up the analysis.
Why other options are incorrect:
- A: Applying a DenyAll policy to the IAM principal might affect the application if that principal is in use. The question specifies the investigation should not affect the application.
- C: Logging in with administrator credentials isn't necessary for investigation and goes against the principle of least privilege. It also introduces a higher risk of accidentally impacting the application. Applying a DenyAll policy as in Option A also poses risk to the application.
- D: While CloudTrail Insights and CloudTrail Lake can be used for analysis, they are not as directly integrated with GuardDuty as Amazon Detective. This option requires more manual configuration and might not be as quick as using Detective. Additionally, CloudTrail Lake might involve a steeper learning curve for rapid investigation compared to the readily integrated Detective.
Therefore, Option B provides the fastest and least intrusive method for investigating the GuardDuty finding.
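As a read-only first step of the playbook, before pivoting into Amazon Detective, the engineer could pull the relevant findings with GuardDuty's ListFindings API to identify the IAM principal and API calls involved. The sketch below only builds the request payload (the detector ID is a hypothetical placeholder), so nothing is modified in the account.

```python
# Read-only triage sketch: filter GuardDuty findings to the exact
# finding type from the notification. The detector ID is a hypothetical
# placeholder; this builds the ListFindings request payload only.

FINDING_TYPE = "Impact:IAMUser/AnomalousBehavior"

def build_list_findings_request(detector_id: str) -> dict:
    """Build the GuardDuty ListFindings request that filters on the
    reported finding type."""
    return {
        "DetectorId": detector_id,
        "FindingCriteria": {
            "Criterion": {
                # match only the finding type named in the email notification
                "type": {"Eq": [FINDING_TYPE]},
            }
        },
    }

request = build_list_findings_request("12abc34d567e8fa901bc2d34EXAMPLE")
print(request["FindingCriteria"]["Criterion"]["type"]["Eq"][0])
```

From the matching finding IDs, the engineer can open the same finding in Detective, which automatically correlates the principal's API call history without any changes to the production environment.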
Citations:
- Amazon Detective, https://aws.amazon.com/detective/
- Amazon GuardDuty, https://aws.amazon.com/guardduty/
-
Question 10
Company A has an AWS account that is named Account A. Company A recently acquired Company B, which has an AWS account that is named Account B. Company B stores its files in an Amazon S3 bucket. The administrators need to give a user from Account A full access to the S3 bucket in Account B.
After the administrators adjust the IAM permissions for the user in Account A to access the S3 bucket in Account B, the user still cannot access any files in the S3 bucket.
Which solution will resolve this issue?
- A. In Account B, create a bucket ACL to allow the user from Account A to access the S3 bucket in Account B.
- B. In Account B, create an object ACL to allow the user from Account A to access all the objects in the S3 bucket in Account B.
- C. In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B.
- D. In Account B, create a user policy to allow the user from Account A to access the S3 bucket in Account B.
Correct Answer:
C
Explanation:
I agree with the suggested answer C.
The issue is that the user in Account A cannot access the S3 bucket in Account B, even after adjusting IAM permissions for the user in Account A. This indicates a cross-account access problem that needs to be addressed on the resource side (i.e., the S3 bucket).
Reasoning:
- Option C: Creating a bucket policy in Account B to allow the user from Account A to access the S3 bucket in Account B is the correct solution. Bucket policies are the standard and recommended way to grant cross-account access to S3 buckets. The bucket policy explicitly defines which principals (users, accounts, etc.) are allowed to perform which actions on the bucket.
Reasons for not choosing the other options:
- Option A: Bucket ACLs (access control lists) are a legacy mechanism for controlling access to S3 buckets. They are less flexible and less expressive than bucket policies, and AWS now disables ACLs by default on new buckets (through the Object Ownership "bucket owner enforced" setting) and recommends policies instead. Using ACLs for cross-account access is not a best practice.
- Option B: Object ACLs are similar to bucket ACLs but apply to individual objects within the bucket. Modifying the ACL of every object in the bucket is impractical and inefficient, and object ACLs are likewise disabled by default on new buckets.
- Option D: User policies are attached to IAM users or roles in the same account and define what those identities may do. The user who needs access exists in Account A, so Account B cannot attach a user policy to that user; the missing piece is a resource-based policy in Account B that trusts the Account A principal.
In summary, bucket policies are the recommended and most appropriate way to manage cross-account access to S3 buckets. Therefore, the correct solution is to create a bucket policy in Account B that allows the user from Account A to access the S3 bucket.
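As a sketch, the Account B bucket policy that Option C describes might look like the following. The account ID, user name, and bucket name are hypothetical placeholders; the key detail is that the `Resource` element must list both the bucket ARN (for bucket-level actions such as `s3:ListBucket`) and the `/*` object ARN (for object-level actions such as `s3:GetObject`).

```python
import json

# Hypothetical identifiers for illustration only.
ACCOUNT_A_USER_ARN = "arn:aws:iam::111122223333:user/analyst"
BUCKET = "company-b-files"

# Bucket policy attached to the S3 bucket in Account B, granting the
# Account A user full access to the bucket and its objects.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAccountAUserFullAccess",
        "Effect": "Allow",
        "Principal": {"AWS": ACCOUNT_A_USER_ARN},
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",      # bucket-level actions
            f"arn:aws:s3:::{BUCKET}/*",    # object-level actions
        ],
    }],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that cross-account access requires an allow on both sides: the identity-based policy in Account A (which the administrators already adjusted) and this resource-based policy in Account B.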
Citations:
- AWS S3 Bucket Policies, https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html
- AWS S3 ACL, https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html