[CompTIA] CAS-005 - SecurityX (Formerly CASP+) Exam Dumps & Study Guide
# Complete Study Guide for the CompTIA SecurityX (CAS-005) Exam
CompTIA SecurityX (CAS-005) is the successor to the CASP+ certification, representing an advanced-level credential for IT security professionals. It validates the technical knowledge and skills required to design, implement, and manage secure solutions across complex, enterprise-level environments. Whether you are a security architect, a security engineer, or a technical lead, this certification proves your ability to handle the most challenging security tasks.
## Why Pursue the CompTIA SecurityX Certification?
In an era of increasingly sophisticated cyber threats, organizations need highly skilled security professionals to protect their data and systems. Earning the SecurityX badge demonstrates that you:
- Can design, implement, and manage secure solutions for enterprise networks.
- Understand the technical aspects of security and how to apply them to complex problems.
- Can analyze security risks and develop mitigation strategies.
- Understand the legal and regulatory requirements for data security and privacy.
- Can provide leadership and technical guidance on security-related projects.
## Exam Overview
The CompTIA SecurityX (CAS-005) exam consists of a maximum of 90 multiple-choice and performance-based questions. You are given 165 minutes to complete the exam, and the result is reported as pass/fail; no scaled passing score is publicly disclosed.
### Key Domains Covered:
1. **Governance, Risk, and Compliance (20%):** This domain focuses on your ability to analyze security risks and develop mitigation strategies while ensuring compliance with legal and regulatory requirements.
2. **Security Architecture (27%):** This domain covers your knowledge of designing secure architectures for enterprise networks, including cloud and hybrid environments. You'll need to understand network security, endpoint security, and application security.
3. **Security Engineering (31%):** This is the most heavily weighted domain. It covers security engineering principles and applied cryptography. You'll need to understand different types of encryption and how to implement them.
4. **Security Operations (22%):** This domain tests your ability to technically implement and manage secure solutions. You must understand security monitoring, incident response, and forensic analysis.
## Top Resources for SecurityX Preparation
Successfully passing the SecurityX requires a mix of theoretical knowledge and hands-on experience. Here are some of the best resources:
- **Official CompTIA Training:** CompTIA offers specialized digital and classroom training specifically for the SecurityX certification.
- **SecurityX Study Guide:** The official study guide provides a comprehensive overview of all the exam domains.
- **Hands-on Practice:** There is no substitute for building and managing secure solutions. Set up your own home lab and experiment with different security architectures and tools.
- **Practice Exams:** High-quality practice questions are essential for understanding the advanced-level exam format. Many candidates recommend using resources like [notjustexam.com](https://notjustexam.com) for their realistic and challenging exam simulations.
## Critical Topics to Master
To excel in the SecurityX, you should focus your studies on these high-impact areas:
- **Enterprise Security Architecture:** Master the nuances of designing secure architectures for complex environments, including cloud and hybrid setups.
- **Security Operations and Incident Response:** Understand security monitoring, incident response, and forensic analysis techniques.
- **Cryptography and Data Protection:** Know how to implement different types of encryption and protect data at rest and in transit.
- **Risk Management and Compliance:** Master the principles of risk analysis and how to ensure compliance with legal and regulatory requirements.
- **Leadership and Technical Guidance:** Understand how to provide leadership and technical guidance on security-related projects.
## Exam Day Strategy
1. **Pace Yourself:** With 165 minutes for the exam, you have ample time. If a question is too complex, flag it and move on.
2. **Read the Scenarios Carefully:** Advanced-level questions are often scenario-based. Pay attention to keywords like "most secure," "least operational overhead," and "most cost-effective."
3. **Eliminate Obviously Wrong Choices:** Even if you aren't sure of the right choice, eliminating the wrong ones significantly increases your chances.
## Conclusion
The CompTIA SecurityX (CAS-005) is a significant investment in your career. It requires dedication and a deep understanding of security principles and technical skills. By following a structured study plan, leveraging high-quality practice exams from [notjustexam.com](https://notjustexam.com), and gaining hands-on experience, you can master the complexities of enterprise security and join the elite group of certified advanced security practitioners.
## Free CompTIA CAS-005 SecurityX (Formerly CASP+) Practice Questions Preview
### Question 1
A security engineer is reviewing event logs because an employee successfully connected a personal Windows laptop to the corporate network, which is against company policy. Company policy allows all Windows 10 and 11 laptops to connect to the system as long as the MDM agent installed by IT is running. Only compliant devices can connect, and the logic in the system to evaluate compliant laptops is as follows:
*(exhibit not included)*
Which of the following most likely occurred when the employee connected a personally owned Windows laptop and was allowed on the network?
- A. The agent was not running on the laptop, which triggered a false positive.
- B. The OS was a valid version, but the MDM agent was not installed, triggering a true positive.
- C. The OS was running a Windows version below 10 and triggered a false negative.
- D. The OS version was higher than 11, and the MDM agent was running, triggering a true negative.
**Correct Answer:** C
Explanation:
The correct answer is C.
Reasoning:
The problem states that an employee successfully connected a personal Windows laptop to the corporate network, which is against company policy. Company policy dictates that only Windows 10 and 11 laptops with the IT-installed MDM agent running are allowed to connect. The system's logic for evaluating compliant laptops appears in an exhibit (not reproduced here), but its conditions can be inferred from the question and common compliance policies.
A "false negative" means the system incorrectly allowed something that should have been blocked. In this context, it allowed a non-compliant device to connect.
Let's analyze the conditions for a device to be compliant, as implied by the policy:
- Operating System: Windows 10 or 11
- MDM Agent: Installed and running
A device is non-compliant if either of these conditions is not met. The problem explicitly states the connection was "against company policy," meaning the device was non-compliant but was *allowed* access, which is a false negative.
Now let's evaluate why option C is the most likely scenario for a false negative:
- C. The OS was running a Windows version below 10 and triggered a false negative.
If the OS was below Windows 10 (e.g., Windows 7 or 8.1), it would violate the first condition of the company policy (Windows 10 or 11). If the system *failed to detect* this non-compliant OS version and still allowed the device to connect, this would be a false negative. This is a plausible scenario for a system misconfiguration or an oversight in the compliance logic, especially since older OS versions might not report their status as expected or might be explicitly excluded from certain checks. Intune compliance policies allow setting minimum OS versions, and if this check were absent or misconfigured, an older OS could slip through. [learn.microsoft.com/en-us/mem/intune/protect/compliance-policy-create-windows]
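A hypothetical sketch of the flawed gate implied by option C (the real evaluation logic is in the unavailable exhibit, so the function below is purely illustrative): the check enforces the agent requirement and the Windows 11 upper bound but omits the Windows 10 minimum, so an older laptop is admitted, which is a false negative.

```python
def is_allowed(os_major: int, agent_running: bool) -> bool:
    """Flawed compliance gate: verifies the MDM agent and an upper OS bound,
    but never enforces the Windows 10 minimum required by policy."""
    return agent_running and os_major <= 11  # missing check: os_major >= 10

# A Windows 8.1-era laptop slips through even though policy forbids it:
print(is_allowed(8, True))   # True  -> non-compliant device allowed (false negative)
print(is_allowed(12, True))  # False -> correctly rejected (above Windows 11)
```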
Reasons for not choosing other answers:
- A. The agent was not running on the laptop, which triggered a false positive.
A "false positive" would mean the system incorrectly *blocked* a compliant device. The scenario describes a non-compliant device being *allowed*, which is a false negative. Even if the agent wasn't running (making it non-compliant), if it was allowed on the network, it's a false negative, not a false positive.
- B. The OS was a valid version, but the MDM agent was not installed, triggering a true positive.
A "true positive" means the system correctly identified something that should be identified (e.g., correctly identified a non-compliant device as non-compliant and blocked it, or correctly identified a compliant device as compliant and allowed it). In this case, the device was non-compliant (MDM agent not installed) but was *allowed* on the network. This is a false negative, not a true positive.
- D. The OS version was higher than 11, and the MDM agent was running, triggering a true negative.
A "true negative" means the system correctly identified something that should be rejected (e.g., correctly identified a non-compliant device and blocked it). If the OS was higher than 11, it would be non-compliant per policy. If the system then *blocked* it, that would be a true negative. However, the problem states the employee *successfully connected*, implying access was granted, not blocked. Therefore, this cannot be a true negative. If it was allowed despite being higher than 11, it would be a false negative.
Given that the device was non-compliant but allowed access (a false negative), and considering common compliance policy configurations, a failure to correctly enforce the minimum OS version is a very probable cause for such an oversight. Conditional Access policies, which integrate with Intune compliance policies, are designed to block access for non-compliant devices. If a device running an OS below the minimum version was allowed, it indicates a loophole or misconfiguration in the policy evaluation. [learn.microsoft.com/en-us/mem/intune/protect/conditional-access]
Citations:
- Use Conditional Access with Microsoft Intune compliance policies - Microsoft Intune
https://learn.microsoft.com/en-us/mem/intune/protect/conditional-access
- Device compliance policies in Microsoft Intune
https://learn.microsoft.com/en-us/mem/intune/protect/device-compliance-get-started
- Create device compliance policies in Microsoft Intune
https://learn.microsoft.com/en-us/intune/intune-service/protect/create-compliance-policy
- Windows compliance settings in Microsoft Intune
https://learn.microsoft.com/en-us/mem/intune/protect/compliance-policy-create-windows
### Question 2
An organization is working to secure its development process to ensure developers cannot deploy artifacts directly into the production environment. Which of the following security practice recommendations would be the best to accomplish this objective?
- A. Implement least privilege access to all systems.
- B. Roll out security awareness training for all users.
- C. Set up policies and systems with separation of duties.
- D. Enforce job rotations for all developers and administrators.
- E. Utilize mandatory vacations for all developers.
- F. Review all access to production systems on a quarterly basis.
**Correct Answer:** C
Explanation:
The correct answer is C.
Reasoning for choosing C (Set up policies and systems with separation of duties):
The core objective is to prevent developers from directly deploying artifacts into the production environment. Separation of duties (SoD) is a fundamental security principle that directly addresses this by ensuring that no single individual has complete control over a critical process. By implementing SoD, different roles and responsibilities are assigned to distinct individuals or teams, preventing a developer from both developing code and pushing it to production. This creates a necessary control gate, often requiring approval from an operations team, release manager, or an automated deployment system that operates under different credentials and policies.
For example, deployment policies can be established with criteria for deployment and validation of policy compliance. These policies can enforce that only code signed by predefined attestors can be deployed, ensuring it originates from trusted sources and has gone through necessary checks. Tools like Binary Authorization in Google Cloud can enforce policies for deployments, requiring attestations that an image was built by a trusted service (like Cloud Build) and doesn't contain vulnerabilities above a specified severity. Similarly, deployment gating features in services like Google Cloud Deploy allow for explicit approval before promotion to a target environment, preventing unauthorized deployments [cloud.google.com]. Microsoft Azure also emphasizes secure DevOps infrastructure and pipeline practices, including configuring identity/role permissions and entitlement policies to ensure changes to pipelines are authorized, thus preventing direct developer deployments [learn.microsoft.com]. The OWASP Software Supply Chain Security Cheat Sheet also highlights strong access control as critical, advocating for basic security principles of least privilege and separation of duties to prevent compromised accounts from manipulating CI/CD pipeline execution or deploying malicious artifacts [cheatsheetseries.owasp.org].
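The control gate described above can be reduced to a minimal sketch (the function, identity names, and role strings are illustrative, not any particular CI/CD product's API): promotion to production is refused when the approver is the artifact's own author, and otherwise requires a distinct release-manager role.

```python
def approve_deployment(author: str, approver: str, approver_role: str) -> bool:
    """Separation-of-duties gate: a developer may never ship their own artifact,
    and production pushes require an approver holding the release-manager role."""
    if approver == author:
        return False  # SoD violation: self-approval is blocked outright
    return approver_role == "release-manager"

print(approve_deployment("dev-alice", "dev-alice", "release-manager"))  # False
print(approve_deployment("dev-alice", "rm-bob", "release-manager"))     # True
```

In practice this logic lives in the pipeline or deployment service (for example, a required-approval rule on the production environment), not in application code, but the principle is the same: the author and the deployer must be different identities.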
Reasons for not choosing other answers:
- A. Implement least privilege access to all systems: While crucial and a best practice, least privilege alone might not prevent a developer from deploying if their role inherently includes deployment permissions. SoD specifically addresses the conflict of interest in having both development and deployment responsibilities. Least privilege is a component of effective SoD but not a complete solution on its own for this specific objective.
- B. Roll out security awareness training for all users: Security awareness training is vital for fostering a security-conscious culture, but it relies on human adherence and does not provide a technical or procedural control to physically prevent direct deployments. An accidental or malicious action could still occur even with training.
- D. Enforce job rotations for all developers and administrators: Job rotation can help reduce the risk of fraud and detect unauthorized activities over time, as different individuals review each other's work. However, it's a detective control rather than a preventative one for the immediate objective of stopping a developer from deploying directly. It doesn't inherently build a technical barrier.
- E. Utilize mandatory vacations for all developers: Mandatory vacations, like job rotation, are primarily a detective control used to uncover fraudulent activities that an employee might be concealing. It doesn't prevent direct deployment in the first place and is less directly related to the technical control needed.
- F. Review all access to production systems on a quarterly basis: Regular access reviews are important for maintaining appropriate access levels and identifying orphaned accounts or excessive privileges. This is a crucial detective and corrective control, but it doesn't prevent a developer from deploying if their current, albeit potentially unauthorized, access allows it. It addresses the aftermath or maintenance of access, not the prevention of the action itself.
Citations:
- Safeguard deployments, https://cloud.google.com/software-supply-chain-security/docs/safeguard-deploys
- Azure Security Benchmark v3 - DevOps Security, https://learn.microsoft.com/en-us/security/benchmark/azure/security-controls-v3-devops-security
- Architecture strategies for securing a development lifecycle - Microsoft Azure Well-Architected Framework, https://learn.microsoft.com/en-us/azure/well-architected/security/secure-development-lifecycle
- Software Supply Chain Security - OWASP Cheat Sheet Series, https://cheatsheetseries.owasp.org/cheatsheets/Software_Supply_Chain_Security_Cheat_Sheet.html
### Question 3
A security architect discovers the following while reviewing code for a company's website: `selection = "SELECT Item FROM Catalog WHERE ItemID = " & Request("ItemID")`
Which of the following should the security architect recommend?
- A. Client-side processing
- B. Query parameterization
- C. Data normalization
- D. Escape character blocking
- E. URL encoding
**Correct Answer:** B
Explanation:
The correct answer is B, query parameterization.
Reasoning:
The code snippet provided exhibits a classic SQL injection vulnerability. The `Request("ItemID")` directly incorporates user-supplied input into the SQL query without proper sanitization or validation. An attacker could manipulate the `ItemID` parameter to inject malicious SQL code, potentially gaining unauthorized access to the database, modifying data, or even executing arbitrary commands on the server.
Query parameterization (also known as parameterized queries or prepared statements) is the most effective method to mitigate this risk. Instead of directly embedding user input into the SQL query string, query parameterization uses placeholders. The database driver then handles the proper escaping and quoting of the user input, ensuring it is treated as data rather than executable code.
Why other options are incorrect:
- A. Client-side processing: Client-side processing provides no protection against SQL injection because the vulnerability exists in the server-side code that constructs and executes the SQL query.
- C. Data normalization: Data normalization is a database design technique to reduce data redundancy and improve data integrity. While beneficial for overall database health, it does not directly address SQL injection vulnerabilities.
- D. Escape character blocking: While blocking escape characters can offer some limited protection, it's not a robust solution against SQL injection. Attackers can often bypass such filters using various encoding techniques or alternative SQL syntax. It is considered a form of input validation and is not as effective as query parameterization.
- E. URL encoding: URL encoding is used to encode special characters in URLs to ensure they are properly transmitted across the internet. It doesn't prevent SQL injection because the vulnerability lies in how the server-side code processes the data after it has been decoded.
In summary, Query parameterization is the industry best practice and provides the most reliable protection against SQL injection in this scenario.
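To make the contrast concrete, here is a minimal sketch using Python's built-in `sqlite3` module (the table is a stand-in for the question's Catalog table): string concatenation lets a payload rewrite the query, while a `?` placeholder binds the same payload as inert data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Catalog (ItemID INTEGER, Item TEXT)")
conn.execute("INSERT INTO Catalog VALUES (1, 'Widget')")

user_input = "1 OR 1=1"  # classic injection payload

# Vulnerable: the payload becomes part of the SQL statement itself.
injected = conn.execute(
    "SELECT Item FROM Catalog WHERE ItemID = " + user_input).fetchall()

# Safe: the ? placeholder binds the payload as a single literal value.
parameterized = conn.execute(
    "SELECT Item FROM Catalog WHERE ItemID = ?", (user_input,)).fetchall()

print(injected)       # [('Widget',)] -- OR 1=1 matched every row
print(parameterized)  # []            -- no ItemID equals the literal string
```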
Citations:
- SQL Injection, https://owasp.org/www-community/attacks/SQL_Injection
- Parameterized Queries, https://owasp.org/www-community/articles/parameterized-queries
### Question 4
A security architect needs to enable a container orchestrator for DevSecOps and SOAR initiatives. The engineer has discovered that several Ansible YAML files used for the automation of configuration management have the following content:
*(exhibit not included)*
Which of the following should the engineer do to correct the security issues presented within this content?
- A. Update the kubernetes.core.k8s module to kubernetes.core.k8s_service in the main.yml file.
- B. Update the COMPTIA001 hostname to localhost using the hostnamectl command.
- C. Update the state: present module to state: absent in the main.yml file.
- D. Update or remove the ansible.cfg file.
- E. Update the insecure-bind-address from localhost to the COMPTIA001 in the manifests file.
**Correct Answer:** D
Explanation:
The correct answer is D: update or remove the ansible.cfg file.
Reasoning:
The most likely security issue in the exhibit is the project-local `ansible.cfg` file. Ansible loads configuration from the playbook's working directory, so a bundled `ansible.cfg` can silently override secure defaults (for example, by disabling SSH host key checking) for anyone who runs the automation. The deprecated `insecure-bind-address` setting in the manifests is also a concern, since it can allow unauthenticated access to the Kubernetes API server, but among the given choices, updating or removing the `ansible.cfg` file is the most appropriate corrective action.
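As a purely hypothetical illustration (the exhibit's actual contents are not shown), a project-local `ansible.cfg` might weaken defaults like this, which is why updating or removing the file is the corrective action:

```ini
# Hypothetical ansible.cfg -- illustrative only, not taken from the exhibit.
[defaults]
host_key_checking = False    # disables SSH host key verification (MITM exposure)
log_path = /tmp/ansible.log  # logs task output to a world-writable location

# Deleting this file (or fixing these settings) restores Ansible's secure defaults,
# since Ansible otherwise falls back to ~/.ansible.cfg or /etc/ansible/ansible.cfg.
```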
Why other options are incorrect:
- A. Updating the kubernetes.core.k8s module to kubernetes.core.k8s_service in the main.yml file: While using the correct module name is important for functionality, it does not directly address the critical security vulnerability introduced by `insecure-bind-address`.
- B. Updating the COMPTIA001 hostname to localhost using the hostnamectl command: This action changes the hostname but doesn't address the insecure binding configuration.
- C. Updating the state: present module to state: absent in the main.yml file: This option suggests removing a resource, but it does not relate directly to the insecure binding issue in the manifests file.
- E. Updating the insecure-bind-address from localhost to the COMPTIA001 in the manifests file: This would still expose the API server on a non-loopback interface, which is insecure. It doesn't fix the fundamental issue of using the insecure bind address.
The best practice is to remove the insecure-bind-address flag entirely to enhance security.
### Question 5
A CRM company leverages a CSP PaaS service to host and publish its SaaS product. Recently, a large customer requested that all infrastructure components must meet strict regulatory requirements, including configuration management, patch management, and life-cycle management. Which of the following organizations is responsible for ensuring those regulatory requirements are met?
- A. The CRM company
- B. The CRM company's customer
- C. The CSP
- D. The regulatory body
**Correct Answer:** A
Explanation:
The correct answer is A, the CRM company.
Reasoning: The CRM company, as the provider of the SaaS product, is ultimately responsible for ensuring that its services meet the necessary regulatory requirements, even when leveraging a CSP's PaaS offering. While the CSP is responsible for the security and compliance of the platform itself, the CRM company is responsible for the configuration, patch management, lifecycle management, and overall security of the application and data hosted on that platform. The large customer's request specifically targets these aspects of the infrastructure components supporting the SaaS product.
Why other options are incorrect:
- B. The CRM company's customer: The customer can specify requirements, but the responsibility for meeting them lies with the service provider (the CRM company in this case).
- C. The CSP: The CSP is responsible for the security and compliance of the underlying platform (PaaS), but not the applications (SaaS) running on it. Their responsibility is limited to the infrastructure layer they provide.
- D. The regulatory body: The regulatory body sets the requirements and audits for compliance, but it does not have the responsibility for implementing the controls to meet the requirements. This responsibility falls on the organization providing the service.
### Question 6
Company A is merging with Company B. Company A is a small, local company. Company B has a large, global presence. The two companies have a lot of duplication in their IT systems, processes, and procedures. On the new Chief Information Officer's (CIO's) first day, a fire breaks out at Company B's main data center. Which of the following actions should the CIO take first?
- A. Determine whether the incident response plan has been tested at both companies, and use it to respond.
- B. Review the incident response plans, and engage the disaster recovery plan while relying on the IT leaders from both companies.
- C. Ensure hot, warm, and mobile disaster recovery sites are available, and give an update to the companies' leadership teams.
- D. Initiate Company A's IT systems processes and procedures, assess the damage, and perform a BIA.
**Correct Answer:** B
Explanation:
The correct answer is B: review the incident response plans, and engage the disaster recovery plan while relying on the IT leaders from both companies.
Reasoning:
The primary concern immediately following a major incident like a data center fire is to restore services as quickly and effectively as possible. Engaging the disaster recovery plan is the most appropriate first step. Given that the CIO is new and the companies are in the process of merging, leveraging the existing expertise of IT leaders from both companies ensures a more informed and rapid response. Reviewing the incident response plans in conjunction with engaging the disaster recovery plan ensures a structured and coordinated approach to the crisis.
Reasons for not choosing other options:
- A: Determining whether the incident response plan has been tested is important, but it's not the immediate priority during an active crisis. The focus should be on executing the plan, not just assessing its past effectiveness.
- C: Ensuring disaster recovery sites are available is part of the disaster recovery plan itself, but it's a step within the broader plan execution. Simply ensuring availability and giving an update doesn't address the immediate need for service restoration.
- D: Initiating Company A's IT systems, assessing damage, and performing a BIA are all important activities, but they are not the immediate first step. A BIA (Business Impact Analysis) is a longer-term process. The immediate priority is to invoke the DR plan to minimize downtime. Relying solely on Company A's systems would ignore the resources and infrastructure of Company B, which may be crucial for recovery.
### Question 7
The results of an internal audit indicate several employees reused passwords that were previously included in a published list of compromised passwords.
The company has the following employee password policy:
*(exhibit not included)*
Which of the following should be implemented to best address the password reuse issue? (Choose two.)
- A. Increase the minimum age to two days.
- B. Increase the history to 20.
- C. Increase the character length to 12.
- D. Add case-sensitive requirements to character class.
- E. Decrease the maximum age to 30 days.
- F. Remove the complexity requirements.
- G. Increase the maximum age to 120 days.
**Correct Answer:** BC
Explanation:
The best choices to address the password reuse issue are B (increase the history to 20) and C (increase the character length to 12).
Reasoning for choosing B and C:
- B. Increase the history to 20: The audit found that employees reused passwords that were previously compromised. Increasing the password history prevents users from reusing their old passwords; a history of 20 means a user cannot reuse any of their last 20 passwords, which directly combats the reuse problem. NIST SP 800-63B, Section 5.1.1.2 on Memorized Secret Verifiers, recommends comparing prospective secrets against a list of commonly used, expected, or compromised passwords and requiring the user to choose a different secret if a match is found. Expanding the history complements this by preventing reuse of *their own* previously used passwords, which might still be weak or compromised even if not on a global blacklist.
- C. Increase the character length to 12: The current policy has a minimum length of 8 characters. While NIST SP 800-63B states that memorized secrets should be at least 8 characters if chosen by the subscriber, longer passwords provide additional mitigation against offline dictionary and brute-force attacks. The CIS Controls Assessment Specification for Controls v8 recommends, at a minimum, an 8-character password for accounts using MFA and a 14-character password for accounts not using MFA. NIST SP 800-63B Appendix A.2 notes that password length is a primary factor in characterizing password strength and that passwords that are too short yield to brute-force and dictionary attacks. Increasing the length makes passwords harder to crack, reducing the risk associated with any potential reuse.
Reasons for not choosing other answers:
- A. Increase the minimum age to two days: The minimum age policy dictates how long a password must be used before it can be changed. Increasing this to two days would have a minimal impact on preventing the reuse of *previously compromised* passwords. Its primary purpose is to prevent users from immediately changing a password back to an old one, which would be better addressed by increasing the password history.
- D. Add case-sensitive requirements to character class: The current policy already specifies complexity with uppercase, lowercase, numbers, and special characters. Adding case-sensitive requirements to character class is redundant as these are already implicit in the existing complexity requirements (e.g., 'A' is different from 'a'). Furthermore, NIST SP 800-63B, Section 5.1.1.2, states, "Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets." This suggests that excessive complexity requirements can negatively impact usability without a proportional security benefit, and often lead users to choose predictable patterns.
- E. Decrease the maximum age to 30 days: Decreasing the maximum age to 30 days would force users to change their passwords more frequently. While frequent changes might seem beneficial, NIST SP 800-63B, Section 5.1.1.2, explicitly states, "Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically)." This is because frequent changes often lead to users choosing simpler, more predictable passwords or slight variations of old ones, making them easier to guess.
- F. Remove the complexity requirements: Removing complexity requirements would significantly weaken password security. While NIST SP 800-63B suggests that excessive composition rules can be counterproductive, it still advocates for strong passwords. Completely removing complexity would make passwords much easier to guess and would exacerbate, rather than mitigate, the reuse issue. The document advises against arbitrary complexity rules but not against all complexity.
- G. Increase the maximum age to 120 days: Increasing the maximum age to 120 days would mean passwords are changed less frequently. While aligning with the NIST recommendation against arbitrary frequent changes, this alone does not directly address the *reuse* of compromised passwords and could allow a compromised password to remain active for longer if not detected. The focus should be on preventing reuse and increasing strength, not just frequency of change.
Therefore, increasing the password history prevents direct reuse, and increasing the character length enhances the overall strength of new and existing passwords, providing the best defense against the identified problem of reusing compromised passwords.
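A minimal sketch of how these two controls combine (illustrative only; a real verifier would store salted, slow hashes such as bcrypt or Argon2 rather than bare SHA-256):

```python
import hashlib

def acceptable(new_password: str, history_hashes: list[str]) -> bool:
    """Enforce a 12-character minimum (option C) and reject any of the
    user's last 20 password hashes (option B)."""
    if len(new_password) < 12:
        return False
    digest = hashlib.sha256(new_password.encode()).hexdigest()
    return digest not in history_hashes[-20:]

old = hashlib.sha256(b"CompromisedPass1!").hexdigest()
print(acceptable("CompromisedPass1!", [old]))    # False -> reuse blocked
print(acceptable("short", []))                   # False -> below 12 characters
print(acceptable("Fresh-Passphrase-42", [old]))  # True
```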
Citations:
- NIST Special Publication 800-63B, https://pages.nist.gov/800-63-3/sp800-63b.html
- NIST Special Publication 800-63C, https://pages.nist.gov/800-63-3/sp800-63c.html
- Use Unique Passwords — CIS Controls Assessment Specification for Controls v8 2025 Q1 documentation, https://controls-assessment-specification.readthedocs.io/en/latest/control-5/control-5.2.html
- IA-5(1) - NIST 800-53 r5 Control Explorer - GRC Academy, https://grcacademy.io/nist-800-53/controls/ia-5-1/
- Password-based Authentication - CSF Tools, https://csf.tools/reference/nist-sp-800-53/r5/ia/ia-5/ia-5-1/
Question 8
A mobile administrator is reviewing the following mobile device DHCP logs to ensure the proper mobile settings are applied to managed devices:

Which of the following mobile configuration settings is the mobile administrator verifying?
- A. Service set identifier authentication
- B. Wireless network auto joining
- C. 802.1X with mutual authentication
- D. Association MAC address randomization
Correct Answer:
D
Explanation:
The correct answer is D.
Reasoning:
The DHCP logs in the exhibit show varying MAC addresses for what should be a single managed mobile device. When a device associates under a different MAC address each time, MAC address randomization is active. The mobile administrator reviews these logs to ensure the proper mobile settings are applied; if MAC address randomization is enabled on managed devices where it should not be, it can interfere with device management, tracking, and policy enforcement that relies on stable hardware identifiers.
Cisco documentation, particularly in the context of wireless networks and device management, often discusses client devices using randomized MAC addresses for privacy reasons. While beneficial for personal privacy, this feature can pose challenges for enterprise device management systems that rely on stable MAC addresses for identification, authentication, and policy application. For instance, in a DHCP log, if a single mobile device associates multiple times with different randomized MAC addresses, it might appear as multiple new devices, complicating inventory management, access control, and DHCP lease tracking. Therefore, an administrator verifying settings would be concerned with whether this randomization is intended or needs to be controlled for managed devices.
Reasons for not choosing other answers:
- A. Service set identifier authentication: While SSID authentication is a critical mobile setting, DHCP logs themselves primarily show IP address assignments and MAC addresses, not the specifics of SSID authentication protocols or their success/failure. Issues with SSID authentication would typically be visible in authentication server logs or wireless network debugs, not directly in DHCP logs.
- B. Wireless network auto joining: Auto-joining refers to a device's ability to automatically connect to known wireless networks. DHCP logs don't directly verify auto-joining behavior; rather, they show the result of a device successfully joining a network and obtaining an IP address. The pattern of connections, not the auto-joining mechanism, might be inferred.
- C. 802.1X with mutual authentication: 802.1X is an authentication protocol that uses credentials (like certificates or usernames/passwords) for mutual authentication between a client and the network. DHCP logs do not contain information about the 802.1X authentication process itself. This information is found in RADIUS server logs or authentication debugs. DHCP occurs after successful 802.1X authentication.
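Randomized MACs are recognizable in logs because they set the locally administered bit (bit 1 of the first octet), which makes the second hex digit 2, 6, A, or E. A quick triage script over lease-log addresses might look like the sketch below; the sample MAC addresses are hypothetical, not taken from the exhibit.

```python
def is_randomized_mac(mac: str) -> bool:
    """A randomized (locally administered) MAC sets bit 1 of its first
    octet, so its second hex digit is always 2, 6, A, or E."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0b10)

# Hypothetical addresses as they might appear in a DHCP lease log:
for mac in ["3c:22:fb:10:20:30", "da:a1:19:44:55:66", "f6:9e:28:77:88:99"]:
    print(mac, "randomized" if is_randomized_mac(mac) else "burned-in")
```

An administrator seeing many "randomized" hits for devices that should present their burned-in address would know the MDM profile controlling this setting is not being applied.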
Citations:
- Cisco. (n.d.). Cisco Unified Wireless Network Guest Access Services. https://www.cisco.com/c/dam/en/us/td/docs/wireless/controller/7-6/configuration-guide/b_cg76/b_cg76_chapter_01000110.pdf
- Jamf. (n.d.). Mobile Device Configuration Profiles, Jamf Learning Hub. https://learn.jamf.com/en-US/bundle/jamf-pro-documentation-current/page/Mobile_Device_Configuration_Profiles.html
- Cisco. (n.d.). Mobility Express Internal DHCP. https://www.cisco.com/c/en/us/support/docs/wireless/mobility-express/210531-Mobility-Express-Internal-DHCP.html
- Apple. (n.d.). Mobile Device Management Settings for IT. https://support.apple.com/en-nz/guide/mdm/welcome/web
- Cisco. (n.d.). Enterprise Mobility 8.5 Design Guide. https://www.cisco.com/c/dam/en/us/td/docs/wireless/controller/technotes/8-5/cisco-enterprise-mobility-design-guide-8-5.pdf
Question 9
A security analyst is investigating a possible insider threat incident that involves the use of an unauthorized USB from a shared account to exfiltrate data. The event did not create an alert. The analyst has confirmed the USB hardware ID is not on the device allow list, but has not yet confirmed the owner of the USB device. Which of the following actions should the analyst take next?
- A. Classify the incident as a false positive.
- B. Classify the incident as a false negative.
- C. Classify the incident as a true positive.
- D. Classify the incident as a true negative.
Correct Answer:
B
Explanation:
The correct answer is B: classify the incident as a false negative.
Reasoning: A false negative occurs when a security system fails to detect a genuine threat or policy violation. In this scenario, the unauthorized USB drive, which was not on the device allow list, was used to exfiltrate data from a shared account. This activity constitutes a security breach, but it did not trigger an alert. Therefore, the security system incorrectly identified the event as benign, which is the definition of a false negative.
Reasons for not choosing the other options:
- A. Classifying the incident as a false positive would be incorrect because a false positive occurs when a security system incorrectly identifies a benign event as a threat. In this case, a real threat occurred (data exfiltration via unauthorized USB).
- C. Classifying the incident as a true positive would be incorrect because a true positive means the security system correctly identified a threat and generated an alert, which did not happen in this case.
- D. Classifying the incident as a true negative would be incorrect because a true negative means no threat occurred, and the system did not generate an alert, which is also not the situation described in the scenario.
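The four outcomes follow the standard detection matrix, which can be captured in a small helper (a sketch; the function name is illustrative):

```python
def classify_detection(threat_occurred: bool, alert_raised: bool) -> str:
    """'Positive'/'negative' reflects whether the system alerted;
    'true'/'false' reflects whether that verdict matched reality."""
    if alert_raised:
        return "true positive" if threat_occurred else "false positive"
    return "false negative" if threat_occurred else "true negative"

# The scenario in this question: a real exfiltration event, no alert.
print(classify_detection(threat_occurred=True, alert_raised=False))  # false negative
```

Walking the scenario through the matrix makes option B the only fit: a genuine threat with no alert.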
Citations:
- Understanding False Positives and False Negatives in Cybersecurity, https://www.varonis.com/blog/false-positive
- False Positive vs. False Negative: Key Differences, https://www.securitymetrics.com/blog/false-positive-vs-false-negative-key-differences
Question 10
Which of the following security features do email signatures provide?
- A. Non-repudiation
- B. Body encryption
- C. Code signing
- D. Sender authentication
- E. Chain of custody
Correct Answer:
A
Explanation:
The correct answer is A: non-repudiation.
Reasoning: Email signatures, particularly digital signatures, provide non-repudiation. This means that the sender cannot deny having sent the email. The digital signature is unique to the sender's private key and is used to verify the message's authenticity and integrity. If the signature is valid, it proves that the email originated from the claimed sender and that the content hasn't been altered during transmission.
Reasons for not choosing the other options:
- B. Body encryption: While email encryption (like S/MIME or PGP) encrypts the body of the email, a standard email signature does not inherently provide body encryption. Encryption requires separate configuration and isn't a default feature of signatures.
- C. Code signing: Code signing is used to verify the integrity and authenticity of software code, not emails. It ensures that the code hasn't been tampered with and comes from a trusted source.
- D. Sender authentication: While email signatures contribute to sender authentication by verifying the sender's identity, the primary purpose is to ensure non-repudiation. Sender authentication is a broader term that can encompass multiple methods, including SPF, DKIM, and DMARC, in addition to signatures.
- E. Chain of custody: Chain of custody is a process used to maintain and document the chronological history of evidence. Email signatures do not directly establish or maintain a chain of custody.
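To illustrate the mechanics behind non-repudiation, here is a textbook-RSA sketch using deliberately tiny toy parameters (nowhere near secure, standard library only): only the holder of the private exponent can produce a signature that the public key verifies, which is what makes it implausible for the sender to later deny the message.

```python
import hashlib

# Toy textbook-RSA parameters -- illustration only, far too small to be secure.
p, q = 61, 53
n = p * q        # modulus 3233 (public)
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % lcm(p-1, q-1) == 1

def sign(message: bytes) -> int:
    """Only the holder of the private exponent d can compute this."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"Wire the quarterly payment today."
sig = sign(msg)
print(verify(msg, sig))            # True: signature binds the sender to the message
print(verify(msg, (sig + 1) % n))  # False: a forged signature fails verification
```

Real email signing (S/MIME, PGP) uses full-size keys, proper padding, and certificates binding the key to an identity, but the asymmetry shown here is the property that yields non-repudiation.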
Citations:
- Digital Signature, https://www.entrust.com/resources/glossary/digital-signature
- Non-Repudiation, https://www.techtarget.com/searchsecurity/definition/nonrepudiation