The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client and the server exchange hello messages to negotiate protocol parameters; the server presents its digital certificate, which contains its public key; the client verifies the certificate and uses the public key to encrypt a pre-master secret that only the server's private key can decrypt; and both parties derive a shared session key from that secret and protect the rest of the session with symmetric encryption.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. The Diffie-Hellman algorithm is a method for generating a shared secret key between two parties; it uses public and private parameters rather than public and private encryption keys. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses a single secret key for both encryption and decryption. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input using a one-way mathematical function, and it uses no keys at all.
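To make this concrete, here is a minimal Python sketch, using only the standard library, that opens a TLS connection (TLS being the modern successor to SSL) and inspects the certificate that carries the server's public key. The hostname is illustrative; the ssl module performs the certificate validation and key exchange described above.

```python
import socket
import ssl

hostname = "example.com"  # illustrative target
context = ssl.create_default_context()  # loads trusted CA roots, enables validation

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())       # negotiated protocol, e.g. 'TLSv1.3'
        cert = tls.getpeercert()   # certificate presented by the server
        print(cert["subject"])     # identity bound to the server's public key
```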
How does security in a distributed file system using mutual authentication differ from file security in a multi-user host?
Access control can rely on the Operating System (OS), but eavesdropping is
Access control cannot rely on the Operating System (OS), and eavesdropping
Access control can rely on the Operating System (OS), and eavesdropping is
Access control cannot rely on the Operating System (OS), and eavesdropping
Security in a distributed file system using mutual authentication differs from file security in a multi-user host in that access control cannot rely on the Operating System (OS), and eavesdropping is possible. A distributed file system is a system that allows users to access files stored on remote servers over a network. Mutual authentication is a process where both the client and the server verify each other’s identity before establishing a connection. In a distributed file system, access control cannot rely on the OS, because the OS may not have the same security policies or mechanisms as the remote server. Therefore, access control must be implemented at the application layer, using protocols such as Kerberos or SSL/TLS. Eavesdropping is also possible in a distributed file system, because the network traffic may be intercepted or modified by malicious parties. Therefore, encryption and integrity checks must be used to protect the data in transit. A multi-user host is a system that allows multiple users to access files stored on a local server. In a multi-user host, access control can rely on the OS, because the OS can enforce security policies and mechanisms such as permissions, groups, and roles. Eavesdropping is less likely in a multi-user host, because the network traffic is confined to the local server. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Architecture and Engineering, p. 373-374; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Architecture and Engineering, p. 149-150.
What is the expected outcome of security awareness in support of a security awareness program?
Awareness activities should be used to focus on security concerns and respond to those concerns accordingly
Awareness is not an activity or part of the training but rather a state of persistence to support the program
Awareness is training. The purpose of awareness presentations is to broaden attention of security.
Awareness is not training. The purpose of awareness presentation is simply to focus attention on security.
The expected outcome is that awareness is not training; the purpose of an awareness presentation is simply to focus attention on security. A security awareness program is a set of activities and initiatives that raise awareness and understanding of security policies, standards, procedures, and guidelines among an organization's employees, contractors, partners, and customers. Such a program can improve knowledge and skills, change attitudes and behaviors, and empower people to make informed, secure decisions, using methods such as posters, newsletters, emails, videos, quizzes, games, or rewards. Awareness differs from training in that it does not teach people how to perform specific security tasks or functions; it informs and reminds them of the security policies and of their roles and responsibilities in complying with and supporting them. An awareness presentation does not provide detailed or comprehensive guidance; it highlights the key security messages and motivates people to pay attention to security. The other statements are not the expected outcome. "Awareness activities should be used to focus on security concerns and respond to those concerns accordingly" describes one possible objective of awareness activities, but does not distinguish awareness from training or state the purpose of an awareness presentation. "Awareness is not an activity or part of the training but rather a state of persistence to support the program" only partially defines awareness and likewise fails to distinguish it from training. "Awareness is training" contradicts the definition of security awareness by confusing awareness with training.
Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
Which of the following is the MOST efficient mechanism to account for all staff during a speedy nonemergency evacuation from a large security facility?
Large mantrap where groups of individuals leaving are identified using facial recognition technology
Radio Frequency Identification (RFID) sensors worn by each employee scanned by sensors at each exit door
Emergency exits with push bars with coordinators at each exit checking off the individual against a predefined list
Card-activated turnstile where individuals are validated upon exit
Section: Security Operations
Which of the following is the MOST effective method to mitigate Cross-Site Scripting (XSS) attacks?
Use Software as a Service (SaaS)
Whitelist input validation
Require client certificates
Validate data output
The most effective method to mitigate Cross-Site Scripting (XSS) attacks is whitelist input validation. XSS attacks occur when an attacker injects malicious code, usually a script, into a web application that is then executed by an unsuspecting user's browser, compromising the confidentiality, integrity, and availability of the application and the user's data. Whitelist input validation checks user input against a predefined set of acceptable values or characters and rejects anything that does not match, filtering out malicious or unexpected input that may contain harmful scripts. It should be applied at the point of entry of the user input and combined with output encoding or sanitization so that any input displayed back to the user is rendered harmless. The other options are related or useful techniques but are not the most effective mitigation. Software as a Service (SaaS) delivers applications over the Internet on a subscription or pay-per-use basis; it can reduce the attack surface and outsource maintenance and patching, but the provider's applications may still contain XSS flaws. Requiring client certificates authenticates clients with CA-issued digital certificates and supports confidentiality, integrity, and mutual authentication, but it does not stop XSS if the application fails to validate and encode input. Validating data output checks the data sent to the browser for correctness and consistency, but by itself it is insufficient, as the output may still contain executable scripts; it must be complemented with output encoding or sanitization.
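As an illustration of the technique, the following minimal Python sketch pairs whitelist input validation with output encoding. The username pattern is an assumed example of a business-specific whitelist, not a universal rule.

```python
import html
import re

# Assumed business rule: usernames are 1-32 characters of letters, digits,
# underscore, dot, or hyphen. A real whitelist must match the field's
# actual legitimate format.
USERNAME_WHITELIST = re.compile(r"[A-Za-z0-9_.-]{1,32}")

def accept_username(value: str) -> str:
    """Reject anything that does not match the whitelist exactly."""
    if not USERNAME_WHITELIST.fullmatch(value):
        raise ValueError("input rejected: not in whitelist")
    return value

def render_greeting(value: str) -> str:
    """Encode output so any residual markup is displayed, not executed."""
    return "<p>Hello, " + html.escape(value) + "</p>"

print(render_greeting(accept_username("alice_01")))   # safe
# accept_username("<script>alert(1)</script>")        # raises ValueError
```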
Match the functional roles in an external audit to their responsibilities.
Drag each role on the left to its corresponding responsibility on the right.
Select and Place:
The correct matching of the functional roles and their responsibilities in an external audit is:
Comprehensive Explanation: An external audit is an independent and objective examination of an organization’s financial statements, systems, processes, or performance by an external party. The functional roles and their responsibilities in an external audit are:
References: CISSP All-in-One Exam Guide
Which factors MUST be considered when classifying information and supporting assets for risk management, legal discovery, and compliance?
System owner roles and responsibilities, data handling standards, storage and secure development lifecycle requirements
Data stewardship roles, data handling and storage standards, data lifecycle requirements
Compliance office roles and responsibilities, classified material handling standards, storage system lifecycle requirements
System authorization roles and responsibilities, cloud computing standards, lifecycle requirements
The factors that must be considered when classifying information and supporting assets for risk management, legal discovery, and compliance are data stewardship roles, data handling and storage standards, and data lifecycle requirements. Data stewardship roles are the roles and responsibilities of those accountable for the creation, maintenance, protection, and disposal of the information and supporting assets; they include data owners, data custodians, data users, and data stewards. Data handling and storage standards are the policies, procedures, and guidelines that define how the information and supporting assets should be handled and stored based on their classification level, sensitivity, and value; they cover data labeling, encryption, backup, retention, and disposal. Data lifecycle requirements specify the stages and processes the information and supporting assets go through from creation to destruction, including collection, processing, analysis, sharing, archiving, and deletion.
The other options mix in related but narrower or misdirected concepts. System owner roles and responsibilities concern the operation, performance, and security of the system that hosts or processes the data (authorization, configuration, monitoring, maintenance), not the classification of the data itself; data handling standards alone omit storage; and storage and secure development lifecycle requirements address the storage and development systems rather than the data. Compliance office roles and responsibilities concern organizational compliance with laws, regulations, standards, and policies; classified material handling standards are only a government or military subset of data handling and storage standards; and storage system lifecycle requirements are likewise a subset of storage and secure development lifecycle requirements. System authorization roles and responsibilities concern granting or denying access to the hosting system (identification, authentication, authorization, auditing); cloud computing standards define requirements and best practices for delivering cloud services (IaaS, PaaS, SaaS), such as SLAs, interoperability, portability, and security; and "lifecycle requirements" alone is merely an unqualified restatement of data lifecycle requirements.
Which of the following would MINIMIZE the ability of an attacker to exploit a buffer overflow?
Memory review
Code review
Message division
Buffer division
Code review is the technique that would minimize the ability of an attacker to exploit a buffer overflow. A buffer overflow occurs when a program writes more data to a buffer than it can hold, overwriting adjacent memory locations such as the return address or the stack pointer. An attacker can exploit this by injecting malicious code or data into the buffer and altering the program's execution flow to run it. Code review minimizes this risk by examining the source code to identify and fix errors, flaws, or weaknesses that may lead to buffer overflow vulnerabilities. It can detect and prevent the use of unsafe functions that perform no boundary checking, such as gets, strcpy, or sprintf, and replace them with safer alternatives such as fgets, strncpy, or snprintf, which limit the amount of data written to the buffer. Code review also enforces and verifies secure coding practices such as input validation, output encoding, error handling, and memory management, which reduce the likelihood or impact of buffer overflow vulnerabilities. The other options are related concepts but do not minimize exploitation. Memory review analyzes a program's memory layout or content (stack, heap, registers) to understand or debug its behavior; it may help investigate a buffer overflow after the fact, but it does not prevent one. Message division splits a message into smaller or fixed-size segments, as in cryptography or networking, and buffer division splits a buffer into smaller, separate buffers, as in buffering or caching; both may improve efficiency or memory usage, but neither prevents or mitigates buffer overflows.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. It can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, security assessment, security authorization, configuration management and control, and security monitoring.
The configuration management and control task is incorporated in the system acquisition and development phase of the SDLC because it ensures that the system design and development are consistent and compliant with the security objectives and requirements, and that system changes are controlled and documented. Configuration management and control establishes and maintains the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracks and records any modifications or updates to them, using techniques and tools such as version control, change control, or configuration audits. It provides several benefits: a consistent, documented baseline of the system; assurance that changes are evaluated, approved, and traceable; support for security audits and impact analysis; and the ability to roll back to a known good configuration.
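One small piece of configuration management and control can be sketched in Python: recording a hash baseline of controlled configuration items and flagging any unrecorded change during a configuration audit. The file paths and baseline file name are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path

# Controlled items and baseline file name are illustrative assumptions.
CONTROLLED_ITEMS = [Path("app/settings.conf"), Path("app/routes.conf")]

def snapshot() -> dict:
    """Hash every controlled item that currently exists."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in CONTROLLED_ITEMS if p.exists()}

# Establish the approved baseline once a change is authorized.
baseline = snapshot()
Path("baseline.json").write_text(json.dumps(baseline, indent=2))

# ... later, during a configuration audit ...
current = snapshot()
for item, digest in baseline.items():
    if current.get(item) != digest:
        print(f"uncontrolled change detected: {item}")
```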
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
Refer to the information below to answer the question.
During the investigation of a security incident, it is determined that an unauthorized individual accessed a system which hosts a database containing financial information.
If it is discovered that large quantities of information have been copied by the unauthorized individual, what attribute of the data has been compromised?
Availability
Integrity
Accountability
Confidentiality
The attribute of the data that has been compromised, if it is discovered that large quantities of information have been copied by the unauthorized individual, is the confidentiality. The confidentiality is the property or the characteristic of the data that ensures that the data is only accessible or disclosed to the authorized individuals or entities, and that the data is protected from the unauthorized or the malicious access or disclosure. The confidentiality of the data can be compromised when the data is copied, stolen, leaked, or exposed by an unauthorized individual or a malicious actor, such as the one who accessed the system hosting the database. The compromise of the confidentiality of the data can violate the privacy, the rights, or the interests of the data owners, subjects, or users, and can cause damage or harm to the organization’s operations, reputation, or objectives. Availability, integrity, and accountability are not the attributes of the data that have been compromised, if it is discovered that large quantities of information have been copied by the unauthorized individual, as they are related to the accessibility, the accuracy, or the responsibility of the data, not the secrecy or the protection of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 263. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 279.
A minimal implementation of endpoint security includes which of the following?
Trusted platforms
Host-based firewalls
Token-based authentication
Wireless Access Points (AP)
A minimal implementation of endpoint security includes host-based firewalls. Endpoint security is the practice of protecting the devices that connect to a network, such as laptops, smartphones, tablets, or servers, from malicious attacks or unauthorized access; it can involve antivirus, encryption, authentication, patch management, and device control. Host-based firewalls are a basic and essential component because they provide network-level protection on the individual device: software that monitors and filters incoming and outgoing traffic according to a set of rules or policies. They can block or allow packets to prevent or mitigate attacks such as denial of service, port scanning, or unauthorized connections, and they improve the visibility and auditability of network activity, enforce consistent firewall policy, and reduce reliance on network-based firewalls. The other options are related or useful technologies but are not part of a minimal endpoint security implementation. Trusted platforms (for example, trusted platform modules (TPM), secure boot, or trusted execution technology (TXT)) provide a secure, trustworthy execution environment, protect the confidentiality and integrity of data and code, and enable remote attestation, but they are not universally available or supported on all devices and are not a minimal requirement. Token-based authentication (smart cards, one-time password generators, mobile apps) strengthens the authentication process and enables multi-factor authentication (MFA), but it protects user access credentials rather than the device itself and requires additional infrastructure to implement and manage. Wireless access points (AP) extend network coverage, support encryption and authentication mechanisms, and enable segmentation of the wireless network, but they are installed on the network infrastructure rather than on the individual devices, and they can introduce risks such as signal interception, rogue access points, or unauthorized connections.
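The core behavior of a host-based firewall, first-match evaluation of traffic against an ordered rule set ending in a default deny, can be sketched in a few lines of Python. The rules, protocols, and ports shown are illustrative assumptions; real host firewalls enforce this logic in the operating system kernel.

```python
# Illustrative rule set: first matching rule wins, last rule is default deny.
RULES = [
    {"action": "allow", "proto": "tcp", "port": 443},   # HTTPS
    {"action": "allow", "proto": "tcp", "port": 22},    # SSH management
    {"action": "deny",  "proto": "any", "port": None},  # default deny
]

def filter_packet(proto: str, port: int) -> str:
    """Return the action of the first rule that matches the packet."""
    for rule in RULES:
        proto_match = rule["proto"] in ("any", proto)
        port_match = rule["port"] in (None, port)
        if proto_match and port_match:
            return rule["action"]
    return "deny"

print(filter_packet("tcp", 443))   # allow
print(filter_packet("udp", 53))    # deny (falls through to the default rule)
```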
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, and macros, and it can pose security risks such as malicious code, unauthorized access, or data leakage. Mobile code security models are the techniques used to protect systems and users from the threats of mobile code. Code signing relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider computes a hash of the code, signs the hash with its private key, and distributes the code together with the signature and its digital certificate; the code consumer verifies the certificate, recomputes the hash, and checks the signature with the provider's public key before deciding whether to trust and execute the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Which of the following MUST be scalable to address security concerns raised by the integration of third-party identity services?
Mandatory Access Controls (MAC)
Enterprise security architecture
Enterprise security procedures
Role Based Access Controls (RBAC)
Enterprise security architecture is the framework that defines the security policies, standards, guidelines, and controls that govern the security of an organization’s information systems and assets. Enterprise security architecture must be scalable to address the security concerns raised by the integration of third-party identity services, such as Identity as a Service (IDaaS) or federated identity management. Scalability means that the enterprise security architecture can accommodate the increased complexity, diversity, and volume of identity and access management transactions and interactions that result from the integration of external identity providers and consumers. Scalability also means that the enterprise security architecture can adapt to the changing security requirements and threats that may arise from the integration of third-party identity services.
Which of the following is MOST appropriate to collect evidence of a zero-day attack?
Firewall
Honeypot
Antispam
Antivirus
A honeypot is a decoy system that is designed to attract and trap attackers. A honeypot can be used to collect evidence of a zero-day attack, which is an attack that exploits a previously unknown vulnerability. A honeypot can capture the attacker’s actions, tools, and techniques, and provide valuable information for analysis and mitigation. A honeypot can also divert the attacker’s attention from the real targets and waste their time and resources. A firewall, an antispam, and an antivirus are not effective in detecting or preventing zero-day attacks, as they rely on known signatures or rules that may not match the new attack. References: CISSP Official Study Guide, 9th Edition, page 1010; CISSP All-in-One Exam Guide, 8th Edition, page 1089
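A low-interaction honeypot can be as simple as a listener on an otherwise unused port that records every connection attempt and the first bytes sent, as in this minimal Python sketch. The port number is an arbitrary illustrative choice; a production honeypot would add far more instrumentation and isolation.

```python
import socket
from datetime import datetime, timezone

# Minimal low-interaction honeypot sketch: listen on an otherwise unused
# port (2222 is an arbitrary illustrative choice) and record every
# connection attempt with its source address and first bytes sent.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 2222))
    server.listen()
    while True:  # runs until interrupted
        conn, addr = server.accept()
        with conn:
            conn.settimeout(5.0)
            try:
                probe = conn.recv(1024)  # capture the attacker's first payload
            except socket.timeout:
                probe = b""
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} connection from {addr[0]}:{addr[1]} sent {probe!r}")
```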
Which of the following assures that rules are followed in an identity management architecture?
Policy database
Digital signature
Policy decision point
Policy enforcement point
The component that assures that rules are followed in an identity management architecture is the policy enforcement point. A policy enforcement point is a device or software that implements and enforces the security policies and rules defined by the policy decision point. A policy decision point is a device or software that evaluates and makes decisions about the access requests and privileges of the users or devices based on the security policies and rules. A policy enforcement point can be a firewall, a router, a switch, a proxy, or an application that controls the access to the network or system resources. A policy database, a digital signature, and a policy decision point are not the components that assure that rules are followed in an identity management architecture, as they are related to the storage, verification, or definition of the security policies and rules, not the implementation or enforcement of them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 664. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 680.
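The division of labor between these components can be sketched in Python: a policy decision point that evaluates rules held in a policy database, and a policy enforcement point that intercepts each request and applies whatever the decision point returns. The policy table and names are illustrative assumptions.

```python
# Policy table, subjects, and resources are illustrative assumptions.
POLICY_DB = {("alice", "payroll"): "permit", ("bob", "payroll"): "deny"}

def policy_decision_point(user: str, resource: str) -> str:
    """Evaluate the stored policy and return a decision (default deny)."""
    return POLICY_DB.get((user, resource), "deny")

def policy_enforcement_point(user: str, resource: str) -> str:
    """Intercept the access request and apply whatever the PDP decides."""
    if policy_decision_point(user, resource) != "permit":
        raise PermissionError(f"{user} denied access to {resource}")
    return f"{user} granted access to {resource}"

print(policy_enforcement_point("alice", "payroll"))  # granted
# policy_enforcement_point("bob", "payroll")         # raises PermissionError
```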
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting cipher text with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on this process. It works as follows: the sender computes a hash of the message and encrypts (signs) it with the sender’s private key; the receiver decrypts the signature with the sender’s public key and compares the result to a hash computed over the received message; if the two match, the message must have originated with the holder of the private key.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
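In practice this "encrypt with the private key" operation is implemented as a digital signature over a hash of the message. The following sketch, which assumes the third-party Python cryptography package is installed, signs with a private key and verifies with the matching public key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate an RSA key pair; in practice the private key stays with the sender.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire transfer #1421 approved"

# "Encrypting with the private key": sign a hash of the message.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# "Decrypting with the public key": verify() raises InvalidSignature if the
# message or signature was altered, so a clean return identifies the sender.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature valid: sender identified")
```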
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves the following steps: the end entity registers its identity with the RA; a public/private key pair is generated; and a certificate signing request, typically in PKCS #10 format, binding the identity to the public key is created for submission to the certification authority (CA).
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
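The key-pair and certificate-request generation of the initialization phase can be sketched with the third-party Python cryptography package. The subject name is an illustrative assumption; a real deployment would submit the resulting PEM to its RA or CA.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 1: the end entity generates its public/private key pair.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: build a PKCS #10 certificate signing request binding an identity
# (the subject name below is illustrative) to the public key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "app01.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())  # signing the CSR proves possession of the private key
)

# Step 3: the PEM-encoded request is what gets submitted to the RA/CA.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```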
What is the BEST location in a network to place Virtual Private Network (VPN) devices when an internal review reveals network design flaws in remote access?
In a dedicated Demilitarized Zone (DMZ)
In its own separate Virtual Local Area Network (VLAN)
At the Internet Service Provider (ISP)
Outside the external firewall
The best location in a network to place Virtual Private Network (VPN) devices when an internal review reveals network design flaws in remote access is in a dedicated Demilitarized Zone (DMZ). A DMZ is a network segment that is located between the internal network and the external network, such as the internet. A DMZ is used to host the services or devices that need to be accessed by both the internal and external users, such as web servers, email servers, or VPN devices. A VPN device is a device that enables the establishment of a VPN, which is a secure and encrypted connection between two networks or endpoints over a public network, such as the internet. Placing the VPN devices in a dedicated DMZ can help to improve the security and performance of the remote access, as well as to isolate the VPN devices from the internal network and the external network. Placing the VPN devices in its own separate VLAN, at the ISP, or outside the external firewall are not the best locations, as they may expose the VPN devices to more risks, reduce the control over the VPN devices, or create a single point of failure for the remote access. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 509.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
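A compress-then-encrypt pipeline is straightforward to sketch in Python. This example assumes the third-party cryptography package and uses its Fernet construction (AES-128-CBC with an HMAC) purely as a convenient authenticated symmetric cipher.

```python
import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"ATTACK AT DAWN. " * 64        # deliberately redundant sample
compressed = zlib.compress(plaintext)        # redundancy removed before encryption
ciphertext = cipher.encrypt(compressed)

recovered = zlib.decompress(cipher.decrypt(ciphertext))
assert recovered == plaintext
print(f"{len(plaintext)} plaintext -> {len(compressed)} compressed -> "
      f"{len(ciphertext)} encrypted bytes")
```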
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, dependencies on unsupported frameworks or components, and the absence of vendor support and security updates.
Migrating to newer, supported applications where possible is the best approach because it can provide several benefits, such as eliminating rather than merely mitigating the underlying weaknesses, restoring vendor support and regular security patches, gaining modern security features and standards compliance, and reducing the long-term maintenance burden and attack surface.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which security access policy contains fixed security attributes that are used by the system to determine a user’s access to a file or object?
Mandatory Access Control (MAC)
Access Control List (ACL)
Discretionary Access Control (DAC)
Authorized user control
The security access policy that contains fixed security attributes used by the system to determine a user’s access to a file or object is Mandatory Access Control (MAC). MAC is an access control model that assigns permissions to users and objects based on their security labels, which indicate their level of sensitivity or trustworthiness. MAC is enforced by the system or the network rather than by the owner or creator of the object, and it cannot be modified or overridden by users. MAC can enhance the confidentiality and integrity of data, prevent unauthorized access or disclosure, and support audit and compliance activities. It is commonly used in military or government environments, where data is classified by sensitivity (for example, top secret, secret, confidential, or unclassified) and users are granted security clearance based on their trustworthiness, role, and need to know. Access decisions follow the "no read up, no write down" rules: a user may only read data at or below their clearance level and may only write data at or above it. MAC applies fixed security attributes as follows: administrators assign sensitivity labels to objects and clearance labels to subjects; the system compares the two labels on every access attempt and grants or denies access accordingly; and neither users nor data owners can alter these attributes.
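The "no read up, no write down" comparison of fixed labels can be sketched in a few lines of Python. The label hierarchy is the classic government classification scheme described above; the clearance values in the example calls are illustrative.

```python
# Classic government classification lattice, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(clearance: str, label: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[clearance] >= LEVELS[label]

def can_write(clearance: str, label: str) -> bool:
    """Star property: no write down."""
    return LEVELS[clearance] <= LEVELS[label]

print(can_read("secret", "confidential"))   # True  (reading down is allowed)
print(can_read("secret", "top secret"))     # False (no read up)
print(can_write("secret", "top secret"))    # True  (writing up is allowed)
print(can_write("secret", "confidential"))  # False (no write down)
```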
Which of the following System and Organization Controls (SOC) report types should an organization request if they require a period of time report covering security and availability for a particular system?
SOC 1 Type 1
SOC 1 Type 2
SOC 2 Type 1
SOC 2 Type 2
An organization should request a SOC 2 Type 2 report if they require a period of time report covering security and availability for a particular system. A SOC 2 report is a type of System and Organization Controls (SOC) report that evaluates the controls of a service organization based on the Trust Services Criteria (TSC), which are security, availability, processing integrity, confidentiality, and privacy. A SOC 2 report can be either Type 1 or Type 2. A SOC 2 Type 1 report describes the design and implementation of the controls at a point in time, while a SOC 2 Type 2 report tests the operating effectiveness of the controls over a period of time, usually six or twelve months. A SOC 2 Type 2 report provides more assurance and credibility than a SOC 2 Type 1 report, as it demonstrates how well the controls performed over time. A SOC 2 report can be customized to include only the relevant TSC for a particular system or service. If an organization requires a report covering security and availability, they should request a SOC 2 report that includes only those two TSC.
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
The third party needs to have
processes that are identical to that of the organization doing the outsourcing.
access to the original personnel that were on staff at the organization.
the ability to maintain all of the applications in languages they are familiar with.
access to the skill sets consistent with the programming languages used by the organization.
The third party needs to have access to skill sets consistent with the programming languages used by the organization. Programming languages are the tools and methods used to create, modify, test, and support the software applications that perform the functions the organization requires. Languages vary in their syntax, semantics, features, and paradigms, and they require different levels of expertise and experience to use effectively. Access to skill sets consistent with the organization's programming languages ensures the quality, compatibility, and maintainability of the applications for which the third party is responsible. The third party does not need processes identical to those of the outsourcing organization, access to the original personnel on staff at the organization, or the ability to maintain all of the applications in languages the third party is familiar with; those concern the methods, resources, or preferences of software development rather than the skills the applications actually require. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1000. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1016.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments, and it provides several benefits, such as visibility into the security posture of the system, evidence for audits and compliance reporting, and early warning when controls begin to degrade.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as hardware consolidation, strong isolation between workloads, and rapid provisioning, snapshotting, and recovery.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
The BEST way to check for good security programming practices, as well as auditing for possible backdoors, is to conduct
log auditing.
code reviews.
impact assessments.
static analysis.
Code reviews are the best way to check for good security programming practices, as well as auditing for possible backdoors, in a software system. Code reviews involve examining the source code of the software for any errors, vulnerabilities, or malicious code that could compromise the security or functionality of the system. Code reviews can be performed manually by human reviewers, or automatically by tools that scan and analyze the code. The other options are not as effective as code reviews, as they either do not examine the source code directly (A and C), or only detect syntactic or semantic errors, not logical or security flaws (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 463; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 555.
Which security action should be taken FIRST when computer personnel are terminated from their jobs?
Remove their computer access
Require them to turn in their badge
Conduct an exit interview
Reduce their physical access level to the facility
The first security action that should be taken when computer personnel are terminated from their jobs is to remove their computer access. Computer access is the ability to log in, use, or modify the computer systems, networks, or data of the organization. Removing computer access can prevent the terminated personnel from accessing or harming the organization’s information assets, or from stealing or leaking sensitive or confidential data. Removing computer access can also reduce the risk of insider threats, such as sabotage, fraud, or espionage. Requiring them to turn in their badge, conducting an exit interview, and reducing their physical access level to the facility are also important security actions that should be taken when computer personnel are terminated from their jobs, but they are not as urgent or critical as removing their computer access. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 249.
An organization allows ping traffic into and out of their network. An attacker has installed a program on the network that uses the payload portion of the ping packet to move data into and out of the network. What type of attack has the organization experienced?
Data leakage
Unfiltered channel
Data emanation
Covert channel
The organization has experienced a covert channel attack, which is a technique of hiding or transferring data within a communication channel that is not intended for that purpose. In this case, the attacker has used the payload portion of the ping packet, which is normally used to carry diagnostic data, to move data into and out of the network. This way, the attacker can bypass the network security controls and avoid detection. Data leakage (A) is a general term for the unauthorized disclosure of sensitive or confidential data, which may or may not involve a covert channel. Unfiltered channel (B) is a term for a communication channel that does not have any security mechanisms or filters applied to it, which may allow unauthorized or malicious traffic to pass through. Data emanation (C) is a term for the unintentional radiation or emission of electromagnetic signals from electronic devices, which may reveal sensitive or confidential information to eavesdroppers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 179; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 189.
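As a rough illustration of how such a channel might be spotted, the sketch below flags ICMP echo payloads that deviate from the repetitive filler normal ping tools send. This is a minimal heuristic only, assuming the third-party scapy package; the capture file name is hypothetical.

```python
# Heuristic sketch: flag ICMP packets whose payloads look like smuggled data
# rather than standard ping filler. Assumes scapy; "capture.pcap" is hypothetical.
from scapy.all import rdpcap, ICMP, Raw

STANDARD_PAYLOAD_SIZE = 56  # default data size for many ping implementations

packets = rdpcap("capture.pcap")
for pkt in packets:
    if pkt.haslayer(ICMP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # Unusual size or high byte diversity suggests arbitrary data, not
        # the repetitive pattern typical ping tools generate.
        if len(payload) != STANDARD_PAYLOAD_SIZE or len(set(payload)) > 32:
            print(f"Suspicious ICMP payload ({len(payload)} bytes): {payload[:16]!r}")
```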
The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks provide
data integrity.
defense in depth.
data availability.
non-repudiation.
Defense in depth is a security strategy that involves applying multiple layers of protection to a system or network to prevent or mitigate attacks. The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks are examples of defense in depth measures that can enhance the security of the system or network.
A, C, and D are incorrect because they are not the best terms to describe the security strategy. Data integrity is a property of data that ensures its accuracy, consistency, and validity. Data availability is a property of data that ensures its accessibility and usability. Non-repudiation is a property of data that ensures its authenticity and accountability. While these properties are important for security, they are not the same as defense in depth.
When building a data center, site location and construction factors that increase the level of vulnerability to physical threats include
hardened building construction with consideration of seismic factors.
adequate distance from and lack of access to adjacent buildings.
curved roads approaching the data center.
proximity to high crime areas of the city.
When building a data center, site location and construction factors that increase the level of vulnerability to physical threats include proximity to high crime areas of the city. This factor increases the risk of theft, vandalism, sabotage, or other malicious acts that could damage or disrupt the data center operations. The other options are factors that decrease the level of vulnerability to physical threats, as they provide protection or deterrence against natural or human-made hazards. Hardened building construction with consideration of seismic factors (A) reduces the impact of earthquakes or other natural disasters. Adequate distance from and lack of access to adjacent buildings (B) prevents unauthorized entry or fire spread from neighboring structures. Curved roads approaching the data center (C) slow down the speed of vehicles and make it harder for attackers to ram or bomb the data center. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 637; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 699.
Which of the following is considered best practice for preventing e-mail spoofing?
Spam filtering
Cryptographic signature
Uniform Resource Locator (URL) filtering
Reverse Domain Name Service (DNS) lookup
The best practice for preventing e-mail spoofing is to use cryptographic signatures. E-mail spoofing is a technique that involves forging the sender’s address or identity in an e-mail message, usually to trick the recipient into opening a malicious attachment, clicking on a phishing link, or disclosing sensitive information. Cryptographic signatures are digital signatures that are created by encrypting the e-mail message or a part of it with the sender’s private key, and attaching it to the e-mail message. Cryptographic signatures can be used to verify the authenticity and integrity of the sender and the message, and to prevent e-mail spoofing. References: What is Email Spoofing?; How to Prevent Email Spoofing.
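The sign-and-verify idea behind e-mail signing standards such as S/MIME and DKIM can be sketched as follows. This is a minimal illustration assuming the third-party cryptography package; real mail systems sign canonicalized headers and body content rather than a raw byte string.

```python
# Minimal sign/verify sketch of the mechanism behind signed e-mail.
# Assumes the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"From: alice@example.com\r\nSubject: Quarterly report\r\n\r\nBody..."
signature = private_key.sign(message)          # sender signs with the private key

try:
    public_key.verify(signature, message)      # recipient verifies with the public key
    print("Signature valid: sender and content are authentic")
except InvalidSignature:
    print("Signature invalid: message was forged or altered")
```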
Which of the following statements is TRUE of black box testing?
Only the functional specifications are known to the test planner.
Only the source code and the design documents are known to the test planner.
Only the source code and functional specifications are known to the test planner.
Only the design documents and the functional specifications are known to the test planner.
Black box testing is a method of software testing that does not require any knowledge of the internal structure or code of the software. The test planner only knows the functional specifications, which describe what the software is supposed to do, and tests the software based on the expected inputs and outputs. Black box testing is useful for finding errors in the functionality, usability, or performance of the software, but it cannot detect errors in the code or design. White box testing, on the other hand, requires the test planner to have access to the source code and the design documents, and tests the software based on the internal logic and structure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21, page 1313; CISSP For Dummies, 7th Edition, Chapter 8, page 215.
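A minimal sketch of the black box approach: the test cases below are derived only from a stated specification ("return the largest of three integers"), with no reference to the implementation's internals. The function name and cases are hypothetical examples.

```python
# Black box test sketch: cases come from the functional spec alone.
def largest_of_three(a, b, c):
    return max(a, b, c)  # the implementation is opaque to the test planner

# Spec-derived cases: typical values, ties, negatives -- no knowledge of code paths.
test_cases = [((1, 2, 3), 3), ((5, 5, 2), 5), ((-1, -7, -3), -1), ((0, 0, 0), 0)]
for args, expected in test_cases:
    assert largest_of_three(*args) == expected, f"failed for {args}"
print("All black box cases passed")
```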
The stringency of an Information Technology (IT) security assessment will be determined by the
system's past security record.
size of the system's database.
sensitivity of the system's data.
age of the system.
The stringency of an Information Technology (IT) security assessment will be determined by the sensitivity of the system’s data, as this reflects the level of risk and impact that a security breach could have on the organization and its stakeholders. The more sensitive the data, the more stringent the security assessment should be, as it should cover more aspects of the system, use more rigorous methods and tools, and provide more detailed and accurate results and recommendations. The system’s past security record, size of the system’s database, and age of the system are not the main factors that determine the stringency of the security assessment, as they do not directly relate to the value and importance of the data that the system processes, stores, or transmits. References: Common Criteria for Information Technology Security Evaluation; Information technology security assessment - Wikipedia.
The birthday attack is MOST effective against which one of the following cipher technologies?
Chaining block encryption
Asymmetric cryptography
Cryptographic hash
Streaming cryptography
The birthday attack is most effective against cryptographic hash, which is one of the cipher technologies. A cryptographic hash is a function that takes an input of any size and produces an output of a fixed size, called a hash or a digest, that represents the input. A cryptographic hash has several properties, such as being one-way, collision-resistant, and deterministic. A birthday attack is a type of brute-force attack that exploits the mathematical phenomenon known as the birthday paradox, which states that in a set of randomly chosen elements, there is a high probability that some pair of elements will have the same value. A birthday attack can be used to find collisions in a cryptographic hash, which means finding two different inputs that produce the same hash. Finding collisions can compromise the integrity or the security of the hash, as it can allow an attacker to forge or modify the input without changing the hash. Chaining block encryption, asymmetric cryptography, and streaming cryptography are not as vulnerable to the birthday attack, as they are different types of encryption algorithms that use keys and ciphers to transform the input into an output. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 3, page 133; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 143.
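The birthday bound can be made concrete with a short calculation: for an m-bit hash, a collision becomes likely after roughly 2^(m/2) attempts rather than 2^m. The sketch below uses the standard approximation for a toy 64-bit hash space.

```python
# Birthday bound sketch: probability of at least one collision after k draws
# from a space of size n, using the usual approximation for k << n.
import math

def collision_probability(k: int, n: float) -> float:
    # P(collision) ~= 1 - exp(-k*(k-1) / (2*n))
    return 1.0 - math.exp(-k * (k - 1) / (2.0 * n))

n = 2.0 ** 64                           # toy 64-bit hash space
print(collision_probability(2**32, n))  # ~0.39 at the square-root bound
print(collision_probability(2**34, n))  # ~1.0 shortly beyond it
```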
Alternate encoding such as hexadecimal representations is MOST often observed in which of the following forms of attack?
Smurf
Rootkit exploit
Denial of Service (DoS)
Cross site scripting (XSS)
Alternate encoding such as hexadecimal representations is most often observed in cross site scripting (XSS) attacks. XSS is a type of web application attack that involves injecting malicious code or scripts into a web page or a web application, usually through user input fields or parameters. The malicious code or script is then executed by the victim’s browser, and can perform various actions, such as stealing cookies, session tokens, or credentials, redirecting to malicious sites, or displaying fake content. Alternate encoding is a technique that attackers use to bypass input validation or filtering mechanisms, and to conceal or obfuscate the malicious code or script. Alternate encoding can use hexadecimal, decimal, octal, binary, or Unicode representations of the characters or symbols in the code or script. References: What is Cross-Site Scripting (XSS)?; XSS Filter Evasion Cheat Sheet.
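The sketch below, using only the standard library, shows why a naive substring filter misses encoded payloads and why input should be normalized (decoded) before validation. The payload strings are illustrative.

```python
# Sketch: a naive substring filter misses hex-encoded XSS payloads;
# decoding (normalizing) first catches them. Stdlib only.
import html
import urllib.parse

payloads = [
    "<script>alert(1)</script>",                      # plain
    "%3Cscript%3Ealert(1)%3C%2Fscript%3E",            # URL (hex) encoded
    "&#x3C;script&#x3E;alert(1)&#x3C;/script&#x3E;",  # HTML hex entities
]

for p in payloads:
    naive_blocked = "<script" in p.lower()
    decoded = html.unescape(urllib.parse.unquote(p))  # normalize before checking
    robust_blocked = "<script" in decoded.lower()
    print(f"naive={naive_blocked} after-decoding={robust_blocked} for {p[:30]}")
```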
What principle requires that changes to the plaintext affect many parts of the ciphertext?
Diffusion
Encapsulation
Obfuscation
Permutation
Diffusion is the principle that requires that changes to the plaintext affect many parts of the ciphertext. Diffusion is a property of a good encryption algorithm that aims to spread the influence of each plaintext bit over many ciphertext bits, so that a small change in the plaintext results in a large change in the ciphertext. Diffusion can increase the security of the encryption by making it harder for an attacker to analyze the statistical patterns or correlations between the plaintext and the ciphertext. Encapsulation, obfuscation, and permutation are not principles that require that changes to the plaintext affect many parts of the ciphertext, as they are related to different aspects of encryption or security. References: CISSP For Dummies, 7th Edition, Chapter 3, page 65.
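Diffusion can be observed directly as the avalanche effect: flipping one plaintext bit changes roughly half of the ciphertext bits. A minimal sketch, assuming the third-party pycryptodome package; ECB mode is used here only to isolate a single block and should never be used for real data.

```python
# Avalanche/diffusion sketch: one flipped plaintext bit changes ~half the
# ciphertext bits. Assumes pycryptodome; ECB isolates a single block only.
import os
from Crypto.Cipher import AES

key = os.urandom(16)
cipher = AES.new(key, AES.MODE_ECB)

pt1 = bytearray(os.urandom(16))
pt2 = bytearray(pt1)
pt2[0] ^= 0x01                      # flip one plaintext bit

ct1 = cipher.encrypt(bytes(pt1))
ct2 = cipher.encrypt(bytes(pt2))
changed = sum(bin(a ^ b).count("1") for a, b in zip(ct1, ct2))
print(f"{changed} of 128 ciphertext bits changed")   # typically close to 64
```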
Which of the following Disaster Recovery (DR) sites is the MOST difficult to test?
Hot site
Cold site
Warm site
Mobile site
A cold site is a backup facility with little or no hardware equipment installed. It is the most cost-effective of the common disaster recovery site types, but it takes considerable time to set up properly and resume business operations. Because so much equipment must be procured, installed, and configured before a meaningful exercise can run, a cold site is the most difficult and time-consuming to test.
Which of the following is a potential risk when a program runs in privileged mode?
It may serve to create unnecessary code complexity
It may not enforce job separation duties
It may create unnecessary application hardening
It may allow malicious code to be inserted
A potential risk when a program runs in privileged mode is that it may allow malicious code to be inserted. Privileged mode, also known as kernel mode or supervisor mode, is a mode of operation that grants the program full access and control over the hardware and software resources of the system, such as memory, disk, CPU, and devices. A program that runs in privileged mode can perform any action or instruction without restriction or protection. This can be exploited by an attacker who can inject malicious code into the program, such as a rootkit, a backdoor, or a keylogger, and gain unauthorized access or control over the system. References: What is Privileged Mode?; Privilege Escalation - OWASP Cheat Sheet Series.
Internet Protocol (IP) source address spoofing is used to defeat
address-based authentication.
Address Resolution Protocol (ARP).
Reverse Address Resolution Protocol (RARP).
Transmission Control Protocol (TCP) hijacking.
Internet Protocol (IP) source address spoofing is used to defeat address-based authentication, which is a method of verifying the identity of a user or a system based on their IP address. IP source address spoofing involves forging the IP header of a packet to make it appear as if it came from a trusted or authorized source, and bypassing the authentication check. IP source address spoofing can be used for various malicious purposes, such as denial-of-service attacks, man-in-the-middle attacks, or session hijacking. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 527; CISSP For Dummies, 7th Edition, Chapter 5, page 153.
What is the MOST effective countermeasure to a malicious code attack against a mobile system?
Sandbox
Change control
Memory management
Public-Key Infrastructure (PKI)
A sandbox is a security mechanism that isolates a potentially malicious code or application from the rest of the system, preventing it from accessing or modifying any sensitive data or resources. A sandbox can be implemented at the operating system, application, or network level, and can provide a safe environment for testing, debugging, or executing untrusted code. A sandbox is the most effective countermeasure to a malicious code attack against a mobile system, as it can prevent the code from spreading, stealing, or destroying any information on the device. Change control, memory management, and PKI are not directly related to preventing or mitigating malicious code attacks on mobile systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 507.
The Chief Information Security Officer (CISO) of an organization has requested that a Service Organization Control (SOC) report be created to outline the security and availability of a particular system over a 12-month period. Which type of SOC report should be utilized?
SOC 1 Type 1
SOC 2 Type 2
SOC 2 Type 1
SOC 3 Type 1
The type of SOC report that should be utilized to outline the security and availability of a particular system over a 12-month period is SOC 2 Type 2. SOC 2 Type 2 is a security audit report that provides information about the design and the operating effectiveness of the controls at a service organization relevant to the security and availability trust service categories, as well as the other trust service categories such as processing integrity, confidentiality, and privacy. SOC 2 Type 2 is the appropriate report here because it attests to both the design and the operating effectiveness of those controls over an extended review period, typically six to twelve months, which matches the 12-month scope requested.
The other options are not the types of SOC report that should be utilized to outline the security and availability of a particular system over a 12-month period. SOC 1 Type 1 is a security audit report that provides information about the design of the controls at a service organization relevant to the internal control over financial reporting of the user entities or the customers, based on the control objectives defined by the service organization. SOC 1 Type 1 is not appropriate because it does not address the security and availability trust service categories, and it evaluates only the design of the controls at a single point in time rather than their operation over a period.
SOC 2 Type 1 is a security audit report that provides information about the design of the controls at a service organization relevant to the security and availability trust service categories, as well as the other trust service categories such as processing integrity, confidentiality, and privacy. SOC 2 Type 1 is not appropriate because it evaluates only the design of the controls at a single point in time, not their operating effectiveness over a review period.
SOC 3 Type 1 is a security audit report that provides information about the design of the controls at a service organization relevant to the security and availability trust service categories, as well as the other trust service categories such as processing integrity, confidentiality, and privacy. SOC 3 Type 1 is not appropriate because it is a general-use summary report that covers only a point in time and omits the detailed test results needed to demonstrate operating effectiveness over a 12-month period.
References: SOC Report Types: Type 1 vs Type 2 SOC Reports/Audits, SOC 1 vs SOC 2 vs SOC 3: What’s the Difference? | Secureframe, A Comprehensive Guide to SOC Reports - SC&H Group, Service Organization Control (SOC) Reports Explained - Cherry Bekaert, Service Organization Controls (SOC) Reports | Rapid7
While impersonating an Information Security Officer (ISO), an attacker obtains information from company employees about their User IDs and passwords. Which method of information gathering has the attacker used?
Trusted path
Malicious logic
Social engineering
Passive misuse
Social engineering is the method of information gathering that the attacker has used while impersonating an ISO and obtaining information from company employees about their User IDs and passwords. Social engineering is a technique of manipulating or deceiving people into revealing confidential or sensitive information, or performing actions that compromise the security of an organization or a system. Social engineering can exploit human factors, such as trust, curiosity, fear, or greed, to influence the behavior or judgment of the target. Social engineering can take various forms, such as phishing, baiting, pretexting, or impersonation. Trusted path, malicious logic, and passive misuse are not methods of information gathering that the attacker has used, as they are related to different aspects of security or attack. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19.
Contingency plan exercises are intended to do which of the following?
Train personnel in roles and responsibilities
Validate service level agreements
Train maintenance personnel
Validate operation metrics
Contingency plan exercises are intended to train personnel in roles and responsibilities. Contingency plan exercises are simulated scenarios that test the preparedness and effectiveness of the contingency plan, which is a document that outlines the actions and procedures to be followed in the event of a disruption or disaster. Contingency plan exercises help to train the personnel involved in the contingency plan, such as the incident response team, the recovery team, and the business continuity team, in their roles and responsibilities, such as communication, coordination, decision making, and execution. Contingency plan exercises also help to identify and resolve any issues or gaps in the contingency plan, and to improve the skills and confidence of the personnel. References: Contingency Plan Testing; Contingency Planning Guide for Federal Information Systems.
An Intrusion Detection System (IDS) is generating alarms that a user account has over 100 failed login attempts per minute. A sniffer is placed on the network, and a variety of passwords for that user are noted. Which of the following is MOST likely occurring?
A dictionary attack
A Denial of Service (DoS) attack
A spoofing attack
A backdoor installation
A dictionary attack is a type of brute-force attack that attempts to guess a user’s password by trying a large number of possible words or phrases, often derived from a dictionary or a list of commonly used passwords. A dictionary attack can be detected by an Intrusion Detection System (IDS) if it generates a high number of failed login attempts per minute, as well as a variety of passwords for the same user. A sniffer can capture the network traffic and reveal the passwords being tried by the attacker. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 657; CISSP For Dummies, 7th Edition, Chapter 6, page 197.
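A minimal sketch of the detection logic described above: count failed logins per user per minute from an authentication log and raise an alarm past a threshold. The log file name and line format are hypothetical.

```python
# Sketch of the IDS heuristic: alarm when failed logins per user per minute
# exceed a threshold. "auth.log" and its line format are hypothetical.
from collections import Counter

THRESHOLD = 100  # failed attempts per user per minute

failures = Counter()
with open("auth.log") as log:
    for line in log:
        # expected format: "2024-05-01T10:31:07 FAILED_LOGIN user=jsmith"
        if "FAILED_LOGIN" in line:
            timestamp, _, userfield = line.split()
            minute = timestamp[:16]     # truncate to YYYY-MM-DDTHH:MM
            failures[(userfield, minute)] += 1

for (user, minute), count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins for {user} during {minute}")
```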
A small office is running WiFi 4 APs and wants to increase the throughput to associated devices without interfering with neighboring offices. Which of the following is the MOST cost-efficient way for the office to increase network performance?
Add another AP.
Disable the 2.4GHz radios
Enable channel bonding.
Upgrade to WiFi 5.
The most cost-efficient way for the office to increase network performance is to upgrade to WiFi 5 (802.11ac), a newer generation of wireless technology that offers faster speeds, lower latency, and higher capacity than WiFi 4. WiFi 5 access points typically operate on both the 2.4GHz and 5GHz bands, and support features such as MU-MIMO, beamforming, and wider channel bonding, which can improve the throughput and efficiency of the wireless network. Upgrading to WiFi 5 may require replacing the existing APs and devices with compatible ones, but it is not as expensive or complex as the other options. The other options are either ineffective or impractical for increasing network performance, as they may not address the root cause of the problem, may interfere with the neighboring offices, or may require additional hardware or configuration. References: CISSP Exam Outline, Domain 4. Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.3 Secure network components (wireless access points).
Drag the following Security Engineering terms on the left to the BEST definition on the right.
The correct matches are:
Comprehensive Explanation: These terms and definitions are based on the glossary of the Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Engineering, pp. 293-294.
References: Official (ISC)2 CISSP CBK Reference, Fifth Edition
The PRIMARY outcome of a certification process is that it provides documented
system weaknesses for remediation.
standards for security assessment, testing, and process evaluation.
interconnected systems and their implemented security controls.
security analyses needed to make a risk-based decision.
The primary outcome of a certification process is that it provides documented security analyses needed to make a risk-based decision. Certification is a process of evaluating and testing the security of a system or product against a set of criteria or standards. Certification provides evidence of the security posture and capabilities of the system or product, as well as the identified vulnerabilities, threats, and risks. Certification helps the decision makers, such as the system owners or accreditors, to determine whether the system or product meets the security requirements and can be authorized to operate in a specific environment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 455; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 867.
Which of the following is most helpful in applying the principle of LEAST privilege?
Establishing a sandboxing environment
Setting up a Virtual Private Network (VPN) tunnel
Monitoring and reviewing privileged sessions
Introducing a job rotation program
Monitoring and reviewing privileged sessions helps in applying the principle of least privilege by ensuring that users with higher privileges are only accessing resources necessary for their roles, thus reducing the risk of misuse or exploitation. References: CISSP Official (ISC)2 Practice Tests, Chapter 5, page 138; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 249
Which of the following sets of controls should allow an investigation if an attack is not blocked by preventive controls or detected by monitoring?
Logging and audit trail controls to enable forensic analysis
Security incident response lessons learned procedures
Security event alert triage done by analysts using a Security Information and Event Management (SIEM) system
Transactional controls focused on fraud prevention
Logging and audit trail controls are designed to record and monitor the activities and events that occur on a system or network. They can provide valuable information for forensic analysis, such as the source, destination, time, and type of an event, the user or process involved, the data or resources accessed or modified, and the outcome or status of the event. Logging and audit trail controls can help identify the cause, scope, impact, and timeline of an attack, as well as the evidence and artifacts left by the attacker. They can also help determine the effectiveness and gaps of the preventive and detective controls, and support the incident response and recovery processes. Logging and audit trail controls should be configured, protected, and reviewed according to the organizational policies and standards, and comply with the legal and regulatory requirements.
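A minimal sketch of what one such audit trail entry might capture (time, actor, source, action, resource, outcome), using only the standard library; the field names and schema are illustrative, not a mandated format.

```python
# Illustrative structured audit trail entry; field names are hypothetical.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("audit_trail.log"))

def audit_event(actor, source_ip, action, resource, outcome):
    # One JSON line per event keeps the trail machine-parseable for forensics.
    audit.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "source_ip": source_ip,
        "action": action,
        "resource": resource,
        "outcome": outcome,
    }))

audit_event("jsmith", "10.0.0.5", "READ", "/finance/q3-report.xlsx", "DENIED")
```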
Which of the following prevents improper aggregation of privileges in Role Based Access Control (RBAC)?
Hierarchical inheritance
Dynamic separation of duties
The Clark-Wilson security model
The Bell-LaPadula security model
The method that prevents improper aggregation of privileges in role based access control (RBAC) is dynamic separation of duties. RBAC is a type of access control model that assigns permissions and privileges to users or devices based on their roles or functions within an organization, rather than their identities or attributes. RBAC can simplify and streamline the access control management, as it can reduce the complexity and redundancy of the permissions and privileges. However, RBAC can also introduce the risk of improper aggregation of privileges, which is the situation where a user or a device can accumulate more permissions or privileges than necessary or appropriate for their role or function, either by having multiple roles or by changing roles over time. Dynamic separation of duties is a method that prevents improper aggregation of privileges in RBAC, by enforcing rules or constraints that limit or restrict the roles or the permissions that a user or a device can have or use at any given time or situation.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 349; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 310
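A minimal sketch of the dynamic separation of duties rule described above: two conflicting roles may both be assigned to a user, but the session refuses to activate them together. The role names and class design are hypothetical.

```python
# Dynamic SoD sketch: conflicting roles cannot be active in one session.
CONFLICTING = {frozenset({"payment_initiator", "payment_approver"})}

class Session:
    def __init__(self, assigned_roles):
        self.assigned = set(assigned_roles)
        self.active = set()

    def activate(self, role):
        if role not in self.assigned:
            raise PermissionError(f"{role} is not assigned to this user")
        for pair in CONFLICTING:
            if pair <= self.active | {role}:   # both roles would be active
                raise PermissionError(f"dynamic SoD violation: {sorted(pair)}")
        self.active.add(role)

s = Session({"payment_initiator", "payment_approver"})
s.activate("payment_initiator")            # allowed: only one of the pair
try:
    s.activate("payment_approver")         # refused within the same session
except PermissionError as e:
    print(e)
```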
The MAIN use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data
through a firewall at the Session layer
through a firewall at the Transport layer
in the Point-to-Point Protocol (PPP)
in the Payload Compression Protocol (PCP)
The main use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data in the Point-to-Point Protocol (PPP). L2TP is a tunneling protocol that operates at the data link layer (Layer 2) of the OSI model, and is used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. L2TP does not provide encryption or authentication by itself, but it can be combined with IPsec to provide security and confidentiality for the tunneled data. L2TP is commonly used to tunnel PPP sessions over an IP network, such as the Internet. PPP is a protocol that establishes a direct connection between two nodes, and provides authentication, encryption, and compression for the data transmitted over the connection. PPP is often used to connect a remote client to a corporate network, or a user to an ISP. By using L2TP to encapsulate PPP packets, the connection can be extended over a public or shared network, creating a VPN. This way, the user can access the network resources and services securely and transparently, as if they were directly connected to the network. The other options are not the main use of L2TP, as they involve different protocols or layers. L2TP does not tunnel data through a firewall, but rather over an IP network. L2TP does not operate at the session layer or the transport layer, but at the data link layer. L2TP does not use the Payload Compression Protocol (PCP), but rather the Point-to-Point Protocol (PPP). References: Layer 2 Tunneling Protocol - Wikipedia; What is the Layer 2 Tunneling Protocol (L2TP)? - NordVPN; Understanding VPN protocols: OpenVPN, L2TP, WireGuard & more.
The design review for an application has been completed and is ready for release. What technique should an organization use to assure application integrity?
Application authentication
Input validation
Digital signing
Device encryption
The technique that an organization should use to assure application integrity is digital signing. Digital signing is a technique that uses cryptography to generate a digital signature for a message or a document, such as an application. The digital signature is a value that is derived from the message and the sender’s private key, and it can be verified by the receiver using the sender’s public key. Digital signing can help to assure application integrity, which means that the application has not been altered or tampered with during the transmission or storage. Digital signing can also help to assure application authenticity, which means that the application originates from the legitimate source. Application authentication, input validation, and device encryption are not techniques that can assure application integrity, but they can help to assure application security, usability, or confidentiality, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 607; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 388.
The process of mutual authentication involves a computer system authenticating a user and authenticating the
user to the audit process.
computer system to the user.
user's access to all authorized objects.
computer system to the audit process.
Mutual authentication is the process of verifying the identity of both parties in a communication. The computer system authenticates the user by verifying their credentials, such as username and password, biometrics, or tokens. The user authenticates the computer system by verifying its identity, such as a digital certificate, a trusted third party, or a challenge-response mechanism. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 515; CISSP For Dummies, 7th Edition, Chapter 5, page 151.
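One common way to realize mutual authentication is a two-way challenge-response over a pre-shared key, where each side proves knowledge of the key by answering the other's nonce. A minimal standard-library sketch, with key provisioning assumed to have happened out of band.

```python
# Two-way HMAC challenge-response sketch: both parties prove key knowledge.
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(32)   # provisioned to both parties in advance

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server authenticates the client...
server_challenge = secrets.token_bytes(16)
client_response = respond(shared_key, server_challenge)
assert hmac.compare_digest(client_response, respond(shared_key, server_challenge))

# ...and the client authenticates the server with its own challenge.
client_challenge = secrets.token_bytes(16)
server_response = respond(shared_key, client_challenge)
assert hmac.compare_digest(server_response, respond(shared_key, client_challenge))
print("Mutual authentication succeeded")
```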
Which of the following is a limitation of the Common Vulnerability Scoring System (CVSS) as it relates to conducting code review?
It has normalized severity ratings.
It has many worksheets and practices to implement.
It aims to calculate the risk of published vulnerabilities.
It requires a robust risk management framework to be put in place.
The Common Vulnerability Scoring System (CVSS) is a framework that provides a standardized and consistent way of measuring and communicating the severity and risk of published vulnerabilities. CVSS assigns a numerical score and a vector string to each vulnerability, based on various metrics and formulas. CVSS is a useful tool for prioritizing the remediation of vulnerabilities, but it has some limitations as it relates to conducting code review. One of the limitations is that CVSS aims to calculate the risk of published vulnerabilities, which means that it does not cover vulnerabilities that are not yet discovered or disclosed. Code review, on the other hand, is a process of examining the source code of software to identify and fix any errors, bugs, or vulnerabilities that may exist in the code. Code review can help find vulnerabilities that are not yet published, and therefore not scored by CVSS. References: CISSP For Dummies, 7th Edition, Chapter 8, page 222; Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 465.
What is the FIRST step in developing a security test and its evaluation?
Determine testing methods
Develop testing procedures
Identify all applicable security requirements
Identify people, processes, and products not in compliance
The first step in developing a security test and its evaluation is to identify all applicable security requirements. Security requirements are the specifications or criteria that define the security objectives, expectations, and needs of the system or network. Security requirements may be derived from various sources, such as business goals, user needs, regulatory standards, contractual obligations, or best practices. Identifying all applicable security requirements is essential to establish the scope, purpose, and criteria of the security test and its evaluation. Determining testing methods, developing testing procedures, and identifying people, processes, and products not in compliance are subsequent steps that should be done after identifying the security requirements, as they depend on the security requirements being defined and agreed upon. References: Security Testing - Overview; Security Testing - Planning.
The Hardware Abstraction Layer (HAL) is implemented in the
system software.
system hardware.
application software.
network hardware.
The Hardware Abstraction Layer (HAL) is implemented in the system software. The system software is the software that controls and manages the basic operations and functions of the computer system, such as the operating system, the device drivers, the firmware, and the BIOS. The HAL is a component of the system software that provides a common interface between the hardware and the software layers of the system. The HAL abstracts the details and differences of the hardware devices and components, and allows the software to interact with the hardware in a consistent and uniform way. The HAL also enables the system to support multiple hardware platforms and configurations without requiring changes in the software. References: What is Hardware Abstraction Layer (HAL)?; Hardware Abstraction Layer (HAL) - GeeksforGeeks.
Which of the following is TRUE regarding equivalence class testing?
It is characterized by the stateless behavior of a process implemented in a function.
An entire partition can be covered by considering only one representative value from that partition.
Test inputs are obtained from the derived boundaries of the given functional specifications.
It is useful for testing communications protocols and graphical user interfaces.
Equivalence class testing is a software testing technique that divides the input domain of a program into a finite number of equivalence classes, or partitions, based on the expected behavior or output of the program. An equivalence class is a set of inputs that are equivalent in terms of satisfying the same condition or producing the same result. The main idea of equivalence class testing is that an entire partition can be covered by considering only one representative value from that partition, as all the values in the same partition are expected to behave the same way. This can reduce the number of test cases and increase the test coverage and efficiency. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 389; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 529.
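A minimal sketch of the technique for a specification such as "ages 18 through 65 are accepted": three partitions, one representative value each. The function and values are hypothetical examples.

```python
# Equivalence class sketch: one representative per partition covers the partition.
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

partitions = [
    (10, False),   # class 1: below the valid range
    (40, True),    # class 2: inside the valid range
    (70, False),   # class 3: above the valid range
]
for representative, expected in partitions:
    assert is_eligible(representative) == expected
print("One value per equivalence class exercised all three partitions")
```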
Which of the following is MOST critical in a contract for data disposal on a hard drive with a third party?
Authorized destruction times
Allowed unallocated disk space
Amount of overwrites required
Frequency of recovered media
The most critical factor in a contract for data disposal on a hard drive with a third party is the amount of overwrites required. Data disposal is the process of permanently and securely deleting or destroying data from a storage device, such as a hard drive, a flash drive, or a CD-ROM, in order to prevent unauthorized access, disclosure, or recovery of the data, and to comply with legal or regulatory requirements for data protection and privacy. Disposal can be performed by various methods, such as physical destruction, degaussing, encryption, or overwriting. Overwriting replaces the existing data on the storage device with random or meaningless data, making the original data unreadable and unrecoverable; it can be done with software tools or commands that overwrite the data one or more times. The amount of overwrites required is the number of passes needed to ensure the data is completely and irreversibly erased, and it depends on factors such as the type and size of the storage device, the sensitivity and value of the data, and the security standards or guidelines that apply. It is the most critical contract term because it determines the level of assurance that the data is properly and securely disposed of, and that both the organization and the third party meet their obligations for data protection and privacy. Authorized destruction times, allowed unallocated disk space, and frequency of recovered media are not as critical, as they either do not directly affect the effectiveness of the disposal method or may not be applicable or feasible in all cases.
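For illustration only, a file-level overwrite might look like the sketch below. It assumes overwriting a single file is meaningful on the target system; real disposal must address the whole device with certified tools, since file-level passes can miss remapped sectors and file system copies. The path and pass count are hypothetical.

```python
# Illustrative multi-pass overwrite of one file; NOT a substitute for
# device-level sanitization with certified tools.
import os

def overwrite_file(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push each pass to the device
    os.remove(path)

# overwrite_file("/tmp/secret.dat", passes=3)   # hypothetical target
```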
What testing technique enables the designer to develop mitigation strategies for potential vulnerabilities?
Manual inspections and reviews
Penetration testing
Threat modeling
Source code review
Threat modeling is the testing technique that enables the designer to develop mitigation strategies for potential vulnerabilities. Threat modeling is a method of identifying, analyzing, and prioritizing the threats and vulnerabilities that may affect a system or an application. Threat modeling can help the designer to understand the attack surface, the attack vectors, the attack scenarios, and the impact and likelihood of the attacks. Threat modeling can also help the designer to develop mitigation strategies for the potential vulnerabilities, such as applying security controls, implementing security best practices, or redesigning the system or the application. Threat modeling can be performed at any stage of the system development life cycle (SDLC), but it is most effective when done early and iteratively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 441. Free daily CISSP practice questions, Question 2.
Which of the following attributes could be used to describe a protection mechanism of an open design methodology?
lt must be tamperproof to protect it from malicious attacks.
It can facilitate independent confirmation of the design security.
It can facilitate blackbox penetration testing.
It exposes the design to vulnerabilities and malicious attacks.
One of the attributes that could be used to describe a protection mechanism of an open design methodology is that it can facilitate independent confirmation of the design security, meaning that it can enable external parties, such as researchers, experts, or users, to verify, validate, or evaluate the security properties and features of the design, and to provide feedback, suggestions, or improvements to the design. Independent confirmation of the design security can increase the confidence and trust in the design, as well as identify and resolve any security flaws, vulnerabilities, or weaknesses in the design. It must be tamperproof to protect it from malicious attacks, it can facilitate blackbox penetration testing, and it exposes the design to vulnerabilities and malicious attacks are not attributes that could be used to describe a protection mechanism of an open design methodology, as they are either not related to the openness or transparency of the design, or they are negative or undesirable consequences of the open design methodology.
What is the PRIMARY objective of business continuity planning?
Establishing a cost estimate for business continuity recovery operations
Restoring computer systems to normal operations as soon as possible
Strengthening the perceived importance of business continuity planning among senior management
Ensuring timely recovery of mission-critical business processes
The primary objective of business continuity planning is to ensure timely recovery of mission-critical business processes. Business continuity planning is the process of identifying, analyzing, and preparing for the potential impacts of disruptive events or incidents that may affect the organization’s normal operations and functions. Business continuity planning involves developing and implementing a business continuity plan (BCP), which is a document that defines the procedures and resources for restoring the organization’s mission-critical business processes and systems after a disaster or an outage. The mission-critical business processes are the core activities or functions that are essential for the organization’s survival and success, and that must be resumed within a predefined time frame, known as the recovery time objective (RTO). The primary objective of business continuity planning is to ensure that the organization can recover its mission-critical business processes within the RTO, and minimize the impact and the loss caused by the disruption. The other options are not the primary objective of business continuity planning. Establishing a cost estimate for business continuity recovery operations is a task or a step within the business continuity planning process, but it is not the main goal or purpose of the process. Restoring computer systems to normal operations as soon as possible is a sub-objective or a component of the business continuity planning process, but it is not the only or the most important objective, as the business continuity planning process also covers other aspects of the organization’s operations and functions, such as the people, the processes, the facilities, or the suppliers. Strengthening the perceived importance of business continuity planning among senior management is a benefit or an outcome of the business continuity planning process, but it is not the primary objective or the reason for the process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1017. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1023.
A recent information security risk assessment identified weak system access controls on mobile devices as a high risk. In order to address this risk and ensure only authorized staff access company information, which of the following should the organization implement?
Intrusion prevention system (IPS)
Multi-factor authentication (MFA)
Data loss protection (DLP)
Data at rest encryption
Multi-factor authentication (MFA) is a method of authentication that requires two or more independent factors to verify the identity of a user, such as something you know, something you have, or something you are. MFA can help address the risk of weak system access controls on mobile devices, as it provides a higher level of security than a single factor, such as a password. MFA can prevent unauthorized access to company information, even if the mobile device is lost, stolen, or compromised. An intrusion prevention system (IPS) is a device or software that monitors and blocks network traffic based on predefined rules or signatures. An IPS can help protect the network from external attacks, but it does not address the system access controls on mobile devices. Data loss protection (DLP) is a system or tool that prevents the unauthorized disclosure, transfer, or leakage of sensitive data. DLP can help protect the company information from being exposed, but it does not address the system access controls on mobile devices. Data at rest encryption is a technique that encrypts the data that is stored on a device or a media. Data at rest encryption can help protect the company information from being accessed, even if the mobile device is lost, stolen, or compromised, but it does not address the system access controls on mobile devices.
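The "something you have" factor is often a time-based one-time password (TOTP, RFC 6238). A minimal standard-library sketch; production systems should use vetted authenticator libraries, and the Base32 secret shown is a placeholder.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret shared by server and device
```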
An Internet software application requires authentication before a user is permitted to utilize the resource. Which testing scenario BEST validates the functionality of the application?
Reasonable data testing
Input validation testing
Web session testing
Allowed data bounds and limits testing
Web session testing is the testing scenario that best validates the functionality of an Internet software application that requires authentication before a user is permitted to utilize the resource. Web session testing is a type of software testing that verifies the behavior and the performance of a web application when it interacts with the user through a web browser. Web session testing can check various aspects of a web application, such as the user interface, the navigation, the functionality, the security, the usability, and the compatibility. Web session testing can also validate the authentication and the authorization mechanisms of a web application, such as the login process, the session management, the access control, and the logout process. Web session testing can help ensure that the web application provides a secure and reliable service to the user, and that the user can access the web application resources only after being authenticated and authorized. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 440. CISSP Practice Exam – FREE 20 Questions and Answers, Question 19.
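A minimal sketch of such a session test, assuming the third-party requests package; the base URL, endpoints, and credentials are hypothetical placeholders for the application under test.

```python
# Web session life cycle sketch: refuse anonymous access, establish a session
# on login, and invalidate it on logout. URLs/credentials are hypothetical.
import requests

BASE = "https://app.example.com"
s = requests.Session()

assert s.get(f"{BASE}/account").status_code in (401, 403)          # no session yet

r = s.post(f"{BASE}/login", data={"user": "tester", "password": "placeholder"})
assert r.ok and s.cookies                                          # session established

assert s.get(f"{BASE}/account").status_code == 200                 # authorized access

s.post(f"{BASE}/logout")
assert s.get(f"{BASE}/account").status_code in (401, 403)          # session invalidated
print("Session life cycle behaved as specified")
```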
Which of the following is the MOST appropriate technique for destroying magnetic platter style hard disk drives (HDD) containing data with a "HIGH" security categorization?
Drill through the device and platters.
Mechanically shred the entire HDD.
Remove the control electronics.
Process the HDD through a degaussing device.
Mechanically shredding the entire HDD is the most appropriate technique for destroying magnetic platter style hard disk drives containing data with a “HIGH” security categorization. Mechanical shredding is a process that uses a powerful machine to cut, tear, or crush the HDD into small pieces, making it impossible to recover any data from the platters or the components. It is the most appropriate technique for high security data because it reduces the entire drive, platters included, to fragments from which no data can be reconstructed, and it provides visible, verifiable evidence of destruction.
The other options are not the most appropriate techniques for destroying HDDs with high security data. Drilling through the device and platters uses a drill to make holes in the HDD and the platters, making it difficult to read the data from the platters. It is not the most appropriate technique because it may not destroy most of the platter surface, leaving large intact areas from which data could still be recovered with laboratory techniques.
Removing the control electronics detaches the circuit board or the controller from the HDD, making it unable to communicate or operate with other devices. It is not the most appropriate technique because it does not affect the platters at all; the data remains intact and can be read by replacing the electronics or transplanting the platters.
Processing the HDD through a degaussing device uses a powerful magnetic field to erase the data by altering or eliminating the magnetization of the platters. Degaussing is not the most appropriate technique for destroying HDDs with high security data, because it may not fully erase modern drives with high-coercivity media, its effectiveness cannot be visually verified, and the physical media still exists after the process.
Which of the following is TRUE for an organization that is using a third-party federated identity service?
The organization enforces the rules to other organization's user provisioning
The organization establishes a trust relationship with the other organizations
The organization defines internal standard for overall user identification
The organization specifies alone how to authenticate other organization's users
The true statement for an organization that is using a third-party federated identity service is that the organization establishes a trust relationship with the other organizations. A federated identity service enables users to access multiple applications or systems across different domains or organizations using a single identity and authentication method. It relies on a trust relationship between the identity provider (IdP), the organization that issues and manages the user’s identity, and the service provider (SP), the organization that provides the application or system the user wants to access. The trust relationship is established using standards and protocols, such as SAML, OAuth, or OpenID Connect, that enable the exchange of identity and authentication information between the IdP and the SP. The other statements are not true for an organization using a third-party federated identity service. The organization does not enforce rules on the other organizations’ user provisioning, as provisioning is the responsibility of the IdP, not the SP. The organization does not define an internal standard for overall user identification, as identification is based on the standards and protocols agreed upon by the IdP and the SP. The organization does not alone specify how to authenticate other organizations’ users, as authentication is performed by the IdP, not the SP. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Communication and Network Security, page 624. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Communication and Network Security, page 625.
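As an illustration of the SP side of such a trust relationship, the sketch below validates a signed token issued by an IdP, using the PyJWT library with an OpenID Connect style JWT; the key, issuer, and audience values are placeholders, not real endpoints.

```python
# Sketch of the SP side of a federated trust: the SP validates a token
# issued by the IdP using the IdP's published public key. The key,
# issuer, and audience values here are hypothetical placeholders.
import jwt  # PyJWT
from jwt.exceptions import InvalidTokenError

IDP_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def validate_federated_token(token: str) -> dict:
    try:
        claims = jwt.decode(
            token,
            IDP_PUBLIC_KEY,
            algorithms=["RS256"],                 # trust only the agreed algorithm
            audience="https://sp.example.com",    # this service provider
            issuer="https://idp.example.org",     # the trusted identity provider
        )
    except InvalidTokenError as exc:
        raise PermissionError(f"Federated login rejected: {exc}")
    return claims  # identity attributes asserted by the IdP
```

The point of the sketch is that the SP never checks a password; it only verifies that the assertion came from the IdP it already trusts.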
An organization is considering partnering with a third-party supplier of cloud services. The organization will only be providing the data and the third-party supplier will be providing the security controls. Which of the following BEST describes this service offering?
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
Software as a Service (SaaS)
Anything as a Service (XaaS)
SaaS is a service offering in which the software or application is hosted and managed by the service provider and is accessed and used by the customer over the network or the Internet, for example through a web browser, an email service, or an office suite. SaaS best describes the scenario where the organization will only be providing the data and the third-party supplier will be providing the security controls, because under the SaaS model the provider is responsible for hosting, operating, and securing the application and the underlying platform and infrastructure, while the customer’s responsibility is essentially limited to the data it supplies and how its users access the service.
The other options do not best describe this scenario. Platform as a Service (PaaS) provides a hosted platform or environment, such as a database, web server, or programming language runtime, that the customer uses to develop, test, deploy, or run its own applications; under PaaS the organization would still have to build and run the application and share responsibility for securing the platform, the application, and the data with the supplier. Infrastructure as a Service (IaaS) provides hosted infrastructure resources, such as servers, storage, or networking, that the customer uses to store, process, or transmit data; under IaaS the organization would still have to install, configure, and maintain the software stack and share responsibility for its security. Anything as a Service (XaaS) is a generic umbrella term for any combination of hosted services, such as security, analytics, or communications, and is too broad to describe the specific offering in this scenario.
What type of access control determines the authorization to resource based on pre-defined job titles within an organization?
Role-Based Access Control (RBAC)
Role-based access control
Non-discretionary access control
Discretionary Access Control (DAC)
Role-Based Access Control (RBAC) is the type of access control that determines the authorization to resources based on predefined job titles within an organization. RBAC is a model of access control that assigns roles to users based on their functions, responsibilities, or qualifications, and grants permissions to resources based on the roles. RBAC simplifies the management and administration of access control, as it reduces the complexity and redundancy of assigning permissions to individual users or groups. RBAC also enhances the security and compliance of access control, as it enforces the principle of least privilege and the separation of duties. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 203. Free daily CISSP practice questions, Question 4.
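A minimal sketch of RBAC logic in Python follows; the role and permission names are illustrative only.

```python
# Minimal RBAC sketch: permissions hang off job roles, not individuals.
ROLE_PERMISSIONS = {
    "payroll_clerk": {"payroll:read"},
    "payroll_manager": {"payroll:read", "payroll:approve"},
    "auditor": {"payroll:read", "audit:read"},
}

USER_ROLES = {
    "alice": {"payroll_manager"},
    "bob": {"auditor"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert is_authorized("alice", "payroll:approve")
assert not is_authorized("bob", "payroll:approve")
```

Because permissions attach to roles rather than individuals, a change of job title only requires updating the user's role membership, which is what makes the model easy to administer.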
According to the Capability Maturity Model Integration (CMMI), which of the following levels is identified by a managed process that is tailored from the organization's set of standard processes according to the organization's tailoring guidelines?
Level 0: Incomplete
Level 1: Performed
Level 2: Managed
Level 3: Defined
The Capability Maturity Model Integration (CMMI) is a framework that defines best practices and standards for improving the performance, quality, and efficiency of an organization’s processes. CMMI defines a scale of maturity levels, from incomplete or ad hoc processes at the bottom to continuously optimizing processes at the top, and each level consists of process areas with specific goals and practices. The maturity level identified by a managed process that is tailored from the organization’s set of standard processes according to the organization’s tailoring guidelines is Level 3: Defined. A managed process is planned, executed, monitored, and controlled, and meets the requirements and objectives of the organization and its stakeholders. A set of standard processes is a collection of processes established and maintained by the organization that can be applied to different projects or situations, and a tailoring guideline defines how those standard processes may be adapted or modified to suit the specific needs or characteristics of a project or situation. Level 3: Defined means the organization has well-defined, consistent processes based on its standard processes that can be tailored to each project or situation, which improves the effectiveness, predictability, and repeatability of the organization’s processes and enables their continuous improvement. Level 0: Incomplete, Level 1: Performed, and Level 2: Managed are not identified by tailoring from an organizational set of standard processes, because at those levels processes are incomplete, performed ad hoc, or managed only at the project level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, page 1177; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.12, page 305.
Which of the following steps should be performed FIRST when purchasing Commercial Off-The-Shelf (COTS) software?
undergo a security assessment as part of authorization process
establish a risk management strategy
harden the hosting server, and perform hosting and application vulnerability scans
establish policies and procedures on system and services acquisition
The first step when purchasing Commercial Off-The-Shelf (COTS) software is to establish policies and procedures on system and services acquisition. This involves defining the objectives, scope, and criteria for acquiring the software, as well as the roles and responsibilities of the stakeholders involved in the acquisition process. The policies and procedures should also address the legal, contractual, and regulatory aspects of the acquisition, such as the terms and conditions, the service level agreements, and the compliance requirements. Undergoing a security assessment, establishing a risk management strategy, and hardening the hosting server are not the first steps when purchasing COTS software, but they may be part of the subsequent steps, such as the evaluation, selection, and implementation of the software. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 64; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 56.
What are the steps of a risk assessment?
identification, analysis, evaluation
analysis, evaluation, mitigation
classification, identification, risk management
identification, evaluation, mitigation
The steps of a risk assessment are identification, analysis, and evaluation. Identification is the process of finding and listing the assets, threats, and vulnerabilities that are relevant to the risk assessment. Analysis is the process of estimating the likelihood and impact of each threat scenario and calculating the level of risk. Evaluation is the process of comparing the risk level with the risk criteria and determining whether the risk is acceptable or not. Mitigation is not part of the risk assessment, but it is part of the risk management, which is the process of applying controls to reduce or eliminate the risk. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 36; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 28.
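The three steps can be illustrated with a small Python sketch; the 1-5 scales and the acceptance threshold are assumed policy choices, not fixed standards.

```python
# Sketch of the three assessment steps with illustrative values.
# 1. Identification: list assets and threat scenarios.
scenarios = [
    {"asset": "customer DB", "threat": "ransomware", "likelihood": 4, "impact": 5},
    {"asset": "public site", "threat": "defacement", "likelihood": 2, "impact": 2},
]

RISK_CRITERIA = 10  # maximum tolerable risk score (assumed policy value)

for s in scenarios:
    # 2. Analysis: estimate risk from likelihood and impact (1-5 scales).
    risk = s["likelihood"] * s["impact"]
    # 3. Evaluation: compare the risk level against the risk criteria.
    verdict = "acceptable" if risk <= RISK_CRITERIA else "needs treatment"
    print(f'{s["asset"]} / {s["threat"]}: risk={risk} -> {verdict}')
```

Anything flagged "needs treatment" would then move into risk management, where mitigation options are selected.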
Which of the following MUST be in place to recognize a system attack?
Stateful firewall
Distributed antivirus
Log analysis
Passive honeypot
Log analysis is the most essential method to recognize a system attack. Log analysis is the process of collecting, reviewing, and interpreting the records of events and activities that occur on a system or a network. Logs can provide valuable information and evidence about the source, nature, and impact of an attack, as well as the actions and responses of the system or the network. Log analysis can help to detect and analyze anomalies, patterns, trends, and indicators of compromise, as well as to identify and correlate the root cause, scope, and severity of an attack. Log analysis can also help to support incident response, forensic investigation, audit, and compliance activities. Log analysis requires the use of appropriate tools, techniques, and procedures, as well as the implementation of effective log management practices, such as log generation, collection, storage, retention, protection, and disposal. Stateful firewall, distributed antivirus, and passive honeypot are not the methods that must be in place to recognize a system attack, although they may be related or useful techniques. Stateful firewall is a type of network security device that monitors and controls the incoming and outgoing network traffic based on the state, context, and rules of the network connections. Stateful firewall can help to prevent or mitigate some types of attacks, such as denial-of-service, spoofing, or port scanning, by filtering or blocking the packets that do not match the established or expected state of the connection. However, stateful firewall is not sufficient to recognize a system attack, as it may not be able to detect or analyze the attacks that bypass or exploit the firewall rules, such as application-layer attacks, encryption-based attacks, or insider attacks. Distributed antivirus is a type of malware protection solution that uses a centralized server and multiple agents or clients to scan, detect, and remove malware from the systems or the network. Distributed antivirus can help to prevent or mitigate some types of attacks, such as viruses, worms, or ransomware, by updating and applying the malware signatures, heuristics, or behavioral analysis to the systems or the network. However, distributed antivirus is not sufficient to recognize a system attack, as it may not be able to detect or analyze the attacks that evade or disable the antivirus solution, such as zero-day attacks, polymorphic malware, or rootkits. Passive honeypot is a type of decoy system or network that mimics the real system or network and attracts the attackers to interact with it, while monitoring and recording their activities. Passive honeypot can help to divert or distract some types of attacks, such as reconnaissance, scanning, or probing, by providing false or misleading information to the attackers, while collecting valuable intelligence about their techniques, tools, or motives. However, passive honeypot is not sufficient to recognize a system attack, as it may not be able to detect or analyze the attacks that target the real system or network, or that avoid or identify the honeypot.
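As a toy illustration of log analysis, the Python sketch below counts failed logins per source address and flags possible brute-force activity; the log format and alert threshold are assumptions.

```python
# Toy log analysis sketch: count failed logins per source address and
# flag bursts that may indicate a brute-force attack.
import re
from collections import Counter

FAILED = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # alert threshold (assumed policy value)

def scan(log_lines):
    failures = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            failures[m.group(1)] += 1
    return [ip for ip, n in failures.items() if n >= THRESHOLD]

sample = ["2024-01-01T10:00:0%d FAILED LOGIN user=root from 203.0.113.9" % i
          for i in range(6)]
print(scan(sample))  # ['203.0.113.9']
```

Real deployments centralize this kind of correlation in a SIEM, but the principle is the same: the attack only becomes visible because the events were logged and analyzed.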
A user has infected a computer with malware by connecting a Universal Serial Bus (USB) storage device.
Which of the following is MOST effective to mitigate future infections?
Develop a written organizational policy prohibiting unauthorized USB devices
Train users on the dangers of transferring data in USB devices
Implement centralized technical control of USB port connections
Encrypt removable USB devices containing data at rest
The most effective method to mitigate future infections caused by connecting a Universal Serial Bus (USB) storage device is to implement centralized technical control of USB port connections. USB ports are the physical interfaces that allow USB devices, such as flash drives, keyboards, or mice, to connect to a computer or a network. They pose a security risk because they can be used to introduce or spread malware, to steal or leak data, or to bypass other security controls. Centralized technical control of USB port connections uses a central system or policy to monitor, restrict, or disable USB connections across the computers or the network, blocking or allowing devices based on criteria such as the device type, the device ID, the user ID, the time, or the location. It can prevent or limit future infections, improve the visibility and auditability of USB activity, enforce USB policies consistently, and reduce reliance on end-user behavior. The other options are useful but less effective. Developing a written organizational policy prohibiting unauthorized USB devices raises awareness and responsibility, sets standards for USB use, and provides a basis for enforcement and sanctions, but a policy alone may not be implemented, communicated, or followed, and it cannot address the dynamic and complex nature of USB threats. Training users on the dangers of transferring data in USB devices improves users’ knowledge, attitudes, and behavior, but training alone cannot guarantee compliance, and a single careless or malicious user can still introduce an infected device. Encrypting removable USB devices containing data at rest protects the confidentiality of the stored data if a device is lost or stolen, but it does not stop malware on the device from infecting the computer it connects to.
The 802.1x standard provides a framework for what?
Network authentication for only wireless networks
Network authentication for wired and wireless networks
Wireless encryption using the Advanced Encryption Standard (AES)
Wireless network encryption using Secure Sockets Layer (SSL)
The 802.1x standard provides a framework for network authentication for wired and wireless networks. The 802.1x standard relies on the Extensible Authentication Protocol (EAP), which enables the exchange of authentication information between a supplicant (a device that wants to access the network), an authenticator (a device that controls access to the network), and an authentication server (a device that verifies the identity and credentials of the supplicant). Through EAP, the 802.1x framework supports various authentication methods, such as passwords, certificates, tokens, or biometrics. The other options are not correct descriptions of the 802.1x standard. Option A describes network authentication for only wireless networks, which is not the scope of the 802.1x standard, as it also applies to wired networks. Option C describes wireless encryption using the Advanced Encryption Standard (AES), which is not a function of the 802.1x standard, but rather of the Wi-Fi Protected Access 2 (WPA2) standard. Option D describes wireless network encryption using Secure Sockets Layer (SSL), which is not a function of the 802.1x standard, but rather of the Transport Layer Security (TLS) protocol. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, p. 310; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, p. 233.
Sensitive customer data is going to be added to a database. What is the MOST effective implementation for ensuring data privacy?
Discretionary Access Control (DAC) procedures
Mandatory Access Control (MAC) procedures
Data link encryption
Segregation of duties
The most effective implementation for ensuring data privacy when sensitive customer data is going to be added to a database is data link encryption. Data link encryption is a type of encryption or a protection technique or mechanism that encrypts or protects the data or the information that is transmitted or communicated over the data link layer or the second layer of the Open Systems Interconnection (OSI) model, which is the layer or the level that provides or offers the reliable or the error-free transmission or communication of the data or the information between the nodes or the devices that are connected or linked by the physical layer or the first layer of the OSI model, such as the switches, the bridges, or the wireless access points. Data link encryption can provide a high level of security or protection for the data or the information that is transmitted or communicated over the data link layer, as it can prevent or reduce the risk of unauthorized or inappropriate access, disclosure, modification, or interception of the data or the information by the third parties or the attackers who capture or monitor the data or the information over the data link layer, and as it can also provide the confidentiality, the integrity, or the authenticity of the data or the information that is transmitted or communicated over the data link layer. Data link encryption is the most effective implementation for ensuring data privacy when sensitive customer data is going to be added to a database, as it can ensure or maintain the security or the quality of the sensitive customer data or the information that is transmitted or communicated over the data link layer, by encrypting or protecting the sensitive customer data or the information that is going to be added to the database, and by preventing or reducing the risk of unauthorized or inappropriate access, disclosure, modification, or interception of the sensitive customer data or the information by the third parties or the attackers who capture or monitor the sensitive customer data or the information over the data link layer.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 146; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 211
Which Web Services Security (WS-Security) specification maintains a single authenticated identity across multiple dissimilar environments?
WS-Federation
WS-Federation is the WS-Security specification that maintains a single authenticated identity across multiple dissimilar environments. WS-Federation is a specification that defines mechanisms for federated identity and access management, which allows users or devices to use a single identity or credential to access multiple or different applications, systems, or networks, without requiring to authenticate or to login separately or repeatedly for each application, system, or network. WS-Federation is based on the WS-Trust specification, which defines mechanisms for issuing, renewing, and validating security tokens, such as SAML assertions or Kerberos tickets, that can be used as credentials for federated identity and access management. References: CISSP Official (ISC)2 Practice Tests, Chapter 4, page 122; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 179
Which of the following is ensured when hashing files during chain of custody handling?
Availability
Accountability
Integrity
Non-repudiation
Hashing files during chain of custody handling ensures integrity, which means that the files have not been altered or tampered with during the collection, preservation, or analysis of digital evidence. Hashing is a process of applying a mathematical function to a file to generate a unique value, called a hash or a digest, that represents the file’s content. By comparing the hash values of the original and the copied files, the integrity of the files can be verified. Availability, accountability, and non-repudiation are not ensured by hashing files during chain of custody handling, as they are related to different aspects of information security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633.
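A minimal sketch of this verification in Python, using the standard hashlib module; the file paths are hypothetical.

```python
# Chain-of-custody hashing sketch: hash the original at collection time,
# then re-hash the working copy before analysis to demonstrate integrity.
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

original_digest = sha256_file("evidence/disk.img")   # recorded at seizure
copy_digest = sha256_file("workspace/disk.img")      # computed before analysis
assert copy_digest == original_digest, "Evidence integrity check failed"
```

In practice both digests are recorded in the chain-of-custody documentation so that any later mismatch can be attributed to a specific handling step.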
Which of the following is the FIRST step of a penetration test plan?
Analyzing a network diagram of the target network
Notifying the company's customers
Obtaining the approval of the company's management
Scheduling the penetration test during a period of least impact
The first step of a penetration test plan is to obtain the approval of the company’s management, as well as the consent of the target network’s owner or administrator. This is essential to ensure the legality, ethics, and scope of the test, as well as to define the objectives, expectations, and deliverables of the test. Without proper authorization, a penetration test could be considered an unauthorized or malicious attack, and could result in legal or reputational consequences. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 758; CISSP For Dummies, 7th Edition, Chapter 7, page 234.
Which of the following methods protects Personally Identifiable Information (PII) by use of a full replacement of the data element?
Transparent Database Encryption (TDE)
Column level database encryption
Volume encryption
Data tokenization
Data tokenization is a method of protecting PII by replacing the sensitive data element with a non-sensitive equivalent, called a token, that has no extrinsic or exploitable meaning or value. The token is then mapped back to the original data element in a secure database. This way, the PII is not exposed in the data processing or storage, and only authorized parties can access the original data element. Data tokenization is different from encryption, which transforms the data element into a ciphertext that can be decrypted with a key. Data tokenization does not require a key, and the token cannot be reversed to reveal the original data element. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 281; CISSP For Dummies, 7th Edition, Chapter 10, page 289.
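A minimal tokenization sketch in Python follows; the in-memory vault stands in for what would be a hardened, access-controlled datastore in practice.

```python
# Tokenization sketch: the token is random, carries no information about
# the PII, and can only be resolved through the secured vault mapping.
import secrets

_vault = {}  # token -> original value; in practice a hardened datastore

def tokenize(pii: str) -> str:
    token = secrets.token_urlsafe(16)  # random, non-reversible stand-in
    _vault[token] = pii
    return token

def detokenize(token: str) -> str:
    return _vault[token]               # authorized lookup only

t = tokenize("4111-1111-1111-1111")
print(t)                # random string with no relation to the card number
print(detokenize(t))    # original value, retrievable only via the vault
```

Because the token is random rather than derived from the data, stealing tokens from downstream systems reveals nothing about the underlying PII.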
What security management control is MOST often broken by collusion?
Job rotation
Separation of duties
Least privilege model
Increased monitoring
Separation of duties is a security management control that divides a critical or sensitive task into two or more parts, and assigns them to different individuals or groups. This reduces the risk of fraud, error, or abuse of authority, as no single person or group can perform the entire task without the cooperation or oversight of others. Separation of duties is most often broken by collusion, which is a secret or illegal agreement between two or more parties to bypass the control and achieve a common goal. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 35; CISSP For Dummies, 7th Edition, Chapter 1, page 23.
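A minimal sketch of a separation-of-duties check in Python; the names and the payment scenario are illustrative. Note that the check only stops one person acting alone, which is exactly why collusion between two people defeats it.

```python
# Separation-of-duties sketch: the same identity may not both request
# and approve a payment.
def approve_payment(requested_by: str, approved_by: str, amount: float) -> bool:
    if requested_by == approved_by:
        raise PermissionError("Requester cannot approve their own request")
    # ... further authorization checks would go here ...
    return True

approve_payment("alice", "bob", 9_500.00)      # allowed: two distinct parties
# approve_payment("alice", "alice", 9_500.00)  # raises PermissionError
```

If alice and bob agree to defraud the organization together, the check still passes; detecting collusion requires complementary controls such as job rotation and independent audit.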
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying and prioritizing vulnerabilities before attackers can exploit them, verifying that security controls operate as intended, and supporting compliance with security policies and regulations.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report typically contains several components, such as an executive summary, the scope and objectives of the testing, the methodology and tools used, the findings with their severity and potential impact, and recommendations for remediation.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as giving auditors a consistent reference against which to compare actual configurations, making deviations from the approved configuration easy to detect, and ensuring that hardening settings and required patches are applied uniformly across systems.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
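One common mitigation for the disk-fill scenario described above is to cap audit log growth with rotation; the Python sketch below uses the standard library's RotatingFileHandler, with illustrative size limits.

```python
# Mitigation sketch: cap audit log growth with rotation so remote-access
# logging cannot exhaust storage on the authentication system.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "remote_access_audit.log",
    maxBytes=50 * 1024 * 1024,  # rotate at 50 MB per file (illustrative)
    backupCount=10,             # keep at most 10 archived files
)
audit_log = logging.getLogger("remote_access")
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

audit_log.info("VPN login attempt user=%s src=%s result=%s",
               "jdoe", "198.51.100.7", "failure")
```

Rotation bounds local disk usage; shipping the rotated files to a separate log server preserves them for analysis without burdening the authentication system.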
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
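A minimal sketch of the logging piece of individual accountability in Python; the identifiers are illustrative. The key point is that each entry attributes the action to one named account, never a shared login.

```python
# Accountability sketch: every access to the asset is attributed to a
# named individual account and recorded with a timestamp.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("asset_audit")

def access_asset(user_id: str, asset_id: str, action: str):
    # The log entry ties the action to one identifiable person.
    audit.info("ts=%s user=%s asset=%s action=%s",
               datetime.now(timezone.utc).isoformat(),
               user_id, asset_id, action)
    # ... perform the action after an authorization check ...

access_asset("j.smith", "HR-DB-01", "read")
```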
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows: the reader sends a fresh random challenge to the card; the card signs the challenge with its private key, which is stored on the card and never leaves it; and the reader verifies the signature using the card’s public key, typically obtained from the card’s certificate. The reader can prove its own identity to the card in the same manner.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
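A simplified Python sketch of an asymmetric challenge-response, using the third-party cryptography package; a real CAK exchange involves card commands and certificate validation, so this only illustrates why a clone without the private key cannot answer a fresh challenge.

```python
# Challenge-response sketch with an asymmetric key pair.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

card_private_key = ec.generate_private_key(ec.SECP256R1())  # never leaves the card
card_public_key = card_private_key.public_key()             # known to the reader

challenge = os.urandom(32)  # fresh random challenge defeats replay
signature = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    card_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Card authenticated")
except InvalidSignature:
    print("Clone or tampered card rejected")
```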
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malware or malicious behavior before the software reaches production, observing how the software actually behaves without putting production systems or data at risk, and containing any infection so that it cannot propagate to the production environment.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause it to malfunction or behave unexpectedly, and attackers can exploit them to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment can provide several benefits, such as identifying missing or outdated patches before the underlying bugs can be exploited, verifying that the environment meets the organization's patching baseline, and shortening the window of exposure between a patch release and its deployment.
The other options are not web application controls that prevent exploitation of OS bugs, but rather controls that prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls can prevent or mitigate buffer overflow attacks, which exploit web application code that does not properly check the size or length of input data passed to a function or variable, allowing adjacent memory locations to be overwritten with malicious code or data. Including logging functions helps detect and investigate unauthorized access or modification attacks, which exploit missing or weak authentication or authorization mechanisms to access or modify web application data or functionality without proper permission or verification. Digitally signing each application module can prevent or mitigate code tampering attacks, by allowing the application to detect whether a module has been modified or replaced with malicious code.
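For illustration, argument checking, the control aimed at buffer overflows, can be as simple as validating length bounds before copying input. A minimal Python sketch with hypothetical limits (Python itself is memory-safe, so the comments point at the C-style failure mode being modeled):

    def store_username(buffer: bytearray, name: bytes, max_len: int = 32) -> None:
        """Copy user input into a fixed-size buffer only after checking its length."""
        if not isinstance(name, bytes):
            raise TypeError("name must be bytes")
        if len(name) > max_len:
            # Reject oversized input instead of writing past the buffer,
            # the failure mode a buffer overflow attack exploits in C code.
            raise ValueError(f"name exceeds {max_len} bytes")
        buffer[:len(name)] = name

    buf = bytearray(32)
    store_username(buf, b"alice")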
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, which loads classes and keeps code from different origins separate; the bytecode verifier, which checks that compiled code follows the language's safety rules before it runs; and the security manager with its access controller, which grants or denies sensitive operations, such as file and network access, according to the code's source, signer, and the active security policy.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, and Apache licenses, which allow the software to be used, modified, and redistributed, including in proprietary products, with minimal obligations; and copyleft licenses, such as the GNU General Public License (GPL), which require that derivative works be distributed under the same license terms, including release of the modified source code.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk, as it may affect the quality, usability, or maintainability of the open source software, but it does not affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, because most open source licenses are perpetual or indefinite. The costs associated with support of the software are a secondary risk, as they may affect the reliability, security, or performance of the open source software, but they can be mitigated or avoided by choosing open source software with adequate or alternative support options.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the
confidentiality of the traffic is protected.
opportunity to sniff network traffic exists.
opportunity for device identity spoofing is eliminated.
storage devices are protected against availability attacks.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the opportunity to sniff network traffic exists. A SAN is a dedicated network that connects storage devices, such as disk arrays, tape libraries, or servers, to provide high-speed data access and transfer. A SAN may use different protocols or technologies to communicate with storage devices, such as Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI. By allowing storage communications to run on top of TCP/IP, a common network protocol that supports internet and intranet communications, a SAN can leverage the existing network infrastructure and reduce cost and complexity. However, this also exposes the storage communications to the same risks and threats that affect network communications, such as sniffing, spoofing, or denial-of-service attacks. Sniffing is the act of capturing or monitoring network traffic, which may reveal sensitive or confidential information, such as passwords, encryption keys, or data. By allowing storage communications to run on top of TCP/IP with a SAN, the confidentiality of the traffic is not protected unless encryption or other security measures are applied. The opportunity for device identity spoofing is not eliminated, as an attacker may still impersonate a legitimate storage device or server by using a forged or stolen IP address or MAC address. The storage devices are not protected against availability attacks, as an attacker may still disrupt or overload the network or the storage devices by sending malicious or excessive packets or requests.
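To see why sniffing is the concern, note that iSCSI rides ordinary TCP (port 3260 by default), so any host able to capture the segment's traffic can read unencrypted storage payloads. A minimal sketch using the third-party scapy library, assuming it is installed and the script runs with capture privileges:

    from scapy.all import sniff  # third-party: pip install scapy

    def show(pkt):
        # Unencrypted iSCSI payloads are visible to anyone who can capture them.
        print(pkt.summary())

    # Capture ten packets on the default iSCSI port.
    sniff(filter="tcp port 3260", prn=show, count=10)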
Which of the following elements MUST a compliant EU-US Safe Harbor Privacy Policy contain?
An explanation of how long the data subject's collected information will be retained for and how it will be eventually disposed.
An explanation of who can be contacted at the organization collecting the information if corrections are required by the data subject.
An explanation of the regulatory frameworks and compliance standards the information collecting organization adheres to.
An explanation of all the technologies employed by the collecting organization in gathering information on the data subject.
The EU-US Safe Harbor Privacy Policy is a framework that was established in 2000 to enable the transfer of personal data from the European Union to the United States, while ensuring adequate protection of the data subject's privacy rights. The framework was invalidated by the European Court of Justice in 2015, and replaced by the EU-US Privacy Shield in 2016. However, the Safe Harbor Privacy Policy still serves as a reference for the principles and requirements of data protection across the Atlantic. One of the elements that a compliant Safe Harbor Privacy Policy must contain is an explanation of who can be contacted at the organization collecting the information if corrections are required by the data subject. This is part of the principle of access, which states that individuals must have access to their personal information and be able to correct, amend, or delete it where it is inaccurate. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 295; CISSP For Dummies, 7th Edition, Chapter 10, page 284; Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 293.
The key benefits of a signed and encrypted e-mail include
confidentiality, authentication, and authorization.
confidentiality, non-repudiation, and authentication.
non-repudiation, authorization, and authentication.
non-repudiation, confidentiality, and authorization.
A signed and encrypted e-mail provides confidentiality by preventing unauthorized access to the message content, non-repudiation by verifying the identity and integrity of the sender, and authentication by ensuring that the message is from the claimed source. Authorization is not a benefit of a signed and encrypted e-mail, as it refers to the process of granting or denying access to resources based on predefined rules.
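As a rough illustration of how these properties are achieved, the sketch below uses the third-party Python cryptography package: the sender signs with their private key (authentication and non-repudiation) and encrypts with the recipient's public key (confidentiality). Real e-mail clients use S/MIME or PGP with hybrid encryption rather than raw RSA; all key and message values here are illustrative.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    message = b"Wire the funds on Friday."

    # Sender signs the message with their PRIVATE key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = sender.sign(message, pss, hashes.SHA256())

    # Sender encrypts with the recipient's PUBLIC key (RSA-OAEP handles
    # only short messages; real mail encrypts a symmetric session key).
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = recipient.public_key().encrypt(message, oaep)

    # Recipient decrypts with their private key, then verifies the signature
    # with the sender's public key; verify() raises if the message was altered.
    plaintext = recipient.decrypt(ciphertext, oaep)
    sender.public_key().verify(signature, plaintext, pss, hashes.SHA256())
    print("Message authentic and confidential:", plaintext.decode())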
An engineer in a software company has created a virus creation tool. The tool can generate thousands of polymorphic viruses. The engineer is planning to use the tool in a controlled environment to test the company's next generation virus scanning software. Which would BEST describe the behavior of the engineer and why?
The behavior is ethical because the tool will be used to create a better virus scanner.
The behavior is ethical because any experienced programmer could create such a tool.
The behavior is not ethical because creating any kind of virus is bad.
The behavior is not ethical because such a tool could be leaked on the Internet.
Creating a virus creation tool that can generate thousands of polymorphic viruses is not ethical, even if the intention is to use it in a controlled environment to test the company's next generation virus scanning software. Such a tool could be leaked on the Internet, either intentionally or accidentally, and fall into the hands of malicious actors who could use it to create and spread harmful viruses that could compromise the security and privacy of millions of users and systems. The engineer should follow the (ISC)2 Code of Ethics, which states that members and certificate holders shall protect society, the common good, necessary public trust and confidence, and the infrastructure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 18; CISSP For Dummies, 7th Edition, Chapter 1, page 11.
A disadvantage of an application filtering firewall is that it can lead to
a crash of the network as a result of user activities.
performance degradation due to the rules applied.
loss of packets on the network due to insufficient bandwidth.
Internet Protocol (IP) spoofing by hackers.
A disadvantage of an application filtering firewall is that it can lead to performance degradation due to the rules applied. An application filtering firewall is a type of firewall that inspects the content and context of the data packets at the application layer of the OSI model. It can block or allow traffic based on the application protocol, the source and destination addresses, the user identity, the time of day, and other criteria. An application filtering firewall provides a high level of security and control, but it also requires more processing power and memory than other types of firewalls, which can result in slower network performance and increased latency. References: Application Layer Filtering (ALF): What is it and How does it Fit into your Security Plan?; Different types of Firewalls: Their advantages and disadvantages.
The Structured Query Language (SQL) implements Discretionary Access Controls (DAC) using
INSERT and DELETE.
GRANT and REVOKE.
PUBLIC and PRIVATE.
ROLLBACK and TERMINATE.
The Structured Query Language (SQL) implements Discretionary Access Controls (DAC) using the GRANT and REVOKE commands. DAC is a type of access control that allows the owner or creator of an object, such as a table, view, or procedure, to grant or revoke permissions to other users or roles. For example, a user can grant SELECT, INSERT, UPDATE, or DELETE privileges to another user on a specific table, or revoke them if needed. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 413; CISSP For Dummies, 7th Edition, Chapter 4, page 123.
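As a brief illustration, the statements below grant and then revoke privileges on a hypothetical payroll table for a hypothetical user; exact privilege syntax varies slightly by database vendor:

    -- The owner of the payroll table grants read and insert access to hr_clerk.
    GRANT SELECT, INSERT ON payroll TO hr_clerk;

    -- The owner later withdraws the insert privilege at their discretion.
    REVOKE INSERT ON payroll FROM hr_clerk;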
Which one of the following is the MOST important in designing a biometric access system if it is essential that no one other than authorized individuals are admitted?
False Acceptance Rate (FAR)
False Rejection Rate (FRR)
Crossover Error Rate (CER)
Rejection Error Rate
The most important factor in designing a biometric access system, if it is essential that no one other than authorized individuals are admitted, is the False Acceptance Rate (FAR). FAR is the probability that a biometric system will incorrectly accept an unauthorized user. FAR is a measure of the security or accuracy of the biometric system, and it should be as low as possible to prevent unauthorized access. False Rejection Rate (FRR), Crossover Error Rate (CER), and Rejection Error Rate are not as important as FAR in this scenario, as they relate to the usability or convenience of the biometric system rather than its security. FRR is the probability that a biometric system will incorrectly reject an authorized user. CER is the point where FAR and FRR are equal, and it is used to compare the performance of different biometric systems. Rejection Error Rate is the probability that a biometric system will fail to capture or process a biometric sample. References: CISSP For Dummies, 7th Edition, Chapter 4, page 95.
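A small Python sketch of how FAR and FRR trade off against the match threshold, using invented impostor and genuine score samples:

    # Hypothetical similarity scores (higher = better match).
    impostor_scores = [0.12, 0.30, 0.42, 0.55, 0.61]   # unauthorized attempts
    genuine_scores  = [0.58, 0.66, 0.71, 0.83, 0.90]   # authorized attempts

    def far(threshold):
        """Fraction of impostors wrongly accepted at this threshold."""
        return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

    def frr(threshold):
        """Fraction of genuine users wrongly rejected at this threshold."""
        return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

    for t in (0.5, 0.6, 0.7):
        print(f"threshold={t}: FAR={far(t):.2f}  FRR={frr(t):.2f}")
    # Raising the threshold drives FAR down (more secure) and FRR up
    # (less convenient); the point where the two curves meet is the CER.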
An organization is designing a large enterprise-wide document repository system. They plan to have several different classification level areas with increasing levels of controls. The BEST way to ensure document confidentiality in the repository is to
encrypt the contents of the repository and document any exceptions to that requirement.
utilize Intrusion Detection System (IDS) set drop connections if too many requests for documents are detected.
keep individuals with access to high security areas from saving those documents into lower security areas.
require individuals with access to the system to sign Non-Disclosure Agreements (NDA).
The best way to ensure document confidentiality in the repository is to encrypt the contents of the repository and document any exceptions to that requirement. Encryption is the process of transforming information into an unreadable form using a secret key or algorithm. Encryption protects the confidentiality of the information by preventing unauthorized access or disclosure, even if the repository is compromised or breached. Encryption also provides integrity and authenticity of the information by ensuring that it has not been modified or tampered with. Documenting any exceptions to the encryption requirement is also important, to justify the reasons and risks for not encrypting certain information and to apply alternative controls where needed. References: What Is a Document Repository and What Are the Benefits of Using One; What is a document repository and why you should have one.
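A minimal sketch of encrypting a document before it enters the repository, using the third-party cryptography package's Fernet recipe (authenticated symmetric encryption). File names are hypothetical, and in practice the key would live in a key management system, not alongside the data.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # store in a KMS or HSM, not in the repo
    f = Fernet(key)

    with open("contract.pdf", "rb") as doc:
        token = f.encrypt(doc.read())    # ciphertext also carries an integrity tag

    with open("contract.pdf.enc", "wb") as out:
        out.write(token)

    # Later, an authorized reader with the key recovers the plaintext;
    # decryption fails loudly if the stored ciphertext was tampered with.
    plaintext = f.decrypt(token)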
Which of the following is an attacker MOST likely to target to gain privileged access to a system?
Programs that write to system resources
Programs that write to user directories
Log files containing sensitive information
Log files containing system calls
An attacker is most likely to target programs that write to system resources to gain privileged access to a system. System resources are the hardware and software components that are essential for the operation and functionality of a system, such as the CPU, memory, disk, network, operating system, drivers, libraries, etc. Programs that write to system resources may have higher privileges or permissions than programs that write to user directories or log files. An attacker may exploit vulnerabilities or flaws in these programs to execute malicious code, escalate privileges, or bypass security controls. Programs that write to user directories or log files are less likely to be targeted by an attacker, as they may have lower privileges or permissions, and may not contain sensitive information or system calls. User directories are the folders or locations where users store their personal files or data. Log files are the records of events or activities that occur in a system or application.
When is security personnel involvement in the Systems Development Life Cycle (SDLC) process MOST beneficial?
Testing phase
Development phase
Requirements definition phase
Operations and maintenance phase
The most beneficial phase for security personnel involvement in the Systems Development Life Cycle (SDLC) process is the requirements definition phase. This is the phase where the security personnel can identify and analyze the security needs, objectives, and constraints of the system, and define the security requirements and specifications that the system must meet. By involving security personnel in this phase, the organization can ensure that security is integrated into the system design from the beginning, and avoid costly or complex changes or fixes later in the SDLC process. The other options are not as beneficial, because the testing, development, and operations and maintenance phases all involve security personnel only after key design decisions have been made, when changes are more costly and complex to implement. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 459; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 551.
What is the MOST important reason to configure unique user IDs?
Supporting accountability
Reducing authentication errors
Preventing password compromise
Supporting Single Sign On (SSO)
Unique user IDs are essential for supporting accountability, which is the ability to trace actions or events to their source. Accountability is a key principle of security and helps to deter, detect, and correct unauthorized or malicious activities. Without unique user IDs, it would be difficult or impossible to identify who performed what action on a system or network. Reducing authentication errors, preventing password compromise, and supporting Single Sign On (SSO) are all possible benefits of using unique user IDs, but they are not the most important reason for configuring them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 38.
An organization decides to implement a partial Public Key Infrastructure (PKI) with only the servers having digital certificates. What is the security benefit of this implementation?
Clients can authenticate themselves to the servers.
Mutual authentication is available between the clients and servers.
Servers are able to issue digital certificates to the client.
Servers can authenticate themselves to the client.
A Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating, managing, distributing, using, storing, and revoking digital certificates, which are electronic documents that bind a public key to an identity. A digital certificate can be used to authenticate the identity of an entity, such as a person, a device, or a server, that possesses the corresponding private key. An organization can implement a partial PKI with only the servers having digital certificates, which means that only the servers can prove their identity to the clients, but not vice versa. The security benefit of this implementation is that servers can authenticate themselves to the client, which can prevent impersonation, spoofing, or man-in-the-middle attacks by malicious servers. Clients can authenticate themselves to the servers, mutual authentication is available between the clients and servers, and servers are able to issue digital certificates to the client are not the security benefits of this implementation, as they require the clients to have digital certificates as well. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 615. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 631.
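This is exactly the pattern of ordinary one-way TLS, where only the server presents a certificate. A minimal Python sketch of a client authenticating a server; the host name is illustrative:

    import socket
    import ssl

    # Loads the trusted CA roots and enables certificate and hostname checking.
    ctx = ssl.create_default_context()

    with socket.create_connection(("www.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            # Handshake succeeded: the server proved possession of the private
            # key matching its certificate. The client presented no certificate.
            print(tls.getpeercert()["subject"])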
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
Given the number of priorities, which of the following will MOST likely influence the selection of top initiatives?
Severity of risk
Complexity of strategy
Frequency of incidents
Ongoing awareness
The most likely factor that will influence the selection of top initiatives is the severity of risk. The severity of risk is a measure of the impact or the consequence of a threat exploiting a vulnerability, and the likelihood or the probability of that occurrence. The severity of risk can help to prioritize the security initiatives, as it can indicate the level of urgency or importance of addressing or mitigating the risk, and the potential benefit or value of implementing the initiative. The security initiatives that have the highest severity of risk should be selected as the top initiatives, as they can provide the most protection or improvement for the security program. Complexity of strategy, frequency of incidents, and ongoing awareness are not the most likely factors that will influence the selection of top initiatives, as they are related to the difficulty, the occurrence, or the education of the security program, not the prioritization or the justification of the security initiatives. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 25. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 40.
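As a toy illustration of risk-based prioritization, the sketch below scores hypothetical initiatives by likelihood times impact and sorts them; all names and ratings are invented:

    # (initiative, likelihood 1-5, impact 1-5) -- all values illustrative
    initiatives = [
        ("Patch internet-facing servers", 5, 5),
        ("Roll out security awareness training", 3, 3),
        ("Upgrade badge readers", 2, 4),
    ]

    # Severity of risk ~ likelihood x impact; fund the highest scores first.
    for name, likelihood, impact in sorted(
            initiatives, key=lambda i: i[1] * i[2], reverse=True):
        print(f"{likelihood * impact:>2}  {name}")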
What is the PRIMARY reason for ethics awareness and related policy implementation?
It affects the workflow of an organization.
It affects the reputation of an organization.
It affects the retention rate of employees.
It affects the morale of the employees.
The primary reason for ethics awareness and related policy implementation is that it affects the reputation of an organization, by demonstrating the organization's commitment to ethical principles, values, and standards in its business practices, services, and products. Ethics awareness and policy implementation can also help the organization avoid legal liabilities, fines, or sanctions for unethical conduct, and foster trust and loyalty among its customers, partners, and employees. The other options are not as important as reputation: workflow does not directly relate to ethics, and employee retention and morale are secondary outcomes of an ethical culture. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 28.
Which of the following MUST system and database administrators be aware of and apply when configuring systems used for storing personal employee data?
Secondary use of the data by business users
The organization's security policies and standards
The business purpose for which the data is to be used
The overall protection of corporate resources and data
System and database administrators must be aware of and apply the organization's security policies and standards when configuring systems used for storing personal employee data. Security policies and standards are the documents that define the rules, guidelines, and procedures that govern the security of the organization's information systems and data. They help to ensure the confidentiality, integrity, and availability of the information systems and data, and to comply with legal and regulatory requirements. System and database administrators are responsible for implementing and maintaining the security controls and measures that protect personal employee data from unauthorized access, use, disclosure, or theft, and the policies and standards define what those controls must be. Secondary use of the data by business users, the business purpose for which the data is to be used, and the overall protection of corporate resources and data are not what administrators must apply when configuring these systems, as they concern the usage, purpose, or scope of the data rather than its security configuration. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 35. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
Which of the following is a BEST practice when traveling internationally with laptops containing Personally Identifiable Information (PII)?
Use a thumb drive to transfer information from a foreign computer.
Do not take unnecessary information, including sensitive information.
Connect the laptop only to well-known networks like the hotel or public Internet cafes.
Request international points of contact help scan the laptop on arrival to ensure it is protected.
The best practice when traveling internationally with laptops containing Personally Identifiable Information (PII) is to do not take unnecessary information, including sensitive information. PII is any information that can be used to identify, contact, or locate a specific individual, such as name, address, phone number, email, social security number, or biometric data. PII is subject to various privacy and security laws and regulations, and must be protected from unauthorized access, use, disclosure, or theft. When traveling internationally with laptops containing PII, the best practice is to minimize the amount and type of PII that is stored or processed on the laptop, and to take only the information that is absolutely necessary for the business purpose. This can reduce the risk of losing, exposing, or compromising the PII, and the potential legal or reputational consequences. Using a thumb drive to transfer information from a foreign computer, connecting the laptop only to well-known networks like the hotel or public Internet cafes, and requesting international points of contact help scan the laptop on arrival to ensure it is protected are not the best practices when traveling internationally with laptops containing PII, as they may still expose the PII to various threats, such as malware, interception, or tampering, and may not comply with the privacy and security requirements of different countries or regions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 43. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 56.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will MOST likely allow the organization to keep risk at an acceptable level?
Increasing the amount of audits performed by third parties
Removing privileged accounts from operational staff
Assigning privileged functions to appropriate staff
Separating the security function into distinct roles
The most likely action that will allow the organization to keep risk at an acceptable level is separating the security function into distinct roles. Separating the security function into distinct roles means to create and assign the specific and dedicated roles or positions for the security activities and initiatives, such as the security planning, the security implementation, the security monitoring, or the security auditing, and to separate them from the normal IT operations. Separating the security function into distinct roles can help to keep risk at an acceptable level, as it can enhance the security performance and effectiveness, by providing the authority, the resources, the guidance, and the accountability for the security roles, and by supporting the principle of least privilege and the separation of duties. Increasing the amount of audits performed by third parties, removing privileged accounts from operational staff, and assigning privileged functions to appropriate staff are not the most likely actions that will allow the organization to keep risk at an acceptable level, as they are related to the evaluation, the restriction, or the allocation of the security access or activity, not the separation of the security function into distinct roles. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 32. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 47.
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
The organization should ensure that the third party's physical security controls are in place so that they
are more rigorous than the original controls.
are able to limit access to sensitive information.
allow access by the organization staff at any time.
cannot be accessed by subcontractors of the third party.
The organization should ensure that the third party’s physical security controls are in place so that they are able to limit access to sensitive information. Physical security controls are the measures or the mechanisms that protect the physical assets, such as the hardware, the software, the media, or the personnel, from the unauthorized or the malicious access, damage, or theft. Physical security controls can include locks, fences, guards, cameras, alarms, or biometrics. The organization should ensure that the third party’s physical security controls are able to limit access to sensitive information, as it can prevent or reduce the risk of the data breach, the data loss, or the data corruption, and it can ensure the confidentiality, the integrity, and the availability of the information. The organization should also ensure that the third party’s physical security controls are compliant with the organization’s policies, standards, and regulations, and that they are audited and monitored regularly. The organization should not ensure that the third party’s physical security controls are more rigorous than the original controls, allow access by the organization staff at any time, or cannot be accessed by subcontractors of the third party, as they are related to the level, the scope, or the restriction of the physical security controls, not the ability to limit access to sensitive information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 849. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 865.
Which of the following is required to determine classification and ownership?
System and data resources are properly identified
Access violations are logged and audited
Data file references are identified and linked
System security controls are fully integrated
The required step to determine classification and ownership is to ensure that the system and data resources are properly identified. Identification is the process of assigning unique names or labels to the system and data resources, such as hardware, software, files, databases, or networks. Identification helps to distinguish the system and data resources from each other, and to associate them with their respective owners, custodians, or users. Identification is a prerequisite for classification and ownership, which are the processes of assigning the value, sensitivity, and criticality of the system and data resources, and the roles and responsibilities of the parties involved in their protection and management. Logging and auditing access violations, identifying and linking data file references, and integrating system security controls are not required steps to determine classification and ownership, as they are related to the implementation and monitoring of the security policies and measures, not the identification of the system and data resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 52.
Which of the following is an example of two-factor authentication?
Retina scan and a palm print
Fingerprint and a smart card
Magnetic stripe card and an ID badge
Password and Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA)
An example of two-factor authentication is fingerprint and a smart card. Two-factor authentication is a type of authentication that requires two different factors or methods to verify the identity or the credentials of a user or a device. The factors or methods can be categorized into three types: something you know, something you have, or something you are. Something you know is a factor that relies on the knowledge of the user or the device, such as a password, a PIN, or a security question. Something you have is a factor that relies on the possession of the user or the device, such as a smart card, a token, or a certificate. Something you are is a factor that relies on the biometrics of the user or the device, such as a fingerprint, a retina scan, or a voice recognition. Fingerprint and a smart card are an example of two-factor authentication, as they combine two different factors: something you are and something you have. Retina scan and a palm print are not an example of two-factor authentication, as they are both the same factor: something you are. Magnetic stripe card and an ID badge are not an example of two-factor authentication, as they are both the same factor: something you have. Password and CAPTCHA are not an example of two-factor authentication, as they are both the same factor: something you know. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
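A compact sketch of combining two factors: a password (something you know) checked against a salted hash, plus a time-based one-time password from an authenticator app (something you have). Standard library only; the secret and password values are illustrative.

    import base64, hashlib, hmac, os, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        """RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s steps)."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Factor 1: something you know (salted password hash on file).
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)

    def login(password: bytes, otp: str, secret: str) -> bool:
        know = hmac.compare_digest(
            hashlib.pbkdf2_hmac("sha256", password, salt, 100_000), stored)
        have = hmac.compare_digest(otp, totp(secret))   # factor 2: the device
        return know and have                            # both must succeed

    SECRET = "JBSWY3DPEHPK3PXP"  # illustrative shared secret
    print(login(b"correct horse", totp(SECRET), SECRET))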
Which of the following BEST mitigates a replay attack against a system using identity federation and Security Assertion Markup Language (SAML) implementation?
Two-factor authentication
Digital certificates and hardware tokens
Timed sessions and Secure Socket Layer (SSL)
Passwords with alpha-numeric and special characters
The best way to mitigate a replay attack against a system using identity federation and Security Assertion Markup Language (SAML) implementation is to use timed sessions and Secure Socket Layer (SSL). A replay attack is a type of network attack that involves capturing and retransmitting a valid message or data to gain unauthorized access or perform malicious actions. Identity federation is a process that enables the sharing of identity information across different security domains, such as different organizations or applications. SAML is a standard protocol that enables identity federation by using XML-based assertions to exchange authentication and authorization information. To prevent a replay attack, the system can use timed sessions and SSL. Timed sessions are sessions that have a limited duration and expire after a certain period of time or inactivity. SSL is a protocol that provides encryption and authentication for data transmission over the internet. By using timed sessions and SSL, the system can ensure that the SAML assertions are valid, fresh, and secure, and that they cannot be reused or tampered with by an attacker. Two-factor authentication, digital certificates and hardware tokens, and passwords with alpha-numeric and special characters are not the best ways to mitigate a replay attack against a system using identity federation and SAML implementation, as they do not address the specific vulnerabilities of the SAML protocol or the network transmission. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
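A simplified sketch of the relying party's replay checks: enforce the assertion's validity window (timed sessions) and reject any assertion ID seen before. The field names mirror SAML's NotBefore/NotOnOrAfter conditions, but the structures here are illustrative; transport encryption (SSL/TLS) is assumed to prevent capture in the first place.

    from datetime import datetime, timedelta, timezone

    seen_ids: set[str] = set()   # cache of assertion IDs already consumed

    def accept_assertion(assertion: dict) -> bool:
        now = datetime.now(timezone.utc)
        # Timed validity window: stale or not-yet-valid assertions are refused.
        if not (assertion["not_before"] <= now < assertion["not_on_or_after"]):
            return False
        # One-time use: a replayed assertion reuses an ID we have already seen.
        if assertion["id"] in seen_ids:
            return False
        seen_ids.add(assertion["id"])
        return True

    now = datetime.now(timezone.utc)
    a = {"id": "_abc123", "not_before": now - timedelta(seconds=5),
         "not_on_or_after": now + timedelta(minutes=5)}
    print(accept_assertion(a))   # True  (first presentation)
    print(accept_assertion(a))   # False (replay rejected)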
Which of the following MOST influences the design of the organization's electronic monitoring policies?
Workplace privacy laws
Level of organizational trust
Results of background checks
Business ethical considerations
The factor that most influences the design of the organization’s electronic monitoring policies is workplace privacy laws. Workplace privacy laws are the laws that regulate the extent and manner of the employer’s monitoring or surveillance of the employee’s activities, communications, or behavior in the workplace, such as email, phone, internet, or video monitoring. Workplace privacy laws vary by country, state, or region, and may impose different requirements or restrictions on the employer’s electronic monitoring policies, such as the purpose, scope, consent, disclosure, or protection of the monitoring data. The employer must comply with the applicable workplace privacy laws when designing and implementing the electronic monitoring policies, to avoid violating the employee’s privacy rights or facing legal consequences. Level of organizational trust, results of background checks, and business ethical considerations are not the factors that most influence the design of the organization’s electronic monitoring policies, as they are related to the culture, security, or values of the organization, not the legal or regulatory framework of the electronic monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 50. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 65.
Host-Based Intrusion Protection (HIPS) systems are often deployed in monitoring or learning mode during their initial implementation. What is the objective of starting in this mode?
Automatically create exceptions for specific actions or files
Determine which files are unsafe to access and blacklist them
Automatically whitelist actions or files known to the system
Build a baseline of normal or safe system events for review
A Host-Based Intrusion Protection (HIPS) system is software that monitors and blocks malicious activities on a single host, such as a computer or a server. A HIPS system can also prevent unauthorized changes to the system configuration, files, or registry.
During the initial implementation, a HIPS system is often deployed in monitoring or learning mode, which means that it observes the normal behavior of the system and the applications running on it, without blocking or alerting on any events. The objective of starting in this mode is to automatically create exceptions for specific actions or files that are legitimate and safe, but might otherwise trigger false alarms or unwanted blocks by the HIPS system.
By creating exceptions, the HIPS system can reduce the number of false positives and improve its accuracy and efficiency. However, the monitoring or learning mode should not last too long, as it may also expose the system to potential attacks that are not detected or prevented by the HIPS system. Therefore, after a sufficient baseline of normal behavior is established, the HIPS system should be switched to a more proactive mode, such as alerting or blocking mode, which can actively respond to suspicious or malicious events.
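A toy sketch of the learning-then-enforcing pattern: during learning, every observed (process, action) pair is added to the exception list; once enforcing, anything outside the baseline is blocked. Event names are invented for illustration.

    baseline: set[tuple[str, str]] = set()
    learning = True

    def observe(process: str, action: str) -> str:
        event = (process, action)
        if learning:
            baseline.add(event)          # auto-create an exception for it
            return "learned"
        return "allowed" if event in baseline else "blocked"

    # Learning mode: normal activity builds the exception list for review.
    observe("backup.exe", "read:/home")
    observe("updater.exe", "write:/opt/app")

    learning = False                     # switch to blocking (proactive) mode
    print(observe("backup.exe", "read:/home"))    # allowed: matches baseline
    print(observe("malware.exe", "write:/etc"))   # blocked: outside baseline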
Which of the following is the MOST crucial for a successful audit plan?
Defining the scope of the audit to be performed
Identifying the security controls to be implemented
Working with the system owner on new controls
Acquiring evidence of systems that are not compliant
An audit is an independent and objective examination of an organization’s activities, systems, processes, or controls to evaluate their adequacy, effectiveness, efficiency, and compliance with applicable standards, policies, laws, or regulations. An audit plan is a document that outlines the objectives, scope, methodology, criteria, schedule, and resources of an audit. The most crucial element of a successful audit plan is defining the scope of the audit to be performed, which is the extent and boundaries of the audit, such as the subject matter, the time period, the locations, the departments, the functions, the systems, or the processes to be audited. The scope of the audit determines what will be included or excluded from the audit, and it helps to ensure that the audit objectives are met and the audit resources are used efficiently and effectively. Identifying the security controls to be implemented, working with the system owner on new controls, and acquiring evidence of systems that are not compliant are all important tasks in an audit, but they are not the most crucial for a successful audit plan, as they depend on the scope of the audit to be defined first. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 54. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 69.
An online retail company has formulated a record retention schedule for customer transactions. Which of the following is a valid reason a customer transaction is kept beyond the retention schedule?
Pending legal hold
Long term data mining needs
Customer makes request to retain
Useful for future business initiatives
A valid reason for keeping a customer transaction beyond the retention schedule is a pending legal hold. A legal hold is a requirement or an order to preserve certain records or data that are relevant or potentially relevant to a legal matter, such as a lawsuit, an investigation, or an audit. A legal hold can override the normal record retention schedule or policy of an organization, and can mandate the organization to retain the records or data until the legal matter is resolved or the legal hold is lifted. A pending legal hold can be a valid reason for keeping a customer transaction beyond the retention schedule, as it can ensure the compliance, evidence, or liability of the organization or the customer. Long term data mining needs, customer makes request to retain, and useful for future business initiatives are not valid reasons for keeping a customer transaction beyond the retention schedule, as they are related to the business value, preference, or strategy of the organization or the customer, not the legal obligation or necessity of the organization or the customer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
Which of the following describes the concept of a Single Sign -On (SSO) system?
Users are authenticated to one system at a time.
Users are identified to multiple systems with several credentials.
Users are authenticated to multiple systems with one login.
Only one user is using the system at a time.
Single Sign-On (SSO) is a technology that allows users to securely access multiple applications and services using just one set of credentials, such as a username and a password.
With SSO, users do not have to remember and enter multiple passwords for different applications and services, which improves convenience and productivity. SSO also enhances security, as users can choose stronger passwords, avoid reusing passwords, and comply with password policies more easily. Moreover, SSO reduces the risk of phishing, credential theft, and password fatigue.
SSO is based on the concept of federated identity, which means that the identity of a user is shared and trusted across different systems that have established a trust relationship. SSO uses various protocols and standards, such as SAML, OAuth, OIDC, and Kerberos, to enable the exchange of identity information and authentication tokens between the systems.
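A toy sketch of the shared-trust idea behind SSO: an identity provider signs a token once, and any service that shares the verification key accepts it without a second login. Real deployments use SAML or OIDC with asymmetric keys; the HMAC scheme and names here are purely illustrative.

    import hashlib, hmac, json, time

    IDP_KEY = b"shared-trust-key"   # illustrative; real IdPs sign with private keys

    def issue_token(user: str) -> str:
        body = json.dumps({"sub": user, "iat": int(time.time())})
        sig = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
        return body + "." + sig

    def verify(token: str):
        body, _, sig = token.rpartition(".")
        good = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
        return json.loads(body) if hmac.compare_digest(sig, good) else None

    token = issue_token("alice")          # one login at the identity provider
    # Each federated application verifies the same token; no re-authentication.
    for app in ("mail", "wiki", "crm"):
        print(app, "accepts", verify(token)["sub"])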
Which of the following access management procedures would minimize the possibility of an organization's employees retaining access to secure work areas after they change roles?
User access modification
user access recertification
User access termination
User access provisioning
The access management procedure that would minimize the possibility of an organization’s employees retaining access to secure work areas after they change roles is user access modification. User access modification is a process that involves changing or updating the access rights or permissions of a user account based on the user’s current role, responsibilities, or needs. User access modification can help to minimize the possibility of an organization’s employees retaining access to secure work areas after they change roles, as it can ensure that the employees only have the access that is necessary and appropriate for their new roles, and that any access that is no longer needed or authorized is revoked or removed. User access recertification, user access termination, and user access provisioning are not access management procedures that can minimize the possibility of an organization’s employees retaining access to secure work areas after they change roles, but they can help to verify, revoke, or grant the access of the user accounts, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, page 154; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 2: Asset Security, page 146.
Which of the following is the MOST challenging issue in apprehending cyber criminals?
They often use sophisticated method to commit a crime.
It is often hard to collect and maintain integrity of digital evidence.
The crime is often committed from a different jurisdiction.
There is often no physical evidence involved.
The most challenging issue in apprehending cyber criminals is that the crime is often committed from a different jurisdiction. This means that the cyber criminals may operate from a different country or region than the victim or the target, and thus may be subject to different laws, regulations, and enforcement agencies. This can create difficulties and delays in identifying, locating, and prosecuting the cyber criminals, as well as in obtaining and preserving the digital evidence. The other issues, such as the sophistication of the methods, the integrity of the evidence, and the lack of physical evidence, are also challenges in apprehending cyber criminals, but they are not as significant as the jurisdiction issue. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 475; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 544.
Which of the following provides the MOST comprehensive filtering of Peer-to-Peer (P2P) traffic?
Application proxy
Port filter
Network boundary router
Access layer switch
An application proxy provides the most comprehensive filtering of Peer-to-Peer (P2P) traffic. P2P traffic is a type of network traffic that involves direct communication and file sharing between peers, without the need for a central server. P2P traffic can be used for legitimate purposes, such as distributed computing, content delivery, or collaboration, but it can also be used for illegal or malicious purposes, such as piracy, malware distribution, or denial-of-service attacks. P2P traffic can also consume a lot of bandwidth and degrade the performance of other network applications. Therefore, it may be desirable to filter or block P2P traffic on a network. An application proxy is a type of firewall that operates at the application layer of the OSI model, and acts as an intermediary between the client and the server. An application proxy can inspect the content and the behavior of the network traffic, and apply granular filtering rules based on the specific application protocol, such as HTTP, FTP, or SMTP. An application proxy can also perform authentication, encryption, caching, and logging functions. An application proxy can provide the most comprehensive filtering of P2P traffic, as it can identify and block the P2P applications and protocols, regardless of the port number or the payload. An application proxy can also prevent P2P traffic from bypassing the firewall by using encryption or tunneling techniques. The other options are not as effective as an application proxy for filtering P2P traffic. A port filter is a type of firewall that operates at the transport layer of the OSI model, and blocks or allows traffic based on the source and destination port numbers. A port filter cannot inspect the content or the behavior of the traffic, and cannot distinguish between different applications that use the same port number. A port filter can also be easily evaded by P2P traffic that uses random or well-known port numbers, such as port 80 for HTTP. A network boundary router is a router that connects a network to another network, such as the Internet. A network boundary router can perform some basic filtering functions, such as access control lists (ACLs) or packet filtering, but it cannot inspect the content or the behavior of the traffic, and cannot apply granular filtering rules based on the specific application protocol. A network boundary router can also be easily evaded by P2P traffic that uses encryption or tunneling techniques. An access layer switch is a switch that connects end devices, such as PCs, printers, or servers, to the network. An access layer switch can perform some basic filtering functions, such as MAC address filtering or port security, but it cannot inspect the content or the behavior of the traffic, and cannot apply granular filtering rules based on the specific application protocol. An access layer switch can also be easily evaded by P2P traffic that uses encryption or tunneling techniques. References: Why and how to control peer-to-peer traffic | Network World; Detection and Management of P2P Traffic in Networks using Artificial Neural Networksa | Journal of Network and Systems Management; Blocking P2P And File Sharing - Cisco Meraki Documentation.
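A minimal sketch of the payload inspection an application proxy performs and a port filter cannot: spotting the BitTorrent handshake regardless of the port in use. The signature shown is the real BitTorrent wire-protocol preamble; the enforcement decision is illustrative.

    # The BitTorrent handshake starts with a length byte (19, 0x13) followed by
    # the literal protocol name -- visible only to a device that reads payloads.
    BT_HANDSHAKE = b"\x13BitTorrent protocol"

    def inspect(payload: bytes) -> str:
        if payload.startswith(BT_HANDSHAKE):
            return "drop"      # P2P detected at the application layer
        return "forward"

    print(inspect(b"\x13BitTorrent protocol" + b"\x00" * 8))    # drop
    print(inspect(b"GET / HTTP/1.1\r\nHost: example.com\r\n"))  # forward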
What does a Synchronous (SYN) flood attack do?
Forces Transmission Control Protocol /Internet Protocol (TCP/IP) connections into a reset state
Establishes many new Transmission Control Protocol / Internet Protocol (TCP/IP) connections
Empties the queue of pending Transmission Control Protocol /Internet Protocol (TCP/IP) requests
Exceeds the limits for new Transmission Control Protocol /Internet Protocol (TCP/IP) connections
A SYN flood attack does exceed the limits for new TCP/IP connections. A SYN flood attack is a type of denial-of-service attack that sends a large number of SYN packets to a server, without completing the TCP three-way handshake. The server allocates resources for each SYN packet and waits for the final ACK packet, which never arrives. This consumes the server’s memory and processing power, and prevents it from accepting new legitimate connections. The other options are not accurate descriptions of what a SYN flood attack does. References: SYN flood - Wikipedia; SYN flood DDoS attack | Cloudflare.
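On Linux, the backlog of half-open connections a SYN flood creates is visible as sockets stuck in the SYN_RECV state. A hedged sketch that counts them by reading /proc/net/tcp, where state code 03 means SYN_RECV; the path and threshold are Linux-specific and illustrative.

    def count_syn_recv(path: str = "/proc/net/tcp") -> int:
        """Count sockets in SYN_RECV (half-open), a SYN-flood symptom."""
        with open(path) as f:
            next(f)                                # skip the header line
            return sum(1 for line in f
                       if line.split()[3] == "03")  # 4th field: state, in hex

    half_open = count_syn_recv()
    if half_open > 100:                            # illustrative threshold
        print(f"Possible SYN flood: {half_open} half-open connections")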
The core component of Role Based Access Control (RBAC) must be constructed of defined data elements.
Which elements are required?
Users, permissions, operations, and protected objects
Roles, accounts, permissions, and protected objects
Users, roles, operations, and protected objects
Roles, operations, accounts, and protected objects
Role Based Access Control (RBAC) is a model of access control that assigns permissions to users based on their roles, rather than their individual identities. The core component of RBAC is the role, which is a collection of permissions that define what operations a user can perform on what protected objects. The required data elements for RBAC are users, roles, operations, and protected objects: users are assigned to roles, roles are granted permissions, and each permission pairs an operation (such as read or update) with a protected object (such as a file or table), as the sketch below illustrates.
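A compact Python sketch wiring those four elements together; all names are hypothetical:

    # Permissions pair an operation with a protected object.
    role_permissions = {
        "hr_clerk": {("read", "employee_file"), ("update", "employee_file")},
        "auditor":  {("read", "employee_file"), ("read", "audit_log")},
    }
    user_roles = {"alice": {"hr_clerk"}, "bob": {"auditor"}}

    def allowed(user: str, operation: str, obj: str) -> bool:
        """A user may perform an operation on an object only via a role."""
        return any((operation, obj) in role_permissions[role]
                   for role in user_roles.get(user, set()))

    print(allowed("alice", "update", "employee_file"))  # True
    print(allowed("bob", "update", "employee_file"))    # False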
What is the process of removing sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique?
Purging
Encryption
Destruction
Clearing
Purging is the process of removing sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique. Purging is also known as sanitization, erasure, or wiping, and it is a security measure to prevent unauthorized access, disclosure, or misuse of the data. Purging can be performed by using software tools or physical methods that overwrite, degauss, or destroy the data and the storage media. Purging is required when the system or storage device is decommissioned, disposed, transferred, or reused, and the data is no longer needed or has a high level of sensitivity or classification. Encryption, destruction, and clearing are not the same as purging, although they may be related or complementary processes. Encryption is the process of transforming data into an unreadable form by using a secret key or algorithm. Encryption can protect the data from unauthorized access or disclosure, but it does not remove the data from the system or storage device. The encrypted data can still be recovered if the key or algorithm is compromised or broken. Destruction is the process of physically damaging or disintegrating the system or storage device to the point that it is unusable and irreparable. Destruction can prevent the data from being reconstructed, but it may not be feasible, cost-effective, or environmentally friendly. Clearing is the process of removing data from a system or storage device by using logical techniques, such as overwriting or deleting. Clearing can protect the data from unauthorized access by normal means, but it does not prevent the data from being reconstructed by using advanced techniques, such as forensic analysis or data recovery tools.
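As a hedged illustration of the distinction, the sketch below performs a single-pass random overwrite before deletion, which is closer to clearing than to purging; as the comments note, it gives no purging guarantee on SSDs or copy-on-write filesystems:

```python
import os
import secrets

# Single-pass overwrite ("clearing"-style) before deletion. Note: on SSDs
# and journaling or copy-on-write filesystems this does NOT guarantee
# purging; firmware-level sanitize, degaussing, or physical destruction
# may be required for truly unrecoverable removal.
def overwrite_and_delete(path, block=65536):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            chunk = secrets.token_bytes(min(block, size - written))
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())  # push the overwrite to stable storage
    os.remove(path)
```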
An organization has discovered that users are visiting unauthorized websites using anonymous proxies.
Which of the following is the BEST way to prevent future occurrences?
Remove the anonymity from the proxy
Analyze Internet Protocol (IP) traffic for proxy requests
Disable the proxy server on the firewall
Block the Internet Protocol (IP) address of known anonymous proxies
Anonymous proxies are servers that act as intermediaries between the user and the internet, hiding the user’s real IP address and allowing them to bypass network restrictions and access unauthorized websites. The best way to prevent users from visiting unauthorized websites using anonymous proxies is to block the IP addresses of known anonymous proxies on the firewall or router. This prevents the user from establishing a connection with the proxy server and accessing the blocked content. Removing the anonymity from the proxy, analyzing IP traffic for proxy requests, or disabling the proxy server on the firewall are not effective ways to prevent future occurrences, as they either do not address the root cause of the problem or would require more resources and time to implement. References: The 17 Best Proxy Sites to Help You Browse Anonymously; Buy HTTP proxies and Socks5 | Anonymous Proxies; The Best Free Proxy Server List: Tested & Working! (2024).
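A minimal Python sketch of the blocklist idea follows; the networks shown are reserved documentation ranges standing in for a real list of known proxy addresses:

```python
import ipaddress

# Illustrative sketch: match outbound destinations against a blocklist of
# known anonymous-proxy networks (the ranges below are documentation
# networks, not real proxy addresses).
BLOCKED_NETS = [ipaddress.ip_network(n)
                for n in ("198.51.100.0/24", "203.0.113.0/24")]

def is_blocked(dest_ip):
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in BLOCKED_NETS)

print(is_blocked("198.51.100.7"))  # True -> drop at the firewall
```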
What is the correct order of steps in an information security assessment?
Place the information security assessment steps on the left next to the numbered boxes on the right in the correct order.
The correct order of steps in an information security assessment follows the standard assessment life cycle.
Comprehensive Explanation: An information security assessment is a process of evaluating the security posture of a system, network, or organization. It typically involves four main steps: planning the assessment, gathering information about and discovering the target environment, testing and analyzing the target, and reporting the findings with recommendations.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 853; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 791.
Which of the following is the MOST important security goal when performing application interface testing?
Confirm that all platforms are supported and function properly
Evaluate whether systems or components pass data and control correctly to one another
Verify compatibility of software, hardware, and network connections
Examine error conditions related to external interfaces to prevent application details leakage
The most important security goal when performing application interface testing is to examine error conditions related to external interfaces to prevent application details leakage. Application interface testing is a type of testing that focuses on the interactions between different systems or components through their interfaces, such as APIs, web services, or protocols. Error conditions related to external interfaces can occur when the input, output, or communication is invalid, incomplete, or unexpected. These error conditions can cause the application to reveal sensitive or confidential information, such as error messages, stack traces, configuration files, or database queries, which can be exploited by attackers to gain access or compromise the system. Therefore, it is important to examine these error conditions and ensure that the application handles them properly and securely. Confirming that all platforms are supported and function properly, evaluating whether systems or components pass data and control correctly to one another, and verifying compatibility of software, hardware, and network connections are not security goals, but functional or performance goals of application interface testing. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 1000; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 922.
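The sketch below shows, in hedged form, what such an interface test might look like: it posts malformed input to a hypothetical endpoint and asserts the error response contains no implementation details (the URL and leak markers are illustrative):

```python
import urllib.error
import urllib.request

# Strings that, if present in an error body, suggest implementation details
# are leaking (marker list is illustrative, not exhaustive).
LEAK_MARKERS = ("Traceback", "stack trace", "SQLSTATE", "ORA-", "at java.")

def check_error_leakage(url):
    req = urllib.request.Request(url, data=b"\x00malformed\x00", method="POST")
    try:
        urllib.request.urlopen(req, timeout=5)  # success responses not checked here
    except urllib.error.HTTPError as e:
        body = e.read().decode(errors="replace")
        leaks = [m for m in LEAK_MARKERS if m in body]
        assert not leaks, f"Error response leaks details: {leaks}"

check_error_leakage("https://api.example.test/v1/upload")  # hypothetical endpoint
```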
After following the processes defined within the change management plan, a super user has upgraded a device within an information system.
What step would be taken to ensure that the upgrade did NOT affect the network security posture?
Conduct an Assessment and Authorization (A&A)
Conduct a security impact analysis
Review the results of the most recent vulnerability scan
Conduct a gap analysis with the baseline configuration
A security impact analysis is a process of assessing the potential effects of a change on the security posture of a system. It helps to identify and mitigate any security risks that may arise from the change, such as new vulnerabilities, configuration errors, or compliance issues. A security impact analysis should be conducted after following the change management plan and before implementing the change in the production environment. Conducting an A&A, reviewing the results of a vulnerability scan, or conducting a gap analysis with the baseline configuration are also possible steps to ensure the security of a system, but they are not specific to the change management process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 961; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1013.
It is MOST important to perform which of the following to minimize potential impact when implementing a new vulnerability scanning tool in a production environment?
Negotiate schedule with the Information Technology (IT) operations team
Log vulnerability summary reports to a secured server
Enable scanning during off-peak hours
Establish access for Information Technology (IT) management
It is most important to negotiate the schedule with the IT operations team to minimize the potential impact when implementing a new vulnerability scanning tool in a production environment. This is because a vulnerability scan can cause network congestion, performance degradation, or system instability, which can affect the availability and functionality of the production systems. Therefore, it is essential to coordinate with the IT operations team to determine the best time and frequency for the scan, as well as its scope and intensity. Logging vulnerability summary reports, enabling scanning during off-peak hours, and establishing access for IT management are also good practices for vulnerability scanning, but they are not as important as negotiating the schedule with the IT operations team. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 858; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 794.
Which of the following is the MOST effective practice in managing user accounts when an employee is terminated?
Implement processes for automated removal of access for terminated employees.
Delete employee network and system IDs upon termination.
Manually remove terminated employee user-access to all systems and applications.
Disable terminated employee network ID to remove all access.
The most effective practice in managing user accounts when an employee is terminated is to implement processes for automated removal of access for terminated employees. This practice can ensure that the access rights of the terminated employee are revoked as soon as possible, preventing any unauthorized or malicious use of the account. Automated removal of access can be achieved by using software tools or scripts that can disable or delete the account, remove it from any groups or roles, and revoke any permissions or privileges associated with the account. Automated removal of access can also reduce the human errors or delays that may occur in manual processes, and provide an audit trail of the actions taken. Deleting employee network and system IDs upon termination, manually removing terminated employee user-access to all systems and applications, and disabling terminated employee network ID to remove all access are all possible ways to manage user accounts when an employee is terminated, but they are not as effective as automated removal of access. Deleting employee network and system IDs upon termination may cause problems with data retention, backup, or recovery, and may not remove all traces of the account from the systems. Manually removing terminated employee user-access to all systems and applications may be time-consuming, error-prone, or incomplete, and may depend on the cooperation and coordination of different administrators or departments. Disabling terminated employee network ID to remove all access may not be sufficient, as the account may still exist and be reactivated, or may have access to some resources that are not controlled by the network ID.
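A minimal sketch of such automation, assuming a CSV termination feed with a user_id column and stubbed directory calls, might look like this:

```python
import csv
import logging

logging.basicConfig(level=logging.INFO)

def disable_account(user_id):
    # In practice this would call the directory service or IdP API;
    # stubbed here so the sketch stays self-contained.
    logging.info("disabled account %s", user_id)

def revoke_group_memberships(user_id):
    logging.info("revoked group memberships for %s", user_id)

def process_terminations(feed_path):
    # The log lines double as a simple audit trail of actions taken.
    with open(feed_path, newline="") as f:
        for row in csv.DictReader(f):          # expects a 'user_id' column
            disable_account(row["user_id"])
            revoke_group_memberships(row["user_id"])
```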
A security practitioner is tasked with securing the organization’s Wireless Access Points (WAP). Which of these is the MOST effective way of restricting this environment to authorized users?
Enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point
Disable the broadcast of the Service Set Identifier (SSID) name
Change the name of the Service Set Identifier (SSID) to a random value not associated with the organization
Create Access Control Lists (ACL) based on Media Access Control (MAC) addresses
The most effective way of restricting the wireless environment to authorized users is to enable Wi-Fi Protected Access 2 (WPA2) encryption on the wireless access point. WPA2 is a security protocol that provides confidentiality, integrity, and authentication for wireless networks. WPA2 uses Advanced Encryption Standard (AES) to encrypt the data transmitted over the wireless network, and prevents unauthorized users from intercepting or modifying the traffic. WPA2 also uses a pre-shared key (PSK) or an Extensible Authentication Protocol (EAP) to authenticate the users who want to join the wireless network, and prevents unauthorized users from accessing the network resources. WPA2 is the current standard for wireless security and is widely supported by most wireless devices. The other options are not as effective as WPA2 encryption for restricting the wireless environment to authorized users. Disabling the broadcast of the SSID name is a technique that hides the name of the wireless network from being displayed on the list of available networks, but it does not prevent unauthorized users from discovering the name by using a wireless sniffer or a brute force tool. Changing the name of the SSID to a random value not associated with the organization is a technique that reduces the likelihood of being targeted by an attacker who is looking for a specific network, but it does not prevent unauthorized users from joining the network if they know the name and the password. Creating ACLs based on MAC addresses is a technique that allows or denies access to the wireless network based on the physical address of the wireless device, but it does not prevent unauthorized users from spoofing a valid MAC address or bypassing the ACL by using a wireless bridge or a repeater. References: Secure Wireless Access Points - Fortinet; Configure Wireless Security Settings on a WAP - Cisco; Best WAP of 2024 | TechRadar.
Which of the following is the BEST internationally recognized standard for evaluating security products and systems?
Payment Card Industry Data Security Standard (PCI-DSS)
Common Criteria (CC)
Health Insurance Portability and Accountability Act (HIPAA)
Sarbanes-Oxley (SOX)
The best internationally recognized standard for evaluating security products and systems is the Common Criteria (CC), a framework that defines the criteria and guidelines for evaluating the security functionality and security assurance of information technology (IT) products and systems, such as hardware, software, firmware, or network devices. The CC enhances confidence and trust in security products and systems, helps prevent or mitigate certain attacks or vulnerabilities, and supports audit and compliance activities. A CC evaluation involves several core elements and roles: the Target of Evaluation (TOE), the product or system being evaluated; the Protection Profile (PP), an implementation-independent statement of security needs for a class of products; the Security Target (ST), which states the security claims for a specific TOE; the Evaluation Assurance Levels (EAL1 through EAL7), which express the depth and rigor of the evaluation; and the accredited evaluation laboratories and certification bodies that perform and validate the evaluation.
PCI-DSS, HIPAA, and SOX are not internationally recognized standards for evaluating security products and systems, although they are relevant security regulations or frameworks. The Payment Card Industry Data Security Standard (PCI-DSS) defines security requirements for protecting cardholder data, such as the credit card number, expiration date, or card verification value, and applies to entities that process, store, or transmit that data, such as merchants, service providers, or acquirers. The Health Insurance Portability and Accountability Act (HIPAA) defines security requirements for protecting protected health information (PHI), such as medical records, diagnoses, or treatments, and applies to entities involved in providing, paying for, or operating health care services or plans, such as health care providers, health care clearinghouses, or health plans. Sarbanes-Oxley (SOX) defines requirements for protecting the integrity of financial information and reports, such as the income statement, balance sheet, or cash flow statement, and applies to organizations that are publicly traded in the United States.
Which of the following is BEST achieved through the use of eXtensible Access Control Markup Language (XACML)?
Minimize malicious attacks from third parties
Manage resource privileges
Share digital identities in hybrid cloud
Define a standard protocol
XACML is an XML-based language for specifying access control policies. It defines a declarative, fine-grained, attribute-based access control policy language, an architecture, and a processing model describing how to evaluate access requests according to the rules defined in policies. XACML is best suited for managing resource privileges, as it allows for flexible and dynamic authorization decisions based on various attributes of the subject, resource, action, and environment. XACML is not designed to minimize malicious attacks, share digital identities, or define a standard protocol, although it can interoperate with other standards such as SAML and OAuth. References: XACML - Wikipedia; OASIS eXtensible Access Control Markup Language (XACML) TC; A beginner’s guide to XACML.
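Real XACML policies are XML documents evaluated by a Policy Decision Point; the Python sketch below only mirrors the underlying idea of deciding on attributes of subject, resource, action, and environment (all names and attributes are invented):

```python
# Simplified, XACML-flavored sketch: a policy is a rule over attributes of
# subject, resource, action, and environment. Real XACML is XML-based and
# far richer; this only illustrates the evaluation model.
def policy_expense_reports(subject, resource, action, environment):
    if (resource == "expense-report" and action == "read"
            and subject.get("department") == "finance"
            and environment.get("network") == "corporate"):
        return "Permit"
    return "Deny"

decision = policy_expense_reports(
    {"id": "alice", "department": "finance"},
    "expense-report", "read", {"network": "corporate"})
print(decision)  # Permit
```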
What can happen when an Intrusion Detection System (IDS) is installed inside a firewall-protected internal network?
The IDS can detect failed administrator logon attempts from servers.
The IDS can increase the number of packets to analyze.
The firewall can increase the number of packets to analyze.
The firewall can detect failed administrator login attempts from servers
An Intrusion Detection System (IDS) is a monitoring system that detects suspicious activities and generates alerts when they are detected. An IDS can be installed inside a firewall-protected internal network to monitor the traffic within the network and identify any potential threats or anomalies. One of the scenarios that an IDS can detect is failed administrator logon attempts from servers. This could indicate that an attacker has compromised a server and is trying to escalate privileges or access sensitive data. An IDS can alert the security team of such attempts and help them to investigate and respond to the incident. The other options are not valid consequences of installing an IDS inside a firewall-protected internal network. An IDS does not increase the number of packets to analyze, as it only passively observes the traffic that is already flowing in the network. An IDS does not affect the firewall’s functionality or performance, as it operates independently from the firewall. An IDS does not enable the firewall to detect failed administrator login attempts from servers, as the firewall is not designed to inspect the content or the behavior of the traffic, but only to filter it based on predefined rules. References: Intrusion Detection System (IDS) - GeeksforGeeks; Exploring Firewalls & Intrusion Detection Systems in Network Security ….
Which type of test would an organization perform in order to locate and target exploitable defects?
Penetration
System
Performance
Vulnerability
Penetration testing is a type of test that an organization performs in order to locate and target exploitable defects in its information systems and networks. Penetration testing simulates a real-world attack scenario, where a tester, also known as a penetration tester or ethical hacker, tries to find and exploit the vulnerabilities in the system or network, using the same tools and techniques as a malicious attacker. The goal of penetration testing is to identify the weaknesses and gaps in the security posture of the organization, and to provide recommendations and solutions to mitigate or eliminate them. Penetration testing can help the organization improve its security awareness, compliance, and resilience, and prevent potential breaches or incidents.
Drag the following Security Engineering terms on the left to the BEST definition on the right.
There are different terms related to Security Engineering, the discipline of designing, building, and maintaining secure systems; Ross Anderson famously describes it as the art and science of building dependable systems. In this drag-and-drop exercise, each Security Engineering term on the left is matched to the definition on the right that best captures its role.
Security Engineering terms and definitions establish a common language and framework for security professionals, stakeholders, and users, and help communicate the security objectives, requirements, and issues of a system. They also guide the security engineering process, which involves security planning, analysis, design, implementation, testing, deployment, operation, and maintenance. Finally, they support the security certification and accreditation (C&A) process, which involves security categorization, control selection, control implementation, control assessment, certification, accreditation, and monitoring.
Which of the following initiates the systems recovery phase of a disaster recovery plan?
Issuing a formal disaster declaration
Activating the organization's hot site
Evacuating the disaster site
Assessing the extent of damage following the disaster
The systems recovery phase of a disaster recovery plan is the phase that involves restoring the critical systems and operations of the organization after a disaster. The systems recovery phase is initiated by activating the organization’s hot site. A hot site is a fully equipped and operational alternative site that can be used to resume the business functions within a short time after a disaster. A hot site typically has the same hardware, software, network, and data as the original site, and can be switched to quickly and seamlessly. A hot site can ensure the continuity and availability of the organization’s systems and services during a disaster recovery situation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Business Continuity and Disaster Recovery Planning, page 365; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Business Continuity Planning, page 499.
What is a security concern when considering implementing software-defined networking (SDN)?
It increases the attack footprint.
It uses open source protocols.
It has a decentralized architecture.
It is cloud based.
A security concern when considering implementing software-defined networking (SDN) is that it increases the attack footprint. SDN is a network architecture that decouples the control plane from the data plane and centralizes network intelligence and management in a software controller. SDN enables more flexibility, scalability, and programmability of the network, as well as better integration with cloud services and applications. However, SDN also introduces new security challenges and risks: the centralized controller becomes a high-value target and a potential single point of failure; the northbound and southbound interfaces (such as OpenFlow) expose new attack vectors if they are not authenticated and encrypted; and the programmability of the network means a compromised application or API can reconfigure traffic flows across the entire environment.
Which of the following is the PRIMARY risk associated with Extensible Markup Language (XML) applications?
Users can manipulate the code.
The stack data structure cannot be replicated.
The stack data structure is repetitive.
Potential sensitive data leakage.
The primary risk associated with XML applications is potential sensitive data leakage. XML is a markup language that defines a set of rules for encoding and exchanging data in a human-readable and machine-readable format. XML applications are applications that use XML to store, process, or transmit data, such as web services, RSS feeds, or SOAP messages. XML applications may pose a risk of sensitive data leakage, as XML data may contain confidential or personal information, such as names, addresses, passwords, or credit card numbers. If XML data is not properly protected, encrypted, or validated, it may be exposed, intercepted, or modified by unauthorized parties, leading to data breaches, identity theft, or fraud. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 1012; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 934.
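As one hedged mitigation sketch, the following Python code redacts sensitive elements from untrusted XML before it is logged or forwarded (element names are illustrative; current CPython's xml.etree does not fetch external entities, but dedicated hardening such as defusedxml is still advisable):

```python
import xml.etree.ElementTree as ET

# Element names that should never leave the application in clear text
# (the list is illustrative).
SENSITIVE = {"password", "creditCardNumber", "ssn"}

def redact(xml_text):
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        if elem.tag in SENSITIVE:
            elem.text = "***REDACTED***"
    return ET.tostring(root, encoding="unicode")

print(redact("<user><name>alice</name><password>hunter2</password></user>"))
```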
The development team has been tasked with collecting data from biometric devices. The application will support a variety of collection data streams. During the testing phase, the team utilizes data from an old production database in a secure testing environment. What principle has the team taken into consideration?
Biometric data cannot be changed.
Separate biometric data streams require increased security.
The biometric devices are unknown.
Biometric data must be protected from disclosure.
The principle that the development team has taken into consideration when using data from an old production database in a secure testing environment is that biometric data must be protected from disclosure. Biometric data is a type of data that is derived from the physical or behavioral characteristics of a person, such as fingerprints, iris patterns, or voice recognition. Biometric data is used for identification or authentication purposes, and it is considered as sensitive or personal data that should be protected from unauthorized or malicious access, modification, or disclosure. The development team has taken this principle into consideration when they used data from an old production database in a secure testing environment, as they ensured that the biometric data was not exposed or compromised during the testing phase of the application. Biometric data cannot be changed, separate biometric data streams require increased security, or the biometric devices are unknown are not the principles that the development team has taken into consideration when using data from an old production database in a secure testing environment. Biometric data can be changed, as it may vary due to aging, injury, or disease, and it may need to be updated or replaced. Separate biometric data streams do not necessarily require increased security, as it depends on the type, quality, and purpose of the biometric data. The biometric devices are not unknown, as the development team should be aware of the specifications, capabilities, and limitations of the biometric devices that they are using for the application. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 5: Identity and Access Management, page 407.
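One common way to apply this principle to a production extract is keyed pseudonymization of identifiers before the data enters the test environment; the sketch below assumes HMAC-SHA256 with a managed key (key handling is out of scope here):

```python
import hashlib
import hmac
import os

# Pseudonymize identifiers in a production extract so biometric records in
# the test environment cannot be linked back to real people. The keyed hash
# (HMAC) resists simple dictionary reversal; in practice the key would be a
# managed secret, not a per-run random value.
KEY = os.urandom(32)

def pseudonymize(subject_id: str) -> str:
    return hmac.new(KEY, subject_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"subject": pseudonymize("employee-4711"), "template": b"<biometric>"}
print(record["subject"])
```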
An organization is setting a security assessment scope with the goal of developing a Security Management Program (SMP). The next step is to select an approach for conducting the risk assessment. Which of the following approaches is MOST effective for the SMP?
Data driven risk assessment with a focus on data
Security controls driven assessment that focuses on controls management
Business processes based risk assessment with a focus on business goals
Asset driven risk assessment with a focus on the assets
The approach that is most effective for developing a Security Management Program (SMP) is a business processes based risk assessment with a focus on business goals. An SMP is a framework that defines the policies, procedures, roles, and responsibilities for managing the security of an organization; it aligns the security objectives and activities with the business objectives and strategies, and protects the organization’s assets, information, and operations.

A business processes based risk assessment identifies and evaluates the risks associated with the organization’s business processes, including their inputs, outputs, activities, and resources. Because it focuses on the business goals and outcomes of those processes and considers the impact and likelihood of risks to them, it supports an SMP that is relevant, effective, and efficient: it prioritizes the security needs of the processes, aligns the security controls with them, and measures their performance and improvement.

The other approaches are less effective for an SMP because they may not capture the full scope or context of the security risks, or may not align those risks with the business objectives and strategies. A data driven risk assessment evaluates risks to the organization’s data, such as its classification, storage, transmission, and processing, and focuses on the confidentiality, integrity, and availability of the data. A security controls driven assessment evaluates risks associated with the organization’s security controls, such as its policies, procedures, technologies, and practices, and focuses on the effectiveness, efficiency, and compliance of those controls. An asset driven risk assessment evaluates risks to the organization’s assets, such as the hardware, software, network, or personnel, and considers the threats, vulnerabilities, and consequences affecting them. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1, Security and Risk Management, page 17; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security Governance Through Principles and Policies, page 18.
A company needs to provide employee access to travel services, which are hosted by a third-party service provider. Employee experience is important, and when users are already authenticated, access to the travel portal should be seamless. Which of the following methods is used to share information and grant user access to the travel portal?
Security Assertion Markup Language (SAML) access
Single sign-on (SSO) access
Open Authorization (OAuth) access
Federated access
The method that is used to share information and grant user access to the travel portal is Security Assertion Markup Language (SAML) access. SAML is a standard and protocol that enables the exchange of authentication and authorization information between different domains or entities, such as a service provider (SP) and an identity provider (IdP). SAML access can provide a seamless user experience, because it allows users to access multiple services or resources from different domains, using a single or federated identity, without having to reauthenticate or reauthorize each time. SAML access can also enhance the security and privacy of user information, as it does not require sharing or storing user credentials or attributes between the domains, relying instead on digitally signed and encrypted SAML assertions or messages. SAML access suits this scenario because it lets employees who are already authenticated by the company domain access the third-party travel portal using their existing company identity. References: CISSP CBK, Fifth Edition, Chapter 5, page 450; 2024 exam CISSP Dumps, Question 12.
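For illustration only, the sketch below builds a heavily simplified, unsigned SAML AuthnRequest and encodes it for the HTTP-Redirect binding, which deflates, base64-encodes, and URL-encodes the request into the SAMLRequest parameter; real deployments sign requests and use a full SAML library:

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

def build_redirect_url(idp_sso_url, sp_entity_id):
    # Heavily simplified AuthnRequest XML (no signature, no ACS URL).
    authn_request = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_{uuid.uuid4().hex}" Version="2.0" '
        f'IssueInstant="{datetime.datetime.utcnow().isoformat()}Z">'
        f'<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
        f'{sp_entity_id}</saml:Issuer></samlp:AuthnRequest>'
    )
    # Strip the zlib header (2 bytes) and checksum (4 bytes) for raw DEFLATE,
    # as the HTTP-Redirect binding requires.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    param = urllib.parse.quote_plus(base64.b64encode(deflated))
    return f"{idp_sso_url}?SAMLRequest={param}"

print(build_redirect_url("https://idp.example.test/sso", "https://sp.example.test"))
```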
Which is the second phase of public key infrastructure (PKI) key/certificate life-cycle management?
Issued Phase
Cancellation Phase
Implementation Phase
Initialization Phase
The second phase of public key infrastructure (PKI) key/certificate life-cycle management is the issued phase, where the certificate authority (CA) issues a digital certificate to the requester after verifying their identity and public key. The certificate contains the public key, the identity of the owner, the validity period, the serial number, and the digital signature of the CA. The certificate is then published in a repository or directory for others to access and validate. References: CISSP Study Guide: Key Management Life Cycle, Key Management - OWASP Cheat Sheet Series, CISSP 2021: Software Development Lifecycles & Ecosystems
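Assuming the third-party cryptography package and an illustrative certificate file, the sketch below reads the fields the CA places in a certificate during the issued phase:

```python
# Sketch using the third-party 'cryptography' package to read the fields
# the CA places in a certificate during the issued phase.
from cryptography import x509

with open("server.pem", "rb") as f:                  # path is illustrative
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.serial_number)                            # serial number
print(cert.not_valid_before, cert.not_valid_after)   # validity period
print(cert.issuer.rfc4514_string())                  # the issuing CA
print(cert.subject.rfc4514_string())                 # the certificate owner
```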
How can a security engineer maintain network separation from a secure environment while allowing remote users to work in the secure environment?
Use a Virtual Local Area Network (VLAN) to segment the network
Implement a bastion host
Install anti-virus on all endpoints
Enforce port security on access switches
A bastion host is a hardened system that acts as a gateway between a secure environment and an untrusted network, such as the internet. A bastion host can be used to maintain network separation from a secure environment while allowing remote users to work in the secure environment, by providing controlled access and logging services. A bastion host can also implement additional security measures, such as encryption, authentication, and firewalls, to protect the communication and data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 181; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 255.
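A hedged sketch of the pattern using the third-party paramiko package follows; host names and credentials are placeholders, and a real deployment would use strict host-key checking rather than AutoAddPolicy:

```python
import paramiko

# Reach a host in the secure environment only through the bastion; the
# remote client never connects to the secure network directly.
bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
bastion.connect("bastion.example.test", username="remote_user",
                key_filename="id_ed25519")

# Open a tunnelled channel from the bastion to the internal host.
channel = bastion.get_transport().open_channel(
    "direct-tcpip", ("internal.example.test", 22), ("127.0.0.1", 0))

inner = paramiko.SSHClient()
inner.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
inner.connect("internal.example.test", username="remote_user",
              key_filename="id_ed25519", sock=channel)
stdin, stdout, stderr = inner.exec_command("hostname")
print(stdout.read().decode())
```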
Which change management role is responsible for the overall success of the project and supporting the change throughout the organization?
Change driver
Change implementer
Program sponsor
Project manager
The change management role that is responsible for the overall success of the project and for supporting the change throughout the organization is the program sponsor. The program sponsor is the senior executive or stakeholder who provides the vision, direction, and support for the change management project, and who ensures that the project is aligned and integrated with the business goals and strategy of the organization. The program sponsor is responsible for the overall success of the project and supports the change throughout the organization by providing leadership, guidance, and resources for the project, and by communicating and advocating its benefits and value to the other stakeholders, such as management, employees, or customers. References: CISSP CBK, Fifth Edition, Chapter 6, page 554; 2024 exam CISSP Dumps, Question 20.
Which of the following would an information security professional use to recognize changes to content, particularly unauthorized changes?
File Integrity Checker
Security information and event management (SIEM) system
Audit Logs
Intrusion detection system (IDS)
The tool that an information security professional would use to recognize changes to content, particularly unauthorized changes, is a File Integrity Checker. A File Integrity Checker is a type of security tool that monitors and verifies the integrity and authenticity of files or content by comparing the current state or version against a known, trusted baseline or reference, using methods such as checksums, hashes, or signatures. It can recognize changes, particularly unauthorized ones, by detecting and reporting any discrepancies or anomalies between the current state and the baseline, such as the addition, deletion, modification, or corruption of files or content. It can also help prevent or mitigate unauthorized changes by alerting the information security professional and by restoring the files or content to the original or desired state or version. References: CISSP CBK, Fifth Edition, Chapter 3, page 245; 100 CISSP Questions, Answers and Explanations, Question 18.
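A minimal file integrity checker can be sketched in a few lines of Python: record a SHA-256 baseline for a watch list, then flag any file whose hash later differs (the watch list is illustrative, and the baseline must be stored somewhere tamper-resistant):

```python
import hashlib

def sha256(path):
    # Stream the file in chunks so large files do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    return {p: sha256(p) for p in paths}

def check(paths, saved):
    return {p: "CHANGED" for p in paths if sha256(p) != saved.get(p)}

files = ["/etc/hosts"]          # illustrative watch list
saved = baseline(files)         # persist this somewhere tamper-resistant
print(check(files, saved))      # {} until a watched file is modified
```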
Which layer of the Open Systems Interconnection (OSI) model is being targeted in the event of a Synchronization (SYN) flood attack?
Session
Transport
Network
Presentation
A Synchronization (SYN) flood attack is a type of denial-of-service (DoS) attack that exploits the three-way handshake mechanism of the Transmission Control Protocol (TCP), which operates at the transport layer of the Open Systems Interconnection (OSI) model. In a SYN flood attack, the attacker sends a large number of SYN packets to the target server, but does not respond to the SYN-ACK packets sent by the server. This causes the server to exhaust its resources and become unable to accept legitimate requests. The session, network, and presentation layers of the OSI model are not directly involved in this attack. References:
CISSP Official (ISC)2 Practice Tests, 3rd Edition, Domain 4: Communication and Network Security, Question 4.2.1
CISSP CBK, 5th Edition, Chapter 4: Communication and Network Security, Section: Secure Network Architecture and Design
Which evidence collecting technique would be utilized when it is believed an attacker is employing a rootkit and a quick analysis is needed?
Memory collection
Forensic disk imaging
Malware analysis
Live response
Live response is an evidence collecting technique that involves analyzing a system while it is still running, without shutting it down or altering it. Live response can be useful when it is believed that an attacker is employing a rootkit and a quick analysis is needed. A rootkit is a type of malicious software that hides itself and other malware from detection and removal by modifying the system’s core components, such as the kernel, drivers, or libraries. A rootkit may also erase or alter the evidence of its presence or activities on the system, such as log files, registry entries, or processes. Therefore, live response can help capture the volatile data that may be lost or changed if the system is powered off or rebooted, such as memory contents, network connections, or running processes. Live response can also help identify and isolate the rootkit before it causes more damage or spreads to other systems. References: CISSP All-in-One Exam Guide, Chapter 10: Legal, Regulations, Investigations, and Compliance, Section: Forensics, pp. 1328-1329.
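Assuming the third-party psutil package, a live-response collection sketch might capture volatile state such as running processes and network connections before it is lost; writing the snapshot to removable or remote media, not the suspect disk, is the usual practice:

```python
import json
import time

import psutil

# Capture volatile state (processes and network connections) that would be
# lost on shutdown; a rootkit may still hide entries from user-space APIs,
# so this complements, not replaces, memory imaging.
snapshot = {
    "taken_at": time.time(),
    "processes": [p.info for p in psutil.process_iter(["pid", "name", "exe"])],
    "connections": [
        {"laddr": str(c.laddr), "raddr": str(c.raddr), "status": c.status}
        for c in psutil.net_connections(kind="inet")
    ],
}
# In a real case, write to removable or remote storage, not the suspect disk.
with open("live_response.json", "w") as f:
    json.dump(snapshot, f, default=str)
```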
The European Union (EU) General Data Protection Regulation (GDPR) requires organizations to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. The Data Owner should therefore consider which of the following requirements?
Data masking and encryption of personal data
Only to use encryption protocols approved by EU
Anonymization of personal data when transmitted to sources outside the EU
Never to store personal data of EU citizens outside the EU
The GDPR is a regulation that aims to protect the privacy and security of the personal data of individuals in the EU. The GDPR requires organizations to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. The data owner, who is the person or entity that has the authority and responsibility for the personal data, should therefore consider data masking and encryption of personal data as possible technical measures. Data masking is a technique that replaces or obscures sensitive or identifying information in the personal data with fictitious or random data, such as replacing names with pseudonyms or masking credit card numbers with asterisks. Data encryption is a technique that transforms the personal data into an unreadable or unintelligible form using a secret key, such that only authorized parties with the correct key can access or decrypt the personal data. Data masking and encryption can protect the personal data from unauthorized access, disclosure, or modification, and reduce the impact of data breaches or leaks. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 2: Asset Security, pp. 323-324; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Asset Security, pp. 269-270.
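A minimal sketch of masking and pseudonymization follows; the field names and masking rules are illustrative, not a GDPR compliance recipe:

```python
def mask_card(pan: str) -> str:
    # Keep only the last four digits of the card number.
    return "*" * (len(pan) - 4) + pan[-4:]

def mask_email(email: str) -> str:
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

record = {"name": "Alice Example", "email": "alice@example.eu",
          "card": "4111111111111111"}
masked = {"name": "subject-001",            # pseudonym replaces the name
          "email": mask_email(record["email"]),
          "card": mask_card(record["card"])}
print(masked)
```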
Which of the following BEST describes when an organization should conduct a black box security audit on a new software product?
When the organization wishes to check for non-functional compliance
When the organization wants to enumerate known security vulnerabilities across their infrastructure
When the organization has experienced a security incident
When the organization is confident the final source code is complete
The best description of when an organization should conduct a black box security audit on a new software product is when the organization is confident the final source code is complete. A black box security audit is a type of security testing that exercises the functionality and behavior of the software product without any knowledge of, or access to, its internal structure, design, or code. It can identify and evaluate the security vulnerabilities and issues of the product from the perspective of an external attacker or user. It should be conducted when the organization is confident the final source code is complete, because only then can it provide a comprehensive and realistic assessment of the product’s security and validate the security requirements and expectations for it. References: CISSP CBK, Fifth Edition, Chapter 3, page 234; 2024 exam CISSP Dumps, Question 19.
Which of the following practices provides the development of security and identification of threats in designing software?
Stakeholder review
Requirements review
Penetration testing
Threat modeling
Threat modeling is a practice that provides the development of security and identification of threats in designing software. Threat modeling is a systematic process of identifying, analyzing, and mitigating the potential threats and vulnerabilities that could affect a software system. Threat modeling helps to design secure software by applying security principles, such as defense in depth, least privilege, and fail-safe defaults, throughout the software development life cycle. Stakeholder review, requirements review, and penetration testing are not practices that provide the development of security and identification of threats in designing software, although they may contribute to the overall security assurance of the software. Stakeholder review is a process of obtaining feedback and approval from the stakeholders of a software project, such as customers, users, managers, and developers. Requirements review is a process of verifying and validating the functional and non-functional requirements of a software system, such as performance, usability, reliability, and security. Penetration testing is a process of simulating real-world attacks on a software system to identify and exploit its vulnerabilities and weaknesses. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 903. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 635.
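A tiny STRIDE-style sketch in Python illustrates the enumeration step of threat modeling; the design elements and threat mappings are invented:

```python
# STRIDE-style enumeration: for each design element, list candidate threat
# categories to drive mitigation decisions during design.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

elements = {
    "login endpoint": ["Spoofing", "Information disclosure"],
    "audit log":      ["Tampering", "Repudiation"],
    "admin API":      ["Elevation of privilege", "Denial of service"],
}

for element, threats in elements.items():
    for threat in threats:
        assert threat in STRIDE
        print(f"{element}: consider mitigations for {threat}")
```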
What would be the BEST action to take in a situation where collected evidence was left unattended overnight in an unlocked vehicle?
Report the matter to the local police authorities.
Move evidence to a climate-controlled environment.
Re-inventory the evidence and provide it to the evidence custodian.
Immediately report the matter to the case supervisor.
The best action to take in a situation where collected evidence was left unattended overnight in an unlocked vehicle is to immediately report the matter to the case supervisor. Leaving evidence unattended in an unlocked vehicle is a serious breach of the chain of custody, which is the process of documenting and preserving the integrity and authenticity of the evidence from the time of collection to the time of presentation in court. The chain of custody requires that the evidence is properly labeled, sealed, stored, transported, and handled by authorized personnel, and that any changes or transfers are recorded and justified. Leaving evidence unattended in an unlocked vehicle exposes the evidence to the risk of loss, theft, damage, contamination, or tampering, which could compromise the validity and admissibility of the evidence in court. Therefore, the incident should be reported to the case supervisor as soon as possible, so that the appropriate actions can be taken to mitigate the impact and prevent further incidents. References: CISSP All-in-One Exam Guide, Chapter 10: Legal, Regulations, Investigations, and Compliance, Section: Forensics, pp. 1334-1335.
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. Incident response is a process that involves detecting, analyzing, containing, eradicating, recovering from, and learning from an incident, using various methods and tools. Done well, it minimizes the damage and disruption caused by the incident, restores normal operations faster, preserves evidence for later investigation, and produces lessons learned that improve future response.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it verifies and validates that an incident has actually occurred before the response is initiated and escalated. A symptom is a sign or indication that an incident may have occurred or is occurring, such as an alert, a log entry, or a user report. Confirming the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or external parties, and determining whether an incident has happened or is happening, and how serious or urgent it is. This investigation also helps to rule out false positives, establish the initial scope and severity of the incident, and trigger the appropriate escalation and notification procedures.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with confirming the incident. Determining the cause of the incident comes after confirmation, because it identifies and analyzes the root cause and source of the incident so the response can be directed and focused. It involves examining and testing the affected IT systems and data, and tracing the origin and path of the incident using techniques and tools such as forensics, malware analysis, or reverse engineering. Determining the cause also helps to identify the attacker and the attack vector, support eradication and recovery, and prevent the incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with confirming the incident, because it isolates and protects the system from external or internal influences or interference, and keeps the response in a safe and controlled environment. Disconnecting the system also helps to stop the incident from spreading to other systems, cut off any data exfiltration or command-and-control channels, and preserve the volatile state of the system for analysis.
Isolating and containing the system involved is a step that should be done after confirming the incident, because it confines and restricts the incident so the response can continue. It involves applying and enforcing appropriate security measures and controls, such as firewall rules, access policies, or encryption keys, to limit or stop the activity and impact of the incident on the IT systems and data. Isolating and containing the system also helps to limit further damage, protect unaffected systems, and prepare the environment for eradication and recovery.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have various purposes, such as exfiltrating data, logging keystrokes, giving an attacker remote control of the host, enrolling the host in a botnet, or simply running as unapproved software.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. It can establish what the application does and how it arrived, support containment and remediation decisions, and produce evidence that can be preserved and presented if the matter becomes a legal or disciplinary case.
Isolating the system from the network is the most important step, because it ensures that the system is isolated and protected from any external or internal influences or interference, and that the forensic analysis is conducted in a safe and controlled environment. It also prevents the unknown application from communicating with external command-and-control servers, spreading to other systems, or receiving instructions to alter or destroy evidence while it is being examined.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP effort that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP typically includes components such as a business impact analysis, recovery strategies and priorities, roles and responsibilities, communication plans, and procedures for testing, training, and maintenance.
A BCP is considered to be valid when it has been validated by realistic exercises, because only exercises demonstrate that the plan is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Such exercises reveal gaps and weaknesses in the plan, familiarize staff with their roles, and test the plan’s assumptions about recovery times and required resources.
The other options are not criteria for considering a BCP to be valid, but rather steps or parties involved in developing or approving it.

Validation by the Business Continuity (BC) manager is a step in developing a BCP. The BC manager oversees and coordinates the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can review and verify the BCP components and outcomes against the BCP standards and objectives, but this review alone does not test or demonstrate the plan in a realistic scenario, so it is not sufficient to consider the BCP valid.

Validation by the board of directors is part of approving a BCP. The board, elected by the shareholders to represent their interests and oversee the strategic direction and governance of the organization, can endorse and support the BCP and allocate the necessary resources and funds for it. Approval by the board, however, does not test or demonstrate the plan in a realistic scenario either.

Validation by all threat scenarios is an unrealistic expectation. A threat scenario describes or simulates a possible disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure, and it can be used to measure and evaluate the BCP’s performance in responding and recovering. But it is not feasible to validate the BCP against every conceivable scenario: there are too many, some are unknown, and some are too severe or complex to simulate. The BCP should instead be validated against the most likely or most relevant threat scenarios.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. The chain of evidence should record who collected the evidence, when and where it was collected, how it was labeled, sealed, stored, and transported, and every person who subsequently handled it and why.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
An insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet, offering benefits such as scalability, availability, and managed maintenance of the underlying platform.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA can include various components, such as:
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
The other options are not the most probable causes, but rather factors that affect the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution could affect the organization's ability to analyze and use the data produced by the Web hosting solution, such as web traffic, behavior, or conversion figures; a BI solution collects, integrates, processes, and presents data from various sources to support decision making and planning. However, it does not affect the definition or specification of the performance indicators themselves, only their analysis and use. Inadequate cost modeling could affect the organization's ability to estimate and optimize the cost and value of the Web hosting solution, such as hosting fees, maintenance costs, or return on investment; a cost model helps the organization calculate and compare cost and value and select the most efficient option. However, it concerns the estimation and optimization of cost and value, not the definition of performance indicators. Improper deployment of the Service-Oriented Architecture (SOA) could affect the design and development of the Web hosting solution, such as its web services, components, or interfaces; an SOA modularizes, standardizes, and integrates the software components or services that provide the solution's functionality, offering benefits such as reusability, modularity, and easier integration. However, improper deployment of the SOA affects the design or development of the Web hosting solution, not the definition or specification of its performance indicators.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is a part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include various components, such as recovery objectives (Recovery Time Objective and Recovery Point Objective), recovery strategies, roles and responsibilities, and procedures for testing and maintaining the plan.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this alignment ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or a target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to minimize downtime and data loss, control recovery costs, and support the organization's strategic goals.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as minimizing downtime and financial losses, giving staff clear roles and procedures during a crisis, and speeding the recovery of critical functions and operations.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as validating the assumptions and procedures in the plans, training staff in their roles, and uncovering gaps or weaknesses before a real incident occurs.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Common types include the walkthrough (a tabletop review and discussion of the plans), the simulation (a rehearsed disaster scenario executed without touching live operations), the parallel test (the alternate site is activated and run alongside the primary site), and the full interruption test (the primary site is actually shut down and operations are transferred to the alternate site).
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not types of business continuity tests that assess resilience to internal and external risks without endangering live operations. A walkthrough is a review and discussion of the BCP and DRP, without any actual testing or practice, so it does not assess resilience. A parallel test does not endanger live operations, but it focuses on activating and operating the alternate site or system alongside the primary site rather than on simulating risk scenarios. White box is not a business continuity test at all; it is a software testing approach in which the tester has full knowledge of the internal structure of the code. (A full interruption test, by contrast, would endanger live operations by shutting them down and transferring them to the alternate site or system.)
What would be the MOST cost-effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are the cold site (space, power, and connectivity only, with equipment installed and data restored after the disaster), the warm site (partially equipped with hardware and network connections, requiring data restoration and configuration before use), the hot site (fully equipped and kept nearly up to date, ready to take over within hours), and the mirror site (a fully redundant duplicate of the primary site, kept online and synchronized at all times).
A warm site is the most cost-effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are either too costly or too slow for the stated recovery objective. A hot site is too costly: the organization must invest heavily in the DR site's equipment, software, and services and pay ongoing operational and maintenance costs. A hot site suits systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is likewise too costly: it requires duplicating the entire primary site, with the same hardware, software, data, and applications, kept online and synchronized at all times. A mirror site suits systems that cannot afford any downtime or data loss, or that face very strict compliance and regulatory requirements. A cold site is too slow: installing, configuring, and restoring the DR site takes considerable time and effort, and the organization must rely on other sources of backup data and applications. A cold site suits systems that can be unavailable for days or weeks, or that have very low criticality and priority.
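The trade-off above can be made concrete with a small sketch. The recovery times and relative costs below are illustrative assumptions, not authoritative figures; the point is simply that the cheapest site type whose typical recovery time still meets a 24-hour outage limit is the warm site.

```python
# Illustrative (not authoritative) typical recovery times and relative costs.
SITE_OPTIONS = [
    # (name, typical_recovery_hours, relative_cost: 1 = cheapest)
    ("cold site",   24 * 7, 1),
    ("warm site",   24,     2),
    ("hot site",    1,      3),
    ("mirror site", 0,      4),
]

def cheapest_site(max_outage_hours: float) -> str:
    """Pick the least expensive site whose typical recovery time
    still meets the maximum tolerable outage."""
    viable = [(cost, name) for name, hours, cost in SITE_OPTIONS
              if hours <= max_outage_hours]
    return min(viable)[1]

print(cheapest_site(24))  # -> "warm site"
```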
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk and impact of changes, improving the visibility and traceability of changes, and keeping configurations and documentation consistent and current.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as ongoing visibility into the security posture of the system or network, timely detection of security issues and incidents, and better-informed risk management decisions.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
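As a rough illustration of volatility-driven scheduling, the sketch below maps a control's rate of change to a monitoring interval. The thresholds and intervals are assumptions chosen for the example, not values mandated by any ISCM standard.

```python
from datetime import timedelta

def monitoring_interval(changes_per_month: float) -> timedelta:
    """Illustrative mapping from control volatility to monitoring frequency."""
    if changes_per_month >= 10:   # highly volatile (e.g., firewall rule sets)
        return timedelta(hours=1)
    if changes_per_month >= 1:    # moderately volatile (e.g., patch levels)
        return timedelta(days=1)
    return timedelta(days=30)     # stable (e.g., physical controls)

print(monitoring_interval(changes_per_month=15))   # 1:00:00
print(monitoring_interval(changes_per_month=0.1))  # 30 days, 0:00:00
```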
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as single sign-on (SSO) across applications, centralized policy enforcement, and reduced administrative overhead.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as ongoing visibility into the security posture of the system or network, timely detection of security issues and incidents, and better-informed risk management decisions.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program: people are the roles, skills, and awareness of the staff who operate, use, and oversee the monitoring program; process is the policies, procedures, and workflows that govern how the monitoring is planned, performed, and acted upon; and technology is the tools and systems that collect, analyze, and report the security data and information.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
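A minimal sketch of the rule-matching logic described above, in Python: the rules, addresses, and first-match semantics are illustrative assumptions, since real packet filters are implemented on routers and firewalls rather than in application code.

```python
from ipaddress import ip_address, ip_network

# One illustrative rule set; a rule matches on the network-layer source
# address and, optionally, the transport-layer destination port.
RULES = [
    # (action, source network, destination port or None for any)
    ("allow", ip_network("10.0.0.0/8"), 443),
    ("deny",  ip_network("0.0.0.0/0"), None),  # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet."""
    src = ip_address(src_ip)
    for action, net, port in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"

print(filter_packet("10.1.2.3", 443))     # allow -- internal host to HTTPS
print(filter_packet("203.0.113.9", 443))  # deny  -- falls through to default
```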
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting unknown or zero-day attacks for which no signature exists, exposing stealthy activity that deviates from the learned baseline, and complementing signature-based defenses such as IDS and IPS.
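The baseline idea can be shown in its simplest statistical form: flag a measurement that deviates from the learned mean by more than a few standard deviations. Real NBA products use far richer models; the traffic figures below are invented for the example.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, z_limit: float = 3.0) -> bool:
    """Flag the current measurement (e.g., bytes/minute from a host)
    if it deviates from the learned baseline by more than z_limit
    standard deviations -- no attack signature required."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(current - mu) / sigma > z_limit

baseline = [980, 1010, 995, 1005, 990, 1000, 1015, 985]  # normal traffic
print(is_anomalous(baseline, 1002))  # False -- within the baseline
print(is_anomalous(baseline, 9500))  # True  -- large deviation, worth an alert
```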
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
The use of a small-range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities: the 24-bit IV space is so small that IVs repeat quickly on a busy network, causing RC4 keystream reuse; weak IVs leak information about the key through RC4’s key scheduling (the basis of the FMS and related attacks); CRC-32 is a linear checksum rather than a cryptographic integrity check, so packets can be modified without detection; and the shared secret key is static and rarely changed. The sketch below estimates how quickly IV reuse becomes likely.
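Assuming IVs are chosen uniformly at random (many implementations simply increment them, which is no better), the standard birthday-problem approximation shows how quickly a 24-bit IV space produces repeats.

```python
import math

IV_SPACE = 2 ** 24  # WEP's 24-bit Initialization Vector

def collision_probability(frames: int, space: int = IV_SPACE) -> float:
    """Birthday-problem probability that at least two of `frames`
    randomly chosen IVs repeat, forcing RC4 keystream reuse.
    P(no collision) is approximated as exp(-n(n-1) / (2 * space))."""
    return 1 - math.exp(-frames * (frames - 1) / (2 * space))

for n in (1_000, 5_000, 12_000):
    print(f"{n:>6} frames: {collision_probability(n):.0%} chance of a repeated IV")
# ~3% at 1,000 frames, ~53% at 5,000, ~99% at 12,000 --
# only minutes of traffic on a busy wireless network.
```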
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), buffer overflows, command injection, and information leakage through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering HTTP requests and responses, blocking known attack patterns, enforcing input constraints, and logging application-level events.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to reject requests whose parameters contain SQL or script injection patterns, to limit the length and character set of input fields, and to suppress detailed error messages returned to the client, as the sketch below illustrates.
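A minimal sketch of such a rule in Python follows. The patterns and length limit are illustrative assumptions only; production web application firewall rule sets are far more extensive and are configured in the firewall itself rather than written by hand like this.

```python
import re

# Illustrative patterns only; real WAF rule sets are much richer.
BLOCK_PATTERNS = [
    re.compile(r"('|--|;)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # SQLi tautology
    re.compile(r"<\s*script\b", re.IGNORECASE),                         # reflected XSS
]
MAX_FIELD_LENGTH = 256  # assumption: reject oversized input outright

def allow_request(field_value: str) -> bool:
    """Return False if the input looks like an injection attempt."""
    if len(field_value) > MAX_FIELD_LENGTH:
        return False
    return not any(p.search(field_value) for p in BLOCK_PATTERNS)

print(allow_request("alice@example.com"))          # True
print(allow_request("' OR 1=1 --"))                # False -- SQL injection shape
print(allow_request("<script>alert(1)</script>"))  # False -- script injection
```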
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as smaller broadcast domains, containment of a compromise to a single segment, finer-grained access control between segments, and reduced network congestion.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
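The isolation effect can be modeled with a toy sketch: hosts are assigned to VLANs, and the sniffer only observes traffic whose endpoints share its broadcast domain. The host names and VLAN numbers are invented for the example; real switches enforce this at the frame-forwarding level.

```python
# Minimal model: a sniffer only captures frames whose source and
# destination share its broadcast domain (VLAN).
VLAN_OF = {
    "workstation-17": 10,  # compromised host running the sniffer
    "workstation-22": 10,
    "hr-db":          20,  # sensitive system, segmented away
    "finance-app":    20,
}

def sniffer_can_see(sniffer_host: str, src: str, dst: str) -> bool:
    v = VLAN_OF[sniffer_host]
    return VLAN_OF[src] == v and VLAN_OF[dst] == v

print(sniffer_can_see("workstation-17", "workstation-22", "workstation-17"))  # True
print(sniffer_can_see("workstation-17", "hr-db", "finance-app"))              # False
```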
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
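For illustration, the sketch below parses the fixed LCP header defined in RFC 1661 (Code, Identifier, Length) from raw bytes; the sample packet is constructed by hand and carries one Maximum-Receive-Unit option.

```python
import struct

# LCP code values from RFC 1661, section 5.
LCP_CODES = {
    1: "configure-request", 2: "configure-ack", 3: "configure-nak",
    4: "configure-reject", 5: "terminate-request", 6: "terminate-ack",
    7: "code-reject", 8: "protocol-reject", 9: "echo-request",
    10: "echo-reply", 11: "discard-request",
}

def parse_lcp(packet: bytes) -> tuple[str, int, bytes]:
    """Split an LCP packet into (message type, identifier, data).
    The fixed header is Code (1 byte), Identifier (1 byte), Length (2 bytes)."""
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return LCP_CODES.get(code, "unknown"), identifier, packet[4:length]

# A configure-request (code 1, id 0x2A, total length 8) carrying one option:
# Maximum-Receive-Unit (type 1, length 4, value 0x05DC = 1500).
sample = bytes([1, 0x2A, 0x00, 0x08]) + b"\x01\x04\x05\xdc"
print(parse_lcp(sample))  # ('configure-request', 42, b'\x01\x04\x05\xdc')
```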
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the initiating node sends a SYN segment carrying an initial sequence number; the receiving node replies with a SYN-ACK segment that acknowledges the SYN and supplies its own sequence number; and the initiating node completes the exchange with an ACK segment, after which data can flow in both directions.
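The handshake is performed by the operating system's TCP stack, not by application code; the short example below simply shows that a successful socket connection implies the three-way handshake has completed. The host name is a placeholder.

```python
import socket

# create_connection() triggers the three-way handshake under the hood:
# the kernel sends SYN, receives SYN-ACK, and replies with ACK before
# the call returns a fully established, usable connection.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    print("connection established with:", conn.getpeername())
```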
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks with untraceable or reflected traffic, hijacking or injecting data into TCP sessions, and bypassing IP-address-based access controls by masquerading as a trusted host.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships, as the sketch below illustrates.
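For illustration only (and only in an isolated lab), the sketch below uses the scapy packet-crafting library to forge a source address. The addresses come from the RFC 5737 documentation ranges, and sending raw packets requires administrative privileges.

```python
# pip install scapy; must run with raw-socket (root/administrator) privileges.
from scapy.all import IP, ICMP, send

spoofed = IP(src="192.0.2.10",          # forged "trusted" source address
             dst="198.51.100.20") / ICMP()
send(spoofed, verbose=False)            # the receiver sees 192.0.2.10, not the true sender
```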
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
Which of the following is the PRIMARY reason for employing physical security personnel at entry points in facilities where card access is in operation?
To verify that only employees have access to the facility.
To identify present hazards requiring remediation.
To monitor staff movement throughout the facility.
To provide a safe environment for employees.
According to the CISSP CBK Official Study Guide, the primary reason for employing physical security personnel at entry points in facilities where card access is in operation is to provide a safe environment for employees. Card readers, locks, badges, and similar mechanisms form the technical layer of a physical security system; security personnel add a human layer on top of it. Guards can respond to tailgating, challenge suspicious behavior, assist during emergencies, and exercise judgment that an unattended card reader cannot, which directly reduces risks to people such as theft, vandalism, and workplace violence. Providing a safe environment in this way protects both the well-being and the productivity of the employees.
The other options describe benefits or by-products of posting personnel at entry points, not the primary reason for doing so. Verifying that only employees have access to the facility is largely performed by the card access system itself; personnel supplement that verification, but it is not the main objective of staffing the entrance. Identifying present hazards requiring remediation, such as fire or flood risks and the evacuation or recovery measures they call for, is a safety function that personnel may perform incidentally while on post, but it is not why they are stationed at entry points. Monitoring staff movement throughout the facility is likewise a by-product of badge logs and guard observation; it can support both security and productivity, but it is not the principal purpose of entry-point personnel.
Outage costs caused by a disaster can BEST be measured by the
cost of redundant systems and backups.
cost to recover from an outage.
overall long-term impact of the outage.
revenue lost during the outage.
Outage costs caused by a disaster can best be measured by the overall long-term impact of the outage, which includes both direct and indirect costs. Direct costs are the expenses incurred to restore normal operations, such as the cost of repair, replacement, recovery, or relocation. Indirect costs are the losses or damages that result from the interruption of business activities, such as the loss of revenue, productivity, reputation, customer loyalty, market share, or competitive advantage. The overall long-term impact of the outage can be estimated using methods such as business impact analysis (BIA), return on investment (ROI), or total cost of ownership (TCO). The cost of redundant systems and backups, the cost to recover from an outage, and the revenue lost during the outage are each components of the total, but each captures only part of it and none alone reflects the full extent of the outage costs caused by a disaster.
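As a rough illustration of how direct and indirect components add up, the following sketch totals a set of purely hypothetical figures; in practice a BIA would supply the estimates:

    # Hypothetical worked example: estimating the overall cost of an outage.
    direct_costs = {
        "repair_and_replacement": 50_000,   # restore hardware and facilities
        "recovery_labor": 20_000,
        "temporary_relocation": 15_000,
    }
    indirect_costs = {
        "lost_revenue": 120_000,
        "lost_productivity": 40_000,
        "reputation_and_churn": 80_000,     # hardest to quantify; estimated in a BIA
    }
    total = sum(direct_costs.values()) + sum(indirect_costs.values())
    print(f"Estimated overall outage cost: ${total:,}")  # $325,000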
Which of the following disaster recovery test plans will be MOST effective while providing minimal risk?
Read-through
Parallel
Full interruption
Simulation
A disaster recovery test plan is a document that describes the methods and procedures for testing the effectiveness and readiness of a disaster recovery plan (DRP), which is a subset of a BCP focused on restoring the organization's IT systems and data after a disruption or disaster. Testing can be performed at different types and levels, depending on the objectives, scope, and resources of the organization.
A parallel test activates the backup site and runs the critical systems and processes in parallel with the primary site, without disrupting normal operations. It is the most effective test that still provides minimal risk: it simulates a realistic disaster scenario and lets the organization evaluate the performance and functionality of the backup site, as well as the communication and coordination between the primary and backup sites, yet it never affects the primary site's normal operations and does not require switching over to the backup site.
A read-through test reviews the DRP document and verifies its accuracy and completeness without performing any actual actions. It carries the least risk but is also the least effective, since it does not exercise the implementation and execution of the DRP and cannot reveal operational issues or gaps.
A full interruption test shuts down the primary site and performs normal operations from the backup site. It is the most realistic test, but it also carries the highest risk, as it interrupts the primary site and may cause data loss, downtime, or customer dissatisfaction.
A simulation test walks through the DRP's actions and procedures against a simulated disaster scenario, without activating the backup site or affecting normal operations. It is moderately effective and moderately risky: it exercises the DRP's procedures and can identify gaps, but it does not test the performance and functionality of the backup site itself.
References: Disaster Recovery Test Plan; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations.
Order the steps below to create an effective vulnerability management process.
Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
The correct matches are based on the definitions and characteristics of each access control model.
References: ISC2 CISSP
Which of the following are required components for implementing software configuration management systems?
Audit control and signoff
User training and acceptance
Rollback and recovery processes
Regression testing and evaluation
The required components for implementing software configuration management systems are audit control and signoff, rollback and recovery processes, and regression testing and evaluation. Software configuration management systems are tools and techniques that enable the identification, control, tracking, and verification of the changes and versions of software products throughout the software development life cycle. Audit control and signoff are the mechanisms that ensure that the changes and versions of the software products are authorized, documented, reviewed, and approved by the appropriate stakeholders. Rollback and recovery processes are the procedures that enable the restoration of the previous state or version of the software products in case of a failure or error. Regression testing and evaluation are the methods that verify that the changes and versions of the software products do not introduce new defects or affect the existing functionality or performance. User training and acceptance are not required components for implementing software configuration management systems, as they are related to the deployment and operation of the software products, not the configuration management. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1037. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1063.
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
The effectiveness of the security program can PRIMARILY be measured through
audit findings.
risk elimination.
audit requirements.
customer satisfaction.
The effectiveness of the security program can primarily be measured through audit findings. Audit findings are the results of the audit process, a systematic and independent examination of security activities that determines whether they comply with security policies and standards and whether they achieve the security objectives and goals. Audit findings measure the effectiveness of the security program because they identify and report its strengths and weaknesses, successes and failures, and gaps and risks, and they provide recommendations and feedback for its improvement. Risk elimination, audit requirements, and customer satisfaction are not primary measures of effectiveness: risk can never be fully eliminated, audit requirements drive the audit rather than evaluate the program, and customer satisfaction reflects service quality rather than security performance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 54.
What is the MAIN feature that onion routing networks offer?
Non-repudiation
Traceability
Anonymity
Resilience
The main feature that onion routing networks offer is anonymity. Anonymity is the state of being unknown or unidentifiable by hiding or masking the identity or the location of the sender or the receiver of a communication. Onion routing is a technique that enables anonymous communication over a network, such as the internet, by encrypting and routing the messages through multiple layers of intermediate nodes, called onion routers. Onion routing can protect the privacy and security of the users or the data, and can prevent censorship, surveillance, or tracking by third parties. Non-repudiation, traceability, and resilience are not the main features that onion routing networks offer, as they are related to the proof, tracking, or recovery of the communication, not the anonymity of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 467. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 483.
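To make the layering concrete, the sketch below wraps a message in three layers of symmetric encryption, one per relay, so that each relay can strip only its own layer and no single relay sees both the sender and the plaintext. This is a toy illustration rather than the actual Tor protocol (which also negotiates per-hop keys and carries routing headers), and it assumes the pyca/cryptography package:

    # Toy illustration of onion routing's layered encryption (not the real Tor protocol).
    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay

    message = b"hello from an anonymous sender"

    # The sender encrypts for the exit relay first, then wraps outward to the entry relay.
    onion = message
    for key in reversed(relay_keys):
        onion = Fernet(key).encrypt(onion)

    # Each relay, in order, peels exactly one layer; only the exit relay sees plaintext.
    for key in relay_keys:
        onion = Fernet(key).decrypt(onion)

    assert onion == message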
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following documents explains the proper use of the organization's assets?
Human resources policy
Acceptable use policy
Code of ethics
Access control policy
The document that explains the proper use of the organization’s assets is the acceptable use policy. An acceptable use policy is a document that defines the rules and guidelines for the appropriate and responsible use of the organization’s information systems and resources, such as computers, networks, or devices. An acceptable use policy can help to prevent or reduce the misuse, abuse, or damage of the organization’s assets, and to protect the security, privacy, and reputation of the organization and its users. An acceptable use policy can also specify the consequences or penalties for violating the policy, such as disciplinary actions, termination, or legal actions. A human resources policy, a code of ethics, and an access control policy are not the documents that explain the proper use of the organization’s assets, as they are related to the management, values, or authorization of the organization’s employees or users, not the usage or responsibility of the organization’s information systems or resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
Given the various means to protect physical and logical assets, match the access management area to the technology.
In the context of protecting physical and logical assets, the access management areas and the technologies can be matched as follows:
- Facilities are the physical buildings or locations that house the organization's assets, such as servers, computers, or documents. Facilities can be protected by windows that resist breakage, intrusion, or eavesdropping and that prevent light or sound from leaking out of the facility.
- Devices are the hardware or software components that enable the communication or processing of data, such as routers, switches, firewalls, or applications. Devices can be protected by firewalls that filter, block, or allow network traffic based on predefined rules or policies, preventing unauthorized or malicious access or attacks against the devices or the data.
- Information Systems are the systems that store, process, or transmit data, such as databases, servers, or applications. Information Systems can be protected by authentication mechanisms that verify the identity or credentials of the users or devices requesting access, preventing impersonation or spoofing.
- Encryption is a technology that can be applied across these areas, such as Devices or Information Systems, to protect the confidentiality and integrity of data. Encryption transforms data into an unreadable form using a secret key and an algorithm, preventing interception, disclosure, or modification by unauthorized parties.
Which of the following is the MOST difficult to enforce when using cloud computing?
Data access
Data backup
Data recovery
Data disposal
The most difficult thing to enforce when using cloud computing is data disposal. Data disposal is the process of permanently deleting or destroying the data that is no longer needed or authorized, in a secure and compliant manner. Data disposal is challenging when using cloud computing, because the data may be stored or replicated in multiple locations, devices, or servers, and the cloud provider may not have the same policies, procedures, or standards as the cloud customer. Data disposal may also be affected by the legal or regulatory requirements of different jurisdictions, or the contractual obligations of the cloud service agreement. Data access, data backup, and data recovery are not the most difficult things to enforce when using cloud computing, as they can be achieved by using encryption, authentication, authorization, replication, or restoration techniques, and by specifying the service level agreements and the roles and responsibilities of the cloud provider and the cloud customer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 337. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 353.
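One common compensating technique is crypto-shredding: the customer encrypts data before it reaches the cloud and disposes of it by destroying the key, so every remaining replica becomes unreadable. A minimal sketch, assuming the pyca/cryptography package:

    # Crypto-shredding sketch: destroying the key renders all cloud replicas unreadable.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # held by the customer, never by the cloud provider
    ciphertext = Fernet(key).encrypt(b"customer record to be stored in the cloud")

    # ... the ciphertext may now be replicated across regions, backups, and caches ...

    # Disposal: securely destroy the key (in practice, delete it from the key vault/HSM).
    key = None
    # Without the key, every surviving copy of the ciphertext is effectively disposed of.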
Which of the following actions MUST be taken if a vulnerability is discovered during the maintenance stage in a System Development Life Cycle (SDLC)?
Make changes following principle and design guidelines.
Stop the application until the vulnerability is fixed.
Report the vulnerability to product owner.
Monitor the application and review code.
The action that must be taken if a vulnerability is discovered during the maintenance stage in a SDLC is to make changes following principle and design guidelines. Principle and design guidelines are the rules and standards that define the security objectives, requirements, and specifications of the system, and they provide the criteria and methods for evaluating and testing its security. By making changes that follow these guidelines, the organization can ensure that the vulnerability is fixed in a secure and consistent manner and that the system maintains its functionality and quality. The other options are not actions that must be taken, as they either do not fix the vulnerability (B and D) or do not follow the principle and design guidelines (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 461; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 553.
What does secure authentication with logging provide?
Data integrity
Access accountability
Encryption logging format
Segregation of duties
Secure authentication with logging provides access accountability, which means that the actions of users can be traced and audited. Logging can help identify unauthorized or malicious activities, enforce policies, and support investigations.
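A minimal sketch of what such logging might look like, using Python's standard logging module; the credential store and event names are illustrative only:

    # Illustrative sketch: logging authentication events for access accountability.
    import logging

    logging.basicConfig(
        filename="auth.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",   # who, when, and what outcome
    )

    USERS = {"alice": "correct horse battery staple"}     # toy store; never keep plaintext

    def authenticate(username: str, password: str) -> bool:
        ok = USERS.get(username) == password
        if ok:
            logging.info("LOGIN_SUCCESS user=%s", username)
        else:
            logging.warning("LOGIN_FAILURE user=%s", username)
        return ok

    authenticate("alice", "wrong password")  # appends a traceable LOGIN_FAILURE record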
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will indicate where the IT budget is BEST allocated during this time?
Policies
Frameworks
Metrics
Guidelines
The best indicator of where the IT budget should be allocated during this time is metrics. Metrics are measurements of the performance, effectiveness, efficiency, and quality of IT processes, activities, and outcomes. Metrics support rational, objective, evidence-based budget allocation because they show the value, impact, and return of IT investments and identify gaps, risks, and opportunities for improvement. Metrics also help justify, communicate, and report the budget allocation to senior management and stakeholders, and align it with business needs and requirements. Policies, frameworks, and guidelines define, guide, or standardize IT processes and activities, but they do not measure performance, effectiveness, efficiency, or quality, so they are weaker indicators of where the budget is best allocated. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 38. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 53.
The use of proximity card to gain access to a building is an example of what type of security control?
Legal
Logical
Physical
Procedural
The use of a proximity card to gain access to a building is an example of a physical security control. Physical security controls protect an organization's physical assets and resources, such as buildings, equipment, and documents, from unauthorized or malicious access, damage, or theft; they include locks, doors, windows, fences, gates, cameras, alarms, guards, and badges. A proximity card uses a radio frequency identification (RFID) chip or a magnetic stripe to store and transmit the identity or credentials of the card holder, and it unlocks a door or gate when brought close to a reader or scanner. Legal, logical, and procedural controls, by contrast, concern laws, regulations, and contracts; software, hardware, and network components; and policies, guidelines, and processes, respectively, rather than physical assets and resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 877. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 893.
Refer to the information below to answer the question.
During the investigation of a security incident, it is determined that an unauthorized individual accessed a system which hosts a database containing financial information.
If the intrusion causes the system processes to hang, which of the following has been affected?
System integrity
System availability
System confidentiality
System auditability
If the intrusion causes the system processes to hang, system availability has been affected. System availability is the property that ensures the system is accessible and functional when needed by authorized users and entities, and that it is protected from unauthorized or malicious denial or disruption of service. Hung processes affect availability because they prevent or delay the system from responding to requests or performing tasks, and can cause it to crash or freeze. Availability can also be affected by other factors, such as network congestion, hardware failure, power outages, or attacks such as a distributed denial-of-service (DDoS) attack. System integrity, system confidentiality, and system auditability concern the accuracy, secrecy, and accountability of the system, respectively, not its accessibility or functionality, so they are not what has been affected here. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 263. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 279.
Which of the following is a process within a Systems Engineering Life Cycle (SELC) stage?
Requirements Analysis
Development and Deployment
Production Operations
Utilization Support
Requirements analysis is a process within the Systems Engineering Life Cycle (SELC) stage of Concept Development. It involves defining the problem, identifying the stakeholders, eliciting the requirements, analyzing the requirements, and validating the requirements. Requirements analysis is essential for ensuring that the system meets the needs and expectations of the users and customers. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Architecture and Engineering, p. 295; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Design, p. 149.
The amount of data that will be collected during an audit is PRIMARILY determined by the
audit scope.
auditor's experience level.
availability of the data.
integrity of the data.
The amount of data that will be collected during an audit is primarily determined by the audit scope. The audit scope is the extent and boundaries of the audit, such as the subject matter, the time period, the locations, the departments, the functions, the systems, or the processes to be audited. The audit scope defines what will be included or excluded from the audit, and it helps to ensure that the audit objectives are met and the audit resources are used efficiently and effectively. The auditor’s experience level, the availability of the data, and the integrity of the data are not the primary factors that determine the amount of data that will be collected during an audit, as they depend on the audit scope to be defined first. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 54. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 69.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following could have MOST likely prevented the Peer-to-Peer (P2P) program from being installed on the computer?
Removing employee's full access to the computer
Supervising their child's use of the computer
Limiting computer's access to only the employee
Ensuring employee understands their business conduct guidelines
The best way to prevent the P2P program from being installed on the computer is to remove the employee's full access to the computer. Full administrator access means the user has the highest level of privilege and can perform any action on the computer, such as installing, modifying, or deleting any software or file. By removing the employee's full access and assigning a lower level of access, such as user or guest, the organization can restrict the employee's ability to install unauthorized or potentially harmful programs, such as P2P programs. Supervising the child's use of the computer, limiting the computer's access to only the employee, and ensuring the employee understands the business conduct guidelines are not the best preventive measures, as they address monitoring, usage control, or awareness rather than restricting the access level itself. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
A thorough review of an organization's audit logs finds that a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient. What type of attack has MOST likely occurred?
Spoofing
Eavesdropping
Man-in-the-middle
Denial of service
The type of attack that has most likely occurred when a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient is a man-in-the-middle (MITM) attack. A MITM attack is a type of attack that involves an attacker intercepting, modifying, or redirecting the communication between two parties, without their knowledge or consent. The attacker can alter, delete, or inject data, or impersonate one of the parties, to achieve malicious goals, such as stealing information, compromising security, or disrupting service. A MITM attack can be performed on various types of networks or protocols, such as email, web, or wireless. Spoofing, eavesdropping, and denial of service are not the types of attack that have most likely occurred in this scenario, as they do not involve the modification or manipulation of the communication between the parties, but rather the falsification, observation, or prevention of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
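Integrity protection is what defeats the tampering half of such an attack: if each message carries a message authentication code computed with a key the intermediary does not hold, any modification in transit is detected. A minimal sketch using Python's standard hmac module, with an illustrative hard-coded key:

    # Sketch: a MAC lets the recipient detect in-transit modification of a message.
    import hashlib
    import hmac

    shared_key = b"key known only to sender and CEO"  # illustrative; manage keys properly

    def tag(message: bytes) -> bytes:
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    message = b"Approve the Q3 budget."
    sent_tag = tag(message)

    # The intermediary alters the message but cannot recompute a valid tag.
    tampered = b"Approve the attacker's invoice."
    print(hmac.compare_digest(tag(tampered), sent_tag))  # False: tampering detected
    print(hmac.compare_digest(tag(message), sent_tag))   # True: original verifies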
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
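A minimal sketch of encrypting backup data before it is written to tape or shipped off-site, assuming the pyca/cryptography package; the key must be stored and managed separately from the media:

    # Sketch: encrypt backup data before writing it to removable media.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep in a key management system, never with the tapes

    backup_data = b"payroll and customer records ..."
    with open("backup.tape", "wb") as tape:
        tape.write(Fernet(key).encrypt(backup_data))

    # A lost or stolen tape now reveals only ciphertext; restoring requires the key.
    with open("backup.tape", "rb") as tape:
        assert Fernet(key).decrypt(tape.read()) == backup_data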
The other options do not pose as much risk to data confidentiality as generating unencrypted backup tapes. Failing to implement network redundancies affects the availability and reliability of the network, but not necessarily the confidentiality of the data. Incomplete security awareness training increases the likelihood of human error or negligence that could compromise the data, but not as directly as unencrypted backups. Granting users administrative privileges gives them more access and control over the system and the data, but the exposure is not as broad as that of unencrypted tapes leaving the organization's premises.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue, as is reviewing the architectural plans to determine how many emergency exits are present. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is good practice, but it is a reactive measure rather than a preventive one. A DR/BC plan outlines how an organization will recover from a disaster and resume normal operations, and it should be updated regularly, not only when relocating.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
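As an illustration of the overwriting idea at file granularity, the sketch below overwrites a file's contents with random bytes before unlinking it. This is a conceptual sketch only: purging a whole laptop drive requires dedicated full-disk tools or cryptographic erasure, and overwriting is unreliable on SSDs with wear leveling:

    # Illustrative multi-pass overwrite of one file before deletion.
    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # replace contents with random bytes
                f.flush()
                os.fsync(f.fileno())        # force the write out to the device
        os.remove(path)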
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
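The downtime figure follows directly from the availability percentage:

    # Annual downtime implied by Tier 4 availability (99.995%).
    hours_per_year = 24 * 365                  # 8,760
    downtime = hours_per_year * (1 - 0.99995)  # ~0.44 hours, about 26 minutes
    print(f"{downtime:.2f} hours per year")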
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
Identify the operational impacts of a business interruption
Identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that:
- determine the technological dependence of the business processes;
- identify the operational impacts of a business interruption;
- identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows:
- The user first authenticates to the issuing authority using the smart card and its PIN.
- A new key pair is generated for the mobile device, and a certificate tied to the user's smart card identity is issued and stored in the device's secure key store.
- The user unlocks the derived credential on the device, typically with a PIN or a biometric feature, and the device then uses it in place of the smart card for authentication and encryption.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
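As a rough illustration of the provisioning step, the sketch below generates a fresh key pair and a certificate signing request for the device, using the pyca/cryptography package; in a real deployment the key would be generated inside the device's hardware-backed key store, and the request would only be honored after the user authenticates with the smart card (the subject name here is hypothetical):

    # Sketch: device-resident key pair plus a CSR for issuing a derived credential.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    device_key = ec.generate_private_key(ec.SECP256R1())  # ideally in a secure element

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "jane.doe (derived credential)"),
        ]))
        .sign(device_key, hashes.SHA256())
    )

    # The CSR goes to the issuing CA after the user proves possession of the smart card;
    # the returned certificate plus device_key then stand in for the card on the device.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())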
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee's salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as:
- consistency, because every user runs the same tested and approved query and receives results in the same form;
- security, because users cannot alter the query logic or craft ad hoc queries that return data beyond their authorization;
- simplicity, because users do not need to know the query language or the database schema to retrieve the information they need.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
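A minimal sketch of the idea using Python's built-in sqlite3, with a hypothetical schema and figures: users are handed a function that executes only the stored aggregate query, so no ad hoc query can return an individual salary, and small groups are suppressed to block inference:

    # Sketch: expose only a predefined aggregate query, never raw salary rows.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
    db.executemany(
        "INSERT INTO employees VALUES (?, ?, ?)",
        [("Ann", "IT", 90000), ("Bob", "IT", 70000), ("Cho", "HR", 60000)],
    )

    # The only query users may run; its text is fixed and callers cannot modify it.
    PREDEFINED_QUERY = """
        SELECT department, AVG(salary) AS avg_salary
        FROM employees
        GROUP BY department
        HAVING COUNT(*) >= 2  -- suppress groups so small the average names one person
    """

    def average_salary_by_department():
        return db.execute(PREDEFINED_QUERY).fetchall()

    print(average_salary_by_department())  # [('IT', 80000.0)]; HR is suppressed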
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as:
- single sign-on (SSO), allowing users to authenticate once and access resources across all participating organizations;
- reduced administrative overhead, since each organization no longer needs to create and maintain accounts for every partner's users;
- improved user experience and security, since users have fewer credentials to remember, lose, or expose.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM:
- the principal, typically the user who wants to access a resource or service;
- the identity provider (IdP), the party that authenticates the principal and issues the assertion;
- the service provider (SP), the party that receives and validates the assertion and grants or denies access to the resource or service.
SAML works as follows:
- The user requests a resource or service from the SP.
- The SP redirects the user to the IdP with an authentication request.
- The IdP authenticates the user and returns a signed assertion containing the user's identity, attributes, and entitlements.
- The SP validates the assertion and grants or denies the user access to the resource or service.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
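A minimal sketch of the service provider's validation step, checking an assertion's audience and validity window; this is deliberately simplified, since real deployments must also verify the IdP's XML signature (libraries such as python3-saml handle that), and the assertion below is a hypothetical, stripped-down stand-in for real SAML XML:

    # Simplified SP-side checks on a SAML-like assertion (signature check omitted).
    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    assertion_xml = """
    <Assertion>
      <Subject>jane.doe@supplier07.example</Subject>
      <Conditions NotBefore="2024-01-01T00:00:00+00:00"
                  NotOnOrAfter="2030-01-01T00:00:00+00:00">
        <Audience>https://portal.manufacturer.example</Audience>
      </Conditions>
    </Assertion>
    """

    def accept(xml_text: str, expected_audience: str) -> bool:
        cond = ET.fromstring(xml_text).find("Conditions")
        now = datetime.now(timezone.utc)
        in_window = (datetime.fromisoformat(cond.get("NotBefore")) <= now
                     < datetime.fromisoformat(cond.get("NotOnOrAfter")))
        return in_window and cond.findtext("Audience") == expected_audience

    print(accept(assertion_xml, "https://portal.manufacturer.example"))  # True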
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as:
- reducing the attack surface, since fewer users and processes can reach any given resource;
- limiting the damage that can result from an error, an accident, or a compromised account;
- supporting the need-to-know principle and simplifying the review and audit of access rights.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
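A minimal sketch of how least privilege layers on top of clearance: access requires both a sufficient clearance level and an explicit need-to-know grant, so two equally cleared employees can still have different access (all names and levels hypothetical):

    # Sketch: equal clearance is not enough; an explicit need-to-know grant is required.
    CLEARANCE = {"alice": "secret", "bob": "secret"}   # same clearance level
    NEED_TO_KNOW = {"project_x_files": {"alice"}}      # only Alice works on Project X
    LEVELS = ["public", "confidential", "secret"]

    def can_access(user: str, resource: str, resource_level: str) -> bool:
        cleared = LEVELS.index(CLEARANCE[user]) >= LEVELS.index(resource_level)
        granted = user in NEED_TO_KNOW.get(resource, set())
        return cleared and granted

    print(can_access("alice", "project_x_files", "secret"))  # True
    print(can_access("bob", "project_x_files", "secret"))    # False, least privilege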
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.