Many competing maturity models exist. Given the available choices, the SCF decided to leverage an existing framework rather than reinvent the wheel. In simple terms, we added control-level criteria to an existing CMM model.
The C|P-CMM is meant to solve the problem of objectivity in both establishing and evaluating cybersecurity and privacy controls. There are four (4) main objectives for the C|P-CMM:
By using the term “nested” regarding maturity, we refer to how the C|P-CMM’s control criteria were written to acknowledge that each succeeding level of maturity is built upon its predecessor. Essentially, you cannot run without first learning how to walk. Likewise, you cannot walk without first learning how to crawl. This approach to defining cybersecurity & privacy control maturity is how the C|P-CMM is structured.
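The "nested" structure described above can be sketched in code. The following is an illustrative sketch only, not part of any official SCF tooling; the function name and the example inputs are hypothetical. It shows the key property: a control only attains a given maturity level if the criteria for every preceding level are also satisfied.

```python
# Hypothetical sketch of the C|P-CMM's "nested" maturity logic:
# a control only attains level N if it also satisfies levels 1..N-1.

def assessed_maturity(criteria_met: dict) -> int:
    """Return the highest maturity level whose criteria, and all of
    its predecessors' criteria, are satisfied.

    criteria_met maps each C|P-CMM level (1 through 5) to whether that
    level's control criteria are satisfied. A result of 0 (L0) means
    the control is effectively not being performed.
    """
    level = 0
    for n in range(1, 6):                 # walk L1 through L5 in order
        if not criteria_met.get(n, False):
            break                         # cannot "run" before "walking"
        level = n
    return level

# A control that satisfies L1-L3 criteria but not L4 assesses at L3,
# even if some L5 criteria happen to be met in isolation.
example = {1: True, 2: True, 3: True, 4: False, 5: True}
print(assessed_maturity(example))  # prints 3
```

The early `break` is what encodes the nesting: satisfying a higher level's criteria in isolation does not raise the assessed maturity.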
The C|P-CMM draws upon the high-level structure of the Systems Security Engineering Capability Maturity Model v2.0 (SSE-CMM), since we felt it was the best model to demonstrate varying levels of maturity for people, processes and technology at a control level. If you are unfamiliar with the SSE-CMM, it is well worth your time to read through the SSE-CMM Model Description Document that is hosted by the US Defense Technical Information Center (DTIC).

It is unfortunate that it must be explicitly stated, but a “maturity model” is entirely dependent upon the ethics and integrity of the individual(s) involved in the evaluation process. This issue is often rooted in the assessor’s perceived pressure that a control should be designated as being more mature than it actually is (e.g., dysfunctional management influence). Regardless of the reason, it must be emphasized that consciously designating a higher level of maturity than the objective criteria support, in order to make an organization appear more mature, should be considered fraud. Fraud is a broad term that includes “false representations, dishonesty and deceit.”
This stance on fraudulent misrepresentations may appear harsh, but it accurately describes the situation. There is no room in cybersecurity and data protection operations for unethical parties, so the SCF Council published this guidance on what a “reasonable party perspective” should be. This provides objectivity to minimize the ability of unethical parties to abuse the intended use of the C|P-CMM.
The following two (2) questions should be kept in mind when evaluating the maturity of a control (or Assessment Objective (AO)).
Do you need to answer “yes” to every bullet-pointed criterion under a level of maturity in the C|P-CMM? No. We recognize that every organization is different. Therefore, the maturity criteria associated with SCF controls are intended to help establish what would reasonably exist at each level of maturity. Fundamentally, the decision comes down to assessor experience, professional competence and common sense.
While a more mature implementation of controls can equate to an increased level of security, higher maturity and higher assurance are not mutually inclusive. From a practical perspective, maturity is simply a measure of governance activities pertaining to a specific control or set of controls. Maturity does not equate to an in-depth analysis of the strength and depth of the control being evaluated (e.g., rigor).
According to NIST, assurance is “grounds for confidence that the set of intended security controls in an information system are effective in their application.” Increased rigor in control testing is what leads to increased assurance. Therefore, increased rigor and increased assurance are mutually inclusive.
The SCF Conformity Assessment Program (SCF CAP) leverages three (3) levels of rigor. The SCF CAP’s levels of rigor utilize maturity-based criteria to evaluate a control, since a maturity target can provide context for “what right looks like” at a particular organization:

The six (6) C|P-CMM levels are:
This level of maturity is defined as “non-existent practices,” where the control is not being performed:
L0 practices, or the lack thereof, are generally considered negligent. The reason is that if a control is reasonably expected to exist, failing to perform that control is negligent behavior. The need for the control could be due to a law, regulation or contractual obligation (e.g., client contract or industry association requirement).
Note – The reality with a L0 level of maturity is often:
This level of maturity is defined as “ad hoc practices,” where the control is being performed, but lacks completeness & consistency:
L1 practices are generally considered negligent. The reason is that if a control is reasonably expected to exist, performing that control only through ad hoc practices could be considered negligent behavior. The need for the control could be due to a law, regulation or contractual obligation (e.g., client contract or industry association requirement).
Note – The reality with a L1 level of maturity is often:
This level of maturity is defined as “requirements-driven practices,” where the intent of the control is met in some circumstances, but not standardized across the entire organization:
L2 practices are generally considered to be “audit ready” with an acceptable level of evidence to demonstrate due diligence and due care in the execution of the control. L2 practices are generally targeted on specific systems, networks, applications or processes that require the control to be performed for a compliance need (e.g., PCI DSS, HIPAA, CMMC, NIST 800-171, etc.).
It can be argued that L2 practices focus more on compliance than security. The reason for this is that the scope of L2 practices is narrow and not enterprise-wide.
Note – The reality with a L2 level of maturity is often:
This level of maturity is defined as “enterprise-wide standardization,” where the practices are well-defined and standardized across the organization:
It can be argued that L3 practices focus on security over compliance, where compliance is a natural byproduct of those secure practices. These are well-defined and properly-scoped practices that span the organization, regardless of the department or geographic considerations.
Note – The reality with a L3 level of maturity is often:
This level of maturity is defined as “metrics-driven practices,” where in addition to being well-defined and standardized practices across the organization, there are detailed metrics to enable governance oversight:
L4 practices are generally considered to be “audit ready” with an acceptable level of evidence to demonstrate due diligence and due care in the execution of the control, as well as detailed metrics that enable an objective oversight function. Metrics may be generated daily, weekly, monthly, quarterly, etc.
Note – The reality with a L4 level of maturity is often:
This level of maturity is defined as “world-class practices,” where the practices are not only well-defined, standardized across the organization and supported by detailed metrics, but also continuously improving:
L5 practices are generally considered to be “audit ready” with an acceptable level of evidence to demonstrate due diligence and due care in the execution of the control, while also incorporating a capability to continuously improve the process. Interestingly, this is where Artificial Intelligence (AI) and Machine Learning (ML) would exist, since AI/ML would focus on evaluating performance and making continuous adjustments to improve the process. However, AI/ML are not required for L5.
Note – The reality with a L5 level of maturity is often:
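The six (6) levels above can be summarized as a simple lookup structure. The following is an illustrative sketch, not official SCF tooling; the short labels are taken from the level definitions in this document, and the `is_audit_ready` helper reflects the document's observation that L2 and higher practices are generally considered “audit ready”:

```python
from enum import IntEnum

class CPCMMLevel(IntEnum):
    """The six (6) C|P-CMM maturity levels, as defined above."""
    L0 = 0  # Non-existent practices: the control is not performed
    L1 = 1  # Ad hoc practices: performed, but lacking completeness/consistency
    L2 = 2  # Requirements-driven practices: intent met in some circumstances,
            # but not standardized across the entire organization
    L3 = 3  # Enterprise-wide standardization: well-defined, standardized practices
    L4 = 4  # Metrics-driven practices: detailed metrics enable governance oversight
    L5 = 5  # World-class practices: standardized, measured and continuously improving

def is_audit_ready(level: CPCMMLevel) -> bool:
    """L2 and above are generally considered 'audit ready' per the
    level descriptions; L0/L1 are generally considered negligent."""
    return level >= CPCMMLevel.L2

print(is_audit_ready(CPCMMLevel.L1))  # prints False
print(is_audit_ready(CPCMMLevel.L3))  # prints True
```

Using `IntEnum` preserves the ordering that the nested model implies: each level strictly builds upon, and compares greater than, its predecessor.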