CVSS, CVE, CWE, CAPEC – common standards security professionals should know

CVSS, CVE, CWE, and CAPEC are widespread and well-known security standards to rate the severity of vulnerabilities, uniquely identify vulnerabilities, describe common weaknesses in software, and categorize common attack patterns of bad guys.

In this article, we present the four standards and give brief guidance for daily usage.


  1. CVSS
  2. CVE
  3. CWE
  4. CAPEC
  5. Summary
  6. Links



CVSS stands for “Common Vulnerability Scoring System”, and is currently maintained by the “Forum of Incident Response and Security Teams” (FIRST).

Purpose and history

The open CVSS originated from the problem of rating the severity of vulnerabilities in a defined and structured way. Imagine a report stating the impact of a specific vulnerability is “high” while some other report classifies the impact as “medium”. You can see that there is no system involved, and the ratings are somewhat subjective. To address this, CVSS comes with different defined metrics that make it easy to understand the composition of the final score.

CVSS version 1.0 was released in 2005 as a (mostly academic) approach to rate the severity of vulnerabilities. However, organizations encountered significant issues when they tried to make use of CVSS 1.0. In the meantime, the already existing Forum of Incident Response and Security Teams (FIRST) became the custodian of CVSS. This led to the quick development of CVSS version 2.0, released in 2007. Eight years later, further feedback from organizations resulted in the release of CVSS version 3.0. The upcoming version 3.1 will improve the clarity of concepts introduced in CVSS 3.0 to make CVSS more user-friendly. Version 3.1 should be released in 2019.


It is important to understand the basic idea of CVSS. The final result of using CVSS 3.0 is a score and a vector string. Both are the result of a calculation based on three different metric groups: the base metrics, the temporal metrics, and the environmental metrics.

The final score ranges from 0.0 (no vulnerability present) to 10.0 (extremely severe vulnerability).

Base metrics

Normally, the base metrics are already predefined. Somebody already classified the initial severity of a vulnerability for you. If you work with CVSS, you do not modify the base metrics but use the environmental metrics for customization.


In most cases, the initial base score remains the same over time. Sometimes, however, researchers spot new ways to exploit a vulnerability. An example is a vulnerability in the MikroTik RouterOS (CVE-2018-14847). The initial base score was 7.5 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N). Later, it was changed to 9.1 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N).

The base metric group consists of three parts:

  1. exploitability metric group (reflects the characteristics of the vulnerable component)
    • attack vector: Is it possible to exploit the vulnerability over the internet, or does an attacker need physical access?
    • attack complexity: Is it easy for the attacker to exploit the vulnerability, or are special conditions required to be successful?
    • privileges required: Is authorization necessary when exploiting the vulnerability?
    • user interaction: Is any action of the user (not the attacker) required to successfully exploit the vulnerability?
  2. scope: Are other components affected if an attacker successfully exploits the original vulnerability?
  3. impact metric group (refers to the properties of the vulnerable component)
    • confidentiality impact: To which extent is confidentiality of information affected by the vulnerability?
    • integrity impact: To which extent is integrity of information/the vulnerable component affected by the vulnerability?
    • availability impact: To which extent is availability of information/the vulnerable component affected by the vulnerability?

(In reality, there are several more possibilities to answer the above-mentioned questions. We dropped some of them in this article for reasons of simplification.)

Defining only the base metrics already results in a valid CVSS score and a valid CVSS vector string. For example, let’s look at a vulnerability in Infineon’s RSA library (dubbed “ROCA”) that makes it easier for attackers to recover the RSA private key. Let’s rate the vulnerability:

  • exploiting the vulnerability is possible over the internet → attack vector: network
  • exploiting the vulnerability isn’t very easy → attack complexity: high
  • special privileges aren’t required → privileges required: none
  • user interaction isn’t required → user interaction: none
  • the vulnerability doesn’t directly affect other authorization scopes → scope: unchanged
  • the impact on confidentiality is high
  • the impact on integrity is high
  • the impact on availability is low

Selecting these metrics results in a base score of 7.4 (high). If you do not define temporal and environmental metrics, this is also the final CVSS score. However, nobody knows how you calculated 7.4. Therefore, it is important to always provide the vector string that shows your selections. In this case, the vector string would be CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N.
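The 7.4 is not magic: the base score can be reproduced with the equations from the CVSS 3.0 specification. Here is a minimal Python sketch for the ROCA vector above; the metric weights (0.85, 0.44, and so on) are the values the specification assigns to each metric choice:

```python
import math

def roundup(value):
    # CVSS rounds up to one decimal place (ceiling)
    return math.ceil(value * 10) / 10

# Weights from the CVSS 3.0 specification for
# CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N (ROCA)
av, ac, pr, ui = 0.85, 0.44, 0.85, 0.85   # AV:N, AC:H, PR:N, UI:N
c, i, a = 0.56, 0.56, 0.0                 # C:H, I:H, A:N

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score
impact = 6.42 * iss                        # formula for scope: unchanged
exploitability = 8.22 * av * ac * pr * ui

base_score = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base_score)  # 7.4
```

Changing a single metric choice changes a weight and therefore the score, which is exactly why the vector string is needed to make a score reproducible.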


Keep in mind that CVSS 3.0 scoring differs from CVSS 2.0 scoring, so always be sure to use the latest CVSS version. The upcoming CVSS 4.0 will differ from CVSS 3.0 and introduce new and revised metrics.

Temporal metrics

The optional temporal metric group contains metrics that change over time. There are only three of them:

  • exploit code maturity: This metric describes the likelihood of the vulnerability being exploited. It ranges from “unproven” (the exploit is theoretical) to “high” (no exploit required, or there is code that autonomously exploits the vulnerability).
  • remediation level: This metric tells you about the current patch status. It ranges from “unavailable” (there is no patch at the moment) to “official fix” (the vendor provides an official patch).
  • report confidence: This metric describes the likelihood of the existence of the vulnerability and measures the credibility of the technical details published so far. It ranges from “unknown” (there is no clear evidence and uncertainty) to “confirmed” (the vulnerability can be reproduced, or there are detailed reports).

Let’s go back to our example, the ROCA vulnerability. Thanks to coordinated disclosure, there were official fixes when the scientific paper was initially released. We rate the temporal metrics for the vulnerability:

  • the scientific paper included a proof of concept → exploit code maturity: proof-of-concept / report confidence: confirmed
  • vendors immediately provided official patches → remediation level: official fix

Choosing the metrics results in a new score since it considers temporal effects now. We get a CVSS score of 6.7, and the vector string AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N/E:P/RL:O/RC:C.

Over time, people developed functional exploits. This changes the exploit code maturity metric, and with it our score, again: proof-of-concept → functional. The updated CVSS score is 6.9, and the corresponding vector string AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N/E:F/RL:O/RC:C.
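The temporal calculation is simpler than the base calculation: the base score is just multiplied by the three temporal weights and rounded up again. A short Python sketch, starting from the ROCA base score of 7.4 and using the weights from the CVSS 3.0 specification:

```python
import math

def roundup(value):
    # CVSS rounds up to one decimal place (ceiling)
    return math.ceil(value * 10) / 10

base_score = 7.4  # ROCA: AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:N

# Temporal weights from the CVSS 3.0 specification
E  = {"proof-of-concept": 0.94, "functional": 0.97}
RL = {"official-fix": 0.95}
RC = {"confirmed": 1.0}

# Initial rating: E:P/RL:O/RC:C
initial = roundup(base_score * E["proof-of-concept"] * RL["official-fix"] * RC["confirmed"])
# After functional exploits appeared: E:F/RL:O/RC:C
updated = roundup(base_score * E["functional"] * RL["official-fix"] * RC["confirmed"])

print(initial, updated)  # 6.7 6.9
```

Note that all temporal weights are at most 1.0, so the temporal score can never exceed the base score.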

Environmental metrics

Finally, there is the optional environmental metric group. The whole purpose of this group is customization. An organization can redefine the base metric group to match their own IT landscape.

Hence, the environmental metric group contains exactly the same metrics as the base metric group to allow customization.

Additionally, the organization can define its own security requirements for confidentiality, integrity, and availability. The ratings are straightforward, ranging from “low” (limited effect on the organization) to “high” (catastrophic effect on the organization).

We go back to our above-mentioned example. Imagine that confidentiality and integrity are extremely important in your organization (high) while availability doesn’t take top priority (low). The final, customized CVSS score is 7.5 then. The whole vector string in this example would be even longer than before.
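The 7.5 can be reproduced with the environmental equations from the CVSS 3.0 specification. A sketch, assuming the modified base metrics stay identical to the original ROCA base metrics and the temporal metrics are E:F/RL:O/RC:C as above:

```python
import math

def roundup(value):
    # CVSS rounds up to one decimal place (ceiling)
    return math.ceil(value * 10) / 10

# Modified base weights, unchanged from the ROCA base vector
mav, mac, mpr, mui = 0.85, 0.44, 0.85, 0.85   # AV:N, AC:H, PR:N, UI:N
mc, mi, ma = 0.56, 0.56, 0.0                   # C:H, I:H, A:N
# Security requirements: CR:H, IR:H, AR:L (spec weights)
cr, ir, ar = 1.5, 1.5, 0.5
# Temporal metrics: E:F, RL:O, RC:C
e, rl, rc = 0.97, 0.95, 1.0

# Modified impact sub-score is capped at 0.915 by the specification
miss = min(1 - (1 - cr * mc) * (1 - ir * mi) * (1 - ar * ma), 0.915)
m_impact = 6.42 * miss                         # modified scope: unchanged
m_exploitability = 8.22 * mav * mac * mpr * mui

env_score = roundup(roundup(min(m_impact + m_exploitability, 10)) * e * rl * rc)
print(env_score)  # 7.5
```

The high confidentiality and integrity requirements push the impact sub-score up against the 0.915 cap, which is why the customized score ends up above the 6.9 temporal score.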


CVE stands for “Common Vulnerabilities and Exposures”, and is currently maintained by the Mitre Corporation, a US-based not-for-profit organization.

Purpose and history

Years ago, different organizations used different names for publicly-known vulnerabilities in software. There were no unique names or identifiers for vulnerabilities. This resulted in confusion. CVE is an open standard that offers globally unique identifiers for vulnerabilities to solve this problem.

In 1999, the Mitre Corporation presented the concept of a CVE system. Later that year, the initial list of 321 CVE entries was created. One year later, the first 29 organizations adopted the system to provide CVE-compatible identifiers for more than 40 products. Since 2002, NIST (the National Institute of Standards and Technology) has recommended the use of CVE for US agencies. At the moment, there are nearly 115,000 registered CVE entries.


CVE identifiers are assigned by CVE Numbering Authorities (CNAs). The Mitre Corporation administers the whole CVE system, and oversees the different CNAs. As of April 2019, there are 94 CNAs in 16 different countries. Each CNA has a defined scope for which it can assign CVE identifiers. If the affected product is out-of-scope of all CNAs, the Mitre Corporation can be contacted to get a CVE identifier.

Nowadays, CVE identifiers consist of three parts:

  • the “CVE” prefix
  • year of application for the CVE identifier (e.g. “2019”)
  • unique number that is reset each year (4 or more digits)
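This structure is easy to validate and split mechanically. A small Python sketch (the function name `parse_cve` is our own, not part of any CVE tooling):

```python
import re

# "CVE" prefix, a 4-digit year, and a sequence number of 4 or more digits
CVE_RE = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(identifier):
    """Split a CVE identifier into (year, number); raise on invalid input."""
    match = CVE_RE.match(identifier)
    if not match:
        raise ValueError(f"not a valid CVE identifier: {identifier}")
    year, number = match.groups()
    # Keep the number as a string to preserve its zero padding
    return int(year), number

print(parse_cve("CVE-2014-0160"))  # (2014, '0160')
```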

Examples for well-known vulnerabilities and their corresponding CVE identifiers are:

  • CRIME (Compression Ratio Info-leak Made Easy), CVE-2012-4929 – an exploit that leverages TLS compression to steal authentication cookies for session hijacking
  • Heartbleed, CVE-2014-0160 – a security vulnerability in the OpenSSL cryptography library that can be exploited to steal secret data and TLS encryption keys
  • DROWN (Decrypting RSA with Obsolete and Weakened eNcryption), CVE-2016-0800 – a security vulnerability that allows attackers to weaken TLS encryption if a vulnerable server supports SSLv2
  • Spectre, CVE-2017-5753 and CVE-2017-5715 – a security vulnerability of modern microprocessors that results in leakage of secret data

As you can see, some well-known vulnerabilities have multiple CVE identifiers. There is also no guarantee that every security vulnerability gets a CVE identifier. For example, unreleased code or software in development (alpha/beta) may not get CVE identifiers.


CWE stands for “Common Weakness Enumeration”, and is currently maintained by the Mitre Corporation, a US-based not-for-profit organization.

Purpose and history

The need for a system like CWE originated from the code assessment industry. The purpose of the standardized CWE system is to provide a structured list of clearly defined software weaknesses. A software weakness is not necessarily a software vulnerability; however, software weaknesses may result in vulnerabilities.

After launching the initial CVE list in 1999, the Mitre Corporation began working on a system to categorize software weaknesses. The result was a system that was sufficient for usage in combination with the CVE system. However, there were additional requirements that needed to be addressed. So, Mitre took 1,500 real-world vulnerabilities, and categorized them. The first version of the CWE system was released in September 2008. At the moment, the current version is 3.2 (released in January 2019). The latest version describes more than 800 common weaknesses in software.


Unlike CVE identifiers, which are continually assigned to new vulnerabilities, CWE entries form a fixed catalog of weakness types. Most CWE entries consist of:

  • CWE identifier and name of the weakness type
  • General description and alternate terms for the weakness
  • Description of the behavior of the weakness
  • Description of the exploit of the weakness
  • Likelihood of exploit for the weakness
  • Description of the consequences of the exploit
  • Potential mitigation
  • Code samples for the languages/architectures

For example, there is “CWE-326: Inadequate Encryption Strength”, which is described as “The software stores or transmits sensitive data using an encryption scheme that is theoretically sound, but is not strong enough for the level of protection required.”


CAPEC stands for “Common Attack Pattern Enumeration and Classification”, and is currently maintained by the Mitre Corporation, a US-based not-for-profit organization.

Purpose and history

Like CVE and CWE, Mitre created the CAPEC system to standardize yet another area of security knowledge. In the case of CAPEC, Mitre structured and defined typical attack patterns of bad guys.

The first CAPEC list was released in May 2007, consisting of 101 attack patterns. At the moment, the current version is 3.1 (released in April 2019). The latest version describes more than 500 attack patterns.


Like CWE, CAPEC identifiers are fixed. Most CAPEC entries consist of:

  • CAPEC identifier and name of the attack pattern
  • General description
  • Typical severity of the attack pattern
  • Prerequisites to conduct the attack
  • Potential mitigation
  • Related CWE entries

For example, CAPEC-245 describes an XSS attack using doubled characters. The related software weakness is “CWE-85: Doubled Character XSS Manipulations”.


CVSS allows you to assess the potential severity of a specific vulnerability and to customize the score according to your own IT landscape. CVE provides globally unique identifiers so that people can refer to vulnerabilities unambiguously. In the same way, the CWE system lets people refer unambiguously to software weaknesses, and the CAPEC system lets them refer to attack patterns. As a security professional, you should know these systems.

By the way, the Mitre Corporation and other organizations developed further systems; however, some of them were merged or abandoned over time.