Evaluating the Soundness of Security Metrics from Vulnerability Scoring Frameworks

2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2020)

IEEE · 2020

Security Metrics · Security Evaluation · MDSSM · CVSS · Vulnerability Scoring
The Gap

Numerous vulnerability scoring frameworks that estimate the severity of known vulnerabilities in software-dependent systems have been proposed and have found widespread use. Yet despite their broad adoption for characterizing vulnerability severity, the soundness of the security metrics they produce has never been formally evaluated.

which led us to ask
The Question

Are the vulnerability scoring frameworks we rely on for security decisions actually sound — are they reproducible, objective, and unbiased?

The Approach

We evaluate five vulnerability scoring frameworks from the literature that are in widespread use in security decision-making, or have been proposed for that purpose. The evaluation applies the Method for Designing Sound Security Metrics (MDSSM). We also present several recommendations for improving vulnerability scoring frameworks so that they yield sounder security metrics.
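To make the reproducibility dimension of soundness concrete, the sketch below implements the CVSS v3.1 base-score equations (CVSS being one of the widely used frameworks of the kind evaluated here; the weights and round-up rule follow the public CVSS v3.1 specification). It illustrates that the arithmetic is fully deterministic: given identical metric values, any two analysts obtain the same score. The soundness questions MDSSM raises concern the human-assigned inputs, not this computation. The function and variable names are our own, chosen for illustration.

```python
import math

# Metric weights from the CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability
PR = {                                               # Privileges Required (weight depends on Scope)
    "U": {"N": 0.85, "L": 0.62, "H": 0.27},
    "C": {"N": 0.85, "L": 0.68, "H": 0.50},
}

def roundup(x: float) -> float:
    """Round up to one decimal place, per Appendix A of the CVSS v3.1 spec."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, s, c, i, a):
    """Deterministic CVSS v3.1 base score from the eight base-metric values."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if s == "U":                       # Scope unchanged
        impact = 6.42 * iss
    else:                              # Scope changed
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    if impact <= 0:
        return 0.0
    exploitability = 8.22 * AV[av] * AC[ac] * PR[s][pr] * UI[ui]
    raw = impact + exploitability
    if s == "C":
        raw *= 1.08
    return roundup(min(raw, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))
```

The same vector string always yields the same score, so any variance observed between analysts scoring the same vulnerability must come from how they chose the inputs, which is exactly where objectivity and bias concerns arise.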

Figures

Security evaluation framework sheet

The Transformation

Our results show that four of the five frameworks we evaluated yield security metrics that are not sound. Relying on unsound metrics when making security-related decisions calls into question whether those decisions are justifiable or acceptable. We provide several recommendations that can help improve the evaluated frameworks and that should be considered when developing new ones.