====== Vulnerability Discovery Models ======

Among the [[research_activities|research topics]] of the [[start|Security Group]], we focus here on the following activities:
  
  * Which vulnerable dependencies really matter?
  * How to (automatically) find vulnerabilities in the deployed versions of FOSS?
  * How to automatically test them (when you get a report)?
  * Which vulnerabilities are actually exploited in the Wild?
  * Which vulnerability scanning tool performs best on your particular project?
  * How to empirically validate Vulnerability Discovery Models?
  
  
Most importantly, **Do you want data?** We know that building datasets is difficult, error-prone, and time-consuming, so we have decided to share our efforts of the past 4 years. Check our [[datasets|Security Datasets in Trento]].

===== Bringing order to dependency hell: which vulnerable dependencies really matter? =====

Vulnerable dependencies are a known problem in today's open-source software ecosystems because FOSS libraries are highly interconnected and developers do not always update their dependencies.

You may want to first read our thematic analysis study ({{:research_activities:vulnerability-analysis:ccs-2020-aam.pdf|accepted at CCS 2020}}), in which we interviewed 25 developers from all over the world; it provides important insights into how companies choose whether or not to update their software.

In our {{:research_activities:vulnerability-analysis:pashchenko-vuln4real.pdf|TSE 2020 paper}} we show how to avoid the over-inflation problem that affects academic and industrial approaches to reporting vulnerable dependencies in FOSS software, and therefore satisfy the industrial need to correctly allocate development and audit resources.

To achieve this, we carefully analysed the deployed dependencies, aggregated dependencies by their projects, and distinguished halted dependencies. Together, these steps give a counting method that avoids over-inflation.
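
To make the counting idea concrete, here is a minimal sketch in Python of how such a filter could look. It is our illustration, not the paper's implementation: the Maven scope names used to decide what counts as "deployed", the one-year threshold for calling a project "halted", and the example dependencies are all assumptions.

<code python>
# Sketch (illustrative only): count vulnerable dependencies while avoiding
# over-inflation:
#  - keep only deployed dependencies (drop test/provided scopes)
#  - aggregate GAVs of the same library into one project (group:artifact)
#  - flag projects with no release for a year as "halted"
from collections import namedtuple
from datetime import timedelta

Dependency = namedtuple("Dependency", "group artifact version scope vulnerable")

def project_of(dep):
    """Aggregate all versions (GAVs) of a library into one project."""
    return f"{dep.group}:{dep.artifact}"

def count_vulnerable(deps, last_release, now, halt_after=timedelta(days=365)):
    """Return (vulnerable deployed projects, the halted subset of them)."""
    deployed = [d for d in deps if d.scope in ("compile", "runtime")]
    vulnerable = {project_of(d) for d in deployed if d.vulnerable}
    halted = {p for p in vulnerable
              if now - last_release.get(p, now) > halt_after}
    return vulnerable, halted
</code>

Dependencies that appear only in test or build tooling never ship to production, so a scope filter alone already removes a share of the alarms raised by naive counting.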

To understand the industrial impact, we considered the 200 most popular FOSS Java libraries used by SAP in its own software. Our analysis covered 10,905 distinct GAVs (group, artifact, version) in Maven when considering all the library versions.

We found that about 20% of the dependencies affected by a known vulnerability are not deployed, and therefore do not represent a danger to the analyzed library because they cannot be exploited in practice. Developers of the analyzed libraries are able to fix (and are actually responsible for) 82% of the deployed vulnerable dependencies. The vast majority (81%) of vulnerable dependencies can be fixed by simply updating to a new version, while 1% of the vulnerable dependencies in our sample are halted and therefore potentially require a costly mitigation strategy.

Our methodology gives software development companies actionable information about their library dependencies, and therefore lets them correctly allocate costly development and audit resources that would otherwise be spent inefficiently because of distorted measurements.

Do you want to check if your project actually uses some vulnerable dependencies? Let us know.
  
===== A Screening Test for Disclosed Vulnerabilities in FOSS Components =====
  
If you are interested in getting the code for the analysis, please let us know.

===== Effort of security maintenance of FOSS components =====

In our paper we investigated publicly available factors (from the number of active users to commits, from code size to the usage of popular programming languages, etc.) to identify which ones impact three potential effort models:

  * **Centralized**: the company checks each component and propagates changes to the product groups;
  * **Distributed**: each product group is in charge of evaluating and fixing its consumed FOSS components;
  * **Hybrid**: seldom-used components are checked individually by each development team, the rest is centralized.

We use Grounded Theory to extract the factors from a six-month study at the vendor and report the results on a sample of 152 FOSS components used by the vendor.
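
To illustrate the difference between the three models, here is a minimal sketch (our own reading of the model descriptions above, with made-up component and product-group names, not the paper's effort formulas) that counts how many component checks each model requires:

<code python>
# Sketch: number of security checks under the three maintenance models.
def centralized_effort(usage):
    """One central check per distinct component; changes are then propagated."""
    return len(usage)

def distributed_effort(usage):
    """Each product group checks every component it consumes itself."""
    return sum(len(groups) for groups in usage.values())

def hybrid_effort(usage, popular=2):
    """Widely used components are checked centrally, seldom-used ones per team."""
    return sum(1 if len(g) >= popular else len(g) for g in usage.values())

# usage maps a FOSS component to the product groups consuming it (made up)
usage = {"log4j": {"crm", "erp", "hr"},
         "commons-io": {"crm"},
         "struts": {"erp", "hr"}}
print(centralized_effort(usage),  # 3 checks
      distributed_effort(usage),  # 3 + 1 + 2 = 6 checks
      hybrid_effort(usage))       # 1 + 1 + 1 = 3 checks
</code>

Which model costs less in practice depends on the factors above, e.g. how many seldom-used components a vendor actually consumes.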

===== Which static analyzer performs best on a particular FOSS project? =====

Our {{:esem-final.pdf|paper in the proceedings of the International Symposium on Empirical Software Engineering and Measurement (ESEM)}} addresses the limitations of existing static analysis security testing (SAST) tool benchmarks: lack of vulnerability realism, uncertain ground truth, and a large number of findings unrelated to the analyzed vulnerability.

We propose **Delta-Bench**, a novel approach for the automatic construction of benchmarks for SAST tools based on differencing vulnerable and fixed versions in Free and Open Source (FOSS) repositories. In other words, Delta-Bench allows SAST tools to be automatically evaluated on real-world historical vulnerabilities using only the findings that a tool produced for the analyzed vulnerability.
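
A minimal sketch of the differencing idea follows. It is our illustration of the principle, not the Delta-Bench code; the finding format (a dict with a ''line'' key) and passing files as lists of source lines are assumptions.

<code python>
# Sketch: keep only the SAST findings that fall on lines changed by the
# CVE fix, i.e. the diff between the vulnerable and the fixed revision.
import difflib

def changed_lines(vuln_src, fixed_src):
    """1-based numbers of the lines in vuln_src touched by the fix.

    Both arguments are lists of source-code lines.
    """
    changed = set()
    ops = difflib.SequenceMatcher(None, vuln_src, fixed_src).get_opcodes()
    for tag, i1, i2, _, _ in ops:
        if tag != "equal":  # for pure insertions i1 == i2, so nothing is added
            changed.update(range(i1 + 1, i2 + 1))
    return changed

def relevant_findings(findings, vuln_src, fixed_src):
    """Keep only the findings on lines changed by the vulnerability fix."""
    changed = changed_lines(vuln_src, fixed_src)
    return [f for f in findings if f["line"] in changed]
</code>

One way to score a tool is then to count a detection only if at least one of its findings survives this filter, which discards the unrelated findings mentioned above.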

We applied our approach to test 7 state-of-the-art SAST tools against 70 revisions of four major versions of Apache Tomcat, spanning 62 distinct Common Vulnerabilities and Exposures (CVE) fixes and vulnerable files totalling over 100K lines of code as the source of ground-truth vulnerabilities.

Our most interesting finding: tools perform differently depending on the selected benchmark.

Delta-Bench was awarded the silver medal in the ESEC/FSE 2017 Graduate Student Research Competition: {{https://drive.google.com/file/d/0B_rJCkKmzPjSWllQcEJpQWNOOVU/view?usp=sharing|Author's PDF}} or {{https://doi.org/10.1145/3106237.3121276|Publisher's Version}}.

Let us know if you want us to select a SAST tool that suits your needs.
  
===== Which vulnerabilities are actually exploited in the Wild? =====