ABSTRACT
We propose a novel methodology for automatically constructing benchmarks for Static Analysis Security Testing (SAST) tools from real-world software projects, by differencing vulnerable and fixed versions in FOSS repositories. The methodology allows us to evaluate the "actual" performance of SAST tools, i.e., without counting alarms unrelated to the known vulnerabilities. To test our approach, we benchmarked seven SAST tools against 70 revisions of four major versions of Apache Tomcat, with 62 distinct CVEs as the source of ground-truth vulnerabilities; we report results only for the two best-performing tools.
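To make the differencing idea concrete, here is a minimal sketch (not the authors' actual tooling) of how ground truth could be derived from a security fix: lines of the vulnerable revision that the fixing commit modifies or deletes are treated as vulnerability locations, and a SAST alarm counts as a true positive only if it points at one of them. The helpers `changed_lines` and `score_alarms` are hypothetical names introduced for illustration; the sketch assumes a local git checkout and alarms represented as (file, line) pairs.

```python
import subprocess
from collections import defaultdict

def changed_lines(repo, vuln_rev, fixed_rev):
    """Return {file -> set of line numbers in the vulnerable revision}
    that the security fix modified or deleted, parsed from a unified diff."""
    diff = subprocess.run(
        ["git", "-C", repo, "diff", "--unified=0", vuln_rev, fixed_rev],
        capture_output=True, text=True, check=True,
    ).stdout
    touched = defaultdict(set)
    current = None
    for row in diff.splitlines():
        if row.startswith("--- a/"):
            current = row[len("--- a/"):]          # file path in the vulnerable revision
        elif row.startswith("@@") and current is not None:
            # Hunk header looks like: @@ -start,count +start,count @@
            old_side = row.split()[1]              # e.g. "-120,3"
            start, _, count = old_side[1:].partition(",")
            n = int(count) if count else 1         # omitted count means one line
            if n:                                  # n == 0 is a pure insertion: nothing to mark
                touched[current].update(range(int(start), int(start) + n))
    return touched

def score_alarms(alarms, touched):
    """Split SAST alarms, given as (file, line) pairs, into true positives
    (alarms on fix-touched lines) and the remaining alarms."""
    tp = sum(1 for f, line in alarms if line in touched.get(f, set()))
    return tp, len(alarms) - tp
```

In use, one would map each CVE to its fixing commit, call `changed_lines` on the vulnerable and fixed revisions, and pass a tool's alarms through `score_alarms`; a real pipeline would additionally need path normalization between tool reports and the repository layout, and the paper's actual matching criteria may be more refined than exact line equality.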
Index Terms
- FOSS version differentiation as a benchmark for static analysis security testing tools