DOI: 10.1145/3106237.3121276

FOSS version differentiation as a benchmark for static analysis security testing tools

Published: 21 August 2017

ABSTRACT

We propose a novel methodology for automatically constructing benchmarks for Static Analysis Security Testing (SAST) tools from real-world software projects by differencing vulnerable and fixed versions in FOSS repositories. The methodology lets us evaluate the "actual" performance of SAST tools, i.e., without counting alarms unrelated to the known vulnerabilities. To test our approach, we benchmarked 7 SAST tools (although we report results only for the two best-performing ones) against 70 revisions of four major versions of Apache Tomcat, with 62 distinct CVEs as the source of ground-truth vulnerabilities.
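To make the differencing idea concrete, the indented sketch below shows one plausible way it could be implemented. It is an illustration under our own assumptions, not the authors' tooling: the use of git diff, the warning format ({"file": ..., "line": ...}), and the rule of keeping only warnings that land on fix-affected lines are all assumptions made for this example.

    # Hypothetical sketch (not the paper's tooling): a SAST warning on the
    # vulnerable revision counts toward a CVE only if it falls on a line that
    # the security fix later changed; everything else is treated as an
    # unrelated alarm. The warning format {"file": ..., "line": ...} is assumed.
    import subprocess
    from collections import defaultdict

    def fix_affected_lines(repo, vuln_rev, fixed_rev):
        """Map file path -> line numbers (in vuln_rev) touched by the fix."""
        diff = subprocess.run(
            ["git", "-C", repo, "diff", "--unified=0", vuln_rev, fixed_rev],
            capture_output=True, text=True, check=True).stdout
        affected, current_file = defaultdict(set), None
        for line in diff.splitlines():
            if line.startswith("--- a/"):
                current_file = line[len("--- a/"):]
            elif line.startswith("--- "):
                current_file = None  # file added by the fix: nothing on the vulnerable side
            elif line.startswith("@@") and current_file:
                # Hunk header "@@ -start,count +start,count @@"; the "-" side refers to vuln_rev.
                old_side = line.split()[1]                 # e.g. "-120,3"
                start, _, count = old_side[1:].partition(",")
                n = int(count) if count else 1
                affected[current_file].update(range(int(start), int(start) + max(n, 1)))
        return affected

    def relevant_warnings(warnings, affected):
        """Keep only warnings located on fix-affected lines; drop unrelated alarms."""
        return [w for w in warnings if w["line"] in affected.get(w["file"], set())]

Under this reading, a tool would be credited with detecting a CVE if at least one of its warnings on the vulnerable revision survives the filter and no longer appears on the fixed lines in the patched revision; this scoring rule is our interpretation of the abstract, not a description of the paper's exact procedure.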


Published in
      ESEC/FSE 2017: Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering
      August 2017
      1073 pages
ISBN: 9781450351058
DOI: 10.1145/3106237

      Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • short-paper

      Acceptance Rates

Overall acceptance rate: 112 of 543 submissions, 21%
