Empirical Validation of Risk and Security Methodologies

Among the research topics of the Security Group, we try to understand whether security and risk assessment methodologies actually work in practice.

There are many risk assessment methodologies and many security requirements methods, both from industry (COBIT, ISO 2700x, ISECOM's OSSTMM) and from academia (CORAS, SecureTropos, SI*, SREP, etc.).

To answer this question, we first need to ask: what does "in practice" mean? The usual interpretation among researchers is that the researchers tackle a real-world problem.

But this is just the first mile of a long road. We can explain it with an anecdote: V.M., a former air traffic controller with 30+ years of experience in evaluating controller software, was evaluating our tool for safety patterns (see Security Requirements Engineering). He told us that our software generated a Windows error message (of the kind `error at exAB91A'). It was not an error: it was a window showing that a logical formula did not hold on his proposed safety pattern!

A methodology works in practice if

  1. it can be used effectively and
  2. efficiently
  3. by somebody else besides the methodology's own inventors
  4. on a real-world problem.

Everybody can of course use any technique on any problem with sufficient time and patience… but slicing beef with a stone knife is not going to be quick, and the result is surely not going to be a fine carpaccio.

In this research stream of empirical analysis we run experiments that evaluate how “normal” people (auditors, domain experts, even students, etc.) use researchers' and consultants' tools and methods, in order to understand what the problems really are in practice.

Research Approach and Experimental Protocol

Since our research questions are exploratory in nature, we apply a mixed-method experimental methodology combining both qualitative and quantitative data collection and analysis techniques. We evaluate methods' effectiveness based on the reports delivered by the participants, while we investigate why methods are effective by means of questionnaires, focus group interviews, and post-it notes.
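To give a concrete flavor of the quantitative side, the sketch below compares the effectiveness scores that two methods might receive when the groups' final reports are graded. This is a minimal sketch under stated assumptions: the scores are invented for illustration (they are not data from our studies), and the Mann-Whitney U test is one reasonable choice for small samples of ordinal ratings, not necessarily the exact test used in every study.

    # Minimal sketch: comparing per-group effectiveness scores of two
    # methods. The numbers are invented for illustration only; they are
    # NOT data from our studies.
    from scipy.stats import mannwhitneyu

    # Hypothetical 1-5 expert ratings of each group's final report.
    method_a = [4, 3, 5, 4, 2, 4, 3]
    method_b = [2, 3, 2, 4, 1, 3, 2]

    # A non-parametric test suits small samples of ordinal ratings,
    # where normality cannot be assumed.
    stat, p_value = mannwhitneyu(method_a, method_b, alternative="two-sided")
    print(f"U = {stat}, p = {p_value:.3f}")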

One of our goals is to investigate whether the methods under evaluation can be used effectively by users who have no prior knowledge of the methods. Therefore, we have designed a protocol for conducting comparative empirical studies in this setting. The protocol consists of three main phases: training, application, and evaluation.

Our Experimental Protocol involves five types of actors:

  1. Method Designer is the researcher who has proposed one of the methods under evaluation. His/her main responsibility is to train participants in the method and to answer participants' questions during the Application phase. S/he also contributes to the assessment of the methods' effectiveness by analyzing groups' reports.
  2. Customer is an industrial partner who introduces the industrial application scenario to the participants. S/he also has to be available during the Application phase to answer any questions that participants may raise during the analysis.
  3. Observer plays an important role during the Application phase because s/he supplements the audio-video recordings with information about the behavior of participants (e.g., whether they work as a group or alone) and the difficulties that they face during the application of the method. The observer also interviews the groups and leads the post-it notes sessions.
  4. Researcher takes care of the overall organization: s/he sets the research questions, selects the participants, invites the method designers and the customers, and analyzes the data collected during the study.
  5. Participant is the most important role. Participants work in groups and apply a method provided by one of the method designers to analyze the risk and security issues of the scenario provided by the customer.

Experiments

Within this main research stream we have covered a number of themes.

  1. Empirical validation of Risk and Security Requirements Methodologies
    • The eRISE challenge. eRISE is an annual challenge that aims to compare the effectiveness of academic methods for the elicitation and analysis of threats and security requirements, and to investigate why these methods are effective. Four editions of the eRISE challenge have been held:
      • eRISE 2011 (13 students and 36 professionals),
      • eRISE 2012 (15 students and 27 professionals),
      • eRISE 2013 (29 students and 28 professionals),
      • eRISE 2014 (56 professionals).
    • An Experimental Comparison of Tabular vs. Graphical Security Methods. We have conducted four experiments on this topic in:
      • Fall 2012 (28 participants),
      • Fall 2013 (29 participants),
      • Fall 2014 (35 participants),
      • Fall 2015 (28 participants).
  2. The Role of Catalogues of Threats and Security Controls in Security Risk Assessment. On this topic we have conducted three controlled experiments in:
    • Jan 2014 with novices (18 participants),
    • May 2014 with practitioners (15 participants),
    • Nov 2016 with novices (40 participants).
  3. Risk Models Comprehension: An Empirical Comparison of Tabular vs. Graphical Representations. We have conducted eight experiments on this topic in:
    • Oct 1st, 2014 at the University of Trento, Italy (35 participants),
    • Nov 14th, 2014 at PUCRS University in Porto Alegre, Brazil (13 participants),
    • Nov 18th, 2014 at PUCRS University in Porto Alegre, Brazil (27 participants),
    • Sep 16th, 2015 in Cosenza, Italy at the Poste Italiane cyber-security lab (52 participants),
    • Sep 21st, 2015 at the University of Trento, Italy (51 participants),
    • Dec 2nd, 2015 in Bologna, Italy with ATM professionals (15 participants),
    • Jan-Feb 2016, an online comprehensibility experiment with IT professionals (58 participants),
    • Sep 21st, 2016 at the University of Trento, Italy (35 participants).
  4. Empirical Evaluation of CVSS Environmental Metrics (a small scoring sketch follows this list).
    • Nov 2016 at the University of Trento, Italy (29 participants).
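To make the object of the last study concrete, here is a minimal sketch of how CVSS v3 environmental scores can be computed with the open-source `cvss' Python package (pip install cvss); the vector string, and in particular its environmental metrics (CR, IR, AR and the modified M* metrics), is purely illustrative and not taken from the experiment.

    # Minimal sketch: base, temporal and environmental CVSS v3 scores,
    # computed with the open-source `cvss` package. The vector below is
    # purely illustrative.
    from cvss import CVSS3

    # The base metrics describe the vulnerability itself; CR/IR/AR and
    # the M* modified metrics adapt the score to a specific deployment.
    vector = ("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/"
              "CR:H/IR:M/AR:L/MAV:A/MPR:H")
    c = CVSS3(vector)

    base, temporal, environmental = c.scores()
    print(f"Base: {base}  Temporal: {temporal}  Environmental: {environmental}")

Environmental metrics are exactly what our participants had to apply: the same base vulnerability yields a different final score once the analyst factors in the deployment context.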

People

The following is a list of people who have been involved in the project at some point in time.

Projects

This activity was supported by a number of projects.

Publications

Working papers

Published papers

Talks and Tutorials

Dataset

We collected a huge pile of data as evidence of methods' effectiveness: questionnaires, final reports, hours of audio recordings of focus group interviews, hundreds of post-it notes, and more than 300 hours of videos of the training and application phases. If you would like to have access to the raw data, you are welcome to ask, but we would need to discuss the terms of access.