Among the research topics of the Security Group, we focus here on enforcement mechanisms that are practical in the sense that they tolerate the fact that humans make mistakes in good faith.
If we look at the way human organizations manage security, we appreciate their flexibility: a policy officer unsatisfied by our torn driving licence will explicitly ask for another document of her liking, and a project officer will not launch a major review of an EU project because a single deliverable was sent a week late. She might do so after a continued violation of deadlines. Current formal models for enforcement and authentication do not distinguish between small and big infringements.
The starting point is that a server should be able to compute, and communicate to a client, the credentials that are missing to obtain a service, and that it should be possible for either the server or the client to disclose such missing credentials in a piecewise fashion (a generalization of the trust negotiation framework by Winslett, Yu, Winsborough and others). We have specified this formally using abduction and fully implemented it as web services, using PKI and PMI for credentials. It also performed well: the logic takes only a fraction of the time required by the cryptographic verification of the credentials. You can check the TAAS paper for the details and have a look at the architecture.
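The negotiation idea above can be sketched in a few lines. This is only an illustration with hypothetical names, not the TAAS implementation: the server's policy is modelled as a set of required credentials, the "abduced" answer is whatever the client has not yet presented, and disclosure proceeds one credential per round.

```python
# Hypothetical sketch of abduction-style missing-credential negotiation.
# A policy is a set of required credentials; the server tells the client
# which ones are still missing, and the client discloses them piecewise.

def missing_credentials(required, presented):
    """Return the credentials still needed to satisfy the policy."""
    return required - presented

def negotiate(required, client_credentials):
    """Piecewise disclosure: the client releases one missing credential
    per round until the policy is satisfied or it runs out of
    credentials it is able to disclose."""
    presented = set()
    while True:
        missing = missing_credentials(required, presented)
        if not missing:
            return True, presented          # access granted
        disclosable = missing & client_credentials
        if not disclosable:
            return False, presented         # negotiation fails
        presented.add(next(iter(sorted(disclosable))))

granted, shown = negotiate({"id_card", "proof_of_residence"},
                           {"id_card", "proof_of_residence", "passport"})
# granted is True; only the two required credentials were disclosed
```

Note that the client never over-discloses: the passport stays private because it was never in the server's abduced answer.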
Yet this is not enough because, once access has been granted, security monitors suffer from the same lack of flexibility and do not capture the real working of human organizations. Most papers (Schneider with Erlingsson, Hamlen and Morrisett; Ligatti with Bauer and Walker) characterize in theorems the good traces potentially enforceable by this or that enforcement mechanism (safety properties, renewal properties, etc.). In collaboration with researchers from the San Raffaele hospital in Milano (who were interested in the practical side of enforcement) we showed that safety and renewal properties are not what you want. The key observation is that most real-life tasks are repetitions of sub-tasks. We called these iterative properties, and you can see the difference from classical security properties such as safety and renewal in the figure on the side. As an example, consider a drug dispensation process (a process running hundreds of times and lasting tens of steps in the hospital IT system). Safety says that as soon as one single process goes wrong you halt the whole system. Renewal says that until the first mistake is corrected the system will silently gobble up all other actions. Hardly appealing behaviours for any practical purpose…
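The contrast between these enforcement styles can be made concrete with a toy simulation. This is a hypothetical sketch, not our actual monitor: a trace is a list of (iteration, ok) pairs, a safety-style monitor halts everything at the first bad action, while an iterative-style monitor drops only the faulty iteration and lets the other repetitions of the sub-task proceed.

```python
# Hypothetical comparison of enforcement behaviours on a trace of
# repeated sub-tasks. Each trace element is (iteration_id, ok_flag).

def safety(trace):
    """Safety-style enforcement: halt the whole system at the
    first bad action, discarding everything that follows."""
    out = []
    for it, ok in trace:
        if not ok:
            break                       # one mistake stops everything
        out.append((it, ok))
    return out

def iterative(trace):
    """Iterative-style enforcement: drop only the iterations that
    contain a bad action; all other iterations go through."""
    bad_iterations = {it for it, ok in trace if not ok}
    return [(it, ok) for it, ok in trace if it not in bad_iterations]

trace = [(1, True), (2, True), (2, False), (3, True)]
# safety(trace)    -> [(1, True), (2, True)]   iteration 3 is lost too
# iterative(trace) -> [(1, True), (3, True)]   only iteration 2 is dropped
```

On this trace the safety monitor loses the perfectly good iteration 3, while the iterative monitor quarantines only the dispensation run that actually went wrong.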
Yet many of the proponents of those theories have actually implemented systems that enforce such properties.
There is a catch here that many people overlook. What distinguishes an enforcement mechanism is not what happens on good traces, because there nothing should happen! The interesting part is precisely how bad traces (those that do not satisfy the policy P) are converted into good ones (those that do satisfy P). The picture on the side shows the classification by Bauer, Ligatti and Walker of edit automata enforcing a renewal property P. Implemented systems, being by definition implemented, must correct bad traces that are not in P in some way. But this part is simply not reflected in the current theories, which sit at the bottom of the pile.
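To see what such a correction strategy looks like, here is a minimal edit-automaton sketch under assumed names (the policy and actions are invented for illustration). The monitor may suppress actions or insert corrective ones, and it is exactly this choice, i.e. how a bad trace becomes a good one, that the theories leave unspecified.

```python
# Hypothetical edit automaton for the policy P = "no nested 'open',
# and every 'open' is eventually followed by 'close'". The correction
# strategy: suppress a redundant 'open' or spurious 'close', and
# insert a missing 'close' at the end of the trace.

def edit_automaton(actions):
    output, open_pending = [], False
    for a in actions:
        if a == "open":
            if open_pending:
                continue                # suppress: nested open
            open_pending = True
            output.append(a)
        elif a == "close":
            if open_pending:
                open_pending = False
                output.append(a)
            # else: suppress a close with no matching open
        else:
            output.append(a)            # other actions pass through
    if open_pending:
        output.append("close")          # insert the missing close
    return output

# edit_automaton(["open", "read", "open", "read"])
#   -> ["open", "read", "read", "close"]
```

A different, equally valid automaton could instead abort the trace or buffer actions until a close arrives; the output traces all satisfy P, but the behaviours differ wildly, and that difference is what implemented systems must commit to.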
We are currently planning to devise a general mechanism based on the MAP-REDUCE idea that can lead to a programmable model for a whole range of information-flow policies (essentially generalizing Secure Multi-Execution to a property of your choice).
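For readers unfamiliar with Secure Multi-Execution, the core trick can be shown in a few lines (a toy sketch with invented channel names, not our planned mechanism): the program is run once per security level, the low run sees a default value instead of the secret input, and each output channel is taken from the run at its own level, so low outputs cannot depend on the secret.

```python
# Toy secure-multi-execution (SME) sketch with two levels, HIGH and LOW.

DEFAULT = 0  # default value substituted for the secret in the low run

def program(high, low):
    """A deliberately insecure program: it leaks the secret
    straight into its low output channel."""
    return {"high_out": high + low, "low_out": high}   # leak!

def sme(high_input, low_input):
    high_run = program(high_input, low_input)  # sees the real secret
    low_run = program(DEFAULT, low_input)      # secret replaced by DEFAULT
    # each output channel comes from the run at its own level
    return {"high_out": high_run["high_out"],
            "low_out": low_run["low_out"]}

result = sme(42, 7)
# result == {"high_out": 49, "low_out": 0}: the low observer sees 0
# whatever the secret is, so the leak is neutralized by construction
```

The generalization we have in mind replaces "run once per level and route outputs" with a programmable map step over executions and a reduce step that recombines their outputs according to the property of your choice.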
Within the main stream of the project we covered a number of themes.
The following is a list of people who have been involved in the project at some point in time.