Privacy in Security Protocols: a formal and automated analysis

The information society we live in relies heavily on secure information exchanges carried out by security protocols.
How do these protocols work? What level of security do they provide?
At the LSV laboratory, Lucca Hirschi has devoted his PhD thesis to the analysis of cryptographic protocols and to verification techniques that aim at improving their security.

To exchange information securely, one relies on security protocols that specify how communicating agents should behave, notably by using cryptographic primitives (e.g. encryption, signatures).

Given their ubiquitous and critical nature, we need to reach an extremely high level of confidence that they actually meet their goals. These goals vary with the usage context but, more and more often, they include privacy properties (e.g. anonymity, unlinkability).

Unfortunately, designed and deployed security protocols are often flawed, and critical attacks are regularly disclosed, even on protocols of utmost importance, feeding a never-ending cycle of attacks and fixes.

Security protocol analysis

To break the present stalemate, we advocate the use of formal methods, which provide rigorous mathematical frameworks and techniques to analyse security protocols.

One such method, which allows for a very high level of automation, consists in analysing security protocols in the symbolic model and modelling privacy properties as equivalences between two systems.
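
To make the idea concrete, here is a minimal Python sketch (not the applied pi-calculus used by actual verification tools, and with all function names such as senc and tag_session chosen purely for illustration): messages are opaque symbolic terms that the attacker may compare but not decrypt, and unlinkability of a toy RFID-like tag is phrased as indistinguishability between a system where the same tag answers twice and one where two different tags answer once each.

    from itertools import count

    _fresh = count()

    def nonce():
        # A fresh symbolic name, unknown to the attacker.
        return ("nonce", next(_fresh))

    def senc(message, key, randomness):
        # Symbolic encryption: an opaque term that the attacker cannot open
        # without the key, but can compare for (syntactic) equality.
        return ("senc", message, key, randomness)

    def tag_session(identity, key, randomized):
        # One session of a toy RFID-like protocol: the tag emits its encrypted identity.
        randomness = nonce() if randomized else ("const", 0)
        return senc(identity, key, randomness)

    def same_equality_pattern(outputs_a, outputs_b):
        # Crude attacker test: which observed messages are equal to which?
        # (Real tools use static equivalence on frames; this is only a toy.)
        def pattern(outputs):
            return [[x == y for y in outputs] for x in outputs]
        return pattern(outputs_a) == pattern(outputs_b)

    def unlinkable(randomized):
        id1, key1 = ("id", 1), ("key", 1)
        id2, key2 = ("id", 2), ("key", 2)
        real = [tag_session(id1, key1, randomized),
                tag_session(id1, key1, randomized)]   # the same tag answers twice
        ideal = [tag_session(id1, key1, randomized),
                 tag_session(id2, key2, randomized)]  # two different tags answer once each
        return same_equality_pattern(real, ideal)

    print(unlinkable(randomized=False))  # False: equal ciphertexts link the two sessions
    print(unlinkable(randomized=True))   # True: this toy attacker cannot tell the systems apart

With deterministic encryption the real system repeats the same ciphertext while the ideal one does not, so the two systems are distinguishable and the toy check reports an attack; with a fresh nonce per session this particular distinguisher disappears.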

Unfortunately, checking such equivalences is undecidable in the general case. To circumvent undecidability, two main approaches have emerged.

First, for a bounded number of agents and sessions of the protocol under analysis, it is possible to symbolically explore all possible executions, which yields decision procedures for equivalence between systems.
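
As a rough illustration, the sketch below (plain Python; real decision procedures work on symbolic constraint systems rather than concrete traces, and the action names are made up) enumerates every execution of two fixed sessions.

    def interleavings(sessions):
        # Yield every interleaving of the (finite) action sequences of a
        # bounded number of sessions; an equivalence check would then be
        # performed along each such execution.
        if all(len(actions) == 0 for actions in sessions):
            yield ()
            return
        for i, actions in enumerate(sessions):
            if actions:
                rest = sessions[:i] + (actions[1:],) + sessions[i + 1:]
                for tail in interleavings(rest):
                    yield (actions[0],) + tail

    # Two sessions of a toy protocol, each performing three ordered actions.
    sessions = (("out_hello_A", "in_reply_A", "out_ack_A"),
                ("out_hello_B", "in_reply_B", "out_ack_B"))
    traces = list(interleavings(sessions))
    print(len(traces), "possible executions to check")  # 20 for this small example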

Second, for the general case, one can semi-decide the problem by leveraging dedicated abstractions, notably a strong form of equivalence (i.e. diff-equivalence).

The two approaches, i.e. decision for the bounded case and semi-decision for the unbounded case, suffer from two different problems that significantly limit their practical impact. First, (symbolically) exploring all possible executions leads to the so-called state-space explosion problem, caused by the concurrent nature of security protocols.
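
To give an order of magnitude: the number of interleavings of n independent sessions of m actions each is the multinomial coefficient (n·m)! / (m!)^n. The small computation below, a generic illustration rather than a figure from the thesis, evaluates it for a few values of n.

    from math import factorial

    def interleaving_count(sessions, actions_per_session):
        # Number of distinct interleavings of `sessions` independent sequences
        # of `actions_per_session` actions each: (n*m)! / (m!)^n.
        n, m = sessions, actions_per_session
        return factorial(n * m) // factorial(m) ** n

    for n in range(1, 6):
        print(n, "sessions of 3 actions:", interleaving_count(n, 3))
    # 1, 20, 1680, 369600, 168168000: the blow-up is combinatorial.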

Concerning the unbounded case, diff-equivalence is actually too imprecise to meaningfully analyse some privacy properties such as unlinkability, rendering methods and tools that rely on it ineffective in such cases.

Verification techniques for better security

In the present thesis, we address those two problems, going back and forth between theory and practice.
Practical concerns motivate our work, but our solutions take the form of theoretical developments. Moreover, we confirm the practical relevance of our solutions by implementing them and applying them to real-world case studies, i.e. the analysis of deployed security protocols.

First, we have developed new partial order reduction techniques in order to dramatically reduce the number of states to explore without losing any attack. We design them to be compatible with equivalence verification and so that they integrate smoothly into the frameworks on which existing procedures and tools are based. We formally prove the soundness of such an integration in a tool and provide a full implementation. We are thus able to provide benchmarks showing the dramatic speedups brought by our techniques, and conclude that more protocols can henceforth be analysed.
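
As a flavour of what such reductions buy, here is a textbook sleep-set-style sketch in Python on a toy system of fully independent sessions (this is a generic illustration, not the specific techniques of the thesis, which are designed to remain sound for equivalence properties): it prunes executions that differ only by the order of independent actions.

    # Three sessions, each performing two local actions; actions of different
    # sessions are assumed independent (they commute).
    SESSIONS = [("a1", "a2"), ("b1", "b2"), ("c1", "c2")]

    def enabled(state):
        # A state is a tuple of per-session program counters.
        return [i for i, pc in enumerate(state) if pc < len(SESSIONS[i])]

    def step(state, i):
        return state[:i] + (state[i] + 1,) + state[i + 1:]

    def explore_naive(state):
        # Explore every interleaving; return the number of nodes visited.
        return 1 + sum(explore_naive(step(state, i)) for i in enabled(state))

    def explore_sleep(state, sleep):
        # Sleep-set reduction: a sibling transition that has already been
        # explored and is independent of the chosen one is put to sleep in
        # the child, pruning executions that differ only by swapping
        # independent actions. Here all pairs of actions from different
        # sessions are independent, so every explored sibling goes to sleep.
        nodes, done = 1, set()
        for i in enabled(state):
            if i in sleep:
                continue
            nodes += explore_sleep(step(state, i), sleep | done)
            done.add(i)
        return nodes

    init = (0,) * len(SESSIONS)
    print("nodes explored without reduction:", explore_naive(init))        # 271
    print("nodes explored with sleep sets:  ", explore_sleep(init, set())) # 27

In this fully independent toy, the reduced exploration visits each reachable state exactly once, whereas the naive exploration revisits states through many redundant orderings.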

Second, to solve the precision issue for the unbounded case, we propose a new methodology based on the idea of ensuring privacy via sufficient conditions.
We present two conditions that imply unlinkability and anonymity and that can be verified using existing tools (e.g. ProVerif).

We implement a tool that puts this methodology into practice, thereby solving the precision issue for a large class of protocols. This novel approach allows us to conduct the first formal analysis of several real-world protocols (some of them widely deployed) and to discover novel attacks.