In the second part of the course we studied a static analysis
technique (control flow analysis) for proving secrecy and forward
secrecy properties of crypto protocols. In the lab we used a tool
(Rewrite) that performs this analysis.
Typical exam exercises on this part are about running the analysis
technique "on paper". Of course, a computer runs these algorithms much
faster than a human, so exercises will not involve the analysis of a
full real-world protocol, but only small excerpts that can be handled
by hand. Below, I mention a few examples.
*** Formalization Exercise
Consider the following protocol excerpt. Alice (A) and Bob (B) share
a symmetric key K. A generates a fresh symmetric key K2, encrypts it
with K, and sends it to B. B receives the message, decrypts it with K
to obtain K2, generates a random number N, and sends its encryption
under K2 to A. A then sends B the hash of N.
Formalize the above protocol steps using the applied-pi notation.
(End of the exercise)
I will not be picky about requiring the exact syntax, but your answer
must be clear and unambiguous. I would expect something like the
following.
new K .
( # Alice
new K2 . out enc(K2,K) . in X . out hash(dec(X,K2)) . ()
| # Bob
in Y . let K2 = dec(Y,K) . new N . out enc(N,K2) . ()
)
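As a sanity check on the formalization above, here is a small Python
sketch (an illustration, not part of the course tooling) that runs the
protocol steps in a free term algebra, where dec(enc(m, k), k) = m and
names such as K, K2, N are plain atoms:

```python
# Symbolic constructors for the term algebra; terms are nested tuples.

def enc(m, k):
    """Symbolic encryption of message m under key k."""
    return ("enc", m, k)

def dec(c, k):
    """Symbolic decryption: succeeds only with the matching key."""
    if isinstance(c, tuple) and c[0] == "enc" and c[2] == k:
        return c[1]
    raise ValueError("decryption failed")

def hash_(m):
    """Symbolic hash of m."""
    return ("hash", m)

# The protocol run:
K = "K"                      # long-term key shared by A and B
K2 = "K2"                    # fresh key generated by A
msg1 = enc(K2, K)            # A -> B
k2_at_B = dec(msg1, K)       # B decrypts with K to obtain K2
N = "N"                      # B's fresh random number
msg2 = enc(N, k2_at_B)       # B -> A
msg3 = hash_(dec(msg2, K2))  # A -> B: hash of N

assert msg3 == ("hash", "N")
```

The final assertion confirms that A ends up hashing exactly B's
nonce N, matching the intended message flow of the excerpt.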
*** Tree Automaton Generation Exercise
Consider the following specification
( new K . out cons(cons(K,1),2) . ()
| ! . in X . out fst(X) . ()
)
Generate a tree automaton representing an over-approximation of the
set of messages exchanged by the above code, as done by the control
flow analysis, by running the function gen(...) on it. Specify the
states of the resulting automaton (the "@x" labels) as well as its
transitions (@x -> f(@y)).
(End of exercise)
I will not require you to name the @x labels exactly as gen() does,
or even to generate exactly the same transitions, but you have to
provide an automaton which is correct and can be easily related to
the specification. For instance, the following would be acceptable
even if the actual gen()-generated automaton is slightly different.
@K_bar -> K , next(@K_bar)
@K -> val(@K_bar)
@out-above -> cons(cons(@K,1),2)
@out-below -> @out-repl
@in-repl -> @in-below
@X -> @in-repl
@out-repl -> fst(@X)
@in-repl -> @out-repl
@in-above -> @out-below , @in
@in-below -> @out-above , @in
@out -> @out-above , @out-below
Again, variants of the above will be accepted, provided they do
over-approximate the message flow.
*** Tree Automaton Completion Exercise
Consider the tree automaton A
@a -> f(@b,@b)
@b -> 1 , 2 , h(@b)
and the rewriting rule
f(h(X),X) => g(X)
Applying the completion algorithm, construct a tree automaton
over-approximating the language of A up to rewriting.
(End of exercise)
Here you should notice that the rule is not left-linear, so you need
an over-approximation for state intersection as well. E.g.
@b intersects @b : yes ( there's 1 )
@a intersects @a : yes ( f(1,1) )
@a intersects @b : no
@b intersects @a : no ( symmetric )
Note that you have to keep this table updated when you add further
transitions to the automaton A.
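The intersection table can also be computed mechanically. The
following Python sketch (an illustration, not the course tool)
decides non-emptiness of the intersection of two states via a product
construction over the exercise automaton A; transitions map each
state to the right-hand sides it can produce, with leaves as 0-ary
symbols:

```python
from itertools import product

# Automaton A from the exercise: @a -> f(@b,@b); @b -> 1 | 2 | h(@b)
TRANS = {
    "@a": [("f", "@b", "@b")],
    "@b": [("1",), ("2",), ("h", "@b")],
}

def intersects(p, q, seen=None):
    """True iff some term is accepted at both state p and state q."""
    if seen is None:
        seen = set()
    if (p, q) in seen:            # revisiting the pair: no new witness here
        return False
    seen = seen | {(p, q)}        # extend the path with this pair
    for rp, rq in product(TRANS[p], TRANS[q]):
        if rp[0] != rq[0] or len(rp) != len(rq):
            continue              # root symbols must agree
        if all(intersects(sp, sq, seen) for sp, sq in zip(rp[1:], rq[1:])):
            return True
    return False

assert intersects("@b", "@b")        # yes: 1 is accepted at both
assert intersects("@a", "@a")        # yes: f(1,1)
assert not intersects("@a", "@b")    # no: f-rooted terms vs 1, 2, h-terms
```

A minimal common witness never repeats a state pair along a path (a
smaller witness would exist otherwise), which is why the path-based
cycle check is enough here.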
Running the completion algorithm, the rule is triggered since
@b intersects @b, hence we add the following to the automaton:
@a -> g(@b)
It is then easy to check that the intersection approximation is not
affected by this new transition. Also, the rewriting rule is not
triggered on any new cases. So, we reach a fixed point, and
the augmented automaton is the final result.
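To see why @a -> g(@b) is the right addition, note that f(h(1),1) is
accepted at @a and rewrites to g(1), so g(1) must be in the completed
language. A small Python membership checker (a sketch, with terms and
right-hand sides as tuples) makes this concrete:

```python
def accepts(state, term, trans):
    """True iff term is accepted at state (bottom-up membership)."""
    for rhs in trans[state]:
        if rhs[0] != term[0] or len(rhs) != len(term):
            continue                    # root symbol/arity must match
        if all(accepts(s, t, trans) for s, t in zip(rhs[1:], term[1:])):
            return True
    return False

# The automaton before and after the completion step.
BEFORE = {"@a": [("f", "@b", "@b")], "@b": [("1",), ("2",), ("h", "@b")]}
AFTER  = {"@a": [("f", "@b", "@b"), ("g", "@b")], "@b": BEFORE["@b"]}

f_h1_1 = ("f", ("h", ("1",)), ("1",))   # the redex f(h(1),1)
g1 = ("g", ("1",))                      # its reduct g(1)

assert accepts("@a", f_h1_1, BEFORE)    # redex accepted already in A
assert not accepts("@a", g1, BEFORE)    # but its reduct is not...
assert accepts("@a", g1, AFTER)         # ...until @a -> g(@b) is added
```

The three assertions mirror the completion step: the redex is in
L(A), its reduct is not, and the added transition repairs this.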
Of course, in the exam expect a few more iterations, since added
transitions may trigger the rewriting rule again.