Fred Schneider, the Samuel B. Eckert Professor of Computer Science at Cornell University, presented the Day 2 keynote. Since Fred is well-known in the security community, I looked forward to this talk – and it did not disappoint.
He started his talk by saying that the first crisis of software was about correctness. While this might not be solved completely, we have come a long way. However, we now have a new crisis to deal with: software is not secure. An ever-increasing reliance on software has amplified this problem considerably in recent years. Our defenses are largely reactive, defending against known attacks, so we are always a step behind the attackers. Moreover, we do not really know how to measure the effectiveness of our defenses.
While Byzantine fault tolerance is useful for access control, it is relatively useless for confidentiality, as it rests on the underlying assumption that replicas fail independently. To make it useful, we would need a calculus for independence.
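To see why the independence assumption matters so much, here is a small back-of-the-envelope sketch (my own illustration, not from the talk). It compares the failure probability of a classic n = 3f + 1 Byzantine fault tolerant system under two assumptions: replicas failing independently with some probability p, versus replicas that share a common software bug and therefore all fail together. The per-replica failure probability of 1% is an arbitrary assumed figure.

```python
from math import comb

def p_system_failure_independent(n: int, f: int, p: float) -> float:
    """Probability that more than f of n replicas fail,
    assuming each replica fails independently with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(f + 1, n + 1))

f = 1
n = 3 * f + 1   # classic BFT setup: n = 3f + 1 replicas tolerate f faults
p = 0.01        # assumed per-replica failure probability

independent = p_system_failure_independent(n, f, p)
correlated = p  # identical replicas sharing one bug fail as a block

print(f"independent failures: {independent:.2e}")  # ~5.92e-04
print(f"fully correlated:     {correlated:.2e}")   # 1.00e-02
```

With independent failures, replication buys roughly an order-of-magnitude improvement here; with fully correlated failures, replication buys nothing at all – which is exactly why a calculus for independence would be needed before such guarantees can be trusted.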
Similarly, program refinement can be used to argue that programs satisfy properties, but it inherently assumes that we model behavior as sequences of steps and that properties can be equated with sets of such traces.
So Fred argues that we need to get beyond seeing security as an art (based on innate ability) or a craft (transferable through training). We need an SoS: a SCIENCE of SECURITY – a body of laws (similar, perhaps, to the laws of thermodynamics). Such laws should give us predictive capability; they should transcend specific systems, attacks and defenses, be applicable to real settings, and provide explanatory value.
The rest of the talk toured the (possible) landscape of such laws for policies, attacks and defenses. He argued that reference monitors are inherently limited to enforcing safety properties, not liveness properties. Current policies are often discussed in terms of CIA: Confidentiality, Integrity and Availability. These concepts are, however, not orthogonal (we can, for example, make a system 100% confidential by taking all availability away), which limits their value from a formal perspective. An argument was made for using hyperproperties instead, but hyperproperties are not refinement-closed – and we often program by refinement. (Disclaimer: I do not understand hyperproperties so well at this point in time, so maybe I got the point wrong.) Obfuscation, it was argued, provides good probabilistic protection. What was interesting in this discussion is that he could make comparative arguments without paying attention to the internals: that is, after all, what good laws should allow you to do.
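The refinement-closure point can be made concrete with a standard textbook-style illustration (my own sketch, not from the talk). Possibilistic noninterference – a hyperproperty – demands that the set of possible public outputs is the same for every secret. A nondeterministic program that outputs an arbitrary bit satisfies it; a refinement that resolves the nondeterminism by copying the secret does not, even though every behavior of the refined program is a behavior of the abstract one. The names and encoding below are mine.

```python
# Possibilistic noninterference (a hyperproperty): for every secret,
# the SET of possible public outputs must be identical.
def noninterfering(program, secrets, choices):
    output_sets = [frozenset(program(s, c) for c in choices) for s in secrets]
    return len(set(output_sets)) == 1

# Abstract, nondeterministic program: output any bit, ignoring the secret.
# The nondeterministic choice is modeled as an extra argument `c`.
abstract = lambda secret, c: c % 2

# A refinement resolves the nondeterminism -- but this one leaks the secret.
refined = lambda secret, c: secret % 2

secrets, choices = [0, 1], [0, 1]
print(noninterfering(abstract, secrets, choices))  # True
print(noninterfering(refined, secrets, choices))   # False
```

Since refinement only removes behaviors, any trace property (safety, for instance) survives it – but a hyperproperty quantifies over sets of traces, so shrinking the set can break it. That, as I understood it, is why programming by refinement sits uneasily with hyperproperty-based policies.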
One of his most interesting (and possibly controversial) statements was that “trust cannot be created, it can only be relocated…”. Again a law-like statement, unproven… but then, somebody once studied thermodynamics too…
Overall, an excellently delivered talk on a rather complex and tricky subject.