Proof assistants are formal proof management systems that provide a formal language for writing mathematical definitions, executable algorithms and theorems, as explained here by Inria Rennes – Bretagne Atlantique.
Software correctness, and in particular formal software correctness, is a notion that is becoming ever more important, with stakes ranging from financial losses in embedded systems to lethal failures in medicine or autonomous vehicles. Proof assistants based on type theory have proven effective for establishing the correctness of substantial pieces of software, such as the complete toolchain of the DeepSpec project. Among proof assistants, one of the most popular is the Coq proof assistant.
It is built around a well-delimited kernel that performs typechecking of definitions in a variant of the Calculus of Inductive Constructions (CIC). The typechecking process corresponds to the certification phase of the proof assistant, in the sense that if a proof typechecks then it must be correct. Since the introduction of CIC and the first implementation of the Coq proof assistant, very few issues have been found in the underlying theory of Coq.
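To make this concrete, here is a minimal Coq snippet: the kernel typechecks the proof term built by the tactics, and its acceptance of the Qed is what certifies the lemma.

```coq
(* The tactics build a proof term; the kernel then typechecks it
   against the stated proposition.  Acceptance of the Qed is the
   certification step. *)
Lemma zero_plus : forall n : nat, 0 + n = n.
Proof. intro n. reflexivity. Qed.
```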
There have been several debates on which sets of axioms are jointly consistent, but globally, the type theory supporting the Coq proof assistant is very stable and reliable. This is, however, far from the case for the implementation of the kernel of Coq, for which, on average, one critical bug is found every year. This webpage hosts the list of the critical bugs that have been found in the Coq proof assistant. For instance, one can find that for some time a substitution was missing in the body of a let in the definition of pattern-matching, somewhere in the code, which could lead to a proof of False. Fortunately, all of these critical bugs are very tricky to exhibit and would be very unlikely to arise by accident; still, a malicious developer could exploit them to produce hard-to-detect fake formalisations.
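As a rough illustration of why such substitutions are delicate (this is a sketch of the construct involved, not the actual buggy kernel code), a let bound inside a match branch forces the kernel to substitute the matched data into the let body when reducing the term:

```coq
(* A let bound inside a match branch: reducing this definition
   requires the kernel to substitute the constructor argument x
   into the body of the let.  A missed substitution in code like
   this, deep in the kernel, is the kind of slip that once made a
   proof of False derivable. *)
Definition head_or (d : nat) (l : list nat) : nat :=
  match l with
  | nil => d
  | cons x _ => let y := x in y
  end.
```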
At a time when certificates obtained from proof assistants such as Coq are gaining traction in the safety and security industry, this looks like a bad joke. However, implementing and maintaining an efficient and bug-free type checker for CIC is a very complex task, mainly because the implementation relies on many invariants which are not always explicit and are easy to break when modifying the code or adding new functionality. And because of Gödel’s second incompleteness theorem, there is no hope of completely proving the correctness of the specification of Coq inside Coq.
Indeed, Gödel taught us almost one hundred years ago that no sufficiently powerful logical system can justify itself. However, this negative result does not prevent us from proving the correctness of the implementation, assuming the correctness of the underlying logical system, thus moving from a trusted code base (TCB) to a trusted theory base (TTB) paradigm.
MetaCoq: A proof assistant which justifies itself
The MetaCoq project aims at providing the first implementation of a type checker for the core language of a proof assistant (here Coq), which is proven correct – in the proof assistant itself – with respect to an entirely formal specification of the core language. We would like to mention right away that we are not claiming to prove the consistency of Coq in Coq, which would obviously contradict Gödel’s second incompleteness theorem.
Therefore, it is not possible to prove the consistency of the kernel of Coq using Coq, but it is perfectly possible to prove the correctness of an implementation of a type checker assuming the consistency of the theory. This is the only Achilles heel of the MetaCoq project: the correctness of the specification of the metatheory. If the metatheory fulfils the well-known, assumed properties, then there is no error in the implementation. Note that if one of the assumed properties turned out not to be true, the implementation might be wrong – but there would be a much more serious problem to fix in Coq’s metatheory.
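The overall shape of the guarantee can be sketched as follows. The names below are invented placeholders and differ from MetaCoq's actual definitions, but the statement pattern – a boolean checker whose acceptance implies the formal typing judgement – is the essence of the result:

```coq
(* Abstract interface: a typing judgement (the specification) and
   a boolean checker (the implementation).  All names here are
   hypothetical, not MetaCoq's real API. *)
Parameter global_env context term : Type.
Parameter typing : global_env -> context -> term -> term -> Prop.
Parameter check : global_env -> context -> term -> term -> bool.

(* Soundness: whenever the checker says "yes", the term really has
   the claimed type according to the specification. *)
Conjecture check_sound :
  forall Sigma Gamma t T,
    check Sigma Gamma t T = true -> typing Sigma Gamma t T.
```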
Without entering into philosophical debates, we argue that this project provides important assurances about the correctness of a proof assistant that are complementary to the many theoretical studies already existing in the literature.
Finally, to get a correct and efficient typechecking algorithm in practice, the certified checker needs to be extracted to an efficient language, such as OCaml. In order to maintain the correctness guarantees on the extracted code, the MetaCoq project also provides a proven-correct extraction mechanism.
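For a flavour of what extraction does, the snippet below runs Coq's built-in extraction command on a toy function. (MetaCoq's contribution is a verified replacement for this mechanism, not this command itself.)

```coq
(* Built-in extraction to OCaml, shown on a toy definition. *)
From Coq Require Extraction.
Extraction Language OCaml.

Definition andb3 (a b c : bool) : bool := andb a (andb b c).

(* Prints the corresponding OCaml code to the console. *)
Extraction andb3.
```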