I am currently reading through Ulf Norell's PhD Thesis, where he describes the implementation of Agda.
In Chapter 2.2 (page 41), which is concerned with pattern matching with dependent types, he points out that some patterns - even when they are disjoint and exhaustive - cannot be represented as definitional equalities in the underlying core type theory. He illustrates that with the following example:
maj : Bool → Bool → Bool → Bool
maj true true true = true
maj true false z = z
maj false y true = y
maj x true false = x
maj false false false = false
Unfortunately he does not prove or motivate this claim. Intuitively I think it is true, because we probably cannot really 'count' using the core theory, but I have not found a proper argument that convinces me of the truth of this statement.
Could someone here please explain why certain disjoint and exhaustive patterns cannot be represented as definitional equalities - maybe using the example above?
There is an extensive discussion of this example in our paper "Overlapping and order-independent patterns" at ESOP 2014 (https://jesper.cx/files/overlapping-and-order-independent-patterns.pdf).
Basically, most proof assistants translate definitions by pattern matching to case trees (and sometimes further to eliminators), but there is no way to faithfully represent the maj function as a case tree. The reason is that a case tree forces you to decide which argument to match on first, and as soon as you match on any argument, every equation in which that argument is a variable no longer holds as a definitional equality.
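To make this concrete, here is a hypothetical sketch (in Haskell, since it shares Agda's pattern-matching syntax; the name majTree and the particular splitting order are my own, not from the thesis or the paper) of one possible case tree for maj that splits on the first argument first. Haskell has no notion of definitional equality, but the nested-case structure mirrors what an elaborator would produce, and the comments mark which of the five original clauses survive as left-to-right computation rules and which only hold instance by instance:

```haskell
-- A case tree for maj that splits on its first argument first.
-- Clauses whose first pattern is a variable (here: maj x true false = x)
-- can no longer fire on a neutral first argument; they are only
-- recovered separately in each branch, i.e. propositionally, not
-- definitionally.
majTree :: Bool -> Bool -> Bool -> Bool
majTree True  y z = case y of
  True  -> case z of
    True  -> True      -- maj true true true = true (holds definitionally)
    False -> True      -- only the x = true instance of maj x true false = x
  False -> z           -- maj true false z = z (holds definitionally)
majTree False y z = case z of
  True  -> y           -- maj false y true = y (holds definitionally)
  False -> case y of
    True  -> False     -- only the x = false instance of maj x true false = x
    False -> False     -- maj false false false = false (holds definitionally)
```

Whichever argument the case tree splits on first, the clause that leaves that argument as a variable breaks into instances in the same way, so no single case tree validates all five equations definitionally at once.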