For this post, I will use the programming language Hoq. I chose it for two reasons:
- I like the functional (Agda/Idris) style of proving, particularly when exploring the possibilities of the type system. Tactics-based provers seem too removed from the underlying mathematics to give me a clear picture of what's going on.
- I wanted to use a language built for HoTT, rather than one that has been modified to use it. Idris is incompatible with HoTT due to the K axiom, and, though Agda can be run K-less these days, I don't know whether it is missing other things HoTT needs (my guess being that it is).
The objective of this exercise is to prove right distributivity of boolean or over and by first proving left distributivity and commutativity, then combining them in the obvious way. The fact that we can combine them in the obvious way is the focus of this post. This contrasts with what we'd do in an intensional language, as we will see.
First, we define some basic things, which should really be in some prelude module. This will hopefully give an introduction to the syntax, which is similar to that of Idris and Agda. I've refrained from defining the most general versions of ext and flip for simplicity (and to help Hoq infer implicit arguments).
-- Booleans
data Bool = False | True

or : Bool -> Bool -> Bool
or False False = False
or _ _ = True

and : Bool -> Bool -> Bool
and True True = True
and _ _ = False

-- Convenient alias: at p = \i -> p @ i
at : {A : Type} -> {x y : A} -> x = y -> I -> A
at _ _ _ p i = p @ i

-- Equality proofs
ext : {A B : Type} -> {f g : A -> B} -> ((a : A) -> f a = g a) -> f = g
ext _ _ _ _ p = path (\i a -> p a @ i)

idp : {A : Type} -> {a : A} -> a = a
idp _ a = path (\_ -> a)

-- Combinator
flip : {A B C : Type} -> (A -> B -> C) -> B -> A -> C
flip _ _ _ f y x = f x y

Also introduced implicitly is the interval type I, as seen in the lambda variable i. An equality proof p between terms of a type A is basically a wrapper around a function of type I -> A, and the underlying function is accessed using p @ i. I is supposed to be analogous to \({\leq}|0,1|{\geq}\) (written by people other than me as \([0,1]\)), with endpoints left and right which, crucially, we can't pattern match on. If p : x = y, then p @ left is x and p @ right is y.
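To make the endpoint behaviour concrete, here is a tiny example of my own (a hypothetical illustration, not part of the development that follows):

-- A constant path at True, and its endpoints read back with @.
constTrue : True = True
constTrue = path (\_ -> True)

constTrueLeft : Bool
constTrueLeft = constTrue @ left   -- the left endpoint, True

constTrueRight : Bool
constTrueRight = constTrue @ right -- the right endpoint, True

Of course, constTrue is just idp specialised to True; the point is only to show @, left and right in action.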
In order to proceed, we prove the required facts about or in the standard way:
orCommutative : (x y : Bool) -> or x y = or y x
orCommutative False False = idp
orCommutative _ _ = idp

orLeftDistr : (x y z : Bool) -> or x (and y z) = and (or x y) (or x z)
orLeftDistr False False _ = idp
orLeftDistr False _ False = idp
orLeftDistr False True True = idp
orLeftDistr True _ _ = idp

As you can see, we use idp (the identity path) where we might expect Refl on the right-hand side.
We could just prove right distributivity in the same way, but that would be terribly boring. The point of this article is really to introduce a method that can be used in more complex situations, where we want to avoid code duplication. In intensional type theory, the proof we want might look something like this (using equational reasoning syntax from examples/Paths.hoq):
orRightDistr' : (x y z : Bool) -> or (and x y) z = and (or x z) (or y z)
orRightDistr' x y z =
  or (and x y) z        ==< orCommutative (and x y) z >==
  or z (and x y)        ==< orLeftDistr z x y >==
  and (or z x) (or z y) ==< pmap (\w -> and w (or z y)) (orCommutative z x) >==
  and (or x z) (or z y) ==< pmap (and (or x z)) (orCommutative z y) >==
  and (or x z) (or y z) !qed

This is rather repetitive, though. Each time we want to apply orCommutative, we have to say exactly where to apply it in the expression, which involves writing out the expression over and over again.
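As an aside, pmap (imported from examples/Paths.hoq along with the reasoning combinators) applies a function to both sides of an equality. I haven't reproduced the exact definition from that file, but in the style used here it would look something like this (my sketch, not necessarily the library's code):

-- Congruence: apply f pointwise along the path p.
pmap : {A B : Type} -> (f : A -> B) -> {a a' : A} -> a = a' -> f a = f a'
pmap _ _ f _ _ p = path (\i -> f (p @ i))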
We'll proceed with the main proof in small steps. To put us on steady ground, we prove or = flip or:
orFlippedEq : or = flip or
orFlippedEq = ext (\x -> ext (\y -> orCommutative x y))

The inner ext turns the pointwise proofs orCommutative x y into a proof that or x = flip or x for each x, and the outer ext lifts that to or = flip or. Now, at last, we get to a HoTT-specific part. Here, we prove that the identity we're trying to prove is equal, as a type, to the type of orLeftDistr. Naturally, we produce an equality between equalities:
orRightDistrEqOrLeftDistr : (x y z : Bool) -> (or x (and y z) = and (or x y) (or x z)) = (or (and y z) x = and (or y x) (or z x))
orRightDistrEqOrLeftDistr x y z = path (\i -> (orFlippedEq @ i) x (and y z) = and ((orFlippedEq @ i) x y) ((orFlippedEq @ i) x z))

The expression inside the lambda abstraction is simply the left side of the required equality, but with or replaced by orFlippedEq @ i. orFlippedEq @ left is or and orFlippedEq @ right is flip or, giving us the required endpoints.
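To spell that out, here is a hypothetical sanity check of my own (not part of the original development) naming the two endpoints of this type-valued path; the left one is the statement of orLeftDistr and the right one is the flipped, right-distributivity statement:

-- The endpoints of the path of types, read off with @.
leftDistrType : (x y z : Bool) -> Type
leftDistrType x y z = orRightDistrEqOrLeftDistr x y z @ left

rightDistrType : (x y z : Bool) -> Type
rightDistrType x y z = orRightDistrEqOrLeftDistr x y z @ right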
The final thing we need to introduce is coe (short for “coerce”, presumably). Its type is (A : I -> Type) -> (i : I) -> A i -> (j : I) -> A j, meaning that it takes a family of types over the interval (such as a type equality converted with at) and moves a given value along it from i to j. Here, we use it to transport our proof of left distributivity into a proof of right distributivity:
orRightDistr : (x y z : Bool) -> or (and x y) z = and (or x z) (or y z)
orRightDistr x y z = coe (at (orRightDistrEqOrLeftDistr z x y)) left (orLeftDistr z x y) right

And that's the result we wanted.
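As a final, hypothetical check (not in the original post), the transported proof can be instantiated like any other equality proof:

-- Right distributivity at particular booleans.
orRightDistrExample : or (and True False) False = and (or True False) (or False False)
orRightDistrExample = orRightDistr True False False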