Foundations of Functional Programming/Simply typed λ-calculus

The simply typed λ-calculus is a typed λ-calculus with a monomorphic type system. It comes in an implicitly typed version, and an explicitly typed version. The difference between these two systems is that in the explicitly typed version, arguments to λ-abstractions must be annotated with types, whereas in the implicitly typed version, the types need not be specified and can be inferred.

Syntax for types
In the simply typed λ-calculus there are two kinds of type expressions: type variables, and function types. We use Greek letters $$\alpha,\beta,...$$ to denote type variables. We use boldface letters $$\mathbf{A},\mathbf{B},...$$ to denote arbitrary type expressions. We use $$\mathbf{A} \to \mathbf{B}$$ to denote the type of functions from $$\mathbf{A}$$ to $$\mathbf{B}$$.

The syntax for type expressions is as follows:

$$ \begin{array}{rcl} \text{TypeVar} &::=& \alpha\ |\ \beta\ |\ \cdots \\ \text{TypeExpr} &::=& \text{TypeVar} \\ &|&  \text{TypeExpr} \to \text{TypeExpr} \end{array} $$

In type expressions we can use parentheses to disambiguate scope, and $$\to$$ is right associative, as usual: $$\alpha \to \beta \to \gamma$$ abbreviates $$\alpha \to (\beta \to \gamma)$$.

The interpretation of type variables is tricky, and will be unfamiliar to readers coming from languages like Haskell and ML. A type variable $$\alpha$$ does not denote a polymorphic type. $$\alpha \to \alpha$$ is not the type of a function which takes an object of an arbitrary type, and returns an object of the same type. Rather, it is the type of a function which takes an object of type $$\alpha$$, and returns an object of type $$\alpha$$.

A type variable denotes a single, monomorphic type, such as integer, boolean, etc. The simply typed λ-calculus does not say what types the type variables represent. As far as the λ-calculus is concerned, they can denote any types; but each denotes a single, fixed, monomorphic type.

From here, the two versions of the simply typed λ-calculus — the implicitly typed version and the explicitly typed version — diverge. We will introduce each version in turn, beginning with the implicitly typed version.

Syntax for values
In the implicitly simply typed λ-calculus, the syntax for value expressions is the same as in the untyped λ-calculus. Value expressions are variables $$a,b,c,...$$, applications $$AB$$, or λ-abstractions $$\lambda x. A$$.

Statements, declarations, and contexts
A "statement" is a piece of syntax stating that a given value expression has a given type. Statements are of the form $$A : \mathbf{T}$$, meaning that $$A$$ has the type $$\mathbf{T}$$. $$A$$ is called the "subject" of the statement, and $$\mathbf{T}$$ is called the "predicate."

A "declaration" is a statement where the subject is a variable. That is, a declaration is a statement of the form $$x : \mathbf{T}$$. To get some initial intuition for why declarations are special, consider that the type assertions that appear in the top level of a program are declarations. They predicate, of a particular global name, that it has a given type.

A "context" is a set of declarations. In a context, each variable must occur as the subject of at most one declaration. That is, you must not declare the type of a variable twice. In programming terms, a context is a little bit like a C header file, giving the types of some names without their definitions. However, for us it will be more useful to think of a context as a set of assumptions. What this means will become clear momentarily.

We denote contexts by upper case Greek letters $$\Gamma,\Delta,...$$

Type entailments
We write $$\Gamma \vdash A : \mathbf{T}$$ to express that under the assumptions in the context $$\Gamma$$, $$A$$ has type $$\mathbf{T}$$. This syntactic item is called a "type entailment."

What does it mean to say that $$A$$ has type $$\mathbf{T}$$ under the assumptions in $$\Gamma$$? To give a trivial example, under the assumption that $$x$$ has type $$\alpha$$, it follows that $$x$$ has type $$\alpha$$. That is, $$x : \alpha \vdash x : \alpha$$. To give a less trivial example:

$$x : \alpha, f : \alpha \to (\beta \to \beta) \vdash fx : \beta \to \beta$$

That is, if $$x$$ has type $$\alpha$$, and $$f$$ has type $$\alpha \to (\beta \to \beta)$$, then $$fx$$ has type $$\beta \to \beta$$.

In the case that $$\Gamma$$ is the empty set, we can write simply $$\vdash A : \mathbf{T}$$, which states that $$A$$ has type $$\mathbf{T}$$ outright. For example, it is true that for any type $$\mathbf{T}$$, $$\vdash \lambda x. x : \mathbf{T} \to \mathbf{T}$$.

What is the distinction between writing $$A : \mathbf{T}$$ and $$\vdash A : \mathbf{T}$$? The difference is that the former is just a piece of syntax which expresses that $$A$$ has the type $$\mathbf{T}$$, and the latter is an assertion which says that $$A$$ actually does have type $$\mathbf{T}$$.

Rules for type entailments
The following rules describe which type entailments $$\Gamma \vdash A : \mathbf{T}$$ are valid.
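Written out, with the first rule stated as an axiom and the other two in horizontal-bar notation:

$$ \Gamma \vdash x : \mathbf{T} \quad (x : \mathbf{T} \in \Gamma) \qquad \dfrac{\Gamma \vdash A : \mathbf{T} \to \mathbf{U} \qquad \Gamma \vdash B : \mathbf{T}}{\Gamma \vdash AB : \mathbf{U}} \qquad \dfrac{\Gamma, x : \mathbf{T} \vdash A : \mathbf{U}}{\Gamma \vdash \lambda x. A : \mathbf{T} \to \mathbf{U}} $$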

I will explain how to read these rules. The first rule says that if $$x : \mathbf{T}$$ is one of the declarations in $$\Gamma$$, then the entailment $$\Gamma \vdash x : \mathbf{T}$$ is valid. This is a very trivial point.

The horizontal bar notation used in the last two rules means that if what is above the bar is true, then what is below the bar is true. So, for example, the second rule says that if $$\Gamma \vdash A : \mathbf{T} \to \mathbf{U}$$, and $$\Gamma \vdash B : \mathbf{T}$$, then $$\Gamma \vdash AB : \mathbf{U}$$. This is the rule giving the type of a function application.

The third rule gives the type of a λ-abstraction. The notation $$\Gamma,x : \mathbf{T}$$ means the context consisting of all the declarations in $$\Gamma$$ plus the declaration $$x : \mathbf{T}$$. This rule says that if assuming $$\Gamma$$ and assuming that $$x$$ has type $$\mathbf{T}$$, it follows that $$A$$ has type $$\mathbf{U}$$, then it follows that $$\lambda x. A$$ has type $$\mathbf{T} \to \mathbf{U}$$.

The valid type entailments are exactly the type entailments that can be derived using these rules.

Example type derivation
Let us give an example of using these rules to derive a type for a function. We will show that the function $$A = \lambda x \lambda y. xy$$ has the type $$(\alpha \to \beta) \to \alpha \to \beta$$.


 * 1) $$x : \alpha \to \beta, y : \alpha \vdash x : \alpha \to \beta$$ (axiom)
 * 2) $$x : \alpha \to \beta, y : \alpha \vdash y : \alpha$$ (axiom)
 * 3) $$x : \alpha \to \beta, y : \alpha \vdash xy : \beta$$ (→-elimination, from 1 and 2)
 * 4) $$x : \alpha \to \beta \vdash \lambda y. xy : \alpha \to \beta$$ (→-introduction, from 3)
 * 5) $$\vdash \lambda x \lambda y. xy : (\alpha \to \beta) \to \alpha \to \beta$$ (→-introduction, from 4)

Note that this is just one of the many types that this function $$A$$ has. $$A$$ also has, for example, the type $$(\gamma \to \delta) \to \gamma \to \delta$$, and the type $$(\alpha \to \alpha) \to \alpha \to \alpha$$. All of these are distinct types. It also has, for example, the type $$((\alpha \to \beta) \to \gamma) \to (\alpha \to \beta) \to \gamma$$, and in general any type of the form $$(\mathbf{T} \to \mathbf{U}) \to \mathbf{T} \to \mathbf{U}$$. All of these facts can be proven using the same derivation we just gave.

Algorithms for type checking
Note that the rules given above are not an algorithm for computing whether an object has a given type (under a given set of assumptions). They are, instead, declarative rules which define what it is for an object to have a given type (under a given set of assumptions). That is, an expression $$A$$ has the type $$\mathbf{T}$$ under the assumptions $$\Gamma$$ just in case $$\Gamma \vdash A : \mathbf{T}$$ is derivable from the above rules. These rules do not, however, say anything about how to determine algorithmically whether an object has a given type.

It does follow immediately from the rules above that the problem of whether an object has a given type is semidecidable (i.e., computably enumerable). One can write a computer program which repeatedly applies all of the above rules to print out a list of all true statements of the form $$\Gamma \vdash A : \mathbf{T}$$. Such a program allows one (in principle) to verify that an object has a given type, provided that it actually does; but it does not always allow one to verify that an object does not have a given type. One can watch the output of such a program as long as one wants, but how does one know that a given output won't ever appear in it?

In fact, however, in the case of the simply typed λ-calculus, type checking is decidable. That is, there is an algorithm which determines whether or not a given statement $$\Gamma \vdash A : \mathbf{T}$$ is true. The problem of typability is also decidable. That is, there is an algorithm which determines whether a value expression can be assigned a type in a given context: whether, given $$\Gamma$$ and $$A$$, there is a type $$\mathbf{T}$$ such that $$\Gamma \vdash A : \mathbf{T}$$.
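To make the second claim concrete, here is a sketch of a unification-based decision procedure for typability of closed terms. This is not part of the calculus itself; the names (Ty, Tm, infer, typable) and the representation of substitutions as finite maps are our own illustrative choices.

```haskell
import qualified Data.Map as M

data Ty = TV Int | Arr Ty Ty deriving (Eq, Show)
data Tm = Var String | App Tm Tm | Lam String Tm

type Subst = M.Map Int Ty

-- Apply a substitution, chasing chains of bindings.
apply :: Subst -> Ty -> Ty
apply s (TV n)    = maybe (TV n) (apply s) (M.lookup n s)
apply s (Arr a b) = Arr (apply s a) (apply s b)

occurs :: Int -> Ty -> Bool
occurs n (TV m)    = n == m
occurs n (Arr a b) = occurs n a || occurs n b

-- Unification, threading an accumulated substitution.
unify :: Subst -> Ty -> Ty -> Maybe Subst
unify s a b = go (apply s a) (apply s b)
  where
    go (TV n) t
      | t == TV n  = Just s
      | occurs n t = Nothing          -- occurs check: no infinite types
      | otherwise  = Just (M.insert n t s)
    go t (TV n) = go (TV n) t
    go (Arr a1 b1) (Arr a2 b2) = do
      s' <- unify s a1 a2
      unify s' b1 b2

-- infer env s n tm: n is a supply of fresh type variables.
infer :: M.Map String Ty -> Subst -> Int -> Tm -> Maybe (Subst, Ty, Int)
infer env s n (Var x) = do
  t <- M.lookup x env
  return (s, t, n)
infer env s n (Lam x body) = do
  (s', t, n') <- infer (M.insert x (TV n) env) s (n + 1) body
  return (s', Arr (TV n) t, n')
infer env s n (App a b) = do
  (s1, ta, n1) <- infer env s n a
  (s2, tb, n2) <- infer env s1 n1 b
  s3 <- unify s2 ta (Arr tb (TV n2))   -- the function type must match
  return (s3, TV n2, n2 + 1)

-- A term is typable iff inference succeeds.
typable :: Tm -> Bool
typable t = case infer M.empty M.empty 0 t of
  Just _  -> True
  Nothing -> False
```

For example, typable succeeds on $$\lambda x. x$$ but fails on $$\lambda x. xx$$, whose self-application trips the occurs check.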

β-reduction and type preservation
The rules for β-reduction of value expressions in the implicitly simply typed λ-calculus are identical to the rules in the untyped λ-calculus. β-reduction has the desirable property of preserving type. That is, if $$\Gamma \vdash A : \mathbf{T}$$ and $$A$$ β-reduces to $$A'$$, then $$\Gamma \vdash A' : \mathbf{T}$$.

Interestingly, the converse is not true. That is, it is not the case that if $$A$$ β-reduces to $$A'$$, and $$\Gamma \vdash A' : \mathbf{T}$$, then $$\Gamma \vdash A : \mathbf{T}$$.

One type of counterexample to this arises from the fact that not every λ-expression can be assigned a type in the simply typed λ-calculus. For example, $$\lambda x. xx$$ cannot be assigned a type. Then, for example, we have

$$\vdash \lambda x. x : \alpha \to \alpha$$

but not

$$\vdash (\lambda x\lambda y. x)(\lambda x. x)(\lambda x. xx) : \alpha \to \alpha$$

even though the latter β-reduces to the former. The reason is that the latter expression cannot be assigned a type. There are also examples of expressions $$A$$ and $$A'$$ such that $$A$$ β-reduces to $$A'$$, and both expressions can be assigned a type, but they cannot be assigned the same type. See Barendregt (1992), Observation 3.1.12.

Turing incompleteness and normalization
The simply typed λ-calculus is not Turing complete. Not every λ-expression can be assigned a type in the simply typed λ-calculus, and the λ-expressions that can be assigned a type cannot represent all Turing computable functions.

The simply typed λ-calculus satisfies the strong normalization property. This means, in particular, that all functions in the simply typed λ-calculus halt. In fact, the simply typed λ-calculus has very little computational power; see this Stack Exchange answer.

At first blush, this makes it look like the simply typed λ-calculus is not a suitable basis for a functional language. However, it can be extended to be more powerful, e.g. by adding a primitive recursion operator; so the conclusion is not so immediate as it might appear. (Of course, the simply typed λ-calculus lacks polymorphism, and so it fails to be a suitable basis for a functional language for other reasons.)

Explicitly simply typed λ-calculus
So far we have been discussing the implicitly simply typed λ-calculus, where the arguments of λ-abstractions do not have type annotations. In the explicitly simply typed λ-calculus, the arguments of λ-abstractions do have type annotations. This gives rise to some fairly significant differences. For example, in the implicitly simply typed λ-calculus, any expression which can be assigned a type at all can be assigned more than one type. In the explicitly simply typed λ-calculus, if an expression can be assigned a type, then it can be assigned only one type.

Perhaps the most important difference between the implicitly and explicitly simply typed λ-calculi is that the explicitly simply typed λ-calculus can ultimately be developed into more sophisticated and capable type theories. For example, when one moves up to the second-order λ-calculus, type checking is undecidable in the implicitly typed version, but is decidable in the explicitly typed version.

Syntax for values
The syntax for values in the explicitly simply typed λ-calculus is the same as that for the untyped λ-calculus, except that λ-abstractions include type annotations: so a λ-abstraction is an expression of the form $$\lambda x : \mathbf{T}. A$$, where $$\mathbf{T}$$ is a type expression.

Type theory
Statements, declarations, and contexts for the explicitly simply typed λ-calculus are defined as in the implicitly simply typed λ-calculus. The valid type entailments are produced by the following rules, which are identical to the rules for the implicitly simply typed λ-calculus, except for the presence of a type annotation in the λ-abstraction of the third rule:
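In the same notation as before, with the annotation added in the third rule:

$$ \Gamma \vdash x : \mathbf{T} \quad (x : \mathbf{T} \in \Gamma) \qquad \dfrac{\Gamma \vdash A : \mathbf{T} \to \mathbf{U} \qquad \Gamma \vdash B : \mathbf{T}}{\Gamma \vdash AB : \mathbf{U}} \qquad \dfrac{\Gamma, x : \mathbf{T} \vdash A : \mathbf{U}}{\Gamma \vdash (\lambda x : \mathbf{T}.\ A) : \mathbf{T} \to \mathbf{U}} $$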

Although the two sets of rules are nearly identical notationally, there are some significant differences between the two type theories. In the implicitly simply typed λ-calculus, an expression can have many different types. In the explicitly simply typed λ-calculus, an expression always has at most one type.

That said, the systems are very closely related. Whenever $$\Gamma \vdash A : \mathbf{T}$$ is derivable in the explicitly typed system, the result of dropping the type annotations from λ-abstractions is derivable in the implicitly typed system. Similarly, if $$\Gamma \vdash A : \mathbf{T}$$ is derivable in the implicitly typed system, there is a way of adding type annotations to $$A$$ such that the result is derivable in the explicitly typed system.

Computational properties
The rules for β-reduction of value expressions in the explicitly simply typed λ-calculus are identical to the rules in the untyped λ-calculus, modulo the obvious notational variation. As with the implicitly simply typed λ-calculus, β-reduction preserves type, but β-expansion (i.e., reversed β-reduction) does not.

As with the implicitly simply typed λ-calculus, the problems of type checking and typability are decidable for the explicitly simply typed λ-calculus.
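For the explicitly typed system, a type checking algorithm can be read off directly from the rules: each rule becomes one clause of a recursive function. The following Haskell sketch illustrates this; the names Ty, Tm, Ctx, and typeOf are our own, and the treatment of contexts as association lists is a simplifying assumption.

```haskell
-- Types and terms of the explicitly simply typed λ-calculus.
data Ty = TVar String | Arr Ty Ty deriving (Eq, Show)

data Tm
  = Var String        -- x
  | App Tm Tm         -- A B
  | Lam String Ty Tm  -- λx : T. A
  deriving Show

type Ctx = [(String, Ty)]

-- typeOf ctx a returns Just t exactly when ctx entails a : t.
typeOf :: Ctx -> Tm -> Maybe Ty
typeOf ctx (Var x) = lookup x ctx              -- axiom: x : T must be declared
typeOf ctx (App a b) = case typeOf ctx a of    -- →-elimination
  Just (Arr t u) | typeOf ctx b == Just t -> Just u
  _                                       -> Nothing
typeOf ctx (Lam x t body) =                    -- →-introduction
  Arr t <$> typeOf ((x, t) : ctx) body
```

Since each syntactic form matches exactly one rule, the function also witnesses the uniqueness of types in the explicit system: typeOf returns at most one type for a given term and context.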

The explicitly simply typed λ-calculus is not Turing complete, and it satisfies the strong normalization property, so that all typable functions halt.

Algebraic data types
Algebraic data types are familiar from languages such as ML and Haskell. We assume the reader is familiar with algebraic data types as they exist in such a language, and give a formal treatment of the notion of algebraic data types in the explicitly simply typed λ-calculus.

In the simply typed λ-calculus, an algebraic data type definition consists of: (1) a name for the data type; and (2) declarations for its constructors. Here are three simple examples of algebraic data type definitions in Haskell:

data Pair a b = Pair a b

data Either a b = Left a | Right b

data List a = Empty | Cons a (List a)

The first of these three polymorphic type declarations declares the type of pairs of instances of a and instances of b. The second declares the type which is the disjoint union of a and b. The third declares the type of lists of instances of a. Recall that a and b are variables standing for arbitrary types.

In type theory, the declarations of algebraic data types look a little different, being more similar to the generalized algebraic data type (GADT) declarations in Haskell. In Haskell, the above types would be declared using GADT syntax as follows:

data Pair a b where
  Pair :: a -> b -> Pair a b

data Either a b where
  Left :: a -> Either a b
  Right :: b -> Either a b

data List a where
  Empty :: List a
  Cons :: a -> List a -> List a

In GADT syntax, one declares a constructor by specifying its type. The result type of the constructor must be an instance of the ADT being declared. Sometimes other restrictions are also imposed.

The constructors of ADTs tell us how to turn component data of an ADT into an ADT. How do you reverse the process, to recover the component data from a given ADT? In functional programming languages, this is usually done with pattern matching. Here we will start by describing a simpler mechanism for recovering the component data of an ADT, known as "elimination constants." Elimination constants and pattern matching are, in general, interdefinable notions; pattern matching can be described in terms of elimination constants, or vice versa. We take elimination constants as our primitive notion, and define pattern matching in terms of elimination constants, because elimination constants are mathematically simpler.

An elimination constant is a function which can be defined from the definition of a given ADT. It enables one to "destructure" instances of the ADT, serving a function similar to that of pattern matching. We will start by giving the types, and definitions in terms of pattern matching, of the Haskell elimination constants for each of the above data types, to give a feel for the concept:

elimPair :: (a -> b -> p) -> Pair a b -> p
elimPair f (Pair x y) = f x y

elimEither :: (a -> p) -> (b -> p) -> Either a b -> p
elimEither f g (Left x) = f x
elimEither f g (Right y) = g y

elimList :: p -> (a -> List a -> p) -> List a -> p
elimList empty cons Empty = empty
elimList empty cons (Cons x xs) = cons x xs

In each case, the elimination constant gives a way to turn an instance of the type in question into an arbitrary type p. It does this by taking functions, which we will call "destructor functions," which say what to do with each constructor of the type. Each destructor function takes as arguments the arguments of the constructor it corresponds to.
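To see the destructor functions in action, here is a small self-contained sketch; the definitions are repeated so it compiles on its own, and Either is primed to avoid clashing with the Prelude's Either.

```haskell
data Pair a b = Pair a b
data Either' a b = Left' a | Right' b

elimPair :: (a -> b -> p) -> Pair a b -> p
elimPair f (Pair x y) = f x y

elimEither :: (a -> p) -> (b -> p) -> Either' a b -> p
elimEither f g (Left' x)  = f x   -- the first destructor handles Left'
elimEither f g (Right' y) = g y   -- the second handles Right'
```

For example, elimPair (+) (Pair 2 3) evaluates to 5, and elimEither negate (* 2) (Right' 7) evaluates to 14.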

The scheme we have described here for elimination constants is one of the simplest versions of the scheme. There can be various other complications. We will discuss one of them right now.

In some λ-calculi, including the simply typed λ-calculus, general recursion is not possible. In these systems, it would not be possible to do recursion over algebraic data types unless special provision were made for this in the definition of the elimination constants. In the case of List, the one recursive data type we discussed above, to allow for recursion the elimination constant becomes:

elimList :: p -> (a -> List a -> p -> p) -> List a -> p
elimList empty cons Empty = empty
elimList empty cons (Cons x xs) = cons x xs (elimList empty cons xs)
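To see what the extra argument buys us, here is a self-contained sketch defining two recursive functions with no explicit recursion outside the elimination constant itself; len and total are our own example names.

```haskell
data List a = Empty | Cons a (List a)

elimList :: p -> (a -> List a -> p -> p) -> List a -> p
elimList empty cons Empty = empty
elimList empty cons (Cons x xs) = cons x xs (elimList empty cons xs)

-- The third destructor argument is the result of the recursive call.
len :: List a -> Int
len = elimList 0 (\_ _ rest -> 1 + rest)

total :: List Int -> Int
total = elimList 0 (\x _ rest -> x + rest)
```

For example, len (Cons 1 (Cons 2 (Cons 3 Empty))) evaluates to 3, and total of the same list evaluates to 6.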

The destructor for Cons, the one recursive constructor, gains an additional argument of type p, which gets set to the result of recursively applying the elimination constant, with the same function arguments, to the recursive argument of the constructor. If there were multiple recursive arguments to the constructor, there would be one additional argument to the destructor for each recursive argument. For example, consider the data type:

data Tree a where
  Leaf :: a -> Tree a
  Branch :: a -> Tree a -> Tree a -> Tree a

This data type represents binary trees holding data of type a. Its elimination constant is as follows:

elimTree :: (a -> p) -> (a -> Tree a -> p -> Tree a -> p -> p) -> Tree a -> p
elimTree f g (Leaf x) = f x
elimTree f g (Branch x t1 t2) = g x t1 (elimTree f g t1) t2 (elimTree f g t2)
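As a sanity check, here is a self-contained sketch using this elimination constant to compute the depth of a tree; depth is our own example name, and we use ordinary data syntax rather than GADT syntax, which is equivalent here.

```haskell
data Tree a = Leaf a | Branch a (Tree a) (Tree a)

elimTree :: (a -> p) -> (a -> Tree a -> p -> Tree a -> p -> p) -> Tree a -> p
elimTree f g (Leaf x) = f x
elimTree f g (Branch x t1 t2) = g x t1 (elimTree f g t1) t2 (elimTree f g t2)

-- The two p-typed arguments are the recursive results for the subtrees.
depth :: Tree a -> Int
depth = elimTree (const 1) (\_ _ d1 _ d2 -> 1 + max d1 d2)
```

For example, depth of a single leaf is 1, and each Branch adds one to the deeper of its two subtrees.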

Basic system
Here we present a basic λ-calculus augmenting the explicitly simply typed λ-calculus with algebraic data types.