How Type Variance fits into Category Theory
If you’re not a mathematician, you’ve probably never heard of Category Theory. Even if you are one, it is such a niche field of mathematics that you may not have studied it. Category Theory tries to unify all algebraic structures in order to better understand their differences. On the other hand, if you’re not a developer, you’ve probably never heard of type variance. And even if you are, type variance is a rather advanced topic that you may never have run into.
Surprisingly enough, Category Theory has many “applications” (if we can say that) in programming paradigms. One of these applications is type variance, which, as we are going to see, fits perfectly within the theory.
We will divide this blog post into three sections. We will first give a (gentle) introduction to category theory, motivating its purpose. Then, we will review what type variance is and why it is useful. Finally, we will show how type variance fits into category theory.
I’d better warn you: if you’re not ready to read the following only for the beauty of it, you should probably spend your time elsewhere…
Category theory
Before jumping into what a “category” is, we are going to review some examples that lead to its definition.
Sets
The first example is sets (in the mathematical sense, i.e., something that contains “elements”) and functions. When you have two sets A and B, you can have functions between these two sets, from A to B, written f: A -> B. A function is simply a “machine” that takes each element of A (the domain) and maps it to an element of B (the co-domain). For example, A could be the set of all objects in a store, B the set of (positive) real numbers, and f would associate to each object in A its price.
Now, if you have three sets A, B and C, and two functions f: A -> B and g: B -> C, you can create another function, written g ◦ f: A -> C. This new function is defined as follows: an element a of A is first mapped to an element of B via f, giving f(a); then f(a) is mapped to an element of C via g, giving g(f(a)). This new function is called the composition of f and g.
In order to have a category, as we will define it later, two things are missing: “associativity of ◦” and an “identity” function. The identity function is simple. For every set A, there is a special function, called the identity and written id: A -> A, that maps every element of A onto itself. Associativity is a little bit more involved. It says that if you now have four sets A, B, C and D, and three functions f: A -> B, g: B -> C, h: C -> D, then it doesn’t matter whether you first compose f and g, and then h, or first g and h, and then f. Mathematically, this is written as
(h ◦ g) ◦ f = h ◦ (g ◦ f)
Said otherwise, it means that it does not matter where you put the parentheses, so you can simply omit them.
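To make this concrete, here is a minimal Scala sketch of composition, identity, and associativity, where the sets become types (the functions f, g and h are made up for the illustration):

```scala
val f: Int => String = _.toString        // f: A -> B
val g: String => Int = _.length          // g: B -> C
val h: Int => Boolean = _ % 2 == 0       // h: C -> D

// Composition: g ◦ f first applies f, then g.
val gAfterF: Int => Int = g compose f    // g ◦ f: A -> C

// The identity function maps every element onto itself.
val idInt: Int => Int = identity

// Associativity: both sides denote the same function A -> D.
assert(((h compose g) compose f)(42) == (h compose (g compose f))(42))
assert((f compose idInt)(42) == f(42))   // f ◦ id = f
```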
Groups
Sets are not that exciting, because there is nearly no structure in them. Between two sets, you have plenty of functions, because “everything” is basically allowed. Things quickly become much more interesting when we add a little bit of constraints (a.k.a. structure). A group is a set with additional properties. It has an internal binary operator °, such that for two elements a and b, you can create another element a ° b. It also has a special element e, called the neutral element, and every element has an “inverse”.
Definition (group)
A pair (G, °), where G is a set and ° : G x G -> G a binary operator, is said to be a “group” if all of the following conditions hold:
— there exists an element e in G such that for every g in G, e ° g = g = g ° e (e is called the neutral element of the group)
— for every a, b and c in G, we have (a ° b) ° c = a ° (b ° c) (associativity of °)
— for every element g in G, there exists an element h in G such that g ° h = e = h ° g (the inverse)
There are plenty of examples of groups. A first example is the set of integers with ° being the addition +. The integers are the natural numbers 0, 1, 2, … together with their negative counterparts -1, -2, … The neutral element is 0, and the inverse of an integer x is -x. And you probably know that + is associative, since you learned how to sum numbers.
Another interesting example is the integers modulo 2. That is, 0 and 1, with ° being + again, and 1 + 1 = 0. There are only two elements in the set, and you can check that it is indeed a group. As a matter of fact, “2” does not matter, and the integers modulo n form a group for any natural number n.
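As a small illustration, here is one way to encode this definition in Scala (the trait and the names combine, neutral and inverse are my own for this sketch, not a standard library API):

```scala
// A group as a Scala type class: a set G with °, a neutral element,
// and an inverse for each element.
trait Group[G] {
  def combine(a: G, b: G): G  // the binary operator °
  def neutral: G              // the element e
  def inverse(a: G): G        // the inverse of a
}

// The integers modulo 2, encoded on the values 0 and 1.
val zMod2: Group[Int] = new Group[Int] {
  def combine(a: Int, b: Int): Int = (a + b) % 2
  def neutral: Int = 0
  def inverse(a: Int): Int = a  // 0 + 0 = 0 and 1 + 1 = 0: each element is its own inverse
}
```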
Since groups are also sets, you can have functions between two groups. However, if you consider all functions f: G -> H between two groups, then you lose all the structure of G when you map elements via f, and this is not really interesting. Instead, we’ll consider only a subset of all possible functions between two groups. These functions are called “morphisms” and must satisfy two constraints.
Definition (morphism of groups)
A function f : G -> H between two groups G and H is said to be a morphism if the two following conditions hold:
— if a is the neutral element of G, and b the neutral element of H, then f(a) = b
— for all x, y in G, we have f(x ° y) = f(x) ° f(y)
The first condition says that morphisms map the neutral element of the domain to the neutral element of the co-domain. The second condition asks f to be “compatible” with the binary operators of the two groups. (The interested reader can actually show that for groups, the first condition is a consequence of the second one.)
If we look at the two examples of groups that we saw above, the integers Z and the integers modulo 2, written Z/2Z, there is a natural morphism m: Z -> Z/2Z, given precisely by taking the modulo 2. Indeed, 0 % 2 = 0 and, for all integers x and y, (x+y) % 2 = x % 2 + y % 2, where the + on the right-hand side is the addition of Z/2Z. Interestingly, this morphism is the only one from Z to Z/2Z (you could try to prove it), except for the “trivial” morphism that sends everything to 0 — this one always exists between two groups. In comparison, there are “as many” functions between Z and Z/2Z as there are real numbers!
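For the sceptics, here is the morphism spot-checked in Scala (using Math.floorMod instead of % so that negative integers also land on 0 or 1):

```scala
// The natural morphism m: Z -> Z/2Z, taking the modulo 2.
def m(x: Int): Int = Math.floorMod(x, 2)

// The two morphism conditions, spot-checked:
assert(m(0) == 0)                        // neutral element goes to neutral element
assert(m(3 + -5) == (m(3) + m(-5)) % 2)  // m(x + y) = m(x) ° m(y), with ° the addition of Z/2Z
```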
Monoids
A very similar example can be found in monoids. Monoids are like groups, except that elements are not required to have an inverse.
For example, the set of natural numbers with the addition is a monoid. Strings with concatenation also form a monoid. Indeed, there is a neutral element (the empty string) and it is clear that concatenation is associative (it doesn’t matter where you put the parentheses). But if I give you a non-empty string, for example “hello”, there is no string s such that ”hello” + s is the empty string.
Similar to groups, monoids have a notion of morphisms (of monoids). The requirements for a morphism of monoids are exactly the same as for a morphism of groups. (An interesting difference, though, is that the first condition — neutral elements are sent to neutral elements — is no longer a consequence of the second one.)
A nice example of a morphism of monoids, from the monoid of strings to the monoid of natural numbers, is the function associating to each string its length (you should try to prove that it is indeed a morphism of monoids!).
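Rather than a full proof, here is the claim spot-checked in Scala:

```scala
// String length as a morphism from the monoid (String, concatenation, "")
// to the monoid (natural numbers, +, 0).
def length(s: String): Int = s.length

assert(length("") == 0)  // neutral element goes to neutral element
assert(length("hello" + "world") == length("hello") + length("world"))
```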
Others
There are plenty of other examples of mathematical structures that have similarities with groups and monoids. Among them are, for example, vector spaces, fields, algebras, topological spaces… Don’t worry if you don’t know any of these terms! This is just a short list to demonstrate that categories really didn’t come out of nowhere.
Category
With these three examples, we are ready for the definition of a category. The important common point in these examples is that there is stuff (sets, groups, monoids) that may be linked together (respectively by functions, morphisms of groups, and morphisms of monoids). Moreover, these links can be composed together. These are basically all the ideas encapsulated in the definition.
Definition (category)
A category C is composed of
— A class O(C) of objects
— A class A(C) of arrows f: A -> B where A and B are objects
such that the two conditions hold:
— if there is an arrow f: A -> B and an arrow g: B -> C, then there exists an arrow g.f: A -> C
— for each object A, there exists an identity arrow id: A -> A such that, for all arrows f: A -> B, we have f.id = f and for all arrows g: C -> A, we have id.g = g.
Furthermore, arrow composition is associative, i.e., f.(g.h) = (f.g).h.
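Squinting a little, you can transcribe this definition into Scala (the trait Category and its method names are my own naming for this sketch, although similar abstractions exist in libraries such as Cats):

```scala
// A category: objects are the type parameters, arrows are values of
// type Arrow[A, B], with identity arrows and associative composition.
trait Category[Arrow[_, _]] {
  def id[A]: Arrow[A, A]
  def compose[A, B, C](g: Arrow[B, C], f: Arrow[A, B]): Arrow[A, C]
}

// The category of sets and functions from the first example:
object FunctionCategory extends Category[Function1] {
  def id[A]: A => A = a => a
  def compose[A, B, C](g: B => C, f: A => B): A => C = a => g(f(a))
}
```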
Monoids, groups, and all the above-mentioned examples are indeed categories. An important aspect of the philosophy of category theory is that we don’t care about what the objects are. Rather, we focus on studying the arrows that exist and on linking them together. Remember the example of morphisms vs functions from Z to Z/2Z: it exposes a clear difference between sets and groups, while it doesn’t expose any difference between groups and monoids.
Note that the word “class” has nothing to do with classes in programming languages. A class is a little bit like a set, except for very technical differences that are clearly out of the scope of this blog post. You can simply remember that allowing the set of all sets quickly leads to disastrous logical consequences…
Is there a category of categories?
It is when you ask this kind of question that you start to understand why some mathematicians end up mad. Yet, the answer will prove extremely relevant in our case. A “category of categories” would be a category where the objects are categories. But what are the arrows? They’re called “functors”, and here is what they do.
Definition (functor)
A functor F from a category C to a category D is such that
— each object O in C is mapped to an object F(O) in D
— each arrow f in C is mapped to an arrow F(f) in D, in such a way that one of the following two conditions holds:
— — if f: A -> B, then F(f): F(A) -> F(B) and for all arrows f: A -> B and g: B -> C, we have F(g.f) = F(g).F(f) (in this case, we say that the functor is “covariant” — it preserves the direction of the arrows)
— — if f: A -> B, then F(f): F(B) -> F(A) and for all arrows f: A -> B and g: B -> C, we have F(g.f) = F(f).F(g) (in this case, we say that the functor is “contravariant” — it reverses the direction of the arrows)
Moreover, in each of these cases, for every object A, the identity of A is mapped to the identity of F(A).
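In Scala terms, the two flavours look roughly like this (the names map and contramap follow the convention of libraries like Cats, but the traits here are just an illustration):

```scala
// A covariant functor preserves the direction of the arrows…
trait CovariantFunctor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// …while a contravariant functor reverses it.
trait ContravariantFunctor[F[_]] {
  def contramap[A, B](fa: F[A])(f: B => A): F[B]
}
```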
With functors, it is easy to see that there is indeed a category of categories: you can compose functors, and for each category you can define the identity functor, which sends each object and each arrow onto itself.
Another useful property to notice is that if you compose two covariant functors, or two contravariant functors, then you get a covariant functor. However, if you compose a covariant functor with a contravariant one, then you get a contravariant functor. (I’ll leave the proof as an easy exercise for you.)
Objects are not containers!
In all the common examples of categories, the objects are sets, or at least “containers” of stuff. This is absolutely not required. In fact, it is against the philosophy of category theory to consider objects as sets. An example of a category where the objects are not “containers” is the category whose objects are the natural numbers, where you define that there exists an arrow f: n -> m if, and only if, m is divisible by n. That is, m is 0 modulo n.
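A quick sketch of this category, treating an arrow n -> m as nothing more than the fact that m % n == 0 (restricting ourselves to positive naturals so that % stays well defined):

```scala
// An "arrow" n -> m exists iff n divides m; it carries no elements at all.
def arrowExists(n: Int, m: Int): Boolean = m % n == 0

assert(arrowExists(3, 3))   // identity: every n divides itself
assert(arrowExists(3, 6))   // f: 3 -> 6
assert(arrowExists(6, 12))  // g: 6 -> 12
assert(arrowExists(3, 12))  // g.f: 3 -> 12: composition is transitivity of divisibility
```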
Is there a category of functors?
Well yes, but that is going a little bit too far for us today…
We’ve covered everything that we need from category theory. Let’s move on to type variance.
Type Variance
Type variance describes how inheritance between types carries over to type parameters. Depending on how well you understand this sentence, you can freely skip this section. If not, we are going to decompose it together.
First, there is the “type inheritance” part. When you have two types A and B, there are three possibilities: A inherits from B, B inherits from A, or they are not related at all. In general, a type A will inherit from B if an A is also a B. The example I’m going to use is horses and animals. The type “Horse” inherits from the type “Animal”, because obviously a horse is an animal.
Then, there are “type parameters”. A type can depend on another one (or on several others). For example, you can have a type “List” that would be a “List of T”, where T is another type. That means that for each type T, you can create another type “List of T”. That way, you have “List of Horses” and “List of Animals”.
The last thing is: if a Horse is an Animal, is a “List of Horses” a “List of Animals”? Again, there are three answers to that:
- yes, a list of horses is a list of animals
- no, a list of horses is completely different from a list of animals
- no, actually a list of animals is a list of horses.
From a logical point of view, the first one probably makes more sense. Something that contains horses contains animals. And so, a list that contains horses is a list that contains animals, and therefore the inheritance relation is “preserved”. In that case, we say that the type parameter of the List is “covariant”.
The second possibility could also be a good choice, especially if your lists are mutable. Indeed, if, when you have a list of a type T, you can add a new element of type T to it, then you will quickly face problems like the following. If you have a list of horses, then you also have a list of animals. But to a list of animals, you can add a cat. The problem is that your list of horses is no longer a list of horses! Hence, you changed the type of your list of horses without explicitly requiring it. When you decide to remove the inheritance relationship of a type parameter, we say that your type parameter is “invariant”.
The last one doesn’t make sense whatsoever in this context, so let’s move to another context. Imagine you have a type “Painter” that also has a type parameter, so that a “Painter of T” is a painter that can paint you any object of type T. Let’s ask the question again: should a “Painter of Horses” be a “Painter of Animals”? To answer that question, it is easier to see inheritance not as “A inherits from B if an A can be viewed as a B” (which we intuitively used for lists) but rather as “A inherits from B if an A can do everything a B can, and perhaps more”. Now the answer to the question becomes clear. A painter of animals can paint any animal that you like, whereas a painter of horses can only paint, well, horses. In that regard, it is clear that if a horse is an animal, then a painter of animals is a painter of horses. We say that the type parameter is “contravariant”.
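In Scala, where variance is declared at the definition site, the horse and painter stories look like this (Animal, Horse and Painter are of course made up for the example):

```scala
class Animal
class Horse extends Animal

// Covariance: a List[Horse] is a List[Animal] (Scala's List is declared List[+A]).
val horses: List[Horse] = List(new Horse)
val animals: List[Animal] = horses

// Contravariance: a Painter[Animal] is a Painter[Horse].
trait Painter[-T] { def paint(subject: T): Unit }
val animalPainter: Painter[Animal] = a => println(s"Painting $a")
val horsePainter: Painter[Horse] = animalPainter  // an animal painter can paint horses
```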
The easiest everyday example of contravariant type parameters is function arguments. A function from T to U can handle any T. However, if you have a type V that inherits from T, a function from V to U can only handle V’s. From there, it’s clear that function argument types must be contravariant. Function return types, however, need to be covariant.
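This is exactly how Scala declares its function type, trait Function1[-T1, +R]. Reusing Animal and Horse from above:

```scala
val describeAnimal: Animal => String = a => a.toString

// Arguments are contravariant: a function on animals also works on horses…
val describeHorse: Horse => String = describeAnimal

// …and results are covariant: a function producing horses produces animals.
val breed: String => Horse = _ => new Horse
val breedAnimal: String => Animal = breed
```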
How type variance fits into category theory
Finally, we arrive at the core of this post, namely the link between type variance and category theory. If you’ve read it through, you can guess that the link lies in the “covariant” and “contravariant” terminologies. We’re going to define a category, and functors from that category to itself, to make the link with type variance.
The category is quite simple. The objects of this category will be the types. Between two types U and T, we define that there is an arrow from U to T, denoted U <: T, if, and only if, U inherits from T.
The conditions of a category are satisfied. Indeed, every type inherits from itself (so we have the identity arrow), and if U <: T and V <: U, then V <: T (so arrows compose). Associativity is clear, because there is at most one arrow between two types.
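Scala can even witness these arrows at compile time, via the <:< evidence type from the standard library:

```scala
class A
class B extends A
class C extends B

implicitly[C <:< B]  // an arrow C -> B
implicitly[B <:< A]  // an arrow B -> A
implicitly[C <:< A]  // their composition C -> A
implicitly[A <:< A]  // the identity arrow on A
```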
Now, the functors. For each type F that has a type parameter (let us write it F[T]), we will have
- a covariant functor if T is covariant as a type parameter (we will write F[+T])
- a contravariant functor if T is contravariant as a type parameter (we will write F[-T])
- no functor (sadly) if T is invariant (we simply write F[T]).
Remember that, intuitively, covariant functors preserve the direction of the arrows, while contravariant functors reverse them. This is precisely what happens to the inheritance relation <: with covariant and contravariant type parameters, respectively. The reason why invariant type parameters do not create a functor is that invariant type parameters do not map the arrows, they only map the types!
From above, we see that List is a covariant functor (that is, we write List[+T]), whereas function arguments are contravariant, so Function[T, U], which is the type “function from T to U”, will be written Function[-T, +U].
Finally, if you ever wondered whether, in a type like Function[List[T], U], T is in a covariant or a contravariant position, the answer is now easy! Remember, the composition of a contravariant and a covariant functor gives you a contravariant functor, hence T is in a contravariant position. This kind of question is relevant when you build types with covariant and contravariant type parameters. Indeed, in the case of List[+T], you probably want a “method” for lists that takes a T as argument and returns whether this element is contained in the list. But now you face a problem, because T, in that “contains” method, would be in a contravariant position! Because of that, your method “contains” should take another type parameter, say U, with the condition that T <: U. On the other hand, if you would like a method “find” that returns an element satisfying some predicate, then you don’t need the above trick, because your type T would appear in the following positions:
Function[Function[T, Boolean], T]
and we know, by composition of functors, that T is indeed in a covariant position in both of these places.
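Here is a sketch of both methods on a hand-rolled covariant list (the class MyList is made up for the illustration, but Scala’s real List.contains uses the very same lower-bound trick):

```scala
class MyList[+T](elems: Seq[T]) {
  // T <: U: the argument has type U, a supertype of T, so +T stays legal.
  def contains[U >: T](elem: U): Boolean = elems.exists(_ == elem)

  // T appears only in Function[Function[T, Boolean], T]: covariant positions.
  def find(p: T => Boolean): Option[T] = elems.find(p)
}

// A MyList[Horse] can be asked whether it contains some Animal:
val stable = new MyList[Horse](Seq(new Horse))
stable.contains(new Animal)  // compiles: U is inferred as Animal
```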
Conclusion
In this post, we discovered the definition of categories alongside several examples, we discussed a little bit the topic of type variance, and we linked the two together in a nice way.
What do we gain from all this? Well, apart from the beauty of abstracting a rather complicated concept (type variance) into an even more complicated one (category theory), not much. However, it opens the door to thinking about what other kinds of language constructs can enter this world. And believe me, there are some! A teasing example, if you’ve ever come across them, is monads…