I've been doing C# web development for about 15 years. Although I did not really understand much of FP, I was an eager early adopter of all functional-style features C# added over the years (LINQ, anonymous functions/lambdas, etc). Every other year, I'd look up the latest Haskell books or tutorials and give it a go, but it never really worked out for me due to various reasons (no real purpose, no sense of achieving anything, not understanding the benefits of what I'm learning, etc). Ultimately, I was never able to get past toy functions in the interpreter. But I was very intrigued by Haskell's elegance.

One of the best decisions I made was to look up FP-related communities. I stumbled upon the awesome FP Slack community, where I found a bunch of passionate, helpful and very knowledgeable people. They pointed me in the right direction countless times and helped me dodge frustrating experiences. What follows is a list of the pitfalls I ran into while learning Haskell, along with what I wish I had known at the time.

In my experience, the best way to install Haskell is using `stack`. It's a cross-platform tool for developing Haskell projects, and it helps with managing dependencies as well as GHC (the Glasgow Haskell Compiler) versions for each of your projects. Please check their documentation page for how to install it. If you already have it, make sure you upgrade it via `stack upgrade`.

In order to start a new project, you can run:

```
$ stack new project
$ cd project
$ stack build
$ stack exec -- project-exe
```

You should now be able to edit either `app/Main.hs` or `src/Lib.hs` to change your program.

I regret not listening sooner to people's advice about this, and spending a lot of time looking for the best Haskell IDE / development tools. In the end, everybody else was right and, unfortunately, the current state of the available tools is *not good enough*. The setup most people seem to use, and the one I also adopted, is a favorite text editor with syntax highlighting, plus a few separate console windows running `ghcid` and `ghci`.

`ghcid` is a lightning-fast way to get feedback about your code as you work on it. Once you are in your project's root, you can run `stack build ghcid` in order to build the appropriate `ghcid` version for the current project's GHC. Once you do that, you can run `ghcid` using `stack exec -- ghcid -c "stack ghci project"`. As soon as you edit (and save) any of the Haskell files in the project, it will recompile and let you know if anything is wrong.

There are a few interesting tricks you can do with `ghcid`, out of which I'd like to point out the two I find most useful. The first is really a GHC feature, which shines when used with `ghcid`. Whenever you're not sure what to do in a function, you can throw in an underscore and save the file. The compiler will figure out the type the underscore needs to be replaced with, and will suggest functions in scope that match it. This feature is called typed holes, and if you want to read more about how it works, I suggest Christoph Hegemann's excellent thesis, Implementing type directed search for PureScript.
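As a small sketch (the `doubleAll` function here is made up for illustration), leaving a hole in the body deliberately fails to compile, and GHC reports the hole's type along with candidate fits:

```
doubleAll :: [Int] -> [Int]
doubleAll xs = map (* 2) _
-- ghcid shows something like:
--   • Found hole: _ :: [Int]
--   • Relevant bindings include xs :: [Int]
--     Valid hole fits include xs
```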

The other trick with `ghcid` is, whenever you wish you had *hover-to-see-type*, you can add parentheses around that expression and add a type annotation, e.g. `(whatTypeIsThis :: ())`. This basically asserts that `whatTypeIsThis` has type `()`, or `Unit`. When you save, you will get an error saying that `()` cannot be unified with the *actual type of the expression* (unless, of course, the type actually is `()`).
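For example (with a made-up `average` function), annotating a subexpression with `()` makes the compiler reveal its real type in the error message:

```
average :: [Double] -> Double
average xs = (sum xs :: ()) / fromIntegral (length xs)
-- error: Couldn't match expected type ‘()’ with actual type ‘Double’
-- so `sum xs` has type Double.
```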

The interactive Haskell shell is great when you want to try things out, or do a quick test of a function you wrote. In order to launch it, you need to run `stack ghci project`. You can then type `:l Lib` to load the `Lib` module. Once you do that, you can execute any function defined in `Lib`. If you change the code in your source files, you'll have to reload using `:r`.
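A session might look something like this (`someFunc` is the placeholder function the default stack template puts in `src/Lib.hs`):

```
$ stack ghci project
ghci> :l Lib
ghci> someFunc
someFunc
ghci> :r
```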

Whenever I need to look something up, my go-to place is Hoogle. You can search for functions or types by name and by signature. This feature was very useful for me as a beginner, since once you get the hang of types, you'll sort of know what signature of a function you need without knowing its name. As you look things up, I encourage you to also read the source code of the functions you find.

When looking for packages, start with Stackage, mostly because it will only show packages that are compatible with a certain GHC version and with each other. If you try to mix and match by yourself through Hackage, you might end up with frustrating problems.

Once you find a package you'd like to add, you can edit the `package.yaml` file and add the package by name under `dependencies`. You'll then need to rebuild the project via `stack build`.
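For example, adding the `text` package (just an illustrative choice) would mean the relevant part of `package.yaml` looks something like:

```
dependencies:
- base >= 4.7 && < 5
- text
```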

The most recommended book on the FP Slack is Haskell Programming from First Principles, and for good reason: it was absolutely key in my learning process. It's filled with exercises guiding you through the Haskell language, as well as excellent explanations and tips. There isn't much more I can say other than: if you want to learn Haskell, this is where you should start, and if you do, please work through the exercises.

There are a lot of awesome blogs and resources for Haskell out there; I particularly want to mention Matt Parsons's blog and Stephen Diehl's What I Wish I Knew When Learning Haskell as excellent resources.

My personal interest in CS led me to reading several books on topics such as Type Theory or Category Theory. They are in no way required or necessary in order to learn Haskell, but they helped me and I think they are interesting:

- Bartosz Milewski's blogs / videos
- David Spivak's videos and books (one and two)
- TAPL
- Type Theory and Formal Proof
- Software Foundations

Soon after I started reading the Haskell Book, I began understanding the benefits of FP. During this time, we were having some issues at work with the quality of our TypeScript code base. The code was brittle and we were often afraid to change it. I proposed we spend some time learning FP and try out PureScript (which is very similar to Haskell, but compiles to JavaScript).

For the next few months, I went through the Haskell Book and presented it bit by bit to my colleagues, and we did exercises together. This helped me immensely. Just preparing the chapters in order to present them to other people does wonders for your understanding; and actually presenting, and trying to answer their questions, was important for me.

Once we went through most of the book, we turned to PureScript. This proved to be a very good choice for me: PureScript is a slightly more modern language, and already being comfortable with JavaScript helped a lot. I was able to build useful software, creating familiar apps in an unfamiliar way.

Shortly after, we started prototyping applications in PureScript and integrating them with our codebase. We were comfortable enough with the language to convince our management and product owners to let us create a PoC using PureScript, and they agreed. The results were impressive: we were able to develop the application about as fast as we would in TypeScript, with fewer bugs and less back and forth with QA. The next app was even faster, and the benefits clearer.

I was not sure I was ready to switch to a full-time Haskell job when I found out that Runtime Verification was hiring. In order to apply, I completed the K Challenge. This was ideal for me since I knew I could take as much time as I wanted for it. I ended up not needing much time to get a decent implementation done, but the peace of mind that I could work at my own pace mattered a lot to me.

Moreover, I was personally interested in Formal Verification, and the RV approach is very appealing to me. I'm really happy to be part of the team working on the Kore Language.

Because this is about Haskell, I've left out a lot of details about PureScript. I believe PureScript, as well as its incredibly friendly community, were very important in my journey. They helped me a lot and I'll try my best to give back to this awesome community through code contributions and spreading the word.

I became a strong believer in FP, in strong typing, and in compilers guiding me through programming. I strongly believe that any programmer can benefit from learning Haskell. Even though I am still relatively new to this journey, I consider myself a better programmer than I was when I started, and I'm looking forward to learning more about Haskell, Type Theory, Category Theory, and Formal Verification.

Without trying to sound too stuck up in my newly discovered FP ivory tower, I'd like to point out that we, as programmers, are extremely fortunate to be paid very well relative to the standards we are held to, most of the time. Chris Allen, one of the authors of the Haskell Book, talks about this at LambdaConf; my takeaway is that there's no organization that takes away our programmer's badge if we write terrible code, make bad decisions, or fail to stay in touch with technology or practices. But we definitely should challenge ourselves to continuously learn, read other people's code, contribute and, perhaps most importantly, work on projects outside of our comfort zone.

`Cont` is, almost exactly, Peirce's law.

This post assumes you are familiar with:

- the Curry-Howard correspondence,
- classical and intuitionistic logic (for example, see it explained using Coq in Software Foundations), and
- one of Haskell, Agda, Idris or Coq.

Haskell and PureScript define `MonadCont`, which represents monads that support the *call-with-current-continuation* (`callCC`) operation:

```
class Monad m => MonadCont m where
  callCC :: ((a -> m b) -> m a) -> m a
```

`callCC` generally calls the function it receives, passing it the current continuation (the `a -> m b`). This acts like an `abort` method, or an early exit.
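Here's a small sketch of the early-exit behavior, using `Cont` from `Control.Monad.Cont` (mtl); `safeDiv` is my own example, not from the original post:

```
import Control.Monad (when)
import Control.Monad.Cont (callCC, evalCont)

-- Division with an early exit: calling `exit` aborts the rest of the block,
-- returning 0 immediately when the divisor is zero.
safeDiv :: Int -> Int -> Int
safeDiv x y = evalCont $ callCC $ \exit -> do
  when (y == 0) (exit 0)
  pure (x `div` y)
```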

The interesting part is that the type of `callCC` looks very similar to *Peirce's law*:

$ ((P \to Q) \to P) \to P $

If we replace `P` with `a` (or `m a`) and `Q` with `m b`, we get the exact same thing. Since we are dealing with monads, we need to use Kleisli arrows, so all implications from logic must be lifted as such (so `P -> Q` becomes `a -> m b`).

In order to keep things clean, I decided to wrap each equivalent law in its own newtype and write an instance of `Iso`

(which translates to iff) between each of the laws and the *law of excluded middle*.

```
{-# LANGUAGE InstanceSigs #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}

module Logic where

import Control.Applicative (liftA2)
import Control.Monad ((<=<))
import Data.Void (Void, absurd)

class Iso a b where
  to :: a -> b
  from :: b -> a
```

This is just a neat way of having to prove both implications of an iff, packed as `to` and `from`. Moving on, we can declare the following types.

Starting with the formula from logic, we can easily write out the Haskell type by keeping in mind that we have to transform all implications into Kleisli arrows:

$ \forall P, Q. ((P \to Q) \to P) \to P $

```
newtype Peirce m =
  Peirce
    ( forall a b
    . ((a -> m b) -> m a)
    -> m a
    )
```

The key part to remember here is that negation in classical logic translates to `-> Void` in intuitionistic logic (and `-> m Void` in our case, since we are using Kleisli arrows):

$ \forall P. P \lor \neg P $

```
newtype Lem m =
  Lem
    ( forall a
    . m (Either a (a -> m Void))
    )
```

Nothing new here, just rewriting negation as `-> m Void`:

$ \forall P. \neg \neg P \to P $

```
newtype DoubleNegation m =
  DoubleNegation
    ( forall a
    . ((a -> m Void) -> m Void)
    -> m a
    )
```

The only new thing here is that we translate *and* to tuples, and *or* to `Either`:

$ \forall P, Q. \neg (\neg P \land \neg Q) \to P \lor Q $

```
newtype DeMorgan m =
  DeMorgan
    ( forall a b
    . ((a -> m Void, b -> m Void) -> m Void)
    -> m (Either a b)
    )
```

$ \forall P, Q. (P \to Q) \to Q \lor \neg P $

```
newtype ImpliesToOr m =
  ImpliesToOr
    ( forall a b
    . (a -> m b)
    -> m (Either b (a -> m Void))
    )
```

If this is interesting to you, this would be a good place to look away and try for yourself. If you do, keep in mind that typed holes are a very useful tool in this process (see this for an example).

```
instance Monad m => Iso (Lem m) (Peirce m) where
  to :: Lem m -> Peirce m
  to (Lem lem) = Peirce proof
    where
      proof :: ((a -> m b) -> m a)
            -> m a
      proof abort = lem >>= either pure (go abort)

      go :: ((a -> m b) -> m a)
         -> (a -> m Void)
         -> m a
      go abort not_a = abort $ fmap absurd . not_a

  from :: Peirce m -> Lem m
  from (Peirce p) = Lem $ p go
    where
      go :: (Either a (a -> m Void) -> m Void)
         -> m (Either a (a -> m Void))
      go not_lem = pure . Right $ not_lem . Left
```

```
instance Monad m => Iso (Lem m) (DoubleNegation m) where
  to :: Lem m -> DoubleNegation m
  to (Lem lem) = DoubleNegation proof
    where
      proof :: ((a -> m Void) -> m Void)
            -> m a
      proof notNot = lem >>= either pure (go notNot)

      go :: ((a -> m Void) -> m Void)
         -> (a -> m Void)
         -> m a
      go notNot notA = fmap absurd $ notNot notA

  from :: DoubleNegation m -> Lem m
  from (DoubleNegation dne) = Lem $ dne not_exists_dist
```

```
instance Monad m => Iso (Lem m) (DeMorgan m) where
  to :: Lem m -> DeMorgan m
  to (Lem lem) = DeMorgan proof
    where
      proof :: ((a -> m Void, b -> m Void) -> m Void)
            -> m (Either a b)
      proof notNotANotB = lem >>= either pure (go notNotANotB)

      go :: ((a -> m Void, b -> m Void) -> m Void)
         -> (Either a b -> m Void)
         -> m (Either a b)
      go notNotANotB =
        fmap absurd
          . notNotANotB
          . liftA2 (,) (. Left) (. Right)

  from :: DeMorgan m -> Lem m
  from (DeMorgan dm) = Lem $ dm go
    where
      go :: (a -> m Void, (a -> m Void) -> m Void)
         -> m Void
      go (notA, notNotA) = notNotA notA
```

```
instance Monad m => Iso (Lem m) (ImpliesToOr m) where
  to :: Lem m -> ImpliesToOr m
  to (Lem lem) = ImpliesToOr proof
    where
      proof :: (a -> m b)
            -> m (Either b (a -> m Void))
      proof fab = either Left (go fab) <$> lem

      go :: (a -> m b)
         -> (b -> m Void)
         -> Either b (a -> m Void)
      go fab notB = Right $ notB <=< fab

  from :: ImpliesToOr m -> Lem m
  from (ImpliesToOr im) = Lem $ im pure
```

The full source code is available on my GitHub.

This post will show how a simple proof works in Logic, Type Theory, and Category Theory: given `A ∧ (B ∧ C)`, prove `(A ∧ B) ∧ C`.

In logic, there are several systems that allow us to reason about propositions. One of them is natural deduction, which is defined using introduction and elimination rules. For each connective, or operator, there is at least one introduction rule and at least one elimination rule.

For example, conjunction (`∧`) has one introduction rule:

```
  A     B
--------- (∧i)
  A ∧ B
```

which means: if we know `A` and `B`, then we can use the introduction rule (`∧i`) to deduce the proposition `A ∧ B`.

There are two elimination rules for `∧`:

```
A ∧ B             A ∧ B
----- (∧e1)       ----- (∧e2)
  A                 B
```

which means: if we know `A ∧ B`, we can obtain `A` or `B` by using the elimination rules `∧e1` or `∧e2`.

So, if we wanted to prove the conclusion `(A ∧ B) ∧ C` from the hypothesis `A ∧ (B ∧ C)`, we would have to:

1. obtain an `A` by using `∧e1` on the hypothesis
2. obtain a `B ∧ C` by using `∧e2` on the hypothesis
3. obtain a `B` by using `∧e1` on (2)
4. obtain a `C` by using `∧e2` on (2)
5. obtain an `A ∧ B` by using `∧i` on (1) and (3)
6. reach the conclusion `(A ∧ B) ∧ C` by using `∧i` on (5) and (4)

In natural deduction, it looks like this:

```
A ∧ (B ∧ C)         A ∧ (B ∧ C)
----------- (∧e1)   ----------- (∧e2)
     A                 B ∧ C
                    ----- (∧e1)   ----- (∧e2)
     A                 B             C
----------------------- (∧i)
        A ∧ B                        C
------------------------------------- (∧i)
            (A ∧ B) ∧ C
```

The Curry-Howard correspondence tells us that conjunction translates to pairs in type theory, so we'll switch notation to Haskell's tuple type, using the following conventions:

- Types: capital letters `A`, `B`, `C`, `D`
- Terms: lowercase letters `a`, `b`, `c`, `d`
- Tuple types: `(A, B)` for the tuple of `A` and `B`
- Tuple terms: `(a, b)` for the tuple of `a` and `b`, of type `(A, B)`

Typed lambda calculus has a deduction system as well. Tuple introduction looks very similar to `∧i`:

```
a : A     b : B
---------------- ((,)i)
(a, b) : (A, B)
```

which means: given a term `a` of type `A` and a term `b` of type `B`, we can obtain a term `(a, b)` of type `(A, B)`. Note that we no longer need to say *"given we know A and B"*, since the existence of a term of each type is enough to form the tuple.

Similarly, there are two elimination rules:

```
(a, b) : (A, B)             (a, b) : (A, B)
--------------- ((,)e1)     --------------- ((,)e2)
     a : A                       b : B
```

which means: given a tuple `(a, b)` of type `(A, B)`, we can obtain a term `a` of type `A` or a term `b` of type `B`.

If we translate the proposition above, we have to prove `((A, B), C)` from `(A, (B, C))`.

```
(a, (b, c)) : (A, (B, C))           (a, (b, c)) : (A, (B, C))
------------------------- ((,)e1)   ------------------------- ((,)e2)
        a : A                            (b, c) : (B, C)
                                    ------- ((,)e1)   ------- ((,)e2)
        a : A                        b : B             c : C
------------------------------ ((,)i)
       (a, b) : (A, B)                                 c : C
------------------------------------------------------------ ((,)i)
                 ((a, b), c) : ((A, B), C)
```

The form is identical to the logic proof, except we have terms and the rules use `(,)` instead of `∧`.

We can write the same thing in Haskell:

```
assoc :: (a, (b, c)) -> ((a, b), c)
assoc (a, (b, c)) = ((a, b), c)
```

However, this takes advantage of a powerful Haskell feature known as pattern matching.

Given the proof above, it's easy to notice that `(,)i` is exactly the tuple constructor, `(,)e1` is `fst` and `(,)e2` is `snd`. Knowing this, and looking at the proof above, we could say: given the hypothesis `h = (a, (b, c)) : (A, (B, C))`, we can obtain:

- `a : A` from `fst h`
- `(b, c) : (B, C)` from `snd h`
- `b : B` from `fst (snd h)`
- `c : C` from `snd (snd h)`
- `(a, b) : (A, B)` from `(fst h, fst (snd h))`
- `((a, b), c) : ((A, B), C)` from `((fst h, fst (snd h)), snd (snd h))`

So, in Haskell:

```
assoc' :: (a, (b, c)) -> ((a, b), c)
assoc' h = ((fst h, fst (snd h)), snd (snd h))
```

This is a neat effect of the Curry-Howard correspondence: proofs are programs. So, once we write the proof, we also have the program. We could even write the program and then extract the proof -- it's really the same thing.

The Curry-Howard-Lambek correspondence extends the correspondence to include CT as well. It connects propositions to objects, implication to arrows, conjunction to categorical products, etc.

While in logic we said "given a proof of `A`", and in type theory we said "given a term of type `A`", the only way we can do the same in CT is to say "given an arrow from the terminal object `T` to `A`, `f : T → A`". This works because the terminal object represents `True` / `Unit` in logic / type theory, so it means "given we can deduce `A` from `True`", or "given we can obtain a term `a : A` from `() : ()`".

Armed with this, we can now express the same problem in CT terms:

- given an arrow `h : T → (A × (B × C))`
- obtain an arrow `p : T → ((A × B) × C)`
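As a loose Haskell analogy of "arrow from the terminal object" (in Haskell, `()` plays the role of the terminal object; `globalElement` is my own illustrative name):

```
-- A "global element" of Int: an arrow from the terminal object.
-- Applying it to the only value of () just picks out the chosen element.
globalElement :: () -> Int
globalElement () = 42
```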

Before we begin, let's review what a product is:

- given `A × B`, we know there are two arrows `p : A × B → A` and `q : A × B → B`, which we will write as `<p, q>`
- given that `A × B` is the product of `A` and `B`, and `C` is an object with two arrows `p' : C → A` and `q' : C → B`, there exists a unique arrow `m : C → A × B` such that `p ∘ m = p'` and `q ∘ m = q'`

Also, remember that we can compose any two arrows `f : A → B` and `g : B → C` via `g ∘ f`.

Now we are ready for the proof:

`T` is the terminal object, and `t : T → A × (B × C)` is what we start with. We need to obtain an arrow `t' : T → (A × B) × C`.

By **product** `A × (B × C)`, we know there exist `p : A × (B × C) → A` and `q : A × (B × C) → B × C`.

By **composition**, we can obtain the arrows `p ∘ t : T → A` and `q ∘ t : T → B × C`.

By **product** `B × C`, we know there exist `p' : B × C → B` and `q' : B × C → C`.

By **composition**, we can obtain the arrow `p' ∘ q ∘ t : T → B`.

So now, we have the following arrows:

- `p ∘ t : T → A`
- `p' ∘ q ∘ t : T → B`

By definition of **product**, since we know `A × B` is the product of `A` and `B`, and since we have the arrows `T → A` and `T → B`, we know there must be a unique arrow, which we'll name `l : T → A × B`.

By **composition**, we can obtain the arrow `q' ∘ q ∘ t : T → C`.

Similarly to the step before, by definition of **product**, since we know `(A × B) × C` is a product of `A × B` and `C`, and since we have the arrows `l : T → A × B` and `q' ∘ q ∘ t : T → C`, there must exist a unique arrow `t' : T → (A × B) × C`.

Note: there are, in fact, as many arrows `T → (A × B) × C` as there are elements in `(A × B) × C`, but `t'` is the unique one derived from the initial arrow `t`.

Edit: See this Twitter thread for a whiteboard proof of sum associativity.

If we follow the CT arrows as we followed the logic proof:

- we could rewrite the arrow `l : T → A × B` as `<i,j> : T → A × B`, where `i = p ∘ t : T → A` and `j = p' ∘ q ∘ t : T → B`
- we already have `k = q' ∘ q ∘ t : T → C`

So, if instead of `t` we write `a_bc` to denote our hypothesis, or input, let's look closer at what `i`, `j` and `k` are:

- `i` is `p ∘ t`, which is the left projection of the premise, or `fst a_bc`

You may ask: why? Well, `p ∘ t` means `p` *after* `t`. In our case, `t` represents the input, so it's equivalent to `a_bc`, and `p` is the left projection, which is equivalent to `fst`. Keep in mind that `a ∘ b ∘ c` means "`c` first, then `b`, then `a`" when reading the following:

- `j` is `p' ∘ q ∘ t`, which is `fst (snd a_bc)`
- `l = <i,j>`, so `l = (fst a_bc, fst (snd a_bc))`
- `k` is `snd (snd a_bc)`
- the result, `T → (A × B) × C`, is `< <i,j>, k > = ((fst a_bc, fst (snd a_bc)), snd (snd a_bc))`

If we look back at the Haskell definition, `assoc a_bc = ((fst a_bc, fst (snd a_bc)), snd (snd a_bc))`, we reached the same implementation/proof, again.

Edit: Thank you to Bartosz Milewski and GhiOm for their early feedback.

This post will go a bit further than that and show the type-theoretic equivalents of the existential and universal quantifiers. I'll then explore some interesting properties of these types. This post will not go into the category theory side, although I may do that in a future post.

Forall (∀) is the universal quantifier and is generally written as `∀ x. P x`, where `x` is a variable and `P` is a predicate taking such a variable. A basic example of such a proposition could be *"for all numbers x, if you add one to x, you get a greater number than x"*, or: `∀ x. x + 1 > x`.

Similarly, exists (∃) is the existential quantifier and is written as `∃ x. P x`, where `x` is a variable and `P` is a predicate, for example: *"there exists a number that is greater than 10"*, or: `∃ x. x > 10`.

Please note that in classical logic, you can prove an existential proposition either by finding an `x` for which `P(x)` is *true*, or by assuming no such `x` exists and reaching a contradiction (proof by contradiction). In *intuitionistic* logic, the latter is not possible: we have to find the `x`. One could then say that an existential quantifier in intuitionistic logic is described by a pair of `x` and `P(x)`.

In the next chapter, we will look at the dependent sum, and I will say it's the Curry-Howard correspondent of the existential quantifier. Most theorem provers that rely on this correspondence make use of proof irrelevance, which essentially means that it should not matter whether one picks `11` or `12` to prove `∃ x. x > 10`: the proofs should be equivalent. We will not look into this, nor will we make use of proof irrelevance in this post.

Dependent sums (Σ) are the type theoretic equivalent of existential quantifiers. In Agda, we can define the dependent sum type as:

```
data Σ {A : Set} (P : A → Set) : Set where
  Σ_intro : ∀ (a : A) → P a → Σ P
```

The Σ type is a higher-kinded type which takes a higher-kinded type, `P : A → Set` -- `P` takes an `A` and gives us a new type (`Set`, in Agda). The nice part about this is that `P` holds information about both the type of the existential variable (`A`) and the resulting type (`P A`).

Constructing such a term requires a term of the existential type (*evidence* for `A`) and a term of the predicate type (*evidence* for `P A`). For example, the example above could be written as `Σ_intro 11 (11 > 10)`, assuming there exists a type `>` which expresses the greater-than relationship.

Please note that the above example is a simplification, and going into the details of how an inductive type for `>` works is beyond the scope of this post.
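For Haskell readers: a rough analogue of Σ can be sketched with a GADT that hides its witness. Note the witness here is a *type*, not a value, since Haskell lacks full dependent types; `Sigma`/`SigmaIntro` are names I made up:

```
{-# LANGUAGE GADTs #-}

-- The constructor packs a witness `a` together with evidence `p a`,
-- hiding `a` from the outside, much like Σ_intro.
data Sigma p where
  SigmaIntro :: a -> p a -> Sigma p
```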

Dependent products (∏) are the type theoretic equivalent of universal quantifiers. In Agda, we can define the dependent product type as:

```
data Π {A : Set} (P : A → Set) : Set where
  Π_intro : (∀ (a : A) → P a) → Π P
```

The Π type is also a higher-kinded type. Note that this definition is almost identical to the Σ definition, except for the parentheses used in the constructor (`Π_intro`). This lines up with the intuition that `∀x. P(x)` can be described by a function `A → P(x)`, where `x : A`.

Constructing a Π type takes a function from the quantified variable to the type described by the predicate. Constructing a term would, for example, be `Π_intro (λn. n + 1 > n)`.
.

We will first need to define a `constT` function:

```
constT : ∀ (X : Set) (Y : Set) → Y → Set
constT x _ _ = x
```

This takes two types, `X` and `Y`. It then takes a value of type `Y`, ignores it, and returns the type `X`.
.

So, if we take `P` to *not* depend on the quantified item and define it using `constT`, then we can obtain tuples in the case of Σ types:

```
Σ-pair : ∀ (A B : Set) → Set
Σ-pair a b = Σ (constT b a)
```

Note that `Σ-pair` is a type-level function that takes two types and returns the type of pairs.

We can then define a simple pair constructor using the constructor above:

```
Σ-mkPair : ∀ {A : Set} {B : Set} → A → B → Σ-pair A B
Σ-mkPair a b = Σ_intro a b
```

And we can get the two projections by a simple pattern match, returning the appropriate value:

```
Σ-fst : ∀ {A B : Set} → Σ-pair A B → A
Σ-fst (Σ_intro a _) = a

Σ-snd : ∀ {A B : Set} → Σ-pair A B → B
Σ-snd (Σ_intro _ b) = b
```

This works because Σ types are defined as `a -> P a -> Σ P`, so if we take a `P` such that `P a` is always `b`, then we get `a -> b -> Σ`, which is essentially a tuple of `a` and `b`.

We can now say `Σ-snd (Σ-mkPair 1 2)` and get the result `2`.

Similarly, if we take `P` to be `constT B A`, we can obtain functions out of Π types:

```
Π-function : ∀ (A B : Set) → Set
Π-function a b = Π (constT b a)

Π-mkFunction : ∀ {A B : Set} → (A → B) → Π-function A B
Π-mkFunction f = Π_intro f

Π-apply : ∀ {A B : Set} → Π-function A B → A → B
Π-apply (Π_intro f) a = f a
```

As with Σ types, this works because Π types are defined as `(a -> P a) -> Π P`, so if we take `P` such that `P a` is always `b`, then we get `(a -> b) -> Π`, which is essentially a function from `a` to `b`.

We can now write `Π-apply (Π-mkFunction (λx. x + 1)) 1` and get the result `2`.

We can obtain sum types from Σ types by using `Bool` as the variable type, with the predicate *returning* type `A` for `true` and type `B` for `false`:

```
bool : ∀ (A B : Set) → Bool → Set
bool a _ true  = a
bool _ b false = b
```

Note that `a` and `b` are types! We can now write:

```
Σ-sum : ∀ (A B : Set) → Set
Σ-sum a b = Σ (bool a b)
```

Now, in order to construct such a type (via *left* or *right*), we just need to pass the appropriate boolean value along with an item of the correct type:

```
Σ-sum_left : ∀ {A : Set} (B : Set) → A → Σ-sum A B
Σ-sum_left _ a = Σ_intro true a

Σ-sum_right : ∀ {B : Set} (A : Set) → B → Σ-sum A B
Σ-sum_right _ b = Σ_intro false b
```

Eliminating is just a matter of pattern matching on the boolean value and applying the correct function:

```
Σ-sum_elim : ∀ {A B R : Set} → (A → R) → (B → R) → Σ-sum A B → R
Σ-sum_elim f _ (Σ_intro true a)  = f a
Σ-sum_elim _ g (Σ_intro false b) = g b
```

As an example, we can write `Σ-sum_elim (const "left") (const "right") (Σ-sum_left Bool 1)` and get the result `"left"`.

Interestingly, we can also obtain sum types from ∏ types: the idea is to encode the eliminator right into our type! For that we will need the following predicate:

```
prodPredicate : ∀ (A B R : Set) → Set
prodPredicate a b r = (a → r) → (b → r) → r
```

This means that given two types `A` and `B`, we get a type-level function from `R` to `(A -> R) -> (B -> R) -> R`, which is exactly the eliminator type. Don't worry about `Set₁` or `Π'` for now:

```
Π-sum : ∀ (A B : Set) → Set₁
Π-sum a b = Π' (prodPredicate a b)
```

This means that in order to build a sum type, we need to pass a type `R` and a function `(A -> R) -> (B -> R) -> R`. So, the constructors will look like:

```
Π-sum-left : ∀ {A : Set} (B : Set) → A → Π-sum A B
Π-sum-left _ a = Π'_intro (\_ f _ → f a)
```

The lambda is the only interesting bit: we construct a function that, given a type `R` (the first `_`) and a function `A -> R` (named `f`), returns an `R` by calling `f a` (the third `_` parameter is for the function `g : B -> R`, which is not required for the *left* constructor).

Similarly, we can write a constructor for *right*:

```
Π-sum-right : ∀ {A : Set} (B : Set) → B → Π-sum A B
Π-sum-right _ b = Π'_intro (\_ _ g → g b)
```

As for the eliminator, we simply require the two functions `A -> R` and `B -> R` in order to pass them to our dependent product and get an `R`:

```
Π-sum-elim : ∀ {A B R : Set} → (A → R) → (B → R) → Π-sum A B → R
Π-sum-elim f g (Π'_intro elim) = elim _ f g
```

We've used three type-level functions to generate a few interesting types:

| Function        | Σ-type | Π-type   |
|-----------------|--------|----------|
| `constT`        | tuple  | function |
| `bool`          | either | tuple    |
| `prodPredicate` | -      | either   |

What other interesting type-level functions can you find for Σ and/or Π types?

You can find the source file here.

The standard `Functor` class only works for type constructors of kind `* -> *`; it cannot express functors over higher-kinded types (e.g. `* -> * -> *`), contravariant functors, invariant functors, etc.
This post will show an alternate `Functor` that can handle all of the above. I got this idea from the awesome Tom Harding, and he apparently got it from @Iceland_jack.

Although this is not new, I could not find any blog post or paper covering it.

The problem is quite straightforward. Let's say we want to define a functor instance for `(a, b)` which changes the `a` to a `c` using an `a -> c` function. This should be possible, but there is no way to write it using `Functor` and `fmap`.

There are two ways to do this in Haskell using `Prelude`:

- by using `Bifunctor` / `first`, or
- by using the `Flip` newtype.

While both the above options work, they are not particularly elegant. On top of that, there is no common *Trifunctor* package, and flipping arguments around and wrapping/unwrapping newtypes is not very appealing, which means the approach doesn't quite scale well.
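For reference, the `Bifunctor` route looks something like this (`Data.Bifunctor.first` ships with `base`; `Flip` comes from the `bifunctors` package):

```
import Data.Bifunctor (first)

-- Map over the first component of the pair only.
example :: (String, Bool)
example = first show (1 :: Int, True)
-- example == ("1", True)
```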

There are two problems with `Functor`:

- `f` has the wrong kind if we want to allow higher-kinded functors, and
- the arrow of the mapped function is the wrong type if we want to allow contravariant or invariant functors (or even other kinds of mappings!).

We can fix both problems by adding additional types to the class:

```
class FunctorOf (p :: k -> k -> Type) (q :: l -> l -> Type) f where
  map :: p a b -> q (f a) (f b)
```

`p` represents a relationship (arrow) between `a` and `b`. For a regular functor, it's just `->`, but we can change it to a reversed arrow for contravariant functors.

`q`

is normally just an optional layer on top of `->`

, in order to allow mapping over other arguments. For example, if we want to map over the second-to-last argument, we'd use natural transforms (`~>`

).

The regular functor instance can be obtained by simply:

```
instance Functor f => FunctorOf (->) (->) f where
  map :: forall a b. (a -> b) -> f a -> f b
  map = fmap

functorExample :: [String]
functorExample = map show ([1, 2, 3, 4] :: [Int])
```

I'll use the `Bifunctor`

instance in order to show all bifunctors can have such a `FunctorOf`

instance. Of course, one could define instances manually for any `Bifunctor`

.

Going back to our original example, we can define a `FunctorOf`

instance for `* -> * -> *`

types in the first argument via:

```
newtype (~>) f g = Natural (forall x. f x -> g x)

instance Bifunctor f => FunctorOf (->) (~>) f where
  map :: forall a b. (a -> b) -> f a ~> f b
  map f = Natural $ first f
```

In order to avoid fiddling about with newtypes, we can define a helper `bimap'`

function for `* -> * -> *`

that maps both arguments:

```
bimap'
  :: forall a b c d f
   . FunctorOf (->) (->) (f a)
  => FunctorOf (->) (~>) f
  => (a -> b)
  -> (c -> d)
  -> f a c
  -> f b d
bimap' f g fac = case map f of
  Natural a2b -> a2b (map g fac)

bifunctorExample :: (String, String)
bifunctorExample = bimap' show show (1 :: Int, 1 :: Int)
```

Okay, cool. But what about *contravariant* functors? We can use `Op`

from `Data.Functor.Contravariant`

(defined as `data Op a b = Op (b -> a)`

):

```
instance Contravariant f => FunctorOf Op (->) f where
  map :: forall a b. Op b a -> f b -> f a
  map (Op f) = contramap f
```

This is pretty cool since we only need to change the mapped function's type to be `Op`

instead of `->`

! As before, we can make things easier by defining a helper:

```
cmap
  :: forall a b f
   . FunctorOf Op (->) f
  => (b -> a)
  -> f a
  -> f b
cmap f fa = map (Op f) fa

contraExample :: Predicate Int
contraExample = cmap show (Predicate (== "5"))
```

I'm glad you asked! It's as easy as 1-2-3, or well, as easy as "functor in the last argument" - "contravariant in the previous" - "write helper function":

```
instance Profunctor p => FunctorOf Op (~>) p where
  map :: forall a b. Op b a -> p b ~> p a
  map (Op f) = Natural $ lmap f

dimap'
  :: forall a b c d p
   . FunctorOf (->) (->) (p a)
  => FunctorOf Op (~>) p
  => (b -> a)
  -> (c -> d)
  -> p a c
  -> p b d
dimap' f g pac = case map (Op f) of
  Natural b2a -> b2a (map g pac)

profunctorExample :: String -> String
profunctorExample = dimap' read show (+ (1 :: Int))
```

Yep. We only need to define a higher-kinded natural transform and write the `FunctorOf`

instance, along with the helper:

```
newtype (~~>) f g = NatNat (forall x. f x ~> g x)

data Triple a b c = Triple a b c deriving (Functor)

instance {-# OVERLAPPING #-} FunctorOf (->) (~>) (Triple x) where
  map :: forall a b. (a -> b) -> Triple x a ~> Triple x b
  map f = Natural $ \(Triple x a y) -> Triple x (f a) y

instance FunctorOf (->) (~~>) Triple where
  map :: (a -> b) -> Triple a ~~> Triple b
  map f = NatNat $ Natural $ \(Triple a x y) -> Triple (f a) x y

triple
  :: forall a b c d e f t
   . FunctorOf (->) (->) (t a c)
  => FunctorOf (->) (~>) (t a)
  => FunctorOf (->) (~~>) t
  => (a -> b)
  -> (c -> d)
  -> (e -> f)
  -> t a c e
  -> t b d f
triple f g h = a2b . c2d . map h
 where
  (NatNat (Natural a2b)) = map f
  (Natural c2d) = map g

tripleExample :: Triple String String String
tripleExample = triple show show show (Triple (1 :: Int) (2 :: Int) (3 :: Int))
```

The pattern is pretty simple:

- we need a `FunctorOf` instance for every argument we want to map
- for each such argument, we need to use `->` for covariant and `Op` for contravariant arguments as the first argument to `FunctorOf`
- from right to left, we need to use an increasing level of transforms to map the type arguments (`->`, `~>`, `~~>`, etc.)

We can define an instance for `Endo`

using:

```
data Iso a b = Iso
  { to :: a -> b
  , from :: b -> a
  }

instance FunctorOf Iso (->) Endo where
  map :: forall a b. Iso a b -> Endo a -> Endo b
  map Iso { to, from } (Endo f) = Endo $ to . f . from

endoExample :: Endo String
endoExample = map (Iso show read) (Endo (+ (1 :: Int)))
```

We can even go further:

```
instance FunctorOf (->) (->) f => FunctorOf Iso Iso f where
  map :: Iso a b -> Iso (f a) (f b)
  map Iso { to, from } = Iso (map to) (map from)
```

which is to say, given an isomorphism between `a`

and `b`

, we can obtain an isomorphism between `f a`

and `f b`

!

I think this instance can be also used for proofs. For example, using the `Refl`

equality type:

```
data x :~: y where
  Refl :: x :~: x
```

And this means we can write transitivity as:

```
instance FunctorOf (:~:) (->) ((:~:) x) where
  map :: forall a b. a :~: b -> x :~: a -> x :~: b
  map Refl Refl = Refl

proof :: Int :~: String -> Bool :~: Int -> Bool :~: String
proof = map
```

Code is available here.

Another thing worth mentioning is the awesome upcoming GHC extension (being worked on by Csongor Kiss) which allows type families to be partially applied. If you haven't read the paper, you should! Using this feature, one could do something like:

```
type family Id a where
  Id x = x

instance FunctorOf (->) (->) Id where
  map = ($)

idExample :: Bool
idExample = map (+1) 1 == 2
```

Please note I have not tested the above code; it was suggested by Tom Harding (thanks again for the idea and reviewing!).

What other uses can you come up with?

This post was sparked by a few other posts in the Haskell world. They are, to my knowledge, in chronological order:

- Michael Snoyman's Boring Haskell Manifesto
- Matt Parsons' Write Junior Code
- Marco Sampellegrini's My thoughts on Haskell in 2020

Snoyman's manifesto is a call to define a safe subset of the Haskell language and common libraries, provide documentation, tutorials, and cookbooks for it, and continuously evolve and update that subset to help engineers get "boring Haskell" adopted.

Parsons notes that Haskell has a hiring problem: there are few jobs, and most of those are for senior developers. The reason for this is that we over-indulge in fancy Haskell, making our code needlessly complicated. If we wrote simple, junior-level Haskell, we would be able to hire junior developers and have them be productive.

Sampellegrini's post points out a few key problems:

- there's a lot of extensions we need to keep track of, which makes things hard
- if an idea looks good on paper, it doesn't mean it's going to be easy to maintain in the long run
- inclusivity might be a problem: "I don't want a PhD to be a requirement to work with Haskell"
- they argue there's marginal benefit to fancy types/Haskell

While I understand where all of these feelings are coming from, and I agree with some of the ideas, I think they aim at the wrong problem.

I think the real problem is that we are not putting up jobs for junior devs. We're not even giving them a chance. And when we are, we usually don't give them enough support (through training and making sure they know who to ask, and that it's okay to do so) to succeed.

I'm really not sure why we're not hiring more junior developers. It might be because seniors like to think that the code they are writing is so complicated that a junior would take too long to be able to understand, so they advise management that a junior cannot possibly be productive. Maybe it's because they don't want to be bothered with training junior devs, and they would rather just work on code instead? Or maybe it's because management doesn't like seniors' time being "wasted" on teaching junior devs?

Whatever the reason, I don't really think writing simpler code will help much. If the on-boarding process is lacking, if the company culture is not welcoming to junior devs, most of them will be set for failure from the get-go, regardless of how fancy or simple the code is.

What is a junior developer? For the purposes of this article, I will define a Haskell junior developer as somebody who's able to confidently use simple monads like `Maybe`

, `Either e`

, `IO`

, and the list monad. Ideally, they would also have a crude understanding of monad transformers (but not necessarily `mtl`

as well). They are able to compose functions, reason about ADTs, and, perhaps most importantly, are motivated to learn more about Haskell and FP.

I currently work on two projects, both in Haskell. One of these projects has two junior Haskell developers, and the other has one. I will briefly go over the details of these projects as well as my mentoring experience in order to establish a baseline.

I have not been working with Haskell for very long. I actually come from OOP-land (you can read my story here), and I have a lot of experience as a team lead. I have hired, trained, and mentored a decent number of junior devs, most of them in my OOP days, but also three of them recently, at the place I currently work. For the past year and a half, I have been the main developer in charge of training and making sure the junior devs are productive.

Our codebases (you can see one of them here) are pretty complicated: besides the fact that they use notoriously complex Haskell libraries such as `lens`

, `servant`

, and `recursion-schemes`

, the domain problem is pretty complicated as well: we're essentially building an automated prover for a rewrite-based executable semantic framework (the other project is a pretty standard servant app, so not too much to go over there, although it does use `lens`

, `generic-lens`

, `persistent`

/`esqueleto`

and obviously `servant`

).

This prelude was needed because I can't really speak about junior developers in general, but I can tell you about my experience with on-boarding junior Haskell developers on our projects. However, before that, I would like to add that the junior devs we hired were all either in their senior year at the local university or fresh graduates. They were picked because they are all excited about FP, despite the fact that none of them had any previous professional experience related to FP or Haskell.

I am proud to say that all three junior devs are doing great. I obviously can't take any significant part of the credit (they are all very smart and hard working), but I think that there are a few things that contributed to their success:

- **Kindness**: we've all gone through this. We're all trying our best. Be kind and supportive. Praise them when they do a good job. Encourage them to come up with ideas and to bring those ideas forward.
- **Confidence**: make sure they know it's okay to not know things; there's a ton of things I don't know, and I make sure to be loud about it. I also make sure to show them how I find the answers to things I don't know. On top of that, literally tell them it's okay to ask questions and to not know stuff, even if it feels like it's something they should know (there's no such thing, really: we all have our blind spots).
- **Support**: be there for them. We have daily meetings and we make sure we know what everybody's up to. We make sure to ask everybody if they're stuck, or if they need help or more work.
- **Training**: at least until they get comfortable, make sure you go over the things that are "fancy" in the codebase. At the very least, go over a few examples and show them how it works. Make sure they understand. A few exercises where you work together can be particularly useful as well.
- **Clarity**: it is vitally important that tasks are as crystal-clear as they can be. Make extra sure the tasks that junior devs work on won't take them too far off the beaten path. Try to add comments/more notes to these tasks: where to start, a very rough sketch of the solution, how to test. Anything can help.

Only one of the three junior developers we hired was slightly familiar with monad transformers at the time they were hired. The other two were familiar with monads. We were able to get all three to contribute PRs in less than a week after they started. Within 3 to 6 months, I noticed they started being able to complete tasks with little supervision. One of them has been with us for a little over a year, and they are now able to take on complicated tasks (major refactoring, learning new concepts, etc.) pretty much on their own.

Since the subject is hot, I just saw a tweet from Joe Kachmar which expresses the very idea I want to combat. These things aren't THAT hard to teach. Of course a junior won't be able to invent a new type of lens, add a new layer to our application's monad stack, or re-invent `generic-lens`

, but nobody's expecting them to.

After a week of training, I am sure a junior developer can add a new REST API endpoint that is similar to one that's already in our application. They can use getter lenses similar to the ones we already have, but targeting different fields: they can re-use the existing infrastructure to write "boring" code using whatever level of fancy Haskell is already there as a guide.

And sure, sometimes they'll try something new and they'll get stuck on a 20-page GHC type error. That's when they ask for help, because they know it's okay not knowing things, and there's always someone available that's happy to help (and they won't help by fixing the error for them, but by guiding them into understanding and fixing the problem themselves).

It's hard to focus on multiple solutions to the same problem. I am also worried that the "Boring Haskell Manifesto" can even be harmful in the long run.

Writing programs is really, really hard. Nothing focuses this feeling better than writing pure FP, because it forces you to be clear, precise and thorough about everything: you can't ignore `Nothing`

s, you can't discard `Left`

s implicitly, you don't get to shove things into a mutable global state.

Writing programs is really, really hard for everyone. It's not only hard for junior developers. It's also hard for senior developers. We haven't figured this out, we're not even close. We still have a terrible story for errors: their composability is far from ideal. We still have a lot of competing libraries for effects, and more seem to be coming. There are a lot of libraries to be explored and discovered.

I do think that each team should be careful when adding language extensions and choosing libraries for each project they work on. And I do think the fanciness needs to be taken into account. As Parsons put it on Slack:

> fanciness of your code should be gated on the size of your mentoring/training budget if you value hiring juniors

I totally agree, although I would also add that another important aspect worth considering is the benefit of said fanciness.

There are many reasons one might want to stray off the beaten path. Fancy type-level code might save you a ton of code duplication, or it might add features that would otherwise make the code brittle or hard to maintain. For some projects, this may be worth it.

I don't think a blessed set of libraries or extensions will help with this. Which streaming library gets to be picked? Will it be `conduit`

over `pipes`

? What about `streaming`

?

As I said, I think it's the wrong thing to focus on.

We need to stop over-appreciating how hard it is to use "fancy" libraries like `servant`

, `lens`

or `recursion-schemes`

. Give junior developers a fighting chance and they will surprise you.

I don't think there's anything that makes our company's junior developer success story non-reproducible anywhere else. Our local university doesn't focus on FP or Haskell (they do have one course where they teach Haskell, but that's pretty much it). We were actually forced to take this route because there are no other companies doing Haskell locally (as far as I know), so we can't just find Haskell developers around.

I think this is reproducible anywhere, on pretty much any codebase. We just need to open up junior positions, and give them the support they need to succeed. Have you had some different experience? Is it hard to find junior developers that are somewhat familiar with monads?

Go out there, convince your team that they're not actually living in an ivory tower. It's not that hard, and we're not special for understanding how to use these language extensions and libraries. We can teach junior developers how to use them.

- the `Functor` class / concept
- the functor instance for `Either a`, `(,) a`
- basic kind knowledge, e.g. the difference between `* -> *` and `* -> * -> *`

In Haskell, functors can only be defined for types of kind `* -> *`

like `Maybe a`

or `[a]`

. Their instances allow us to use `fmap`

(or `<$>`

) to go from `Maybe a`

to `Maybe b`

using some `a -> b`

, like:

```
λ> show <$> Just 1
Just "1"
λ> show <$> Nothing
Nothing
λ> show <$> [1, 2, 3]
["1", "2", "3"]
λ> show <$> []
[]
```

We can even define functor instances for higher kinded types, as long as we fix type arguments until we get to `* -> *`

. For example, `Either`

has kind `* -> * -> *`

, but `Either e`

has kind `* -> *`

. So that means that we can have a functor instance for `Either e`

, given some type `e`

. This might sound confusing at first, but all it means is that the `e`

cannot vary, so we can go from `Either e a`

to `Either e b`

using some `a -> b`

, but we cannot go from `Either e1 a`

to `Either e2 a`

or `Either e2 b`

even if we had both `a -> b`

and `e1 -> e2`

. How would we even pass two functions to `fmap`

?

```
λ> show <$> Right 1
Right "1"
λ> show <$> Left True
Left True
```

In the first example, we go from `Either a Int`

to `Either a String`

using `show :: Int -> String`

. In the second example, we go from `Either Bool a`

to `Either Bool String`

, where `a`

needs to have a `Show`

instance.

But what if we want to go from `Either a x`

to `Either b x`

, given some `a -> b`

?

Let's see how we could implement it ourselves:

```
mapLeft :: (a -> b) -> Either a x -> Either b x
mapLeft f (Left a) = Left (f a)
mapLeft _ r = r
```

Since we are trying to map the `Left`

, the interesting bit is for that constructor: we apply `f`

under `Left`

. Otherwise, we just leave the value as-is; a `Right`

value of type `x`

(we could have written `mapLeft _ (Right x) = Right x`

and it would work the same).
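A quick sanity check of `mapLeft` (repeating its definition so the snippet stands alone):

```
mapLeft :: (a -> b) -> Either a x -> Either b x
mapLeft f (Left a) = Left (f a)
mapLeft _ r = r

-- Left values get mapped; Right values pass through untouched.
example1 :: Either String Int
example1 = mapLeft show (Left True) -- Left "True"

example2 :: Either String Int
example2 = mapLeft show (Right 5)   -- Right 5
```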

Here are a few warm-up exercises. The first uses typed holes to guide you and clarify what's meant by "using `either`

". The last exercise can be a bit tricky. Look up what `Const`

is and use typed holes.

*Exercise 1*: re-implement `mapLeft'`

using `either`

:

```
mapLeft' :: (a -> b) -> Either a x -> Either b x
mapLeft' f e = either _leftCase _rightCase e
```

*Exercise 2*: implement `mapFirst`

:

`mapFirst :: (a -> b) -> (a, x) -> (b, x)`

*Exercise 3*: implement `remapConst`

:

```
import Data.Functor.Const (Const(..))
remapConst :: (a -> b) -> Const a x -> Const b x
```

While we can implement `mapLeft`

, `mapFirst`

, and `remapConst`

manually, there is a pattern: some types of kind `* -> * -> *`

allow both their type arguments to be mapped like a `Functor`

, so we can define a new class:

```
class Bifunctor p where
  {-# MINIMAL bimap | first, second #-}
  bimap :: (a -> b) -> (c -> d) -> p a c -> p b d
  first :: (a -> b) -> p a c -> p b c
  second :: (b -> c) -> p a b -> p a c
```

`bimap`

takes two functions and is able to map both arguments in a type of kind `* -> * -> *`

. `first`

is a lot like the functions we just defined manually. `second`

is always the same thing as `fmap`

. This class exists in `base`

, under `Data.Bifunctor`

.

*Exercise 4*: implement `bimap`

in terms of `first`

and `second`

.

*Exercise 5*: implement `first`

and `second`

in terms of `bimap`

.

*Exercise 6*: implement the `Bifunctor`

instance for `Either`

:

```
instance Bifunctor Either where
  bimap f _ (Left a) = _leftCase
  bimap _ g (Right b) = _rightCase
```

*Exercise 7*: Implement the `Bifunctor`

instance for tuples `(a, b)`

.

*Exercise 8*: Implement the `Bifunctor`

instance for `Const`

.

*Exercise 9*: Implement the `Bifunctor`

instance for `(a, b, c)`

.

*Exercise 10*: Find an example of a type with kind `* -> * -> *`

that cannot have a `Bifunctor`

instance.

*Exercise 11*: Find an example of a type with kind `* -> * -> *`

which has a `Functor`

instance when you fix one type argument, but can't have a `Bifunctor`

instance.

- the `Functor` class / concept
- the functor instance for `(->) r`

Not all higher-kinded types of kind `* -> *`

can have a `Functor`

instance. While types like `Maybe a`

, `(x, a)`

, `r -> a`

, `Either e a`

and `[a]`

are `Functors`

in `a`

, there are some types that cannot have a `Functor`

instance. A good example is `Predicate`

:

`newtype Predicate a = Predicate { getPredicate :: a -> Bool }`

We call this type a predicate in `a`

because, given some value of type `a`

we can obtain a `True`

or a `False`

. So, for example:

- `Predicate (> 10)` is a predicate in `Int` which returns true if the number is greater than 10,
- `Predicate (== "hello")` is a predicate in `String` which returns true if the string is equal to *"hello"*, and
- `Predicate not` is a predicate in `Bool` which returns the negation of the boolean value it receives.
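To make those examples concrete, here's how they evaluate (using the `Predicate` newtype defined above):

```
newtype Predicate a = Predicate { getPredicate :: a -> Bool }

check1, check2, check3 :: Bool
check1 = getPredicate (Predicate (> 10)) (11 :: Int)   -- True
check2 = getPredicate (Predicate (== "hello")) "world" -- False
check3 = getPredicate (Predicate not) False            -- True
```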

We can try writing a `Functor`

instance and see what we can learn:

```
instance Functor Predicate where
  fmap :: (a -> b) -> Predicate a -> Predicate b
  fmap f (Predicate g) =
    Predicate $ \b -> _welp
```

As the typed hole above suggests, we need to return a `Bool` value, and we have:

- `b :: b`
- `f :: a -> b`
- `g :: a -> Bool`

There is no way we can combine these terms at all, let alone in such a way as to obtain a `Bool`

value. The only way we would be able to obtain a `Bool`

value is by calling `g`

, but for that, we need an `a`

-- but all we have is a `b`

.

What if `f`

was reversed, though? If we had `f' :: b -> a`

, then we could apply `b`

to it (`f' b :: a`)

and then pass it to `g`

to get a `Bool`

. Let's write this function outside of the `Functor`

class:

```
mapPredicate :: (b -> a) -> Predicate a -> Predicate b
mapPredicate f (Predicate g) =
  Predicate $ \b -> g (f b)
```

This looks very weird, compared to `Functor`

s, right? If you want to go from `Predicate a`

to `Predicate b`

, you need a `b -> a`

function?

*Exercise 1*: fill in the typed hole `_e1`

:

```
greaterThan10 :: Predicate Int
greaterThan10 = Predicate (> 10)

exercise1 :: Predicate String
exercise1 = mapPredicate _e1 greaterThan10
```

*Exercise 2*: write `mapShowable`

for the following type:

```
newtype Showable a = Showable { getShowable :: a -> String }
mapShowable :: (b -> a) -> Showable a -> Showable b
```

*Exercise 3*: Use `mapShowable`

and `showableInt`

to implement `exercise3`

such that `getShowable exercise3 True`

is `"1"`

and `getShowable exercise3 False`

is `"2"`

.

```
showableInt :: Showable Int
showableInt = Showable show

exercise3 :: Showable Bool
exercise3 = _
```

`Predicate`

and `Showable`

are very similar, and types like them admit a `Contravariant`

instance. Let's start by looking at it:

```
class Contravariant f where
  contramap :: (b -> a) -> f a -> f b
```

The instances for `Predicate`

and `Showable`

are trivial: they are exactly `mapPredicate`

and `mapShowable`

. The difference between `Functor`

and `Contravariant`

is exactly the function they receive: one is "forward" and the other is "backward", and it's all about how the data type is defined.
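Spelled out, the instances are just the bodies of `mapPredicate` and `mapShowable`. Here the `Contravariant` class is imported from `Data.Functor.Contravariant` (in `base`), and the newtypes are repeated so the snippet stands alone:

```
import Data.Functor.Contravariant (Contravariant (..))

newtype Predicate a = Predicate { getPredicate :: a -> Bool }
newtype Showable a = Showable { getShowable :: a -> String }

instance Contravariant Predicate where
  contramap f (Predicate g) = Predicate (g . f)

instance Contravariant Showable where
  contramap f (Showable s) = Showable (s . f)

-- A predicate on Ints reused as a predicate on Strings,
-- by first taking the length of the string.
longerThan3 :: Predicate String
longerThan3 = contramap length (Predicate (> 3))
```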

All `Functor`

types have their type parameter `a`

in what we call a *positive* position. This usually means there can be some `a`

available in the type (which is always the case for tuples, or sometimes the case for `Maybe`

, `Either`

or `[]`

). It can also mean *we can obtain an a*, as is the case for `r -> a`. Sure, we need some `r` to do that, but we are able to obtain an `a` afterwards.

On the opposite side, `Contravariant`

types have their type parameter `a`

in what we call a *negative* position: they *need* to consume an `a`

in order to produce something else (a `Bool`

or a `String`

for our examples).

*Exercise 4*: Look at the following types and decide which can have a `Functor`

instance and which can have a `Contravariant`

instance. Write the instances down:

```
data T0 a = T0 a Int
data T1 a = T1 (a -> Int)
data T2 a = T2L a | T2R Int
data T3 a = T3
data T4 a = T4L a | T4R a
data T5 a = T5L (a -> Int) | T5R (a -> Bool)
```

As with `Functor`

s, we can partially apply higher kinded types to write a `Contravariant`

instance. The most common case is for the flipped version of `->`

:

`newtype Op a b = Op { getOp :: b -> a }`

While `a -> b`

has a `Functor`

instance, because the type is actually `(->) a b`

, and `b`

is in a *positive* position, its flipped version has a `Contravariant`

instance.

*Exercise 5*: Write the `Contravariant`

instance for `Op`

:

```
instance Contravariant (Op r) where
  contramap :: (b -> a) -> Op r a -> Op r b
```

*Exercise 6*: Write a `Contravariant`

instance for `Comparison`

:

`newtype Comparison a = Comparison { getComparison :: a -> a -> Ordering }`

*Exercise 7*: Can you think of a type that has both `Functor`

and `Contravariant`

instances?

*Exercise 8*: Can you think of a type that can't have a `Functor`

nor a `Contravariant`

instance? These types are called `Invariant`

functors.

We've seen how types of kind `* -> *`

can have instances for `Functor`

or `Contravariant`

, depending on the position of the type argument. We have also seen that types of kind `* -> * -> *`

can have `Bifunctor`

instances. These types are morally `Functor`

in both type arguments. We're left with one very common type which we can't map both arguments of: `a -> b`

. It does have a `Functor`

instance for `b`

, but the `a`

is morally `Contravariant`

(so it can't have a `Bifunctor`

instance). This is where `Profunctor`

s come in.

Here's a list of a few common types with the instances they allow:

Type | `Functor` | `Bifunctor` | `Contravariant` | `Profunctor`
---|---|---|---|---
`Maybe a` | ✓ | | |
`[a]` | ✓ | | |
`Either a b` | ✓ | ✓ | |
`(a,b)` | ✓ | ✓ | |
`Const a b` | ✓ | ✓ | |
`Predicate a` | | | ✓ |
`a -> b` | ✓ | | | ✓

Although there are some exceptions, you will usually see `Contravariant`

or `Profunctor`

instances over function types. `Predicate`

itself is a newtype over `a -> Bool`

, and so are most types with these instances.

Let's take a closer look at `a -> b`

. We can easily map over the `b`

, but what about the `a`

? For example, given `showInt :: Int -> String`

, what do we need to convert this function to `showBool :: Bool -> String`

:

```
showInt :: Int -> String
showInt = show

showBool :: Bool -> String
showBool b = _help
```

We would have access to:

- `showInt :: Int -> String`
- `b :: Bool`

and we want to use `showInt`

, so we would need a way to pass `b`

to it, which means we'd need a function `f :: Bool -> Int`

and then `_help`

would become `showInt (f b)`

.

But if we take a step back, in order to go from `Int -> String`

to `Bool -> String`

, we need `Bool -> Int`

, which is exactly the `Contravariant`

way of mapping types.

*Exercise 1*: Implement a `mapInput`

function like:

`mapInput :: (input -> out) -> (newInput -> input) -> (newInput -> out)`

Extra credit: try a pointfree implementation as `mapInput = _`

.

*Exercise 2*: Try to guess what the `Profunctor`

class looks like. Look at `Functor`

, `Contravariant`

, and `Bifunctor`

for inspiration.

`class Profunctor p where`

*Exercise 3*: Implement an instance for `->`

for your `Profunctor`

class.

`instance Profunctor (->) where`

Unlike `Functor`

, `Contravariant`

, and `Bifunctor`

, the `Profunctor`

class is not in `base`

/`Prelude`

. You will need to bring in a package like `profunctors`

to access it.

```
class Profunctor p where
  {-# MINIMAL dimap | lmap, rmap #-}
  dimap :: (c -> a) -> (b -> d) -> p a b -> p c d
  lmap :: (c -> a) -> p a b -> p c b
  rmap :: (b -> c) -> p a b -> p a c
```

`dimap`

takes two functions and is able to map both arguments in a type of kind `* -> * -> *`

. `lmap`

is like `mapInput`

. `rmap`

is always the same thing as `fmap`

.

*Exercise 4*: implement `dimap`

in terms of `lmap`

and `rmap`

.

*Exercise 5*: implement `lmap`

and `rmap`

in terms of `dimap`

.

*Exercise 6*: implement the `Profunctor`

instance for `->`

:

```
instance Profunctor (->) where
  -- your pick: dimap or lmap and rmap
```

*Exercise 7*: (hard) implement the `Profunctor`

instance for:

```
data Sum f g a b
  = L (f a b)
  | R (g a b)
instance (Profunctor f, Profunctor g) => Profunctor (Sum f g) where
```

*Exercise 8*: (hard) implement the `Profunctor`

instance for:

```
newtype Product f g a b = Product (f a b, g a b)
instance (Profunctor f, Profunctor g) => Profunctor (Product f g) where
```