The Comonad.Reader » Kan Extensions (types, (co)monads, substructural logic)

Unnatural Transformations and Quantifiers
Dan Doel · Sun, 23 Sep 2012 · http://comonad.com/reader/2012/unnatural-transformations-and-quantifiers/

Recently, a fellow in category land discovered a fact that we in Haskell land have actually known for a while (in addition to things most of us probably don't). Specifically, given two categories $\mathcal{C}$ and $\mathcal{D}$, a functor $G : \mathcal{C} \rightarrow \mathcal{D}$, and provided some conditions in $\mathcal{D}$ hold, there exists a monad $T^G$, the codensity monad of $G$.

In category theory, the codensity monad is given by the rather frightening expression:

$ T^G(a) = \int_r \left[\mathcal{D}(a, Gr), Gr\right] $

Where the integral notation denotes an end, and the square brackets denote a power, which allows us to take what is essentially an exponential of the objects of $\mathcal{D}$ by objects of $\mathcal{V}$, where $\mathcal{D}$ is enriched in $\mathcal{V}$. Provided the above end exists, $T^G$ is a monad regardless of whether $G$ has an adjoint, which is the usual way one thinks of functors (in general) giving rise to monads.

It also turns out that this construction is a sort of generalization of the adjunction case. If we do have $F \dashv G$, this gives rise to a monad $GF$. But, in such a case, $T^G \cong GF$, so the codensity monad is the same as the monad given by the adjunction when it exists, but codensity may exist when there is no adjunction.

In Haskell, this all becomes a bit simpler (to my eyes, at least). Our category $\mathcal{D}$ is always $\mathbf{Hask}$, which is enriched in itself, so powers are just function spaces. And all the functors we write will be rather like $\mathbf{Hask}$ (objects will come from kinds we can quantify over), so ends of functors will look like forall r. F r r where $F : \mathcal{C}^{op} \times \mathcal{C} \rightarrow \mathbf{Hask}$. Then:

newtype Codensity f a = Codensity (forall r. (a -> f r) -> f r)

As mentioned, we've known for a while that we can write a Monad instance for Codensity f without caring at all about f.
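For reference, one way to write it looks like this (a sketch; note that nothing in it constrains f):

instance Functor (Codensity f) where
  fmap f (Codensity m) = Codensity (\k -> m (k . f))

instance Monad (Codensity f) where
  return a = Codensity (\k -> k a)
  Codensity m >>= f =
    Codensity (\k -> m (\a -> case f a of Codensity n -> n k))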

As for the adjunction correspondence, consider the adjunction between products and exponentials: $ - \times S \dashv S \rightarrow - $

This gives rise to the monad $S \rightarrow (- \times S)$, the state monad. According to the facts above, we should have that Codensity (s ->) (excuse the sectioning) is the same as state, and if we look, we see:

forall r. (a -> s -> r) -> s -> r

which is the continuation passing, or Church (or Boehm-Berarducci) encoding of the monad.
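To make the identification concrete, here is a sketch of both directions of the isomorphism, taking state in its raw s -> (a, s) form:

toState :: (forall r. (a -> s -> r) -> s -> r) -> s -> (a, s)
toState m = m (,)

fromState :: (s -> (a, s)) -> (a -> s -> r) -> s -> r
fromState m k s = uncurry k (m s)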

Now, it's also well known that for any monad, we can construct an adjunction that gives rise to it. There are multiple ways to do this, but the most accessible in Haskell is probably via the Kleisli category. So, given a monad $M$ on $\mathbf{Hask}$, there is a category $\mathbf{Hask}_M$ with the same objects, but where $\mathbf{Hask}_M(a, b) = \mathbf{Hask}(a, Mb)$. The identity for each object is return and composition of arrows is:

(f >=> g) x = f x >>= g

Our two functors are:

F a = a
F f = return . f

U a = M a
U f = (>>= f)

Verifying that $F \dashv U$ requires only that $\mathbf{Hask}_M(F-, =) \cong \mathbf{Hask}(-, U\!\!=)$, but this is just $\mathbf{Hask}(-, M\!\!=) \cong \mathbf{Hask}(-, M\!\!=)$, which is a triviality. Now we should have that $T^U = M$.

So, one of the simplest monads is reader, $(e \rightarrow)$. Now, $U$ just takes objects in the Kleisli category (which are objects in $\mathbf{Hask}$) and applies $M$ to them, so we should have Codensity (e ->) is reader. But earlier we had Codensity (e ->) was state. So reader is state, right?

We can actually arrive at this result another way. One of the most famous pieces of category theory is the Yoneda lemma, which states that the following correspondence holds for any functor $F : \mathcal{C} \rightarrow \mathbf{Set}$:

$ Fa \cong \mathbf{Set}^{\mathcal{C}}\left(\mathcal{C}(a,-), F\right) $

This also works for any functor into $\mathbf{Hask}$ and looks like:

F a ~= forall r. (a -> r) -> F r

for $F : \mathbf{Hask} \rightarrow \mathbf{Hask}$. But we also have our functor $U : \mathbf{Hask}_M \rightarrow \mathbf{Hask}$, which should look more like:

U a ~= forall r. (a -> M r) -> U r
M a ~= forall r. (a -> M r) -> M r

So, we fill in M = (e ->) and get that reader is isomorphic to state, right? What's going on?

To see, we have to take a closer look at natural transformations. Given two categories $\mathcal{C}$ and $\mathcal{D}$, and functors $F, G : \mathcal{C} \rightarrow \mathcal{D}$, a natural transformation $\phi : F \Rightarrow G$ is a family of maps $\phi_a : Fa \rightarrow Ga$ such that for every $f : a \rightarrow b$ the following diagram commutes:

$ \begin{array}{ccc} Fa & \xrightarrow{\ \phi_a\ } & Ga \\ {\scriptstyle Ff}\big\downarrow & & \big\downarrow{\scriptstyle Gf} \\ Fb & \xrightarrow{\ \phi_b\ } & Gb \end{array} $

The key piece is what the morphisms look like. It's well known that parametricity ensures the naturality of t :: forall a. F a -> G a for $F, G : \mathbf{Hask} \rightarrow \mathbf{Hask}$, and it also works when the source is $\mathbf{Hask}^{op}$. It should also work for a category, call it $\mathbf{Hask}^{\sim}$, which has Haskell types as objects, but where $\mathbf{Hask}^{\sim}(a, b) = \mathbf{Hask}(a, b) \times \mathbf{Hask}(b, a)$, which is the sort of category that newtype Endo a = Endo (a -> a) is a functor from. So we should be at liberty to say:

Codensity Endo a = forall r. (a -> r -> r) -> r -> r ~= [a]
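
The witnesses are the familiar foldr/build pair (a sketch; the primed names just avoid clashing with the Prelude):

toList' :: (forall r. (a -> r -> r) -> r -> r) -> [a]
toList' t = t (:) []

fromList' :: [a] -> (a -> r -> r) -> r -> r
fromList' xs c n = foldr c n xs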

However, hom types for $\mathbf{Hask}_M$ are not merely made up of $\mathbf{Hask}$ hom types on the same arguments, so naturality turns out not to be guaranteed. A functor $F : \mathbf{Hask}_M \rightarrow \mathbf{Hask}$ must take a Kleisli arrow $f : b \rightarrow Mc$ to an arrow $Ff : Fb \rightarrow Fc$, and transformations must commute with that mapping. So, if we look at our use of Yoneda, we are considering transformations $\phi : \mathbf{Hask}_M(a, -) \Rightarrow U$:

$ \begin{array}{ccc} \mathbf{Hask}_M(a,b) & \xrightarrow{\ \phi_b\ } & Ub \\ {\scriptstyle \mathbf{Hask}_M(a,f)}\big\downarrow & & \big\downarrow{\scriptstyle Uf} \\ \mathbf{Hask}_M(a,c) & \xrightarrow{\ \phi_c\ } & Uc \end{array} $

Now, $\mathbf{Hask}_M(a,b) = \mathbf{Hask}(a, Mb)$ and $Ub = Mb$. So

t :: forall r. (a -> M r) -> M r

will get us the right type of maps. But, the above commutative square corresponds to the condition that for all f :: b -> M c:

t . (>=> f) = (>>= f) . t

So, if we have h :: a -> M b, Kleisli composing it with f and then feeding to t is the same as feeding h to t and then binding the result with f.

Now, if we go back to reader, we can consider the reader morphism:

f = const id :: a -> e -> e

For all relevant m and g, m >>= f = id and g >=> f = f. So the naturality condition here states that t f = id.

Now, t :: forall r. (a -> e -> r) -> e -> r. The general form of these is state actions (I've split e -> (a, e) into two pieces):

t f e = f (rd e) (st e)
  where
  rd :: e -> a
  st :: e -> e

If f = const id, then:

t (const id) e = st e
 where
 st :: e -> e

But our naturality condition states that this must be the identity, so we must have st = id. That is, the naturality condition selects t for which the corresponding state action does not change the state, meaning it is equivalent to a reader action! Presumably the definition of an end (which involves dinaturality) enforces a similar condition, although I won't work through it, as it'd be rather more complicated.

However, we have learned a lesson. Quantifiers do not necessarily enforce (di)naturality for every category with objects of the relevant kind. It is important to look at the hom types, not just the objects. In this case, the point of failure seems to be the common, extra s. Even though the type contains natural transformations for the similar functors over $\mathbf{Hask}$, they can (in general) still manipulate the shared parameter in ways that are not natural for the domain in question.

I am unsure of how exactly one could enforce the above condition in (Haskell's) types. For instance, if we consider:

forall r m. Monad m => (a -> m r) -> m r

This still contains transformations of the form:

t k = k a >> k a

And for this to be natural would require:

(k >=> f) a >> (k >=> f) a = (k a >> k a) >>= f

Which is not true for all possible instantiations of f. It seems as though leaving m unconstrained would be sufficient, as all that could happen is t feeding a value to k and yielding the result, but it seems likely to be over-restrictive.

Searching Infinity Parametrically
Edward Kmett · Sun, 25 Dec 2011 · http://comonad.com/reader/2011/searching-infinity/

Andrej Bauer recently gave a really nice talk on how you can exploit side-effects to make a faster version of Martin Escardo's pseudo-paradoxical combinators.

A video of his talk is available over on his blog, and his presentation is remarkably clear, and would serve as a good preamble to the code I'm going to present below.

Andrej gave a related invited talk back at MSFP 2008 in Iceland, and afterwards over lunch I cornered him (with Dan Piponi) and explained how you could use parametricity to close over the side-effects of monads (or arrows, etc) but I think that trick was lost in the chaos of the weekend, so I've chosen to resurrect it here, and improve it to handle some of his more recent performance enhancements, and show that you don't need side-effects to speed up the search after all!

First, we'll need to import a few things:

 
{-# LANGUAGE RankNTypes #-}
 
import Data.Maybe (fromMaybe)
import Control.Applicative
import Data.IntMap (IntMap)
import qualified Data.IntMap as IntMap
import Control.Monad
import Control.Monad.Trans.Class
import Data.Functor.Identity
 

What we are looking for is an implementation of Hilbert's epsilon.

This is a formal mechanism for eliminating existentials over some non-empty set X by defining a function

 
ε: (X -> Prop) -> X
 

such that if there exists an x in X such that p(x) holds then p(ε(p)) holds.

As noted by Andrej, we could reify this constructively as a function "epsilon :: (X -> Bool) -> X" for some X.

Now, for some sets, Hilbert's epsilon is really easy to define. If X is a finite set, you can just exhaustively enumerate all of the options returning a member of X such that the property holds if any of them do, otherwise since X is non-empty, just return one of the elements that you tested.

This would be a pretty boring article and I'd be back to eating Christmas dinner with my family if that was all there was to it. However, certain infinite spaces can also be searched.

Last year, Luke Palmer wrote a post on "Searchable Data Types" that might also serve as a good introduction. In that article he led off with the easiest infinite space to search, the lazy naturals, or the 'one point compactification of the naturals'. That is to say the natural numbers extended with infinity.

 
data LazyNat = Zero | Succ LazyNat
infinity :: LazyNat
infinity = Succ infinity
 

Now we can implement Palmer's epsilon (called lyingSearch in his article).

 
palmer :: (LazyNat -> Bool) -> LazyNat
palmer p
  | p Zero = Zero
  | otherwise = Succ $ palmer $ p . Succ
 

The trick to making this work is that we require the predicate you pass to terminate in a bounded amount of time no matter what input you give it, and since we're working with the naturals extended with infinity, if no natural satisfies the predicate, we'll just keep returning a longer and longer chain of Succ's, effectively yielding infinity.

To check to see if the returned number satisfies the predicate you can always use p (palmer p). The predicate is required to terminate in finite time, even when given infinity, so this will yield a Bool and not bottom out unless the user supplied predicate takes an unbounded amount of time.
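
As a quick sanity check, here is a hypothetical predicate that terminates in bounded time on every input, including infinity:

 
atLeast :: Int -> LazyNat -> Bool
atLeast 0 _        = True
atLeast _ Zero     = False
atLeast k (Succ n) = atLeast (k - 1) n
 

With it, palmer (atLeast 3) unfolds to Succ (Succ (Succ Zero)), and atLeast 3 (palmer (atLeast 3)) is True.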

I posted a reply to Luke's article when it came up on reddit which included a Hinze-style generic implementation of his lyingSearch predicate, which you can see now is just Hilbert's epsilon for arbitrary recursive polynomial data types.

http://www.reddit.com/r/haskell/comments/e7nij/searchable_data_types/c15zs6l

Another space we can search is the Cantor space 2^N.

 
type Cantor = Integer -> Bool
 

With that we jump clear from countable infinity to uncountable infinity, but it can still be searched in finite time!

This is the space we'll be paying attention to for the rest of this article.

First we'll define how to "book a room in Hilbert's Hotel."

 
infixr 0 #
(#) :: Bool -> Cantor -> Cantor
(x # a) 0 = x
(x # a) i = a (i - 1)
 

Then this can be used to obtain the following implementation of Hilbert's epsilon for the Cantor space, attributed by Andrej to Ulrich Berger.

 
berger :: (Cantor -> Bool) -> Cantor
berger p =
  if ex $ \a -> p $ False # a
  then False # berger $ \a -> p $ False # a
  else True  # berger $ \a -> p $ True # a
  where ex q = q (berger q)
 

This version is particularly close in structure to the one for searching the LazyNats, but it is dreadfully slow!

It would be nice to be able to search the space faster and that is just what Martin Escardo's improved version does, through a more sophisticated divide and conquer technique.

 
escardo :: (Cantor -> Bool) -> Cantor
escardo p = go x l r where
  go x l r n =  case divMod n 2 of
    (0, 0) -> x
    (q, 1) -> l q
    (q, 0) -> r $ q-1
  x = ex $ \l -> ex $ \r -> p $ go True l r
  l = escardo $ \l -> ex $ \r -> p $ go x l r
  r = escardo $ \r -> p $ go x l r
  ex q = q (escardo q)
 

To proceed from here I'll need a State monad:

 
newtype S s a = S { runS :: s -> (a, s) }
 
instance Functor (S s) where
  fmap f (S m) = S $ \s -> case m s of
    (a, s') -> (f a, s')
 
instance Applicative (S s) where
  pure = return
  (<*>) = ap
 
instance Monad (S s) where
  return a = S $ \s -> (a, s)
  S m >>= k = S $ \s -> case m s of
    (a, s') -> runS (k a) s'
 

And now we've reached the point. From here, Andrej's pure code ends, and his side-effecting ocaml and custom programming language start. The first thing he does is compute the modulus of continuity by using a side-effect that writes to a reference cell which he very carefully ensures doesn't leak out of scope, so he doesn't have to concern himself with the proposition code editing the value of the reference.

 
let mu f a =
  let r = ref 0 in
  let b n = (r := max n !r; a n) in
    ignore (f b);
    !r
 

To obtain the same effect we'll instead make a predicate using the state monad to model the single reference cell.

 
-- bad
modulus :: (Num b, Ord s) =>
  ((s -> S s a) -> S b c) -> (s -> a) -> b
 

We can mash b and s together, and try to make the ordering and number agree by claiming that it is instead Real and we'd get the slightly more reasonable looking type:

 
-- still bad
modulus :: Real a =>
  ((a -> S n b) -> S n c) -> (a -> b) -> a
 

In the imperative code, lexical scoping had ensured that no other code could edit the reference cell, but with this type we don't have that. The predicate is allowed to use arbitrary state actions to muck with the modulus of continuity even though the only thing that should be editing it is the wrapper beta that we placed around alpha.

But how can we ensure that the end user couldn't gain access to any of the additional functionality from the monad? Parametricity!

 
-- getting better
modulus :: Real a =>
  (forall f. Monad f => (a -> f b) -> f c) ->
  (a -> b) ->
  a
 

Here the only thing you are allowed to assume about f is that it forms a monad. This gives you access to return and >>=, but the predicate can't do anything interesting with them. All it can do is work with what is effectively the identity monad, since it knows no additional properties!

We can have mercy on the end user and give them a little bit more syntactic sugar, since it doesn't cost us anything to let them also have access to the Applicative instance.

 
-- good
modulus :: Real a =>
  (forall f. (Monad f, Applicative f) => (a -> f b) -> f c) ->
  (a -> b) ->
  a
 

With that we can show Andrej's version of the modulus of continuity calculation does not need side-effects!
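
One possible definition, a minimal sketch in terms of the S monad above (this is not Andrej's code; it just threads the running maximum through the state, starting from 0):

 
modulus :: Real a =>
  (forall f. (Monad f, Applicative f) => (a -> f b) -> f c) ->
  (a -> b) -> a
modulus phi alpha = snd (runS (phi beta) 0) where
  -- beta behaves like alpha, but records the largest argument it sees
  beta n = S (\s -> (alpha n, max s n))
 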

> modulus (\a -> a 10 >>= a) (\n -> n * n)
100
 

Admittedly plumbing around the monadic values in our proposition is a bit inconvenient.

His next example was written in a custom ocaml-like programming language. For translating his effect type into Haskell using parametricity, we'll need a CPS'd state monad, so we can retry from the current continuation while we track a map of assigned values.

 
newtype K r s a = K { runK :: (a -> s -> r) -> s -> r }
 
instance Functor (K r s) where
  fmap = liftM
 
instance Applicative (K r s) where
  pure = return
  (<*>) = ap
 
instance Monad (K r s) where
  return a = K $ \k -> k a
  K m >>= f = K $ \k -> m $ \a -> runK (f a) k
 

For those of you who have been paying attention to my previous posts, K r s is just a Codensity monad!

 
neighborhood ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  IntMap Bool
neighborhood phi = snd $ runK (phi beta) (,) IntMap.empty where
  beta n = K $ \k s -> case IntMap.lookup n s of
    Just b -> k b s
    Nothing -> case k True (IntMap.insert n True s) of
      (False, _) -> k False (IntMap.insert n False s)
      r -> r
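 

For intuition, here is a small (hypothetical) example of what neighborhood computes; trying True for both bits already satisfies the predicate, so only those two tentative assignments are recorded:

 
neighborhood (\a -> (&&) <$> a 0 <*> a 1) -- fromList [(0,True),(1,True)]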
 

With that we can adapt the final version of Hilbert's epsilon for the Cantor space that Andrej provided to run in pure Haskell.

 
bauer ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  Cantor
bauer p = \n -> fromMaybe True $ IntMap.lookup (fromIntegral n) m where
  m = neighborhood p
 

With a little work you can implement a version of an exists and forAll predicate on top of that by running them through the identity monad.

 
exists ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  Bool
forAll ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  Bool
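 
-- one way to fill these in (a sketch): run the predicate through
-- Identity against the point that bauer finds, and negate for forAll
exists p = runIdentity (p (Identity . c . toInteger)) where
  c = bauer p
 
forAll p = not (exists (\k -> fmap not (p k)))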
 

I've gone further in playing with this idea, using monad homomorphisms rather than simply relying on the canonical homomorphism from the identity monad. You can get the gist of it here:

https://gist.github.com/1518767

This permits the predicates themselves to embed some limited monadic side-effects, but then you get more extensional vs. intensional issues.

An obvious direction from here is to fiddle with a version of Martin Escardo's search monad that takes advantage of these techniques, but I'll leave the exploration of these ideas to the reader for now and go enjoy Christmas dinner.

Happy Holidays,
Edward Kmett

A Product of an Imperfect Union
Edward Kmett · Fri, 01 Jul 2011 · http://comonad.com/reader/2011/a-product-of-an-imperfect-union/

In the last few posts, I've been talking about how we can derive monads and monad transformers from comonads. Along the way we learned that there are more monads than comonads in Haskell.

The question I hope to answer this time is whether or not we can turn any Haskell Comonad into a comonad transformer.

Comonads from Comonads

In Monads from Comonads, we built the comonad-to-monad transformer

 
newtype Co w m a = Co (forall r. w (a -> r) -> r)
 

by sandwiching a Comonad w in the middle of a trivial Codensity monad, then proceeded to show that at least in the case where our comonad was given rise to by an adjunction f -| g : Hask -> Hask, we could reason about this as if we had

 
Co w ~ Co (f . g) ~ g . f
 

Now, Codensity monads are right Kan extensions.

So, what happens if we try to do the same thing to a Left Kan extension?

Using

 
{-# LANGUAGE GADTs, FlexibleInstances #-}
 
import Control.Comonad
import Control.Comonad.Trans.Class
 

we can define

 
data L w a where
  L :: w (r -> a) -> r -> L w a
 

and a number of instances pop out for free, cribbed largely from the definition for Density.

 
instance Functor w => Functor (L w) where
  fmap f (L w r) = L (fmap (f .) w) r
 
instance ComonadTrans L where
  lower (L w r) = fmap ($r) w
 
instance Extend w => Extend (L w) where
  duplicate (L w s) = L (extend L w) s
 
instance Comonad w => Comonad (L w) where
  extract (L w r) = extract w r
 

Reasoning as before about w as if it were composed of an adjunction f -| g : Hask -> Hask to build some intuition, we can see:

 
L w a ~ exists r. (w (r -> a), r)
      ~ exists r. (f (g (r -> a)), r)
      ~ exists r. (f (), g (r -> a), r)
      ~ exists r. (f (), f () -> r -> a, r)
      ~ exists r. (f () -> r -> a, f r)
      ~ exists r. (f r -> a, f r)
      ~ Density f a
      ~ Lan f f a
      ~ (f . Lan f Id) a
      ~ (f . g) a
      ~ w a
 

The latter few steps require identities established in my second post on Kan extensions.

With that we obtain the "remarkable" insight that L ~ IdentityT, which I suppose is much more obvious when just looking at the type

 
data L w a where
  L :: w (r -> a) -> r -> L w a
 

and seeing the existentially quantified r as a piece of the environment, being used to build an a, since there is nothing else we can do with it, except pass it in to each function wrapped by w! So at first blush, we've gained nothing.

The key observation is that in one case we wound up with something isomorphic to the codensity monad of our right adjoint, while in the other case we wound up with the density comonad of our left adjoint. The former is isomorphic to the monad given by our adjunction, while the latter is isomorphic to the comonad, which is, unfortunately, right where we started!

In The Future All Comonads are Comonad Transformers!

Of course, we don't have to just modify a trivial left Kan extension. Let's tweak the Density comonad of another comonad!

 
data D f w a where
  D :: w (f r -> a) -> f r -> D f w a
 

Since both arguments will be comonads, and I want this to be a comonad transformer, I'm going to swap the roles of the arguments relative to the definition of CoT w m. The reason is that D f w is a Comonad, regardless of the properties of f, so long as w is a Comonad. This is similar to how Density f is a Comonad regardless of what f is, as long as it has kind * -> *.

The implementation of D is identical to L above, just as CoT and Co share implementations and ContT and Cont do.

 
instance Functor w => Functor (D f w) where
  fmap f (D w r) = D (fmap (f .) w) r
 
instance Extend w => Extend (D f w) where
  duplicate (D w s) = D (extend D w) s
 
instance Comonad w => Comonad (D f w) where
  extract (D w r) = extract w r
 
instance ComonadTrans (D f) where
  lower (D w r) = fmap ($r) w
 

But in addition to being able to lower :: D f w a -> w a, we can also lower to the other comonad!

 
fstD :: (Extend f, Comonad w) => D f w a -> f a
fstD (D w r) = extend (extract w) r
 
sndD :: Comonad w => D f w a -> w a
sndD = lower
 

This means that if either comonad provides us with a piece of functionality we can exploit it.
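
For example, if f happens to be a Store we can still read its focus after transforming. A sketch, assuming the ComonadStore class (with pos) from comonad-transformers is in scope:

 
posD :: (ComonadStore s f, Comonad w) => D f w a -> s
posD = pos . fstD
 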

Selling Products

In general Monad products always exist:

data Product m n a = Pair { runFst :: m a, runSnd :: n a }

instance (Monad m, Monad n) => Monad (Product m n) where
   return a = Pair (return a) (return a)
   Pair ma na >>= f = Pair (ma >>= runFst . f) (na >>= runSnd . f)

and Comonad coproducts always exist:

 
newtype Coproduct f g a = Coproduct { getCoproduct :: Either (f a) (g a) }
 
left :: f a -> Coproduct f g a
left = Coproduct . Left
 
right :: g a -> Coproduct f g a
right = Coproduct . Right
 
coproduct :: (f a -> b) -> (g a -> b) -> Coproduct f g a -> b
coproduct f g = either f g . getCoproduct
 
instance (Extend f, Extend g) => Extend (Coproduct f g) where
  extend f = Coproduct . coproduct
    (Left . extend (f . Coproduct . Left))
    (Right . extend (f . Coproduct . Right))
 
instance (Comonad f, Comonad g) => Comonad (Coproduct f g) where
  extract = coproduct extract extract
 

but Christoph Lüth and Neil Ghani showed that monad coproducts don't always exist!

On the other hand what we built up above looks a lot like a comonad product!

To see that, first we'll note some of the product-like things we can do:

fstD and sndD act a lot like fst and snd, projecting out the parts of our product, and it turns out we can "braid" our almost-products, interchanging the left and right hand side.

 
braid :: (Extend f, Comonad w) => D f w a -> D w f a
braid (D w r) = D (extend (flip extract) r) w
 

(I use scary air-quotes around braid, because it doesn't let us braid them in a categorical sense, as we'll see.)

After braiding, one of our projections swaps places as we'd expect:

 
sndD (braid (D w r)) = -- by braid def
sndD (D (extend (flip extract) r) w) = -- by sndD (and lower) def
fmap ($w) (extend (flip extract) r) = -- extend fmap fusion
extend (($w) . flip extract) r = -- @unpl
extend (\t -> flip extract t w) r = -- flip . flip = id
extend (extract w) r = -- by fstD def
fstD (D w r)
 

But we stall when we try to show fstD . braid = sndD.

Why is that?

A Product of an Imperfect Union

Last time, when we inspected CoT w m a we demonstrated that on one hand given a suitable adjunction f -| g, such that w = f . g, Co w ~ Co (f . g) ~ (g . f), but on the other CoT w m a was bigger than g . m . f, and that if n -| m, then CoT w m a ~ g . m . n . f.

Of course, these two results agree, if you view Co w as CoT w Identity, where Identity -| Identity, since Identity ~ Identity . Identity.

Therefore it should come as no surprise that given w = f . g, for a suitable adjunction f -| g, then D w j a is bigger than f . j . g. In fact, if j -| k, then D w j ~ f . j . k . g.

So what is happening is that we have only managed to "break one of our comonads in half", and D w j a lets you do 'too much stuff' with the j portion of the comonad. This keeps us from being symmetric.

Moreover it turns out to be a bit trickier to build one than to just hand in a w (f a) or w a and an f a to build our product-like construction.

Even so, exploiting Density was enough to transform any comonad into a comonad-transformer and to enable us to access the properties of either the comonad we are transforming with, or the comonad that we are transforming.

More on Comonads as Monad Transformers
Edward Kmett · Thu, 30 Jun 2011 · http://comonad.com/reader/2011/more-on-comonads-as-monad-transformers/

Last time in Monad Transformers from Comonads I showed that given any comonad we can derive the monad-transformer

 
newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }
 

and so demonstrated that there are fewer comonads than monads in Haskell, because while every Comonad gives rise to a Monad transformer, there are Monads, like IO, ST s, and STM, that do not arise from any comonad.

I want to elaborate a bit more on this topic.

In Monads from Comonads we observed that for non-transformer version of CoT

 
type Co w = CoT w Identity
 

under the assumption that w = f . g for f -| g : Hask -> Hask, then

 
Co w ~ Co (f . g) ~ g . f
 

This demonstrated that Co w is isomorphic to the monad we obtain by composing the adjunction that gave rise to our comonad the other way around.

But what about CoT?

Sadly CoT is a bit bigger.

We can see this by starting to apply the same treatment that we gave Co.

 
CoT w m a ~ forall r. w (a -> m r) -> m r
          ~ forall r. f (g (a -> m r)) -> m r
          ~ forall r. f (f () -> a -> m r) -> m r
          ~ forall r. f (a -> f () -> m r) -> m r
          ~ forall r. (a -> f () -> m r, f ()) -> m r
          ~ forall r. (a -> f () -> m r) -> f () -> m r
          ~ forall r. (a -> g (m r)) -> g (m r)
          ~ Codensity (g . m) a
 

(I'm using . to represent Compose for readability.)

But we've seen before that Codensity g a is in a sense bigger than g a, since given an Adjunction f -| g, Codensity g a ~ (g . f) a, not g a.

Moreover we can compose adjunctions:

 
instance
    (Adjunction f g, Adjunction f' g') =>
    Adjunction (Compose f' f) (Compose g g') where
  unit   = Compose . leftAdjunct (leftAdjunct Compose)
  counit = rightAdjunct (rightAdjunct getCompose) . getCompose
 

So if n -| m, then we can see that Codensity (g . m) a ~ g . m . n . f, rather than the smaller g . m . f, which we can obtain using AdjointT f g m from Control.Monad.Trans.Adjoint in adjunctions.

So CoT isn't the smallest monad transformer that would be given by an adjunction.

In fact, it is isomorphic to AdjointT f g (Codensity m) a instead of AdjointT f g m a.

Sadly, there doesn't appear to be a general purpose construction of the smaller transformer just given an unseparated w = f . g.

Monad Transformers from Comonads
Edward Kmett · Wed, 29 Jun 2011 · http://comonad.com/reader/2011/monad-transformers-from-comonads/

Last time, I showed that we can transform any Comonad in Haskell into a Monad in Haskell.

Today, I'll show that we can go one step further and derive a monad transformer from any comonad!

A Comonad to Monad-Transformer Transformer

Given

 
newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }
 

we can easily embed the type of the previous Co and create a smart constructor and deconstructor in the style of the MTL.

 
type Co w = CoT w Identity
 
co :: Functor w => (forall r. w (a -> r) -> r) -> Co w a
co f = CoT (Identity . f . fmap (fmap runIdentity))
 
runCo :: Functor w => Co w a -> w (a -> r) -> r
runCo m = runIdentity . runCoT m . fmap (fmap Identity)
 

In fact, as between Cont and ContT, none of the major instances even change!

 
instance Functor w => Functor (CoT w m) where
  fmap f (CoT w) = CoT (w . fmap (. f))
 
instance Extend w => Apply (CoT w m) where
  mf <.> ma = mf >>- \f -> fmap f ma
 
instance Extend w => Bind (CoT w m) where
  CoT k >>- f = CoT (k . extend (\wa a -> runCoT (f a) wa))
 
instance Comonad w => Applicative (CoT w m) where
  pure a = CoT (`extract` a)
  mf <*> ma = mf >>= \f -> fmap f ma
 
instance Comonad w => Monad (CoT w m) where
  return a = CoT (`extract` a)
  CoT k >>= f = CoT (k . extend (\wa a -> runCoT (f a) wa))
 

We can use CoT as a Monad transformer, or lift IO actions:

 
instance Comonad w => MonadTrans (CoT w) where
  lift m = CoT (extract . fmap (m >>=))
 
instance (Comonad w, MonadIO m) => MonadIO (CoT w m) where
  liftIO = lift . liftIO
 

(This monad transformer is available in my kan-extensions package as of 1.9.0 on hackage.)

And as before we can lift and lower CoKleisli arrows, although the results are monadic when lowered.

 
liftCoT0 :: Comonad w => (forall a. w a -> s) -> CoT w m s
liftCoT0 f = CoT (extract <*> f)
 
lowerCoT0 :: (Functor w, Monad m) => CoT w m s -> w a -> m s
lowerCoT0 m = runCoT m . (return <$)
 
lowerCo0 :: Functor w => Co w s -> w a -> s
lowerCo0 m = runIdentity . runCoT m . (return <$)
 
liftCoT1 :: (forall a. w a -> a) -> CoT w m ()
liftCoT1 f = CoT (`f` ())
 
lowerCoT1 :: (Functor w, Monad m) => CoT w m () -> w a -> m a
lowerCoT1 m = runCoT m . fmap (const . return)
 
lowerCo1 :: Functor w => Co w () -> w a -> a
lowerCo1 m = runIdentity . runCoT m . fmap (const . return)
 

Since we could mean the MonadFoo instance derived from its comonadic equivalent or from the one we wrap as a monad transformer, we choose to default to the one from the monad, but we can still provide the lifted comonadic actions:

 
posW :: (ComonadStore s w, Monad m) => CoT w m s
posW = liftCoT0 pos
 
peekW :: (ComonadStore s w, Monad m) => s -> CoT w m ()
peekW s = liftCoT1 (peek s)
 
peeksW :: (ComonadStore s w, Monad m) => (s -> s) -> CoT w m ()
peeksW f = liftCoT1 (peeks f)
 
askW :: (ComonadEnv e w, Monad m) => CoT w m e
askW = liftCoT0 (Env.ask)
 
asksW :: (ComonadEnv e w, Monad m) => (e -> a) -> CoT w m a
asksW f = liftCoT0 (Env.asks f)
 
traceW :: (ComonadTraced e w, Monad m) => e -> CoT w m ()
traceW e = liftCoT1 (Traced.trace e)
 

and we just lift the monadic actions as usual:

 
instance (Comonad w, MonadReader e m) => MonadReader e (CoT w m) where
  ask = lift Reader.ask
  local f m = CoT (local f . runCoT m)
 
instance (Comonad w, MonadState s m) => MonadState s (CoT w m) where
  get = lift get
  put = lift . put
 
instance (Comonad w, MonadWriter e m) => MonadWriter e (CoT w m) where
  tell = lift . tell
  pass m = CoT (pass . runCoT m . fmap aug) where
    aug f (a,e) = liftM (\r -> (r,e)) (f a)
  listen = error "Control.Monad.Co.listen: TODO"
 
instance (Comonad w, MonadError e m) => MonadError e (CoT w m) where
  throwError = lift . throwError
  catchError = error "Control.Monad.Co.catchError: TODO"
 
instance (Comonad w, MonadCont m) => MonadCont (CoT w m) where
  callCC = error "Control.Monad.Co.callCC: TODO"
 

I welcome help working through the missing methods above.

This should go a long way towards showing the fact that there are strictly fewer comonads than monads in Haskell, and of course that there are no analogues to IO, STM and ST s in the world of Haskell comonads!

Every comonad gives you a monad-transformer, but not every monad is a monad transformer.

Monads from Comonads
Edward Kmett · Mon, 27 Jun 2011 · http://comonad.com/reader/2011/monads-from-comonads/

Today I'll show that you can derive a Monad from any old Comonad you have lying around.

But first, we'll need to take a bit of a detour.

A Monad Sandwich

We'll need the definition of an adjunction on the category of Haskell types, which we can strip down and borrow from my adjunctions package.

 
class (Functor f, Representable u) =>
         Adjunction f u | f -> u, u -> f where
    leftAdjunct :: (f a -> b) -> a -> u b
    rightAdjunct :: (a -> u b) -> f a -> b
 

Here we can define our Adjunction by defining leftAdjunct and rightAdjunct, such that they witness an isomorphism from (f a -> b) to (a -> u b)

Every Adjunction F -| G : C -> D gives rise to a monad GF on D and a Comonad FG on C.

In addition to this, you can sandwich an additional monad M on C in between GF to give a monad GMF on D:

Control.Monad.Trans.Adjoint

and you can sandwich a comonad W on D in between F and G to yield the comonad FWG on C:

Control.Comonad.Trans.Adjoint

A Contravariant Comonad Sandwich

As was first shown to me by Derek Elkins, this construction works even when C is not the category of Haskell types!

Consider the Contravariant functor Op r:

 
newtype Op a b = Op { getOp :: b -> a } 
 
instance Contravariant (Op a) where
  contramap f g = Op (getOp g . f)
 

We can view Op r as a functor from Hask^op -> Hask or as one from Hask -> Hask^op.

We can define a notion of a contravariant adjunction F -| G : Hask^op -> Hask.

Data.Functor.Contravariant.Adjunction

 
class (Contravariant f, Corepresentable g) =>
       Adjunction f g | f -> g, g -> f where
    leftAdjunct :: (b -> f a) -> a -> g b
    rightAdjunct :: (a -> g b) -> b -> f a
 

Where, now, leftAdjunct and rightAdjunct witness the isomorphism from (f a <- b) to (a -> g b), which means once you flip the arrow around both seem to be going the same way. Ultimately any contravariant adjunction on Hask is comprised of two isomorphic functors, each self-adjoint.

This gives rise to one notion of a comonad-to-monad transformer!

Control.Monad.Trans.Contravariant.Adjoint

But can we do better?

An End as the Means

First, some boilerplate.

 
{-# LANGUAGE Rank2Types, FlexibleInstances, FlexibleContexts, MultiParamTypeClasses, UndecidableInstances #-}
 
import Data.Monoid
import Control.Comonad
import Control.Applicative
import Control.Comonad.Store.Class
import Control.Comonad.Env.Class as Env
import Control.Comonad.Traced.Class as Traced
import Control.Monad.Reader.Class
import Control.Monad.Writer.Class
import Control.Monad.State.Class
import Data.Functor.Bind
 

Our new comonad to monad transformer is given by

 
newtype Co w a = Co { runCo :: forall r. w (a -> r) -> r }
 

What we've done is added a quantifier to prevent the use of the type r, as we did when describing Codensity and Ran; categorically, we've taken some kind of end. This idea came to me after an observation was made by Russell O'Connor that Conts (Store s) a was pretty close to a continuation passing style version of State s.

Now, we can start spitting out instances for this type.

 
instance Functor w => Functor (Co w) where
   fmap f (Co w) = Co (w . fmap (. f))
 
instance Comonad w => Monad (Co w) where
   return a = Co (`extract` a)
   Co k >>= f = Co (k . extend (\wa a -> runCo (f a) wa))
 
instance Comonad w => Applicative (Co w) where
   mf <*> ma = mf >>= \f -> fmap f ma
   pure a = Co (`extract` a)
 

In my break-out of category-extras, I've split off the semigroupoid structure of Kleisli-, co-Kleisli-, and static- arrow composition as Bind, Extend and Apply respectively, so we can make use of slightly less structure and get slightly less structure in turn:

 
instance Extend w => Bind (Co w) where
   Co k >>- f = Co (k . extend (\wa a -> runCo (f a) wa))
 
instance Extend w => Apply (Co w) where
   mf <.> ma = mf >>- \f -> fmap f ma
 

From comonad-transformers to the mtl

We can look at how this transforms some particular comonads.

The comonadic version of State is Store. Looking at Co (Store s) a

 
Co (Store s) a ~ forall r. ((s -> a -> r, s) -> r)
               ~ forall r. (s -> a -> r) -> s -> r
               ~ forall r. (a -> s -> r) -> s -> r
               ~ Codensity ((->)s) a
               ~ State s a
 

This gives rise to a leap of intuition that we'll motivate further below:

 
instance ComonadStore s m => MonadState s (Co m) where
   get = Co (\w -> extract w (pos w))
   put s = Co (\w -> peek s w ())
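 

We can make this concrete by running a Co (Store s) computation as a state action (a sketch; store here builds the Store comonad from comonad-transformers):

 
runCoState :: Co (Store s) a -> s -> (a, s)
runCoState m s = runCo m (store (\s' a -> (a, s')) s)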
 

Sadly this breaks down a little for Writer and Reader as the mtl unfortunately has historically included a bunch of extra baggage in these classes. In particular, in reader, the notion of local isn't always available, blocking some otherwise perfectly good MonadReader instances, and I've chosen not to repeat this mistake in comonad-transformers.

 
instance ComonadEnv e m => MonadReader e (Co m) where
   ask = Co (\w -> extract w (Env.ask w))
   local = error "local"
 

Ideally, local belongs in a subclass of MonadReader.

 
class Monad m => MonadReader e m | m -> e where
   ask :: m e
 
class MonadReader e m => MonadLocal e m | m -> e where
   local :: (e -> e) -> m a -> m a
 

Similarly there is a lot of baggage in the MonadWriter. The Monoid constraint isn't necessary for the class itself, just for most instances, and the listen and pass members should be members of a more restricted subclass as well to admit some missing MonadWriter instances, but we can at least provide the notion of tell that is critical to Writer.

 
instance (Monoid e, ComonadTraced e m) => MonadWriter e (Co m) where
   tell m = Co (\w -> Traced.trace m w ())
   listen = error "listen"
   pass = error "pass"
 

But given the split out

 
class Monad m => MonadWriter e m | m -> e where
    tell :: e -> m ()
 
class MonadWriter e m => MonadListen e m | m -> e where
    listen :: m a -> m (a, e)
    pass :: m (a, e -> e) -> m a
 

We could provide this functionality more robustly. (There is a similar subset of Comonads that can provide listen and pass analogues.)

While I am now the maintainer of the mtl, I can't really justify making the above corrections to the class hierarchy at this time. They would theoretically break a lot of code. I would be curious to see how much code would break in practice though.

Combinators Please!

There is a recurring pattern in the above code, so we can also improve this construction by providing some automatic lifting combinators that take certain cokleisli arrows and give us monadic values

 
lift0 :: Comonad w => (forall a. w a -> s) -> Co w s
lift0 f = Co (extract <*> f)
 
lift1 :: (forall a. w a -> a) -> Co w ()
lift1 f = Co (`f` ())
 

along with their inverses

 
lower0 :: Functor w => Co w s -> w a -> s
lower0 (Co f) w = f (id <$ w)
 
lower1 :: Functor w => Co w () -> w a -> a
lower1 (Co f) w = f (fmap const w)
 

(The proofs that these are inverses are quite hairy, and lean heavily on parametricity.)

Then in the above, the code simplifies to:

 
get = lift0 pos
put s = lift1 (peek s)
ask = lift0 Env.ask
tell s = lift1 (Traced.trace s)
 

Co-Density?

Co and Codensity are closely related.

Given any Comonad W, it arises as the composition FG for some adjunction F -| G : Hask -> C.

Considering only the case where C = Hask for now, we can find that

 
Co w a ~ forall r. f (g (a -> r)) -> r
 

Since f -| g, we know that g is Representable by f (), as witnessed by:

 
tabulateAdjunction :: Adjunction f u => (f () -> b) -> u b
tabulateAdjunction f = leftAdjunct f ()
 
indexAdjunction :: Adjunction f u => u b -> f a -> b
indexAdjunction = rightAdjunct . const
 

therefore

 
Co w a ~ forall r. f (g (a -> r)) -> r ~ forall r. f (f () -> a -> r) -> r
 

Since f is a left adjoint functor, f a ~ (a, f ()) by Sjoerd Visscher's elegant little split combinator:

 
split :: Adjunction f u => f a -> (a, f ())
split = rightAdjunct (flip leftAdjunct () . (,))
 

which has the simple inverse

 
unsplit :: Adjunction f g => a -> f () -> f a
unsplit a = fmap (const a)
 

so we can apply that to our argument:

 
Co w a ~ forall r. f (f () -> a -> r) -> r ~
         forall r. (f () -> a -> r, f ()) -> r
 

and curry to obtain

 
Co w a ~ forall r. (f () -> a -> r) -> f () -> r
 

and swap the arguments

 
Co w a ~ forall r. (a -> f () -> r) -> f () -> r
 

then we can tabulate the two subtypes of the form (f () -> r)

 
Co w a ~ forall r. (a -> g r) -> g r
 

and so we find that

 
Co w a ~ Codensity g a
 

Finally,

 
Codensity g a ~ Ran g g a
 

but we showed back in my second article on Kan extensions that given f -| g that

 
Ran g g a ~ g (f a)
 

So Co w ~ Co (f . g) ~ (g . f), the monad given rise to by composing our adjunction the other way!

Comonads from Monads?

Now, given all this you might ask

Is there is a similar construction that lets you build a comonad out of a monad?

Sadly, it seems the answer in Haskell is no.

Any adjunction from Hask -> Hask^op would require two functions

 
class (Contravariant f, Contravariant g) => DualContravariantAdjunction f g where
    leftAdjunct :: (f a -> b) -> g b -> a
    rightAdjunct :: (g b -> a) -> f a -> b
 

where both functors are contravariant.

Surmounting the intuitionistic impossibility of this, then given any such adjunction, there would be a nice coend we could take, letting us sandwich any Monad in the middle as we did above.

There does exist one such very boring Contravariant Functor.

 
newtype Absurd a = Absurd (Absurd a)
 
absurdity :: Absurd a -> b
absurdity (Absurd a) = absurdity a
 
instance Contravariant Absurd where
   contramap f (Absurd as) = Absurd (contramap f as)
 
instance DualContravariantAdjunction Absurd Absurd where
    leftAdjunct _ = absurdity
    rightAdjunct _ = absurdity
 

We can safely sandwich IO within this adjunction from Hask -> Hask^op to obtain a comonad.

 
newtype Silly m a = Silly { runSilly :: Absurd (m (Absurd a)) }
 
instance Monad m => Extend (Silly m) where
    extend f (Silly m) = absurdity m
 
instance Monad m => Comonad (Silly m) where
    extract (Silly m) = absurdity m
 

But for any more interesting such type that actually lets us get at its contents, we would be able to derive a circuitous path to unsafePerformIO!

Since unsafePerformIO should not be constructible without knowing IO specifics, no useful DualContravariantAdjunctions should exist.

Free Monads for Less (Part 3 of 3): Yielding IO
Edward Kmett · Fri, 24 Jun 2011 · http://comonad.com/reader/2011/free-monads-for-less-3/

Last time, I said that I was going to put our cheap new free monad to work, so let's give it a shot.

Yield for Less

Last month at TPDC 2011, Roshan James and Amr Sabry presented Yield: Mainstream Delimited Continuations.

Without calling it such they worked with the free monad of the indexed store comonad. Ignoring the comonad, and just looking at the functor we can see that

 
data Store i o r = Store (i -> r) o
    deriving (Functor)
 

admits the operation

 
class Functor y => Yieldable y i o | y -> i o where
   yield :: o -> y i
 
instance Yieldable (Store i o) i o where
   yield = Store id
 

The free monad of Store i o is a nice model for asymmetric coroutines.
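
Recall from earlier in this series the canonical ADT-based free monad this is built on (reproduced here for reference):

 
data Free f a = Pure a | Free (f (Free f a))
 
instance Functor f => Monad (Free f) where
  return = Pure
  Pure a >>= f = f a
  Free m >>= f = Free (fmap (>>= f) m)
 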

 
type Yield i o = Free (Store i o)
 
liftFree :: Functor f => f a -> Free f a
liftFree = Free . fmap Pure
 
instance Yieldable y i o => Yieldable (Free y) i o where
   yield = liftFree . yield
 

With its Monad, you can write computations like:

 
foo :: Num o => Yield () o ()
foo = do
   yield 1
   yield 2
   yield 3
 

or to streamline one of James and Sabry's examples

 
walk :: Traversable f => f o -> Yield i o (f i)
walk = traverse yield
 

is an asymmetric coroutine that yields each of the elements in a traversable container in turn, replacing them with the responses from whatever is driving the coroutine.
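
To see the coroutine reading concretely, here is a hypothetical driver that answers every yield by applying a fixed response function:

 
pump :: (o -> i) -> Yield i o a -> a
pump f (Pure a)           = a
pump f (Free (Store k o)) = pump f (k (f o))
 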

James and Sabry called this the naive frame grabbing implementation. It is inefficient for the same reasons that we discussed before about retraversing the common trunk in free monads in general. Note that the unchanging trunk here isn't the data structure that we're traversing, but instead the chain of Store i o actions we took to get to the current instruction.

James and Sabry then proceeded to optimize it by hitting it with Codensity.

 
type Iterator i o = Codensity (Yield i o)
 
instance (Monad y, Yieldable y i o) => Yieldable (Codensity y) i o where
   yield = liftCodensity . yield
 

But we've now seen that we can get away with something smaller and get the same benefits.

 
liftF :: Functor f => f a -> F f a
liftF f = F (\kp kf -> kf (fmap kp f))
 
instance Yieldable y i o => Yieldable (F y) i o where
   yield = liftF . yield
 

Flattened, and with the store untupled the new optimized representation looks like:

 
newtype Iterator i o a = Iterator
  { runIterator ::
    forall r. (a -> r) -> (o -> (i -> r) -> r) -> r
  }
 

and provides the same performance improvements for asymmetric coroutines as the Codensity version, used by James and Sabry, which would flatten to the much larger and less satisfying:

 
newtype RSIterator i o a = RSIterator
    { runRSIterator :: forall r.
          (a -> (o -> (i -> r) -> r) -> r)
             -> (o -> (i -> r) -> r) -> r
    }
 

They proceed to give an encoding of delimited continuations into this type and vice versa, but the rest of their material is of no further use to us here.

As an aside, the performance benefits of encoding Oleg's iteratees in continuation passing style arise for much the same reason. The resulting encoding is a right Kan extension!

Who Needs the RealWorld?

As Runar recently tweeted, we have put this to good use here at ClariFI. (Yes, we are hiring! If the contents of my blog make sense to you then email me and let's talk.)

At ClariFI we have a strongly typed functional language that bears a strong resemblance to Haskell, with rank-n types and a number of other interesting type system features that are particularly suited to our problem domain.

However, as with Haskell, we needed a story for how to deal with IO.

Now, GHC models IO with the type

 
newtype IO a =
   IO (State# RealWorld -> (# a, State# RealWorld #))
 

where they model IO by working in a strict state monad, passing around a real world that they promise not to mutate or copy. (In practice, the world is passed around as a 0-byte token.)

This is somewhat problematic semantically, for a number of reasons.

First, there is always the risk of copying it or plumbing it through backwards, so we carefully hide the State# RealWorld from the end user. So this model really wants some notion of uniqueness or linear typing to render it perfectly safe. Heck, the entire Clean language arose from just trying to address this concern.

Second, you don't really get to pass the real world around! We have multiple cores working these days. Stuff is happening in the back end, and as much as you might want it to be, your program isn't responsible for everything that happens in the RealWorld!

Third, if in some sense all bottoms are the same, then forever (putStrLn "Hello World") and undefined are the same in that sense, despite the slew of side-effects that arise from the first one. Now, in Haskell you are allowed to catch some bottoms in the IO monad, and thereby escape from certain doom, but it is still a reasonable objection.

One alternate model for talking about IO is to view it as a free monad of some set of operations. This approach was taken by Wouter Swierstra's Functional Pearl: Data Types a la Carte.

You can then supply some sort of external interpreter that pumps that tree structure, performing the individual actions.

This is unsatisfying because of two things:

First, the performance is abysmal using the common ADT encoding of a free monad. Janis Voigtländer of course showed that this can be rectified by using the Codensity monad.

Second, the set of FFI operations is closed.

What we've done instead is to define our primitive IO actions externally as some FFI type:

 
type FFI o i -- external, side-effecting computation taking o, returning i
 

In practice, these are obtained by reflection by our foreign import statements since we run in the JVM.

Then we looked at the free monad of

 
data OI a = forall o i. OI (FFI o i) o (i -> a) deriving Functor
 

where OI is the indexed store comonad used as the building block above, yielding arguments to FFI of type o, and representing a computation that would resume with a value of type i to obtain a result of type a.

In some sense this yields a more useful notion than Richard Kieburtz's novel, but largely unimplementable, OI comonad from Codata and Comonads in Haskell.

Flattening Free OI would yield the naive

 
-- data FIO a where
--    Return :: a -> FIO a
--    FIO :: FFI o i -> o -> (i -> FIO a) -> FIO a
 

which would be interpreted by the runtime system.

But once we've converted to our Church-encoded Free monad and flattened we obtain:

 
newtype IO a = IO
    (forall r. (a -> r) ->
                 (forall i o. FFI o i -> o -> (i -> r) -> r) ->
                 r)
 

with the Functor and Monad instances defined above.

This then gives us a number of choices on how we implement the runtime system:

We can use the machinery described earlier to convert from IO a to Free OI a or FIO a, and then have the runtime system pattern match on that structure on our main method, taking the FFI actions and their arguments and passing the results in to the language, or we can invert control, and implement things more directly by just defining

 
FFI = (->)
 

while letting the FFI'd methods have side-effects, and then defining

 
unsafePerformIO :: IO a -> a
unsafePerformIO (IO m) = m id (\ oi o ir -> ir (oi o))
 

But regardless of how FFI is implemented, this model provides a clear structural difference between forever (putStrLn "Hello") and undefined and does not require us to believe the pleasant fiction that we can get our hands on the real world and pass it around.

Our actual IO representation is only slightly more complicated than the one presented here in order to deal with the plumbing of an extra continuation to deal with Java exceptions, but the substance of this approach isn't changed by this addition.

[Edit: incorporated a minor typographical fix into Iterator from Max Bolingbroke]
[Edit: fixed Store to be data, a liftM that should have been an fmap, and added the missing Functor constraint that was present in my actual implementation but didn't make it to the web, and a couple of typos in the implementation of RSIterator, all noted by Clumsy.]

Free Monads for Less (Part 2 of 3): Yoneda
Edward Kmett · Fri, 24 Jun 2011 · http://comonad.com/reader/2011/free-monads-for-less-2/

Last time, I started exploring whether or not Codensity was necessary to improve the asymptotic performance of free monads.

This time I'll show that the answer is no; we can get by with something smaller.

The Yoneda Lemma

Another form of right Kan extension arises from the Yoneda lemma.

I covered it briefly in my initial article on Kan extensions, but the inestimable Dan Piponi wrote a much nicer article on how it implies in Haskell that given a Functor instance on f, this type

 
newtype Yoneda f a = Yoneda { runYoneda :: forall r. (a -> r) -> f r }
 

is isomorphic to f a, witnessed by these natural transformations:

 
liftYoneda :: Functor f => f a -> Yoneda f a
liftYoneda a = Yoneda (\f -> fmap f a)
 
lowerYoneda :: Yoneda f a -> f a
lowerYoneda (Yoneda f) = f id
 

That said, you are not limited to applying Yoneda to types that have Functor instances.

This type and these functions are provided by Data.Functor.Yoneda from the kan-extensions package.

Codensity vs. Yoneda

Note, Yoneda f is in some sense smaller than Codensity f, as Codensity f a is somewhat 'bigger' than f a, despite providing an embedding, while Yoneda f a is isomorphic.

For example, Codensity ((->) s) a is isomorphic to State s a, not to s -> a as shown by:

 
instance MonadState s (Codensity ((->) s)) where
   get = Codensity (\k s -> k s s)
   put s = Codensity (\k _ -> k () s)
 

Now, Codensity is a particular form of right Kan extension, which always yields a Monad, without needing anything from f.

Here we aren't so fortunate, but we do have the fact that Yoneda f is always a Functor, regardless of what f is, as shown by:

 
instance Functor (Yoneda f) where
  fmap f (Yoneda m) = Yoneda (\k -> m (k . f))
 

which was obtained just by cutting and pasting the appropriate definition from Codensity or ContT, and comes about because Yoneda is a right Kan extension, like all of those.

To get a Monad instance for Yoneda f we need to lean on f somehow.

One way is to just borrow a Monad instance from f, since f a is isomorphic to Yoneda f a, if we have a Functor for f, and if we have a Monad, we can definitely have a Functor.

 
instance Monad m => Monad (Yoneda m) where
  return a = Yoneda (\f -> return (f a))
  Yoneda m >>= k = Yoneda (\f -> m id >>= \a -> runYoneda (k a) f)
 

Map Fusion and Reassociating Binds

Unlike Codensity the monad instance above isn't very satisfying, because it uses the >>= of the underlying monad, and as a result the >>=s will wind up in the same order they started.

On the other hand, the Functor instance for Yoneda f is still pretty nice because the (a -> r) part of the type acts as an accumulating parameter fusing together uses of fmap.

This is apparent if you expand lowerYoneda . fmap f . fmap g . liftYoneda , whereupon you can see we only call fmap on the underlying Functor once.

Intuitively, you can view Yoneda as a type level construction that ensures that you get fmap fusion, while Codensity is a type level construction that ensures that you right associate binds. It is important to note that Codensity also effectively accumulates fmaps, as it uses the same definition for fmap as Yoneda!

With this in mind, it doesn't usually make much sense to use Codensity (Codensity m) or Yoneda (Yoneda m) because the purpose being served is redundant.

Less obviously, Codensity (Yoneda m) is also redundant, because as noted above, Codensity also does fmap accumulation.

Other Yoneda-transformed Monads

Now, I said one way to define a Monad for Yoneda f was to borrow an underlying Monad instance for f, but this isn't the only way.

Consider Yoneda Endo. Recall that Endo from Data.Monoid is given by

 
newtype Endo a = Endo { appEndo :: a -> a }
 

Clearly Endo is not a Monad, it can't even be a Functor, because a occurs in both positive and negative position.

Nevertheless Yoneda Endo can be made into a monad -- the continuation passing style version of the Maybe monad!

 
newtype YMaybe a = YMaybe (forall r. (a -> r) -> r -> r)
 

I leave the rather straightforward derivation of this Monad for the reader. A version of it is present in monad-ran.
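
For the impatient, here is one plausible instance, written in the same pre-AMP style as the rest of this post; the version in monad-ran may differ in details:

 
instance Monad YMaybe where
  return a = YMaybe (\just _ -> just a)
  YMaybe m >>= k = YMaybe (\just nothing ->
    m (\a -> case k a of YMaybe n -> n just nothing) nothing)
 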

This lack of care for capital-F Functoriality also holds for Codensity: Codensity Endo can be used as a two-continuation list monad. It is isomorphic to the non-transformer version of Oleg et al.'s LogicT, which is available on hackage as logict from my coworker, Dan Doel.

The Functor, Applicative, Monad, MonadPlus and many other instances for LogicT can be rederived in their full glory from Codensity (GEndo m) automatically, where

 
newtype GEndo m r = GEndo (m r -> m r)
 

without any need for conscious thought about how the continuations are plumbed through in the Monad.
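
Expanding the newtypes makes the connection explicit: Codensity (GEndo m) a unfolds to forall r. (a -> m r -> m r) -> m r -> m r, the two-continuation form mentioned above. As a sketch of eliminating it, in the style of logict's observeAllT, we can collect every answer; observeAll is an illustrative name, and liftM comes from Control.Monad:

 
observeAll :: Monad m => Codensity (GEndo m) a -> m [a]
observeAll (Codensity m) =
  case m (\a -> GEndo (\rest -> liftM (a :) rest)) of
    GEndo k -> k (return [])
 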

Bananas in Space

One last digression,

 
newtype Rec f r = Rec ((f r -> r) -> r)
 

came up once previously on this blog in Rotten Bananas. In that post, I talked about how Fegaras and Sheard used a free monad (somewhat obliquely) in "Revisiting catamorphisms over datatypes with embedded functions" to extend catamorphisms to deal with strong HOAS, and then talked further about how Stephanie Weirich and Geoffrey Washburn used Rec to replace the free monad used by Fegaras and Sheard. That said, they did so in a more restricted context, where any mapping was done by giving us both an embedding and a projection pair.

Going to Church

We can't just use Rec f a instead of Free f a here, because Free f a is a functor, while Rec f a is emphatically not.

However, if we apply Yoneda to Rec f, we obtain a Church-encoded continuation-passing-style version of Free!

 
newtype F f a = F { runF :: forall r. (a -> r) -> (f r -> r) -> r }
 

Since this is of the form of Yoneda (Rec f), it is clearly a Functor:

 
instance Functor (F f) where
   fmap f (F g) = F (\kp -> g (kp . f))
 

And nicely, without knowing anything about f, we also get a Monad!

 
instance Monad (F f) where
   return a = F (\kp _ -> kp a)
   F m >>= f = F (\kp kf -> m (\a -> runF (f a) kp kf) kf)
 

But when we >>=, all we do is change the continuation for (a -> r), leaving the f-algebra, (f r -> r), untouched.

Now, F is a monad transformer:

 
instance MonadTrans F where
   lift f = F (\kp kf -> kf (liftM kp f))
 

which, unsurprisingly, effectively performs the same operation that lifting did in Free.

Heretofore, we've ignored everything about f entirely.

This has pushed the need for the Functor on f into the wrapping operation:

 
instance Functor f => MonadFree f (F f) where
   wrap f = F (\kp kf -> kf (fmap (\ (F m) -> m kp kf) f))
 

Now, we can clearly transform from our representation to any other free monad representation:

 
fromF :: MonadFree f m => F f a -> m a
fromF (F m) = m return wrap
 

or to it from our original canonical ADT-based free monad representation:

 
toF :: Functor f => Free f a -> F f a
toF xs = F (\kp kf -> go kp kf xs) where
  go kp _  (Pure a) = kp a
  go kp kf (Free fma) = kf (fmap (go kp kf) fma)
 

So, F f a is isomorphic to Free f a.

So, looking at Codensity (F f) a as Codensity (Yoneda (Rec f)), it just seems silly.

As we mentioned before, we should be able to go from Codensity (Yoneda (Rec f)) a to Codensity (Rec f) a, since Yoneda was just fusing uses of fmap, while Codensity was fusing fmap while right-associating (>>=)'s.

Swallowing the Bigger Fish

So, the obvious choice is to try to optimize to Codensity (Rec f) a. If you go through the motions of encoding that you get:

 
newtype CF f a = CF (forall r. (a -> (f r -> r) -> r) -> (f r -> r) -> r)
 

which is in some sense larger than F f a, because the first continuation gets both an a and an f-algebra (f r -> r).

But tellingly, once you write the code, the first continuation never uses the extra f-algebra you supplied it!

So Codensity (Yoneda (Rec f)) a gives us nothing of interest that we don't already have in Yoneda (Rec f) a.

Consequently, in this special case rather than letting Codensity (Yoneda x) a swallow the Yoneda to get Codensity x a we can actually let the Yoneda swallow the surrounding Codensity obtaining Yoneda (Rec f) a, the representation we started with.

Scott Free

Finally, you might ask whether a Church encoding is as simple as we can go. After all, a Scott encoding

 
newtype ScottFree f a = ScottFree
    { runScottFree :: forall r.
       (a -> r) -> (f (ScottFree f a) -> r) -> r
    }
 

would admit easier pattern matching, and a nice pun, and seems somewhat conceptually simpler, while remaining isomorphic.

But the Monad instance:

 
instance Functor f => Monad (ScottFree f) where
   return a = ScottFree (\kp _ -> kp a)
   ScottFree m >>= f = ScottFree
       (\kb kf -> m (\a -> runScottFree (f a) kb kf) (kf . fmap (>>= f)))
 

needs to rely on the underlying bind, and you can show that it won't do the right thing with regard to reassociating.

So, alas, we cannot get away with ScottFree.

Nobody Sells for Less

So, now we can rebuild Voigtländer's improve using our Church-encoded / Yoneda-based free monad F, which is precisely isomorphic to Free, by using

 
lowerF :: F f a -> Free f a
lowerF (F f) = f Pure Free
 

to obtain

 
improve :: Functor f => (forall m. MonadFree f m => m a) -> Free f a
improve m = lowerF m
 

And since our Church-encoded free monad is isomorphic to the simple ADT encoding, our new solution is as small as it can get.

Next time, we'll see this construction in action!

Free Monads for Less (Part 1 of 3): Codensity
http://comonad.com/reader/2011/free-monads-for-less/
Fri, 24 Jun 2011 03:59:58 +0000 Edward Kmett

A couple of years back Janis Voigtländer wrote a nice paper on how one can use the codensity monad to improve the asymptotic complexity of algorithms using the free monads. He didn't use the name Codensity in the paper, but this is essentially the meaning of his type C.

I just returned from running a workshop on domain-specific languages at McMaster University with the more than able assistance of Wren Ng Thornton. Among the many topics covered, I spent a lot of time talking about how to use free monads to build up term languages for various DSLs with simple evaluators, and then made them efficient by using Codensity.

This has been shown to be a sufficient tool for this task, but is it necessary?

First, some context:

Monads for Free

Given that f is a Functor, we get that

 
data Free f a = Pure a | Free (f (Free f a))
 

is a Monad for free:

 
instance Functor f => Functor (Free f) where
   fmap f (Pure a) = Pure (f a)
   fmap f (Free as) = Free (fmap (fmap f) as)
 
instance Functor f => Monad (Free f) where
   return = Pure
   Pure a >>= f = f a -- the first monad law!
   Free as >>= f = Free (fmap (>>= f) as)
 

The definition is also free in a particular categorical sense: if you have a forgetful functor that takes a monad and forgets that it is a monad, yielding just its underlying functor, then the free construction above is left adjoint to it.

This type and much of the code below is actually provided by Control.Monad.Trans.Free in the comonad-transformers package on hackage.

For a while, Free lived in a separate, now defunct, package named free with its dual Cofree, but it was merged into comonad-transformers due to complications involving comonads-fd, the comonadic equivalent of the mtl. Arguably, a better home would be transformers, to keep symmetry.

Free is a Monad Transformer

 
instance MonadTrans Free where
    lift = Free . liftM Pure
 

and there exists a retraction for lift

 
retract :: Monad f => Free f a -> f a
retract (Pure a) = return a
retract (Free as) = as >>= retract
 

such that retract . lift = id. I've spoken about this on Stack Overflow, including the rather trivial proof, previously.

This lets us work in Free m a, then flatten back down to a single layer of m.
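
For instance, a tiny usage sketch with m = IO:

 
example :: IO ()
example = retract $ do
  lift (putStrLn "hello")
  lift (putStrLn "world")
 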

This digression will be useful in a subsequent post.

MonadFree

What Janis encapsulated in his paper is the notion that we can abstract out the extra power granted by a free monad to add layers of f to some monad m, and then use a better representation to improve the asymptotic performance of the monad.

The names below have been changed slightly from his presentation.

 
class (Monad m, Functor f) => MonadFree f m | m -> f where
    wrap :: f (m a) -> m a
 
instance Functor f => MonadFree f (Free f) where
    wrap = Free
 

Instances can easily be supplied to lift MonadFree over the common monad transformers. For instance:

 
instance (Functor f, MonadFree f m) => MonadFree f (ReaderT e m) where
    wrap fs = ReaderT $ \e -> wrap $ fmap (`runReaderT` e) fs
 

This functionality is provided by Control.Monad.Free.Class.
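
A similar instance for StateT from transformers follows the same pattern; a sketch, not necessarily character-for-character what the package provides:

 
instance (Functor f, MonadFree f m) => MonadFree f (StateT s m) where
    wrap fs = StateT $ \s -> wrap (fmap (`runStateT` s) fs)
 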

Janis then proceeded to define the aforementioned type C, which is effectively identical to

 
newtype Codensity f a = Codensity { runCodensity :: forall r. (a -> f r) -> f r }
 

This type is supplied by Control.Monad.Codensity from my kan-extensions package on hackage.

I have spoken about this type (and another that will arise in a subsequent post) on this blog previously, in a series of posts on Kan Extensions. [ 1, 2, 3]

Codensity f is a Monad, regardless of what f is!

In fact, you can quite literally cut and paste much of the definitions for return, fmap, and (>>=) from the code for the ContT monad transformer! Compare

 
instance Functor (Codensity k) where
  fmap f (Codensity m) = Codensity (\k -> m (k . f))
 
instance Monad (Codensity f) where
  return x = Codensity (\k -> k x)
  m >>= k = Codensity (\c -> runCodensity m (\a -> runCodensity (k a) c))
 
instance MonadTrans Codensity where
   lift m = Codensity (m >>=)
 

from Control.Monad.Codensity in kan-extensions with

 
instance Functor (ContT r m) where
    fmap f m = ContT $ \c -> runContT m (c . f)
 
instance Monad (ContT r m) where
    return a = ContT ($ a)
    m >>= k  = ContT $ \c -> runContT m (\a -> runContT (k a) c)
 
instance MonadTrans (ContT r) where
    lift m = ContT (m >>=)
 

from the Control.Monad.Trans.Cont in transformers.

Codensity m a is effectively forall r. ContT r m a. This turns out to be just enough of a restriction to rule out the use of callCC, while leaving the very powerful fact that when you lower them back down using

 
lowerCodensity :: Monad m => Codensity m a -> m a
lowerCodensity (Codensity m) = m return
 

or

 
evalContT :: Monad m => ContT r m r -> m r
evalContT (ContT m) = m return -- runContT with return as the final continuation
 

ContT and Codensity both yield a result in which all of the uses of the underlying monad's (>>=) are right associated.

This can be convenient for two reasons:

First, some almost-monads are not associative, and converting to ContT or Codensity can be used to fix this.

Second, in many monads, when you build a big structure using left associated binds, like:

 
    (f >>= g) >>= h
 

rather than use right associated binds like

 
   f >>= \x -> g x >>= h
 

then you wind up building a structure, then tearing it down and building up a whole new structure. This can compromise the productivity of the result, and it can also affect the asymptotic performance of your code.

Even though the monad laws say these two yield the same answer.

The dual of substitution is redecoration

To see that, it is first worth noting that about ten years back, Tarmo Uustalu and Varmo Vene wrote "The dual of substitution is redecoration", which among other things quite eloquently described how monads are effectively about substituting new tree-like structures, and then renormalizing.

This can be seen in terms of the more categorical viewpoint, where we define a monad in terms of return, fmap and join, rather than return and (>>=). In that presentation:

 
m >>= f = join (fmap f m)
 

fmap is performing substitution, and join is dealing with any renormalization.

Done this way, (m >>= f) on the Maybe monad would first fmap to obtain Just (Just a), Just Nothing or Nothing before flattening.

In the Maybe case, the association of your binds is largely immaterial; the normalization pass fixes things up to basically the same size. But in the special case of a free monad, the monad is purely defined in terms of substitution, since:

 
-- join :: Functor f => Free f (Free f a) -> Free f a
-- join (Pure a) = a
-- join (Free as) = Free (fmap join as)
 

This means that every time you >>= a free monad you are accumulating structure -- structure that you have to traverse past to deal with subsequent left-associated invocations of >>=! Free monads never shrink after a bind, and the main body of the tree never changes.

More concretely, you could build a binary tree with

 
-- data Tree a = Tip a | Bin (Tree a) (Tree a)
 

and make a monad out of it, writing out your return and (>>=), etc. by hand.

The same monad could be had 'for free' by taking the free monad of

 
data Bin a = Bin a a
    deriving (Functor, Foldable, Traversable)
    -- using LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable
 

yielding the admittedly slightly less convenient type signature

 
type Tree = Free Bin
 

Now you can use return for Tip, and

 
bin :: MonadFree Bin m => m a -> m a -> m a
bin l r = wrap (Bin l r)
 

to construct a binary tree node, using any free monad representation.

Now, if that representation is Free Bin (or the original more direct Tree type above) then code that looks like f >>= \x -> g x >>= h performs fine, but (f >>= g) >>= h will retraverse the unchanging 'trunk' of the tree structure twice. That isn't so bad, but given n uses of >>= we'll traverse an ever-growing trunk over and over n times!
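
To make the two shapes concrete, here is a small sketch that builds the same complete tree both ways; grow, leftHeavy and rightHeavy are illustrative names, and the left-associated version re-walks the ever-growing trunk at every bind:

 
grow :: MonadFree Bin m => a -> m a
grow a = bin (return a) (return a)
 
leftHeavy, rightHeavy :: MonadFree Bin m => Int -> m ()
leftHeavy n = foldl (>>=) (return ()) (replicate n grow)
rightHeavy 0 = return ()
rightHeavy n = grow () >>= \_ -> rightHeavy (n - 1)
 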

Putting Codensity to Work

But, we have a tool that can fix this, Codensity!

 
instance MonadFree f m => MonadFree f (Codensity m) where
  wrap t = Codensity (\h -> wrap (fmap (\p -> runCodensity p h) t))
 

Janis packaged up the use of Codensity into a nice combinator that you can sprinkle through your code, so that your users never need know it exists. Moreover, it prevents them from accidentally using any of the extra power of the intermediate representation. If your code typechecks before you use improve somewhere within it, and it typechecks after, then it will yield the same answer.

 
improve :: Functor f => (forall m. MonadFree f m => m a) -> Free f a
improve m = lowerCodensity m
 

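As a usage sketch, reusing the illustrative leftHeavy from above, the caller changes a single line:

 
slow :: Int -> Free Bin ()
slow n = leftHeavy n -- binds stay left-associated
 
fast :: Int -> Free Bin ()
fast n = improve (leftHeavy n) -- Codensity right-associates them
 
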
By now, it should be clear that the power of Codensity is sufficient to the task, but is it necessary?

More Soon.

[Edit; Fixed minor typographical errors pointed out by ShinNoNoir, ivanm, and Josef Svenningsson, including a whole bunch of them found by Noah Easterly]

Kan Extensions III: As Ends and Coends
http://comonad.com/reader/2008/kan-extension-iii/
Tue, 27 May 2008 00:22:07 +0000 Edward Kmett

Grant B. asked me to post the derivation for the right and left Kan extension formula used in previous Kan Extension posts (1,2). For that we can turn to the definition of Kan extensions in terms of ends, but first we need to take a couple of steps back to find a way to represent (co)ends in Haskell.

Dinatural Transformations

Rather than repeat the definition here, we'll just note that we can define a dinatural transformation in Haskell, letting polymorphism represent the family of morphisms.

 
type Dinatural f g = forall a. f a a -> g a a
 

Ends

So what is an end?

An end is a universal dinatural transformation from some object e to some functor s.

Diving into the formal definition:

Given a functor $S : \mathcal{C}^{op} \times \mathcal{C} \rightarrow \mathcal{D}$, an end of $S$ is a pair $(e,\omega)$, where $e$ is an object of $\mathcal{D}$ and $\omega$ is a dinatural transformation from $e$ to $S$, such that given any other dinatural transformation $\beta$ from an object $x$ of $\mathcal{D}$ to $S$, there exists a unique morphism $h : x \rightarrow e$ such that $\beta_a = \omega_a \cdot h$ for every $a$ in $\mathcal{C}$.

We usually choose to write ends as $e = \int_c S(c,c)$, and abuse terminology calling $e$ the end of $S$.

Note this uses a dinatural transformation from an object $x$ in $\mathcal{D}$. We can represent an arbitrary dinatural transformation from an object $x$ to a functor $S$ in terms of a dinatural transformation from the constant bifunctor:

 
newtype Const x a b = Const { runConst :: x }
 

This leaves us with the definition of a dinatural transformation from an object as:

 
Dinatural (Const x) s ~ forall a. Const x a a -> s a a
 

but the universal quantification over the Const term is rather useless, and the Const bifunctor supplies no information, so we can just specialize that down to:

 
type DinaturalFromObject x s = x -> forall a. s a a
 

Now, clearly for any such transformation, we could rewrite it trivially using the definition:

 
type End s = forall a. s a a
 

with $\omega$ = id.

 
type DinaturalFromObject x s = x -> End s
 

And so End above fully complies with the definition for an end, and we just say e = End s abusing the terminology as earlier. The function $\omega$ = id is implicit.

For End to be a proper end, we assume that s is contravariant in its first argument and covariant in its second argument.

Example: Hom

A good example of this is (->) in Haskell, which is, as category theory types would call it, the Hom functor for Hask. Then End (->) = forall a. a -> a, which has just one inhabitant, id, if you discard cheating inhabitants involving fix or undefined.

We write $\mathcal{C}(a,b)$ or $\mathrm{Hom}_\mathcal{C}(a,b)$ to denote a -> b. Similarly we use b^a to denote an exponential object within a category, and since in Haskell we have first class functions using the same syntax this also translates to a -> b. We'll need these later.

Example: Natural Transformations

If we define:

 
newtype HomFG f g a b =
        HomFG { runHomFG :: f a -> g b }
 

then we could of course choose to define natural transformations in terms of End as:

 
type Nat f g = End (HomFG f g) -- forall a. f a -> g a
 
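
For example, the usual safe list head fits this shape (safeHead is just an illustrative name):

 
safeHead :: Nat [] Maybe
safeHead = HomFG go
  where
    go []      = Nothing
    go (x : _) = Just x
 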

Right Kan Extension as an End

Turning to Wikipedia or Categories for the Working Mathematician, you can find the following definition of the right Kan extension of $T$ along $K$ in terms of ends.

$(\mathrm{Ran}_KT)c=\int_m (Tm)^{\mathbf{C}(c,Km)}$

Working by rote, we note that $\mathbf{C}(c,Km)$ is just c -> K m as noted above, and that $(Tm)^{\mathbf{C}(c,Km)}$ is then just (c -> K m) -> T m'. So now we just have to take the end over that, and read off:

 
newtype RanT k t c m m' = RanT ((c -> k m) -> t m')
type Ran k t c = End (RanT k t c)
 

Which, modulo newtype noise, is the same as the previously supplied type:

 
newtype Ran f g c = Ran { runRan :: forall m. (c -> f m) -> g m }
 

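Both types from my earlier Kan extension posts arise as special cases of Ran; a sketch, with primed names as illustrative synonyms and Identity as the identity functor:

 
newtype Identity a = Identity a
 
type Codensity' f = Ran f f        -- forall m. (c -> f m) -> f m
type Yoneda' f    = Ran Identity f -- since (c -> Identity m) ~ (c -> m)
 
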
Coends

The derivation for the left Kan extension follows similarly from defining coends over Hask in terms of existential quantification and copowers as products.

The coend derivation is complicated slightly by Haskell's semantics. Disposing of the constant bifunctor as before we get:

 
type DinaturalToObject s c = forall a. (s a a -> c)
 

Since we want to be able to box up the s a a term separately, we need to use existential quantification:

 
(forall a. s a a -> c) ~ (exists a. s a a) -> c
 

We cannot represent this directly in terms of a type annotation in Haskell, but we can do so with a data type:

 
data Coend f = forall a. Coend (f a a)
 

Recall that in Haskell, existential quantification is represented by using universal quantification outside of the type constructor.

The main difference is that in our coend $(e,\zeta)$ the function $\zeta$ is now runCoend instead of id, because we have a Coend data constructor around the existential. Technicalities make it a little more complicated than that even, because you can't define a well-typed runCoend and have to use pattern matching on the Coend data constructor to avoid having an existentially quantified type escape the current scope, but the idea is the same.
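
Concretely, the eliminator has to take the polymorphic consumer as an argument; a sketch:

 
withCoend :: (forall a. f a a -> c) -> Coend f -> c
withCoend k (Coend x) = k x
 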

Left Kan Extension as a Coend

Then given the definition for the left Kan extension of $T$ along $K$ as a coend:

$(\mathrm{Lan}_KT)c=\int^m \mathbf{C}(Km,c)\cdot Tm$

we can read off:

 
data LanT k t c m m' = LanT (k m -> c) (t m')
type Lan k t c = Coend (LanT k t c)
 

Which is almost isomorphic to the previously supplied type:

 
data Lan k t c = forall m. Lan (k m -> c) (t m)
 

except for the fact that we had to use two layers of data declarations when using the separate Coend data type, so we introduced an extra place for a $\perp$ to hide.

A newtype isn't allowed to use existential quantification in Haskell, so this form forces a spurious case analysis that we'd prefer to do without. This motivates why category-extras uses a separate definition for Lan rather than the more elaborate definition in terms of Coend.

[Edit: Fixed a typo in the definition of RanT]
