The Comonad.Reader » Comonads
http://comonad.com/reader
types, (co)monads, substructural logic

A Product of an Imperfect Union
http://comonad.com/reader/2011/a-product-of-an-imperfect-union/
Fri, 01 Jul 2011 03:49:10 +0000, by Edward Kmett

In the last few posts, I've been talking about how we can derive monads and monad transformers from comonads. Along the way we learned that there are more monads than comonads in Haskell.

The question I hope to answer this time is whether or not we can turn any Haskell Comonad into a comonad transformer.

Comonads from Comonads

In Monads from Comonads, we built the comonad-to-monad transformer

 
newtype Co w m a = Co (forall r. w (a -> r) -> r)
 

by sandwiching a Comonad w in the middle of a trivial Codensity monad, then proceeded to show that at least in the case where our comonad was given rise to by an adjunction f -| g : Hask -> Hask, we could reason about this as if we had

 
Co w ~ Co (f . g) ~ g . f
 

Now, Codensity monads are a right Kan extension.

So, what happens if we try to do the same thing to a Left Kan extension?

Using

 
{-# LANGUAGE GADTs, FlexibleInstances #-}
 
import Control.Comonad
import Control.Comonad.Trans.Class
 

we can define

 
data L w a where
  L :: w (r -> a) -> r -> L w a
 

and a number of instances pop out for free, cribbed largely from the definition for Density.

 
instance Functor w => Functor (L w) where
  fmap f (L w r) = L (fmap (f .) w) r
 
instance ComonadTrans L where
  lower (L w r) = fmap ($r) w
 
instance Extend w => Extend (L w) where
  duplicate (L w s) = L (extend L w) s
 
instance Comonad w => Comonad (L w) where
  extract (L w r) = extract w r
 

Reasoning as before about w as if it were composed of an adjunction f -| g : Hask -> Hask to build some intuition, we can see:

 
L w a ~ exists r. (w (r -> a), r)
      ~ exists r. (f (g (r -> a)), r)
      ~ exists r. (f (), g (r -> a), r)
      ~ exists r. (f (), f () -> r -> a, r)
      ~ exists r. (f () -> r -> a, f r)
      ~ exists r. (f r -> a, f r)
      ~ Density f a
      ~ Lan f f a
      ~ (f . Lan f Id) a
      ~ (f . g) a
      ~ w a
 

The latter few steps require identities established in my second post on Kan extensions.

With that we obtain the "remarkable" insight that L ~ IdentityT, which I suppose is much more obvious when just looking at the type

 
data L w a where
  L :: w (r -> a) -> r -> L w a
 

and seeing the existentially quantified r as a piece of the environment, being used to build an a, since there is nothing else we can do with it, except pass it in to each function wrapped by w! So at first blush, we've gained nothing.
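To make the claimed isomorphism L w ~ IdentityT concrete, here is a minimal self-contained sketch (base only); toL, fromL and example are hypothetical names, not from the post:

```haskell
{-# LANGUAGE GADTs #-}

data L w a where
  L :: w (r -> a) -> r -> L w a

-- pack a trivial environment; all the information lives in w
toL :: Functor w => w a -> L w a
toL w = L (fmap const w) ()

-- feed the captured environment to every function wrapped by w
fromL :: Functor w => L w a -> w a
fromL (L w r) = fmap ($ r) w

-- round trip through L using the pair comonad (,) e
example :: (Int, Char)
example = fromL (toL (1, 'x'))
```

The `fromL . toL` round trip is the identity; the other direction relies on parametricity in r.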

The key observation is that in one case we wound up with something isomorphic to the codensity monad of our right adjoint, while in the other case we wound up with the density comonad of our left adjoint. The former is isomorphic to the monad given by our adjunction, while the latter is isomorphic to the comonad, which is, unfortunately, right where we started!

In The Future All Comonads are Comonad Transformers!

Of course, we don't have to just modify a trivial left Kan extension. Let's tweak the Density comonad of another comonad!

 
data D f w a where
  D :: w (f r -> a) -> f r -> D f w a
 

Since both arguments will be comonads, and I want this to be a comonad transformer, I'm going to swap the roles of the arguments relative to the definition of CoT w m. The reason is that D f w is a Comonad, regardless of the properties of f, so long as w is a Comonad. This is similar to how Density f is a Comonad regardless of what f is, as long as it has kind * -> *.

The implementation of D is identical to L above, just as CoT and Co share implementations and ContT and Cont do.

 
instance Functor w => Functor (D f w) where
  fmap f (D w r) = D (fmap (f .) w) r
 
instance Extend w => Extend (D f w) where
  duplicate (D w s) = D (extend D w) s
 
instance Comonad w => Comonad (D f w) where
  extract (D w r) = extract w r
 
instance ComonadTrans (D f) where
  lower (D w r) = fmap ($r) w
 

But in addition to being able to lower :: D f w a -> w a, we can also lower to the other comonad!

 
fstD :: (Extend f, Comonad w) => D f w a -> f a
fstD (D w r) = extend (extract w) r
 
sndD :: Comonad w => D f w a -> w a
sndD = lower
 

This means that if either comonad provides us with a piece of functionality we can exploit it.
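As a contrived but self-contained illustration, we can instantiate both sides with pair comonads; extendP, extractP and demo are hypothetical local names, while D, fstD and sndD mirror the definitions above:

```haskell
{-# LANGUAGE GADTs #-}

data D f w a where
  D :: w (f r -> a) -> f r -> D f w a

-- extend and extract for the pair comonad (,) e
extendP :: ((e, x) -> b) -> (e, x) -> (e, b)
extendP g p@(e, _) = (e, g p)

extractP :: (e, x) -> x
extractP = snd

-- project out either side, keeping that side's environment
fstD :: D ((,) e1) ((,) e2) a -> (e1, a)
fstD (D w r) = extendP (extractP w) r

sndD :: D ((,) e1) ((,) e2) a -> (e2, a)
sndD (D w r) = fmap ($ r) w

demo :: D ((,) Int) ((,) Char) String
demo = D ('w', \fr -> snd fr ++ "!") (7, "hi")
```

Both projections recover the same payload, each paired with its own comonad's environment.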

Selling Products

In general Monad products always exist:

data Product m n a = Pair { runFst :: m a, runSnd :: n a }

instance (Monad m, Monad n) => Monad (Product m n) where
   return a = Pair (return a) (return a)
   Pair ma na >>= f = Pair (ma >>= runFst . f) (na >>= runSnd . f)

and Comonad coproducts always exist:

 
newtype Coproduct f g a = Coproduct { getCoproduct :: Either (f a) (g a) }
 
left :: f a -> Coproduct f g a
left = Coproduct . Left
 
right :: g a -> Coproduct f g a
right = Coproduct . Right
 
coproduct :: (f a -> b) -> (g a -> b) -> Coproduct f g a -> b
coproduct f g = either f g . getCoproduct
 
instance (Extend f, Extend g) => Extend (Coproduct f g) where
  extend f = Coproduct . coproduct
    (Left . extend (f . Coproduct . Left))
    (Right . extend (f . Coproduct . Right))
 
instance (Comonad f, Comonad g) => Comonad (Coproduct f g) where
  extract = coproduct extract extract
 

but Christoph Lüth and Neil Ghani showed that monad coproducts don't always exist!

On the other hand what we built up above looks a lot like a comonad product!

To see that, first we'll note some of the product-like things we can do:

fstD and sndD act a lot like fst and snd, projecting out the parts of our product, and it turns out we can "braid" our almost-products, interchanging the left and right hand sides.

 
braid :: (Extend f, Comonad w) => D f w a -> D w f a
braid (D w r) = D (extend (flip extract) r) w
 

(I use scary air-quotes around braid, because it doesn't let us braid them in a categorical sense, as we'll see.)

After braiding, one of our projections swaps places as we'd expect:

 
sndD (braid (D w r)) = -- by braid def
sndD (D (extend (flip extract) r) w) = -- by sndD (and lower) def
fmap ($w) (extend (flip extract) r) = -- extend fmap fusion
extend (($w) . flip extract) r = -- @unpl
extend (\t -> flip extract t w) r = -- flip . flip = id
extend (extract w) r = -- by fstD def
fstD (D w r)
 

But we stall when we try to show fstD . braid = sndD.

Why is that?

A Product of an Imperfect Union

Last time, when we inspected CoT w m a, we demonstrated that on one hand, given a suitable adjunction f -| g such that w = f . g, Co w ~ Co (f . g) ~ (g . f), but on the other hand CoT w m a was bigger than g . m . f, and that if n -| m, then CoT w m a ~ g . m . n . f.

Of course, these two results agree, if you view Co w as CoT w Identity, where Identity -| Identity, since Identity ~ Identity . Identity.

Therefore it should come as no surprise that given w = f . g, for a suitable adjunction f -| g, then D w j a is bigger than f . j . g. In fact, if j -| k, then D w j ~ f . j . k . g.

So what is happening is that we have only managed to "break one of our comonads in half", and D w j a lets you do 'too much stuff' with the j portion of the comonad. This keeps us from being symmetric.

Moreover, it turns out to be a bit trickier to build one than to just hand in a w (f a), or a w a and an f a, to build our product-like construction.

Even so, exploiting Density was enough to transform any comonad into a comonad-transformer and to enable us to access the properties of either the comonad we are transforming with, or the comonad that we are transforming.

More on Comonads as Monad Transformers
http://comonad.com/reader/2011/more-on-comonads-as-monad-transformers/
Thu, 30 Jun 2011 19:15:22 +0000, by Edward Kmett

Last time in Monad Transformers from Comonads I showed that given any comonad we can derive the monad-transformer

 
newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }
 

and so demonstrated that there are fewer comonads than monads in Haskell, because while every Comonad gives rise to a Monad transformer, there are Monads that do not, like IO, ST s, and STM.

I want to elaborate a bit more on this topic.

In Monads from Comonads we observed that for non-transformer version of CoT

 
type Co w = CoT w Identity
 

under the assumption that w = f . g for f -| g : Hask -> Hask, then

 
Co w ~ Co (f . g) ~ g . f
 

This demonstrated that Co w is isomorphic to the monad we obtain by composing the adjunction that gave rise to our comonad the other way around.

But what about CoT?

Sadly CoT is a bit bigger.

We can see by first starting to apply the same treatment that we gave Co.

 
CoT w m a ~ forall r. w (a -> m r) -> m r
          ~ forall r. f (g (a -> m r)) -> m r
          ~ forall r. f (f () -> a -> m r) -> m r
          ~ forall r. f (a -> f () -> m r) -> m r
          ~ forall r. (a -> f () -> m r, f ()) -> m r
          ~ forall r. (a -> f () -> m r) -> f () -> m r
          ~ forall r. (a -> g (m r)) -> g (m r)
          ~ Codensity (g . m) a
 

(I'm using . to represent Compose for readability.)

But we've seen before that Codensity g a is in a sense bigger than g a, since given an Adjunction f -| g, Codensity g a ~ (g . f) a, not g a.
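This size difference is easy to see for the concrete adjunction (,) s -| (->) s: the Codensity monad of the reader functor is State s, strictly bigger than reader itself. A self-contained sketch (toState and fromState are hypothetical names):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Codensity g a = Codensity
  { runCodensity :: forall r. (a -> g r) -> g r }

-- Codensity ((->) s) a ~ State s a, witnessed in both directions
toState :: Codensity ((->) s) a -> s -> (a, s)
toState (Codensity k) = k (\a s -> (a, s))

fromState :: (s -> (a, s)) -> Codensity ((->) s) a
fromState f = Codensity (\g s -> let (a, s') = f s in g a s')
```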

Moreover, we can compose adjunctions:

 
instance
    (Adjunction f g, Adjunction f' g') =>
    Adjunction (Compose f' f) (Compose g g') where
  unit   = Compose . leftAdjunct (leftAdjunct Compose)
  counit = rightAdjunct (rightAdjunct getCompose) . getCompose
 

So if n -| m, then we can see that Codensity (g . m) a ~ g . m . n . f, rather than the smaller g . m . f, which we can obtain using AdjointT f g m from Control.Monad.Trans.Adjoint in adjunctions.

So CoT isn't the smallest monad transformer that would be given by an adjunction.

In fact, it is isomorphic to AdjointT f g (Codensity m) a instead of AdjointT f g m a.

Sadly, there doesn't appear to be a general purpose construction of the smaller transformer just given an unseparated w = f . g.

Monad Transformers from Comonads
http://comonad.com/reader/2011/monad-transformers-from-comonads/
Wed, 29 Jun 2011 01:51:43 +0000, by Edward Kmett

Last time, I showed that we can transform any Comonad in Haskell into a Monad in Haskell.

Today, I'll show that we can go one step further and derive a monad transformer from any comonad!

A Comonad to Monad-Transformer Transformer

Given

 
newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }
 

we can easily embed the type of the previous Co and create a smart constructor and deconstructor in the style of the MTL.

 
type Co w = CoT w Identity
 
co :: Functor w => (forall r. w (a -> r) -> r) -> Co w a
co f = CoT (Identity . f . fmap (fmap runIdentity))
 
runCo :: Functor w => Co w a -> w (a -> r) -> r
runCo m = runIdentity . runCoT m . fmap (fmap Identity)
 

In fact, as with Cont and ContT, none of the major instances even change!

 
instance Functor w => Functor (CoT w m) where
  fmap f (CoT w) = CoT (w . fmap (. f))
 
instance Extend w => Apply (CoT w m) where
  mf <.> ma = mf >>- \f -> fmap f ma
 
instance Extend w => Bind (CoT w m) where
  CoT k >>- f = CoT (k . extend (\wa a -> runCoT (f a) wa))
 
instance Comonad w => Applicative (CoT w m) where
  pure a = CoT (`extract` a)
  mf <*> ma = mf >>= \f -> fmap f ma
 
instance Comonad w => Monad (CoT w m) where
  return a = CoT (`extract` a)
  CoT k >>= f = CoT (k . extend (\wa a -> runCoT (f a) wa))
 

We can use CoT as a Monad transformer, or lift IO actions:

 
instance Comonad w => MonadTrans (CoT w) where
  lift m = CoT (extract . fmap (m >>=))
 
instance (Comonad w, MonadIO m) => MonadIO (CoT w m) where
  liftIO = lift . liftIO
 

(This monad transformer is available in my kan-extensions package as of 1.9.0 on hackage.)
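To see the transformer at work without pulling in any packages, here is a self-contained sketch specialized to w = Store s and m = Maybe; bindCoT, getC, putC, liftC and runStateT' are hypothetical local names mirroring the instances above, and the result behaves like StateT s Maybe:

```haskell
{-# LANGUAGE RankNTypes #-}

data Store s a = Store (s -> a) s

extendStore :: (Store s a -> b) -> Store s a -> Store s b
extendStore g (Store f s) = Store (g . Store f) s

newtype CoT w m a = CoT { runCoT :: forall r. w (a -> m r) -> m r }

bindCoT :: CoT (Store s) m a -> (a -> CoT (Store s) m b) -> CoT (Store s) m b
bindCoT (CoT k) f = CoT (k . extendStore (\wa a -> runCoT (f a) wa))

getC :: CoT (Store s) m s
getC = CoT (\(Store f s) -> f s s)

putC :: s -> CoT (Store s) m ()
putC s = CoT (\(Store f _) -> f s ())

-- lift a monadic action, as in the MonadTrans instance
liftC :: Monad m => m a -> CoT (Store s) m a
liftC m = CoT (\(Store f s) -> m >>= f s)

-- run it like StateT s m
runStateT' :: Applicative m => CoT (Store s) m a -> s -> m (a, s)
runStateT' (CoT k) s0 = k (Store (\s a -> pure (a, s)) s0)
```

A lifted Nothing short-circuits the whole computation, exactly as it would in StateT s Maybe.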

And as before we can lift and lower CoKleisli arrows, although the results are monadic when lowered.

 
liftCoT0 :: Comonad w => (forall a. w a -> s) -> CoT w m s
liftCoT0 f = CoT (extract <*> f)
 
lowerCoT0 :: (Functor w, Monad m) => CoT w m s -> w a -> m s
lowerCoT0 m = runCoT m . (return <$)
 
lowerCo0 :: Functor w => Co w s -> w a -> s
lowerCo0 m = runIdentity . runCoT m . (return <$)
 
liftCoT1 :: (forall a. w a -> a) -> CoT w m ()
liftCoT1 f = CoT (`f` ())
 
lowerCoT1 :: (Functor w, Monad m) => CoT w m () -> w a -> m a
lowerCoT1 m = runCoT m . fmap (const . return)
 
lowerCo1 :: Functor w => Co w () -> w a -> a
lowerCo1 m = runIdentity . runCoT m . fmap (const . return)
 

Since we could mean the MonadFoo instance derived from its comonadic equivalent or from the one we wrap as a monad transformer, we choose to default to the one from the monad, but we can still provide the lifted comonadic actions:

 
posW :: (ComonadStore s w, Monad m) => CoT w m s
posW = liftCoT0 pos
 
peekW :: (ComonadStore s w, Monad m) => s -> CoT w m ()
peekW s = liftCoT1 (peek s)
 
peeksW :: (ComonadStore s w, Monad m) => (s -> s) -> CoT w m ()
peeksW f = liftCoT1 (peeks f)
 
askW :: (ComonadEnv e w, Monad m) => CoT w m e
askW = liftCoT0 (Env.ask)
 
asksW :: (ComonadEnv e w, Monad m) => (e -> a) -> CoT w m a
asksW f = liftCoT0 (Env.asks f)
 
traceW :: (ComonadTraced e w, Monad m) => e -> CoT w m ()
traceW e = liftCoT1 (Traced.trace e)
 

and we just lift the monadic actions as usual:

 
instance (Comonad w, MonadReader e m) => MonadReader e (CoT w m) where
  ask = lift Reader.ask
  local f m = CoT (local f . runCoT m)
 
instance (Comonad w, MonadState s m) => MonadState s (CoT w m) where
  get = lift get
  put = lift . put
 
instance (Comonad w, MonadWriter e m) => MonadWriter e (CoT w m) where
  tell = lift . tell
  pass m = CoT (pass . runCoT m . fmap aug) where
    aug f (a,e) = liftM (\r -> (r,e)) (f a)
  listen = error "Control.Monad.Co.listen: TODO"
 
instance (Comonad w, MonadError e m) => MonadError e (CoT w m) where
  throwError = lift . throwError
  catchError = error "Control.Monad.Co.catchError: TODO"
 
instance (Comonad w, MonadCont m) => MonadCont (CoT w m) where
  callCC = error "Control.Monad.Co.callCC: TODO"
 

I welcome help working through the missing methods above.

This should go a long way towards showing that there are strictly fewer comonads than monads in Haskell, and of course that there are no analogues to IO, STM and ST s in the world of Haskell comonads!

Every comonad gives you a monad-transformer, but not every monad is a monad transformer.

Monads from Comonads
http://comonad.com/reader/2011/monads-from-comonads/
Mon, 27 Jun 2011 20:50:32 +0000, by Edward Kmett

Today I'll show that you can derive a Monad from any old Comonad you have lying around.

But first, we'll need to take a bit of a detour.

A Monad Sandwich

We'll need the definition of an adjunction on the category of Haskell types, which we can strip down and borrow from my adjunctions package.

 
class (Functor f, Representable u) =>
         Adjunction f u | f -> u, u -> f where
    leftAdjunct :: (f a -> b) -> a -> u b
    rightAdjunct :: (a -> u b) -> f a -> b
 

Here we can define our Adjunction by defining leftAdjunct and rightAdjunct, such that they witness an isomorphism from (f a -> b) to (a -> u b)

Every Adjunction F -| G : C -> D gives rise to a monad GF on D and a Comonad FG on C.

In addition to this, you can sandwich an additional monad M on C in between G and F to give a monad GMF on D:

Control.Monad.Trans.Adjoint

and you can sandwich a comonad W on D in between F and G to yield the comonad FWG on C:

Control.Comonad.Trans.Adjoint

A Contravariant Comonad Sandwich

As was first shown to me by Derek Elkins, this construction works even when C is not the category of Haskell types!

Consider the Contravariant functor Op r:

 
newtype Op a b = Op { getOp :: b -> a } 
 
instance Contravariant (Op a) where
  contramap f g = Op (getOp g . f)
 

We can view Op r as a functor from Hask^op -> Hask or as one from Hask -> Hask^op.

We can define a notion of a contravariant adjunction F -| G : Hask^op -> Hask.

Data.Functor.Contravariant.Adjunction

 
class (Contravariant f, Corepresentable g) =>
       Adjunction f g | f -> g, g -> f where
    leftAdjunct :: (b -> f a) -> a -> g b
    rightAdjunct :: (a -> g b) -> b -> f a
 

Where, now, leftAdjunct and rightAdjunct witness the isomorphism from (f a <- b) to (a -> g b), which means once you flip the arrow around both seem to be going the same way. Ultimately any contravariant adjunction on Hask is comprised of two isomorphic functors, each self-adjoint.
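To make the self-adjointness concrete, here is a tiny self-contained sketch for f = g = Op r, where both adjuncts are essentially argument flips; leftAdjunctOp and rightAdjunctOp are hypothetical local names:

```haskell
newtype Op r a = Op { getOp :: a -> r }

-- (b -> Op r a) is b -> a -> r, and (a -> Op r b) is a -> b -> r,
-- so each adjunct just swaps the two arguments.
leftAdjunctOp :: (b -> Op r a) -> a -> Op r b
leftAdjunctOp f a = Op (\b -> getOp (f b) a)

rightAdjunctOp :: (a -> Op r b) -> b -> Op r a
rightAdjunctOp g b = Op (\a -> getOp (g a) b)
```

Applying one adjunct after the other recovers the original function, as the isomorphism requires.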

This gives rise to one notion of a comonad-to-monad transformer!

Control.Monad.Trans.Contravariant.Adjoint

But can we do better?

An End as the Means

First, some boilerplate.

 
{-# LANGUAGE Rank2Types, FlexibleInstances, FlexibleContexts, MultiParamTypeClasses, UndecidableInstances #-}
 
import Data.Monoid
import Control.Comonad
import Control.Applicative
import Control.Comonad.Store.Class
import Control.Comonad.Env.Class as Env
import Control.Comonad.Traced.Class as Traced
import Control.Monad.Reader.Class
import Control.Monad.Writer.Class
import Control.Monad.State.Class
import Data.Functor.Bind
 

Our new comonad to monad transformer is given by

 
newtype Co w a = Co { runCo :: forall r. w (a -> r) -> r }
 

What we've done is added a quantifier to prevent the use of the type r, as we did when describing Codensity and Ran; categorically, we've taken some kind of end. This idea came to me after an observation by Russell O'Connor that Cont (Store s) a was pretty close to a continuation passing style version of State s.

Now, we can start spitting out instances for this type.

 
instance Functor w => Functor (Co w) where
   fmap f (Co w) = Co (w . fmap (. f))
 
instance Comonad w => Monad (Co w) where
   return a = Co (`extract` a)
   Co k >>= f = Co (k . extend (\wa a -> runCo (f a) wa))
 
instance Comonad w => Applicative (Co w) where
   mf <*> ma = mf >>= \f -> fmap f ma
   pure a = Co (`extract` a)
 

In my break-out of category-extras, I've split off the semigroupoid structure of Kleisli-, co-Kleisli-, and static- arrow composition as Bind, Extend and Apply respectively, so we can make use of slightly less structure and get slightly less structure in turn:

 
instance Extend w => Bind (Co w) where
   Co k >>- f = Co (k . extend (\wa a -> runCo (f a) wa))
 
instance Extend w => Apply (Co w) where
   mf <.> ma = mf >>- \f -> fmap f ma
 

From comonad-transformers to the mtl

We can look at how this transforms some particular comonads.

The comonadic version of State is Store. Looking at Co (Store s) a

 
Co (Store s) a ~ forall r. (s -> a -> r, s) -> r
               ~ forall r. (s -> a -> r) -> s -> r
               ~ forall r. (a -> s -> r) -> s -> r
               ~ Codensity ((->) s) a
               ~ State s a
 

This gives rise to a leap of intuition that we'll motivate further below:

 
instance ComonadStore s m => MonadState s (Co m) where
   get = Co (\w -> extract w (pos w))
   put s = Co (\w -> peek s w ())
 

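This intuition can be checked executably with a self-contained sketch (base only); returnCo, bindCo, get', put' and runState' are hypothetical local names mirroring the instances above:

```haskell
{-# LANGUAGE RankNTypes #-}

data Store s a = Store (s -> a) s

extractStore :: Store s a -> a
extractStore (Store f s) = f s

extendStore :: (Store s a -> b) -> Store s a -> Store s b
extendStore g (Store f s) = Store (g . Store f) s

newtype Co w a = Co { runCo :: forall r. w (a -> r) -> r }

returnCo :: a -> Co (Store s) a
returnCo a = Co (\w -> extractStore w a)

bindCo :: Co (Store s) a -> (a -> Co (Store s) b) -> Co (Store s) b
bindCo (Co k) f = Co (k . extendStore (\wa a -> runCo (f a) wa))

-- the MonadState operations from the instance above
get' :: Co (Store s) s
get' = Co (\(Store f s) -> f s s)

put' :: s -> Co (Store s) ()
put' s = Co (\(Store f _) -> f s ())

-- run it exactly like State s
runState' :: Co (Store s) a -> s -> (a, s)
runState' (Co k) s0 = k (Store (\s a -> (a, s)) s0)
```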
Sadly this breaks down a little for Writer and Reader, as the mtl unfortunately has historically included a bunch of extra baggage in these classes. In particular, in MonadReader, the notion of local isn't always available, blocking some otherwise perfectly good MonadReader instances, and I've chosen not to repeat this mistake in comonad-transformers.

 
instance ComonadEnv e m => MonadReader e (Co m) where
   ask = Co (\w -> extract w (Env.ask w))
   local = error "local"
 

Ideally, local belongs in a subclass of MonadReader.

 
class Monad m => MonadReader e m | m -> e where
   ask :: m e
 
class MonadReader e m => MonadLocal e m | m -> e where
   local :: (e -> e) -> m a -> m a
 

Similarly there is a lot of baggage in MonadWriter. The Monoid constraint isn't necessary for the class itself, just for most instances, and the listen and pass members should be members of a more restricted subclass as well, to admit some missing MonadWriter instances, but we can at least provide the notion of tell that is critical to Writer.

 
instance (Monoid e, ComonadTraced e m) => MonadWriter e (Co m) where
   tell m = Co (\w -> Traced.trace m w ())
   listen = error "listen"
   pass = error "pass"
 

But given the split out

 
class Monad m => MonadWriter e m | m -> e where
    tell :: e -> m ()
 
class MonadWriter e m => MonadListen e m | m -> e where
    listen :: m a -> m (a, e)
    pass :: m (a, e -> e) -> m a
 

We could provide this functionality more robustly. (There is a similar subset of Comonads that can provide listen and pass analogues.)

While I am now the maintainer of the mtl, I can't really justify making the above corrections to the class hierarchy at this time. They would theoretically break a lot of code. I would be curious to see how much code would break in practice though.

Combinators Please!

There is a recurring pattern in the above code, so we can also improve this construction by providing some automatic lifting combinators that take certain coKleisli arrows and give us monadic values.

 
lift0 :: Comonad w => (forall a. w a -> s) -> Co w s
lift0 f = Co (extract <*> f)
 
lift1 :: (forall a. w a -> a) -> Co w ()
lift1 f = Co (`f` ())
 

along with their inverses

 
lower0 :: Functor w => Co w s -> w a -> s
lower0 (Co f) w = f (id <$ w)
 
lower1 :: Functor w => Co w () -> w a -> a
lower1 (Co f) w = f (fmap const w)
 

(The proofs that these are inverses are quite hairy, and lean heavily on parametricity.)
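One direction is at least easy to spot-check. For the pair comonad (,) e, extract = snd is a coKleisli arrow of the right shape; this self-contained sketch (roundTrip is a hypothetical name) confirms that lower1 (lift1 snd) behaves exactly like snd:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Co w a = Co { runCo :: forall r. w (a -> r) -> r }

lift1 :: (forall a. w a -> a) -> Co w ()
lift1 f = Co (`f` ())

lower1 :: Functor w => Co w () -> w a -> a
lower1 (Co f) w = f (fmap const w)

-- round trip: lower1 (lift1 snd) should recover snd
roundTrip :: (Int, String) -> String
roundTrip = lower1 (lift1 snd)
```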

Then in the above, the code simplifies to:

 
get = lift0 pos
put s = lift1 (peek s)
ask = lift0 Env.ask
tell s = lift1 (Traced.trace s)
 

Co-Density?

Co and Codensity are closely related.

Given any Comonad w, it arises as the composition f . g for some adjunction f -| g : Hask -> C.

Considering only the case where C = Hask for now, we can find that

 
Co w a ~ forall r. f (g (a -> r)) -> r
 

Since f -| g, we know that g is Representable by f (), as witnessed by:

 
tabulateAdjunction :: Adjunction f u => (f () -> b) -> u b
tabulateAdjunction f = leftAdjunct f ()
 
indexAdjunction :: Adjunction f u => u b -> f a -> b
indexAdjunction = rightAdjunct . const
 

therefore

 
Co w a ~ f (g (a -> r)) -> r ~ f (f () -> a -> r) -> r
 

Since f is a left adjoint functor, f a ~ (a, f ()) by Sjoerd Visscher's elegant little split combinator:

 
split :: Adjunction f u => f a -> (a, f ())
split = rightAdjunct (flip leftAdjunct () . (,))
 

which has the simple inverse

 
unsplit :: Adjunction f g => a -> f () -> f a
unsplit a = fmap (const a)
 

so we can apply that to our argument:

 
Co w a ~ forall r. f (f () -> a -> r) -> r ~
         forall r. (f () -> a -> r, f ()) -> r
 

and curry to obtain

 
Co w a ~ forall r. (f () -> a -> r) -> f () -> r
 

and swap the arguments

 
Co w a ~ forall r. (a -> f () -> r) -> f () -> r
 

then we can tabulate the two subterms of the form (f () -> r)

 
Co w a ~ forall r. (a -> g r) -> g r
 

and so we find that

 
Co w a ~ Codensity g a
 

Finally,

 
Codensity g a ~ Ran g g a
 

but we showed back in my second article on Kan extensions that, given f -| g,

 
Ran g g a ~ g (f a)
 

So Co w ~ Co (f . g) ~ (g . f), the monad given rise to by composing our adjunction the other way!

Comonads from Monads?

Now, given all this you might ask

Is there a similar construction that lets you build a comonad out of a monad?

Sadly, it seems the answer in Haskell is no.

Any adjunction from Hask -> Hask^op would require two functions

 
class (Contravariant f, Contravariant g) => DualContravariantAdjunction f g where
    leftAdjunct :: (f a -> b) -> g b -> a
    rightAdjunct :: (g b -> a) -> f a -> b
 

where both functors are contravariant.

Surmounting the intuitionistic impossibility of this, given any such adjunction there would be a nice coend we could take, letting us sandwich any Monad in the middle as we did above.

There does exist one such very boring Contravariant Functor.

 
newtype Absurd a = Absurd (Absurd a)
 
absurdity :: Absurd a -> b
absurdity (Absurd a) = absurdity a
 
instance Contravariant Absurd where
   contramap f (Absurd as) = Absurd (contramap f as)
 
instance DualContravariantAdjunction Absurd Absurd where
    leftAdjunct _ = absurdity
    rightAdjunct _ = absurdity
 

We can safely sandwich IO within this adjunction from Hask -> Hask^op to obtain a comonad.

 
newtype Silly m a = Silly { runSilly :: Absurd (m (Absurd a)) }
 
instance Monad m => Extend (Silly m) where
    extend f (Silly m) = absurdity m
 
instance Monad m => Comonad (Silly m) where
    extract (Silly m) = absurdity m
 

But for any more interesting such type that actually lets us get at its contents, we would be able to derive a circuitous path to unsafePerformIO!

Since unsafePerformIO should not be constructible without knowing IO specifics, no useful DualContravariantAdjunctions should exist.

Free Monads for Less (Part 3 of 3): Yielding IO
http://comonad.com/reader/2011/free-monads-for-less-3/
Fri, 24 Jun 2011 06:41:06 +0000, by Edward Kmett

Last time, I said that I was going to put our cheap new free monad to work, so let's give it a shot.

Yield for Less

Last month at TPDC 2011, Roshan James and Amr Sabry presented Yield: Mainstream Delimited Continuations.

Without calling it such, they worked with the free monad of the indexed store comonad. Ignoring the comonad and just looking at the functor, we can see that

 
data Store i o r = Store (i -> r) o
    deriving (Functor)
 

admits the operation

 
class Functor y => Yieldable y i o | y -> i o where
   yield :: o -> y i
 
instance Yieldable (Store i o) i o where
   yield = Store id
 

The free monad of Store i o is a nice model for asymmetric coroutines.

 
type Yield i o = Free (Store i o)
 
liftFree :: Functor f => f a -> Free f a
liftFree = Free . fmap Pure
 
instance Yieldable y i o => Yieldable (Free y) i o where
   yield = liftFree . yield
 

With its Monad, you can write computations like:

 
foo :: Num o => Yield () o ()
foo = do
   yield 1
   yield 2
   yield 3
 

or to streamline one of James and Sabry's examples

 
walk :: Traversable f => f o -> Yield i o (f i)
walk = traverse yield
 

is an asymmetric coroutine that yields each of the elements in a traversable container in turn, replacing them with the responses from whatever is driving the coroutine.
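This naive representation can be driven directly. A self-contained sketch (base only), where `collect` is a hypothetical driver that answers every yield with () and gathers the outputs:

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Store i o r = Store (i -> r) o deriving Functor

data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Free g) = Free (fmap (fmap f) g)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure f <*> x = fmap f x
  Free g <*> x = Free (fmap (<*> x) g)

instance Functor f => Monad (Free f) where
  Pure a >>= f = f a
  Free g >>= f = Free (fmap (>>= f) g)

yield :: o -> Free (Store i o) i
yield o = Free (Store Pure o)

foo :: Free (Store () Int) ()
foo = yield 1 >> yield 2 >> yield 3

-- drive the coroutine, feeding () back in at every yield
collect :: Free (Store () o) a -> ([o], a)
collect (Pure a) = ([], a)
collect (Free (Store k o)) = let (os, a) = collect (k ()) in (o : os, a)
```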

James and Sabry called this the naive frame grabbing implementation. It is inefficient for the same reasons that we discussed before about retraversing the common trunk in free monads in general. Note that the unchanging trunk here isn't the data structure that we're traversing, but instead the chain of Store i o actions we took to get to the current instruction.

James and Sabry then proceeded to optimize it by hitting it with Codensity.

 
type Iterator i o = Codensity (Yield i o)
 
instance (Monad y, Yieldable y i o) => Yieldable (Codensity y) i o where
   yield = liftCodensity . yield
 

But we've now seen that we can get away with something smaller and get the same benefits.

 
liftF :: Functor f => f a -> F f a
liftF f = F (\kp kf -> kf (fmap kp f))
 
instance Yieldable y i o => Yieldable (F y) i o where
   yield = liftF . yield
 

Flattened, and with the store untupled, the new optimized representation looks like:

 
newtype Iterator i o a = Iterator
  { runIterator ::
    forall r. (a -> r) -> (o -> (i -> r) -> r) -> r
  }
 

and provides the same performance improvements for asymmetric coroutines as the Codensity version, used by James and Sabry, which would flatten to the much larger and less satisfying:

 
newtype RSIterator i o a = RSIterator
    { runRSIterator :: forall r.
          (a -> (o -> (i -> r) -> r) -> r)
             -> (o -> (i -> r) -> r) -> r
    }
 

They proceed to give an encoding of delimited continuations into this type and vice versa, but the rest of their material is of no further use to us here.
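The flattened Iterator is easy to work with directly. A self-contained sketch (yieldI, returnI, bindI, collectI and demo are hypothetical local names) showing yields and a driver for them:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Iterator i o a = Iterator
  { runIterator :: forall r. (a -> r) -> (o -> (i -> r) -> r) -> r }

-- hand o to the driver, resume with whatever i it answers
yieldI :: o -> Iterator i o i
yieldI o = Iterator (\kp kf -> kf o kp)

returnI :: a -> Iterator i o a
returnI a = Iterator (\kp _ -> kp a)

bindI :: Iterator i o a -> (a -> Iterator i o b) -> Iterator i o b
bindI m f = Iterator (\kp kf -> runIterator m (\a -> runIterator (f a) kp kf) kf)

-- answer every yield with (), collecting the outputs
collectI :: Iterator () o a -> ([o], a)
collectI m = runIterator m (\a -> ([], a)) (\o k -> let (os, a) = k () in (o : os, a))

demo :: Iterator () Int ()
demo = yieldI 1 `bindI` \_ -> yieldI 2 `bindI` \_ -> returnI ()
```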

As an aside, the performance benefits of encoding Oleg's iteratees in continuation passing style arise for much the same reason. The resulting encoding is a right Kan extension!

Who Needs the RealWorld?

As Runar recently tweeted, we have put this to good use here at ClariFI. (Yes, we are hiring! If the contents of my blog make sense to you then email me and let's talk.)

At ClariFI we have a strongly typed functional language that bears a strong resemblance to Haskell, with rank-n types and a number of other interesting type system features that are particularly suited to our problem domain.

However, as with Haskell, we needed a story for how to deal with IO.

Now, GHC models IO with the type

 
newtype IO a =
   IO (State# RealWorld -> (# a, State# RealWorld #))
 

where they model IO by working in a strict state monad, passing around a real world that they promise not to mutate or copy. (In practice, the world is passed around as a 0-byte token.)

This is somewhat problematic semantically, for a number of reasons.

First, there is always the risk of copying it or plumbing it through backwards, so we carefully hide the State# RealWorld from the end user. So this model really wants some notion of uniqueness or linear typing to render it perfectly safe. Heck, the entire Clean language arose from just trying to address this concern.

Second, you don't really get to pass the real world around! We have multiple cores working these days. Stuff is happening in the back end, and as much as you might want it to be, your program isn't responsible for everything that happens in the RealWorld!

Third, if in some sense all bottoms are the same, then forever (putStrLn "Hello World") and undefined are the same in that sense, despite the slew of side-effects that arise from the first one. Now, in Haskell you are allowed to catch some bottoms in the IO monad, and thereby escape from certain doom, but it is still a reasonable objection.

One alternate model for talking about IO is to view it as a free monad of some set of operations. This approach was taken by Wouter Swierstra's Functional Pearl: Data Types a la Carte.

You can then supply some sort of external interpreter that pumps that tree structure, performing the individual actions.

This is unsatisfying because of two things:

First, the performance is abysmal using the common ADT encoding of a free monad. Janis Voigtländer of course showed that this can be rectified by using the Codensity monad.

Second, the set of FFI operations is closed.

What we've done instead is to define our primitive IO actions externally as some FFI type:

 
type FFI o i -- external, side-effecting computation taking o, returning i
 

In practice, these are obtained by reflection by our foreign import statements since we run in the JVM.

Then we looked at the free monad of

 
data OI a = forall o i. OI (FFI o i) o (i -> a)
 
instance Functor OI where
  fmap f (OI ffi o k) = OI ffi o (f . k)
 

where OI is the indexed store comonad used as the building block above, yielding arguments to FFI of type o, and representing a computation that would resume with a value of type i to obtain a result of type a.

In some sense this yields a more useful notion than Richard Kieburtz's novel, but largely unimplementable, OI comonad from Codata and Comonads in Haskell.

Flattening Free OI would yield the naive

 
-- data FIO a where
--    Return :: a -> FIO a
--    FIO :: FFI o i -> o -> (i -> FIO a) -> FIO a
 

which would be interpreted by the runtime system.

But once we've converted to our Church-encoded Free monad and flattened we obtain:

 
newtype IO a = IO
    (forall r. (a -> r) ->
                 (forall i o. FFI o i -> o -> (i -> r) -> r) ->
                 r)
 

with the Functor and Monad instances defined above.
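
The Functor and Monad instances in question were defined earlier in the post and fall outside this excerpt; the following is a plausible reconstruction (using IO' to avoid clashing with the Prelude, and modeling FFI as plain functions, as in the "invert control" option described below):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Assumed model: an FFI action from o to i is just a function.
type FFI o i = o -> i

newtype IO' a = IO'
    (forall r. (a -> r) ->
               (forall i o. FFI o i -> o -> (i -> r) -> r) ->
               r)

instance Functor IO' where
  fmap f (IO' m) = IO' (\kp kf -> m (kp . f) kf)

instance Applicative IO' where
  pure a = IO' (\kp _ -> kp a)
  IO' mf <*> IO' ma = IO' (\kp kf -> mf (\f -> ma (kp . f) kf) kf)

instance Monad IO' where
  IO' m >>= f = IO' (\kp kf -> m (\a -> case f a of IO' n -> n kp kf) kf)

-- With FFI = (->), the runner from the end of the post:
unsafePerformIO' :: IO' a -> a
unsafePerformIO' (IO' m) = m id (\oi o ir -> ir (oi o))
```

Note how an FFI call is represented by invoking the second continuation, e.g. IO' (\kp kf -> kf someAction someArgument kp).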

This then gives us a number of choices on how we implement the runtime system:

We can use the machinery described earlier to convert from IO a to Free OI a or FIO a, and then have the runtime system pattern match on that structure in our main method, taking the FFI actions and their arguments and passing the results back in to the language. Alternatively, we can invert control and implement things more directly by just defining

 
type FFI = (->)
 

while letting the FFI'd methods have side-effects, and then defining

 
unsafePerformIO :: IO a -> a
unsafePerformIO (IO m) = m id (\ oi o ir -> ir (oi o))
 

But regardless of how FFI is implemented, this model provides a clear structural difference between forever (putStrLn "Hello") and undefined and does not require us to believe the pleasant fiction that we can get our hands on the real world and pass it around.

Our actual IO representation is only slightly more complicated than the one presented here in order to deal with the plumbing of an extra continuation to deal with Java exceptions, but the substance of this approach isn't changed by this addition.

[Edit: incorporated a minor typographical fix into Iterator from Max Bolingbroke]
[Edit: fixed Store to be data, a liftM that should have been an fmap, and added the missing Functor constraint that was present in my actual implementation but didn't make it to the web, and a couple of typos in the implementation of RSIterator, all noted by Clumsy.]

]]>
http://comonad.com/reader/2011/free-monads-for-less-3/feed/ 9
The Pointed-Set Comonad http://comonad.com/reader/2008/the-pointed-set-comonad/ http://comonad.com/reader/2008/the-pointed-set-comonad/#comments Thu, 04 Dec 2008 18:56:15 +0000 Edward Kmett http://comonad.com/reader/2008/the-pointed-set-comonad/ Last night, Chung-Chieh Shan posted an example of a pointed-set monad on his blog, which happens to be isomorphic to a non-empty stream monad with a different emphasis.

But, I thought I should point out that the pointed set that he posted also has a comonadic structure, which may be exploited since it is just a variation on the "zipper comonad," a structure that is perhaps more correctly called a "pointing comonad."

But first, a little background:

With combinatorial species you point a data structure by marking a single element in it as special. We can represent that with the product of an element and the derivative of the original type.

F*[A] = A * F'[A]

So, then looking at Shan's pointed set, we can ask what combinatorial species has a list as its derivative?

The answer is a cycle, not a set.

This fact doesn't matter to the monad, since the only way a monadic action interacts with that extra structure is safely through bind, but it does matter for the comonad, where every comonadic action has access to that structure but no control over the shape of the result.

However, we don't really have a way to represent an unordered set in Haskell. If you are treating a list as a set, then since the derivative of a set is another set, we can also view the a * [a] as a pointed set, so long as we don't depend on the order of the elements in the list in any way when obtaining the result of our comonadic actions.

I've changed the name of his data type to PointedSet to avoid conflicting with the definitions of Pointed and Copointed functors in category extras.

 
module PointedSet where
 
import Control.Comonad -- from my category-extras library
import Data.List (inits,tails) -- used much later below
 
data PointedSet a = PointedSet a [a] deriving (Eq, Ord, Show, Read)
 
instance Functor PointedSet where
    fmap f (PointedSet x xs) = PointedSet (f x) $ fmap f xs
 

The definition for extract is obvious, since you have already selected a point, just return it.

 
instance Copointed PointedSet where
    extract (PointedSet x _) = x
 

On the other hand, for duplicate we have a couple of options. An obvious and correct, but boring implementation transforms a value as follows:

 
boring_duplicate :: PointedSet a -> PointedSet (PointedSet a)
boring_duplicate xxs@(PointedSet x xs) =
    PointedSet xxs $ fmap (flip PointedSet []) xs
 
 
*PointedSet> boring_duplicate $ PointedSet 0 [1..3]
PointedSet (PointedSet 0 [1,2,3]) [
    PointedSet 1 [],
    PointedSet 2 [],
    PointedSet 3 []
]
 

but that just abuses the fact that we can always return an empty list.

Another fairly boring interpretation is to just use the guts of the definition of the Stream comonad, but that doesn't model a set with a single member singled out.

A more interesting version refocuses on each element of the list in turn, which makes the connection to the zipper comonad much more obvious. Since we want a pointed set and not a pointed cycle, we can focus on an element just by swapping out the element in the list in that position for the focus.

Again, since we can't specify general species in Haskell, this is as close as we can come to the correct comonadic structure for a pointed set. Due to the limitations of our type system, the comonadic action can still see the order of elements in the set, but it shouldn't use that information.

Since we don't care to preserve the order of the miscellaneous set elements, the refocus helper function below can just accumulate preceding elements in an accumulating parameter in reverse order.

 
instance Comonad PointedSet where
    duplicate xxs@(PointedSet x xs) = PointedSet xxs $ refocus [] x xs
      where
        refocus :: [a] -> a -> [a] -> [PointedSet a]
        refocus acc x (y:ys) =
            PointedSet y (acc ++ (x:ys)) : refocus (y:acc) x ys
        refocus acc x [] = []
 

Now,

 
*PointedSet> duplicate $ PointedSet 0 [1..3]
PointedSet (PointedSet 0 [1,2,3]) [
    PointedSet 1 [0,2,3],
    PointedSet 2 [1,0,3],
    PointedSet 3 [2,1,0]
]
 

With that in hand we can define comonadic actions that can look at an entire PointedSet and return a value, then extend them comonadically to generate new pointed sets.

For instance, if we had a numerical pointed set and wanted to blur our focus somewhat we could weight an average between the focused and unfocused elements:

 
smooth :: Fractional a => a -> PointedSet a -> a
smooth w (PointedSet a as) =
    w * a +
    (1 - w) * sum as / fromIntegral (length as)
 

Smoothing is a safe pointed-set comonadic operation because it doesn't care about the order of the elements in the list.

And so now we can blur the distinction between the focused element and the rest of the set:

 
*PointedSet> extend (smooth 0.5) $ PointedSet 10 [1..5]
PointedSet 6.5 [2.9,3.3,3.7,4.1,4.5]
 

A quick pass over the comonad laws shows that they all check out.

As noted above, if your comonadic action uses the order of the elements in the list beyond the selection of the focus, then it isn't really a valid pointed set comonadic operation. This is because we are abusing a list to approximate a (multi)set.

The Pointed-Cycle Comonad

A slight variation on this theme keeps the order of the elements the same in exchange for a more expensive refocusing operation and just rotates them through the focus.

 
data PointedCycle a = PointedCycle a [a] deriving (Eq, Ord, Show, Read)
 
instance Functor PointedCycle where
    fmap f (PointedCycle x xs) = PointedCycle (f x) $ fmap f xs
 
instance Copointed PointedCycle where
    extract (PointedCycle x _) = x
 
instance Comonad PointedCycle where
   duplicate xxs@(PointedCycle x xs) =
        PointedCycle xxs . fmap listToCycle . tail $ rotations (x:xs)
     where
        rotations :: [a] -> [[a]]
        rotations xs = init $ zipWith (++) (tails xs) (inits xs)
        listToCycle (x:xs) = PointedCycle x xs
 

With that you acknowledge that you really have a pointed cycle and the writer of the comonadic action can safely use the ordering information intrinsic to the list as a natural consequence of having taken the derivative of a cycle.

 
*PointedSet> duplicate $ PointedCycle 0 [1..3]
PointedCycle (PointedCycle 0 [1,2,3]) [
    PointedCycle 1 [2,3,0],
    PointedCycle 2 [3,0,1],
    PointedCycle 3 [0,1,2]
]
 
]]>
http://comonad.com/reader/2008/the-pointed-set-comonad/feed/ 3
Kan Extensions II: Adjunctions, Composition, Lifting http://comonad.com/reader/2008/kan-extensions-ii/ http://comonad.com/reader/2008/kan-extensions-ii/#comments Fri, 23 May 2008 00:13:13 +0000 Edward Kmett http://comonad.com/reader/2008/kan-extensions-ii/ I want to spend some more time talking about Kan extensions, composition of Kan extensions, and the relationship between a monad and the monad generated by a monad.

But first, I want to take a moment to recall adjunctions and show how they relate to some standard (co)monads, before tying them back to Kan extensions.

Adjunctions 101

An adjunction between categories $\mathcal{C}$ and $\mathcal{D}$ consists of a pair of functors $F : \mathcal{C} \to \mathcal{D}$ and $G : \mathcal{D} \to \mathcal{C}$, and a natural isomorphism:

$\phi : \mathrm{Hom}_\mathcal{D}(F-, =) \to \mathrm{Hom}_\mathcal{C}(-, G=)$

We call $F$ the left adjoint functor, $G$ the right adjoint functor, and $(F,G)$ an adjoint pair, and we write this relationship as $F \dashv G$.

Borrowing a Haskell definition from Dave Menendez, an adjunction from the category of Haskell types (Hask) to Hask, given by a pair of Haskell Functor instances, can be defined as follows, where the natural isomorphism is witnessed by $\phi$ = leftAdjunct and $\phi^{-1}$ = rightAdjunct. [haddock]

 
class (Functor f, Functor g) =>
    Adjunction f g | f -> g, g -> f where
        unit   :: a -> g (f a)
        counit :: f (g a) -> a
        leftAdjunct  :: (f a -> b) -> a -> g b
        rightAdjunct :: (a -> g b) -> f a -> b
 
        unit = leftAdjunct id
        counit = rightAdjunct id
        leftAdjunct f = fmap f . unit
        rightAdjunct f = counit . fmap f
 

Currying and Uncurrying

The most well known adjunction to a Haskell programmer is between the functors given by ((,)e) and ((->)e). (Recall that you can read ((,)e) as (e,) and ((->)e) as (e->); this syntax isn't valid Haskell, however, as you aren't allowed to make (,) and (->) sections.) We use this adjunction almost every day in the form of the functions curry and uncurry.

 
curry :: ((a, b) -> c) -> a -> b -> c
curry f x y = f (x,y)
 
uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry f ~(x,y) = f x y
 

However, the arguments are unfortunately slightly flipped around when we go to define this as an adjunction.

 
instance Adjunction ((,)e) ((->)e) where
        leftAdjunct f a e  = f (e,a)
        rightAdjunct f ~(e,a) = f a e
 

This adjunction defines the relationship between the anonymous reader monad and the anonymous reader comonad (aka the product comonad).

All Readers are the Same

As an aside, if you look at the reader arrow, reader monad and reader comonad side by side, you can see that they are all basically the same thing. Kleisli arrows for the anonymous reader monad have the form a -> e -> b. The Reader arrow takes the form arr (a, e) b, which, when arr is (->), reads as (a, e) -> b: just a curried Kleisli arrow for the Reader monad. On the other hand, the reader comonad is ((,)e), and its CoKleisli arrows have the form (e, a) -> b. So, putting these side by side:

 
a -> e -> b
(a , e) -> b
(e , a) -> b
 

You can clearly see these are all the same thing!
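
To make that concrete, the conversions between the three forms are just argument shuffling (the function names here are mine):

```haskell
-- Kleisli arrow for the reader monad:      a -> e -> b
-- Reader arrow (with arr = (->)):          (a, e) -> b
-- CoKleisli arrow for the reader comonad:  (e, a) -> b

toArrowForm :: (a -> e -> b) -> ((a, e) -> b)
toArrowForm f (a, e) = f a e

toCoKleisli :: (a -> e -> b) -> ((e, a) -> b)
toCoKleisli f (e, a) = f a e

fromCoKleisli :: ((e, a) -> b) -> a -> e -> b
fromCoKleisli g a e = g (e, a)
```

All three round-trip without loss, which is the precise sense in which they are the same thing.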

State and Composing Adjunctions

Once we define functor composition:

 
newtype O f g a = Compose { decompose :: f (g a) }
instance (Functor f, Functor g) => Functor (f `O` g) where
        fmap f = Compose . fmap (fmap f) . decompose
 

We can see that every adjunction gives rise to a monad:

 
instance Adjunction f g => Monad (g `O` f) where
        return = Compose . unit
        m >>= f =
             Compose .
             fmap (rightAdjunct (decompose . f)) $
             decompose m
 

and if you happen to have a Comonad typeclass lying around, a comonad:

 
class Comonad w where
        extract :: w a -> a
        duplicate :: w a -> w (w a)
        extend :: (w a -> b) -> w a -> w b
        extend f = fmap f . duplicate
        duplicate = extend id
 
instance Adjunction f g => Comonad (f `O` g) where
        extract = counit . decompose
        extend f =
                Compose .
                fmap (leftAdjunct (f . Compose)) .
                decompose
 

In reality, adjunction composition is of course not the only way you could form a monad by composition, so in practice a single composition constructor leads to ambiguity. That is why category-extras provides a base CompF functor and specialized variations for the different desired instances. For simplicity, I'll stick to `O` here.

We can compose adjunctions, yielding an adjunction, so long as we are careful to place things in the right order:

 
instance (Adjunction f1 g1, Adjunction f2 g2) =>
    Adjunction (f2 `O` f1) (g1 `O` g2) where
        counit =
               counit .
               fmap (counit . fmap decompose) .
               decompose
        unit =
               Compose .
               fmap (fmap Compose . unit) .
               unit
 

In fact, if we use the adjunction defined above, we can see that it's just the State monad!

 
instance MonadState e ((->)e `O` (,)e) where
        get = Compose $ \s -> (s,s)
        put s = Compose $ const (s,())
 

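Unrolling the composite makes the claim concrete: ((->)e `O` (,)e) a is just e -> (e, a). Here is a self-contained sketch (the names are mine) with the bind that the adjunction-derived Monad instance above induces, specialized by hand:

```haskell
-- ((->)e `O` (,)e) a  ~  e -> (e, a), i.e. the State monad.
newtype StateC e a = StateC { runStateC :: e -> (e, a) }

getC' :: StateC e e
getC' = StateC (\s -> (s, s))

putC' :: e -> StateC e ()
putC' s = StateC (const (s, ()))

-- (>>=) of the adjunction-composed monad, specialized to this composite:
bindC :: StateC e a -> (a -> StateC e b) -> StateC e b
bindC m f = StateC (\s -> let (s', a) = runStateC m s
                          in runStateC (f a) s')
```

For example, runStateC (getC' `bindC` (putC' . succ)) 5 yields (6, ()).
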
Not that I'd be prone to consider using that representation, but we can also see that we get the context comonad this way:

 
class Comonad w =>
    ComonadContext s w | w -> s where
        getC :: w a -> s
        modifyC :: (s -> s) -> w a -> a
 
instance ComonadContext e ((,)e `O` (->)e) where
        getC = fst . decompose
        modifyC f = uncurry (flip id . f) . decompose
 

Adjunctions as Kan Extensions

Unsurprisingly, since pretty much all of category theory comes around to being an observation about Kan extensions in the end, we can find some laws relating left- and right- Kan extensions to adjunctions.

Recall the definitions for right and left Kan extensions over Hask:

 
newtype Ran g h a = Ran
        { runRan :: forall b. (a -> g b) -> h b }
data Lan g h a = forall b. Lan (g b -> a) (h b)
 

Formally, $F \dashv G$ if and only if the right Kan extension $\mathrm{Ran}_G 1$ exists and is preserved by $G$ (Saunders Mac Lane, Categories for the Working Mathematician, p. 248). We can use this in Haskell to define a natural isomorphism between f and Ran g Identity witnessed by adjointToRan and ranToAdjoint below:

 
adjointToRan :: Adjunction f g => f a -> Ran g Identity a
adjointToRan f = Ran (\a -> Identity $ rightAdjunct a f)
 
ranToAdjoint :: Adjunction f g => Ran g Identity a -> f a
ranToAdjoint r = runIdentity (runRan r unit)
 

We can construct a similar natural isomorphism for the right adjoint g of a Functor f and Lan f Identity:

 
adjointToLan :: Adjunction f g => g a -> Lan f Identity a
adjointToLan = Lan counit . Identity
 
lanToAdjoint :: Adjunction f g => Lan f Identity a -> g a
lanToAdjoint (Lan f v) = leftAdjunct f (runIdentity v)
 

So, with that in hand we can see that Ran f Identity -| f -| Lan f Identity, presuming Ran f Identity and Lan f Identity exist.
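
As a quick sanity check (a self-contained specialization, inlining the ((,)e) -| ((->)e) adjunction from earlier rather than going through the type class), the two maps above really are mutually inverse:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Identity a = Identity { runIdentity :: a }
newtype Ran g h a = Ran { runRan :: forall b. (a -> g b) -> h b }

-- rightAdjunct and unit for f = ((,)e), g = ((->)e), written out:
rightAdjunctP :: (a -> e -> b) -> (e, a) -> b
rightAdjunctP f (e, a) = f a e

unitP :: a -> e -> (e, a)
unitP a e = (e, a)

adjointToRanP :: (e, a) -> Ran ((->) e) Identity a
adjointToRanP p = Ran (\k -> Identity (rightAdjunctP k p))

ranToAdjointP :: Ran ((->) e) Identity a -> (e, a)
ranToAdjointP r = runIdentity (runRan r unitP)
```

ranToAdjointP . adjointToRanP is the identity on pairs, as expected.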

A More General Connection

Now, the first isomorphism above can be seen as a special case of a more general law relating functor composition and Kan extensions, where h = Identity in the composition below:

 
ranToComposedAdjoint ::
        Adjunction f g =>
        Ran g h a -> (h `O` f) a
ranToComposedAdjoint r = Compose (runRan r unit)
 
composedAdjointToRan ::
        (Functor h, Adjunction f g) =>
        (h `O` f) a -> Ran g h a
composedAdjointToRan f =
        Ran (\a -> fmap (rightAdjunct a) (decompose f))
 

Similarly, we get the more general relationship for Lan:

 
lanToComposedAdjoint ::
        (Functor h, Adjunction f g) =>
        Lan f h a -> (h `O` g) a
lanToComposedAdjoint (Lan f v) =
        Compose (fmap (leftAdjunct f) v)
 
composedAdjointToLan ::
        Adjunction f g =>
        (h `O` g) a -> Lan f h a
composedAdjointToLan = Lan counit . decompose
 

Composing Kan Extensions

Using the above with the laws for composing right Kan extensions:

 
composeRan :: Ran f (Ran g h) a -> Ran (f `O` g) h a
composeRan r =
        Ran (\f -> runRan (runRan r (decompose . f)) id)
 
decomposeRan ::
        Functor f =>
        Ran (f `O` g) h a ->  Ran f (Ran g h) a
decomposeRan r =
        Ran (\f -> Ran (\g -> runRan r (Compose . fmap g . f)))
 

or the laws for composing left Kan extensions:

 
composeLan ::
        Functor f =>
        Lan f (Lan g h) a -> Lan (f `O` g) h a
composeLan (Lan f (Lan g h)) =
        Lan (f . fmap g . decompose) h
 
decomposeLan :: Lan (f `O` g) h a -> Lan f (Lan g h) a
decomposeLan (Lan f h) = Lan (f . Compose) (Lan id h)
 

can give you a lot of ways to construct monads:

Right Kan Extension as (almost) a Monad Transformer

You can lift many operations from a monad m to the codensity monad of m. Unfortunately, we don't have quite the right type signature for an instance of MonadTrans, so we'll have to make do with our own methods:

[Edit: this has been since factored out into Control.Monad.Codensity to allow Codensity to actually be an instance of MonadTrans]

 
liftRan :: Monad m => m a -> Ran m m a
liftRan m = Ran (m >>=)
 
lowerRan :: Monad m => Ran m m a -> m a
lowerRan a = runRan a return
 
instance MonadReader r m =>
    MonadReader r (Ran m m) where
        ask = liftRan ask
        local f m = Ran (\c -> ask >>=
              \r -> local f (runRan m (local (const r) . c)))
 
instance MonadIO m =>
    MonadIO (Ran m m) where
        liftIO = liftRan . liftIO
 
instance MonadState s m =>
    MonadState s (Ran m m) where
        get = liftRan get
        put = liftRan . put
 

In fact the list of things you can lift is pretty much the same as what you can lift over the ContT monad transformer, due to the similarity in the types. However, just because you lifted the operation into the right or left Kan extension doesn't mean that it has the same asymptotic performance.

Similarly we can lift many comonadic operations to the Density comonad of a comonad using Lan.

[Edit: Refactored out into Control.Comonad.Density]

Changing Representation

Given an adjunction f -| g, g `O` f is a monad, and Ran (g `O` f) (g `O` f) is the monad generated by (g `O` f), described in the previous post. We showed above that this monad can do many of the same things that the original monad could do. From there you can decomposeRan to get Ran g (Ran f (g `O` f)), which you can show to be yet another monad, and you can continue on from there.

Each of these monads may have different operational characteristics and performance tradeoffs. For instance the codensity monad of a monad can offer better asymptotic performance in some usage scenarios.

Similarly the left Kan extension can be used to manipulate the representation of a comonad.

All of this code is encapsulated in category-extras [docs] [darcs] as of release 0.51.0

]]>
http://comonad.com/reader/2008/kan-extensions-ii/feed/ 6
Kan Extensions http://comonad.com/reader/2008/kan-extensions/ http://comonad.com/reader/2008/kan-extensions/#comments Wed, 21 May 2008 01:39:54 +0000 Edward Kmett http://comonad.com/reader/2008/kan-extensions/ I think I may spend a post or two talking about Kan extensions.

They appear to be black magic to Haskell programmers, but as Saunders Mac Lane said in Categories for the Working Mathematician:

All concepts are Kan extensions.

So what is a Kan extension? They come in two forms: right- and left- Kan extensions.

First I'll talk about right Kan extensions, since Haskell programmers have a better intuition for them.

Introducing Right Kan Extension

If we observe the type for a right Kan extension over the category of Haskell types:

 
newtype Ran g h a = Ran
        { runRan :: forall b. (a -> g b) -> h b }
 

This is defined in category-extras under Control.Functor.KanExtension along with a lot of the traditional machinery for working with them.

We say that Ran g h is the right Kan extension of h along g, and mathematicians denote it $\mathbf{Ran}_G H$. It has a pretty diagram associated with it, but that's as deep as I'll let the category theory go.

This looks an awful lot like the type of a continuation monad transformer:

 
newtype ContT r m a = ContT
        { runContT :: (a -> m r) -> m r }
 

The main difference is that we have two functors involved and that the body of the Kan extension is universally quantified over the value it contains, so the function it carries can't just hand you back an m r it has lying around unless the functor it has closed over doesn't depend at all on the type r.

Interestingly we can define an instance of Functor for a right Kan extension without even knowing that g or h are functors! Anything of kind * -> * will do.

 
instance Functor (Ran g h) where
        fmap f m = Ran (\k -> runRan m (k . f))
 

The monad generated by a functor

We can take the right Kan extension of a functor f along itself (this works for any functor in Haskell) and get what is known as the monad generated by f or the codensity monad of f:

 
instance Monad (Ran f f) where
        return x = Ran (\k -> k x)
        m >>= k = Ran (\c -> runRan m (\a -> runRan (k a) c))
 

This monad is mentioned in passing in Opmonoidal Monads by Paddy McCrudden and dates back further to Ross Street's "The formal theory of monads" from 1972. The term codensity seems to date back at least to Dubuc's thesis in 1974.

Again, this monad doesn't care one whit about the fact that f is a Functor in the Haskell sense.

This monad provides a useful opportunity for optimization. For instance, Janis Voigtländer noted in Asymptotic Improvement of Computations over Free Monads that a particular monad could be used to improve performance -- Free monads, as you'll recall, are the tool used in Wouter Swierstra's Data Types à la Carte, and provide an approach for, among other things, decomposing the IO monad into something more modular, so this is by no means a purely academic exercise!

Voigtländer's monad,

 
newtype C m a = C (forall b. (a -> m b) -> m b)
 

turns out to be just the right Kan extension of another monad along itself, and can equivalently be thought of as a ContT that has been universally quantified over its result type.

The improvement results from the fact that the continuation passing style transformation it applies keeps you from traversing back and forth over the entire tree when performing substitution in the free monad.
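
A small self-contained illustration (the tree type and numbers are mine): with a leaf-substitution monad, left-nested binds re-traverse already-built nodes over and over, while routing the same binds through this C keeps them right-associated, so the tree is built once:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leaf-labeled binary tree; bindT substitutes at the leaves (a free monad).
data Tree a = Leaf a | Node (Tree a) (Tree a)

bindT :: Tree a -> (a -> Tree b) -> Tree b
bindT (Leaf a)   f = f a
bindT (Node l r) f = Node (bindT l f) (bindT r f)

newtype C m a = C (forall b. (a -> m b) -> m b)

rep :: Tree a -> C Tree a
rep t = C (bindT t)

abs' :: C Tree a -> Tree a
abs' (C p) = p Leaf

bindC :: C m a -> (a -> C m b) -> C m b
bindC (C p) f = C (\k -> p (\a -> case f a of C q -> q k))

-- n left-nested binds; quadratic on Tree directly, linear through C.
grow :: Int -> Tree Int
grow n = abs' (go n (rep (Leaf 0)))
  where
    go 0 t = t
    go k t = go (k - 1) (t `bindC` \x -> rep (Node (Leaf x) (Leaf (x + 1))))

leaves :: Tree a -> [a]
leaves (Leaf a)   = [a]
leaves (Node l r) = leaves l ++ leaves r
```

Each bind doubles the leaves, so grow n produces a tree with 2^n leaves either way; only the cost differs.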

The Yoneda Lemma

Heretofore we've only used right Kan extensions where we have extended a functor along itself. Lets change that:

Dan Piponi posted a bit about the Yoneda lemma a couple of years back, which ended with the observation that the Yoneda lemma says that check and uncheck are inverses:

 
> check :: Functor f => f a -> (forall b . (a -> b) -> f b)
> check a f = fmap f a
 
> uncheck :: (forall b . (a -> b) -> f b) -> f a
> uncheck t = t id
 

We can see that this definition for a right Kan extension just boxes up that universal quantifier in a newtype and that we could instantiate:

 
> type Yoneda = Ran Identity
 

and we can define check and uncheck as:

 
check' :: Functor f => f a -> Yoneda f a
check' a = Ran (\f -> fmap (runIdentity . f) a)
 
uncheck' :: Yoneda f a -> f a
uncheck' t = runRan t Identity
 

Limits

We can go on and define categorical limits in terms of right Kan extensions using the Trivial functor that maps everything to a category with a single value and function. In Haskell, this is best expressed by:

 
data Trivial a = Trivial
instance Functor Trivial where
        fmap f _ = Trivial
trivialize :: a -> Trivial b
trivialize _ = Trivial
 
type Lim = Ran Trivial
 

Now, in Haskell, this gives us a clear operational understanding of categorical limits.

 
Lim f a ~ forall b. (a -> Trivial b) -> f b
 

This says that we can't use any information of the value a we supply, or given by the function (a -> Trivial b) when constructing f b, but we have to be able to define an f b for any type b requested. However, we have no way to get any b to plug into the functor! So the only (non-cheating) member of Lim Maybe a is Nothing, of Lim [] a is [], etc.

Left Kan extensions

Left Kan extensions are a bit more obscure to a Haskell programmer, because where right Kan extensions relate to the well-known ContT monad transformer, the left Kan extension is related to a less well known comonad transformer.

First, a Haskell type for the Left Kan extension of h along g:

 
data Lan g h a = forall b. Lan (g b -> a) (h b)
 

This is related to the admittedly somewhat obscure state-in-context comonad transformer, which I constructed for category-extras.

 
newtype ContextT s w a = ContextT
        { runContextT :: (w s -> a, w s) }
 

However, the left Kan extension provides no information about the type b contained inside of its h functor and g and h are not necessarily the same functor.

As before we get that Lan g h is a Functor regardless of what g and h are, because we only have to map over the right hand side of the contained function:

 
instance Functor (Lan f g) where
	fmap f (Lan g h) = Lan (f . g) h
 

The comonad generated by a functor

We can also see that the left Kan extension of any functor f along itself is a comonad, even if f is not a Haskell Functor. This is of course known as the comonad generated by f, or the density comonad of f.

 
instance Comonad (Lan f f) where
	extract (Lan f a) = f a
	duplicate (Lan f ws) = Lan (Lan f) ws
 

Colimits

Finally we can derive colimits, by:

 
type Colim = Lan Trivial
 

then Colim f a ~ exists b. (Trivial b -> a, f b), and we can see that operationally, we have an f of some unknown type b and for all intents and purposes a value of type a since we can generate a Trivial b from thin air, so while limits allow only structures without values, colimits allow arbitrary structures, but keep you from inspecting the values in them by existential quantification. So for instance you could apply a length function to a Colim [] a, but not add up its values.
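
A sketch of that observation (inlining Lan and Trivial from above so the snippet stands alone):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Trivial a = Trivial
data Lan g h a = forall b. Lan (g b -> a) (h b)
type Colim = Lan Trivial

-- We may inspect the shape...
colimLength :: Colim [] a -> Int
colimLength (Lan _ xs) = length xs

-- ...and we can always produce the 'a', since a Trivial b is free to conjure:
colimExtract :: Colim [] a -> a
colimExtract (Lan f _) = f Trivial

-- But something like summing the contents of the list is unwritable:
-- the element type b is existentially hidden, so only the shape is visible.
```
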

You can also build up a covariant analog of the traditional Yoneda lemma using Lan Identity, but I leave that as an exercise for the reader.

I've barely scratched the surface of what you can do with Kan extensions, but I just wanted to shine a little light on this dark corner of category theory.

For more information feel free to explore category-extras. For instance, both right and left Kan extensions along a functor are higher-order functors, and hence so are Yoneda, Lim, and Colim as defined above.

That's all I have time for now.

Code for right and left Kan extensions, limits, colimits and the Yoneda lemma are all available from category-extras on hackage.

[Edit: the code has since been refactored to treat Yoneda, CoYoneda, Density and Codensity as separate newtypes to allow for instance both Yoneda and Codensity to be different monads]

]]>
http://comonad.com/reader/2008/kan-extensions/feed/ 7
Elgot (Co)Algebras http://comonad.com/reader/2008/elgot-coalgebras/ http://comonad.com/reader/2008/elgot-coalgebras/#comments Tue, 20 May 2008 02:31:26 +0000 Edward Kmett http://comonad.com/reader/2008/elgot-algebras/

 
> import Control.Arrow ((|||),(&&&),left)
> newtype Mu f = InF { outF :: f (Mu f) }
 

I want to talk about a novel recursion scheme that hasn't received a lot of attention from the Haskell community and its even more obscure dual -- which is necessarily more obscure because I believe this is the first time anyone has talked about it.

Jiří Adámek, Stefan Milius and Jiří Velebil have done a lot of work on Elgot algebras. Here I'd like to translate them into Haskell, dualize them, observe that the dual can encode primitive recursion, and provide some observations.

You can kind of think of an Elgot algebra as a hylomorphism that cheats.

 
> elgot :: Functor f => (f b -> b) -> (a -> Either b (f a)) -> a -> b
> elgot phi psi = h where h = (id ||| phi . fmap h) . psi
 

If you look at the signature for a hylomorphism:

 
> hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
> hylo phi psi = h where h = phi . fmap h . psi
 

Then you can see that an Elgot algebra is basically a hylomorphism that is allowed to shortcircuit the infinite tower of fmaps and return an intermediate result directly.
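
For a toy example of the short circuit (mine, not from the post; elgot and a list base functor are repeated here so the snippet stands alone), here is a product over a list that bails out with 0 the moment a zero is seen, never deconstructing the rest of the input:

```haskell
import Control.Arrow ((|||))

data ListF e r = Nil | Cons e r

instance Functor (ListF e) where
  fmap _ Nil        = Nil
  fmap f (Cons e r) = Cons e (f r)

elgot :: Functor f => (f b -> b) -> (a -> Either b (f a)) -> a -> b
elgot phi psi = h where h = (id ||| phi . fmap h) . psi

productz :: [Int] -> Int
productz = elgot phi psi
  where
    phi Nil        = 1
    phi (Cons x r) = x * r
    psi (0 : _)  = Left 0            -- the answer is known; shortcircuit
    psi []       = Right Nil
    psi (x : xs) = Right (Cons x xs)
```

So productz [2,3,4] is 24, while productz over a list whose tail after a 0 is undefined still returns 0 without forcing that tail.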

In some sense you can say that the coalgebra-like side of the hylomorphism is no longer oblivious to the algebra used to deconstruct the intermediate result.

We can take the Elgot algebra and dualize it to get a novel construction where the algebra-like side is no longer oblivious to the coalgebra. This allows your algebra to cheat and just use the intermediate results constructed by the anamorphism to return an answer. I'll choose to call this co-(Elgot algebra) an Elgot coalgebra in the sequel.

 
> coelgot :: Functor f => ((a, f b) -> b) -> (a -> f a) -> a -> b
> coelgot phi psi = h where h = phi . (id &&& fmap h . psi)
 

In a lot of ways an Elgot algebra resembles Vene and Uustalu's apomorphism.

 
> apo :: Functor f => (a -> f (Either (Mu f) a)) -> a -> Mu f
> apo psi = h . Right where h = InF . fmap h . (fmap Left . outF ||| psi)
 

However, we have 'unfixed' the algebra to be used from InF to something more general and the layering of Either and f is different.

Now, a generalized apomorphism does something similar by entangling two coalgebras, but the signature doesn't quite match up either, since a generalized apomorphism uses an F-coalgebra and an F-(b + _)-monadic coalgebra.

 
> g_apo :: Functor f => (b -> f b) -> (a -> f (Either b a)) -> a -> Mu f
> g_apo g f = h . Right where h = InF . fmap h . (fmap Left . g ||| f)
 

Similarly a zygomorphism, or more generally a mutumorphism entangles two algebras.

An Elgot algebra occupies a somewhat rare spot in the theory of constructive algorithmics or recursion schemes in that, while it mixes an algebra with a coalgebra like a hylomorphism or a metamorphism, it entangles them in a novel way.

If we specialize the Elgot algebra by fixing its algebra to InF we get:

 
> elgot_apo :: Functor f => (a -> Either (Mu f) (f a)) -> a -> Mu f
> elgot_apo psi = h where h = (id ||| InF . fmap h) . psi
 

We can see that the type is now closely related to that of an apomorphism with some slight changes in design decisions. Instead of wrapping a functor around further seeds, a, or a finished structure, this specialized Elgot algebra returns the finished structure directly or an f wrapped around seeds.

The Good

So can we convert between an apomorphism and an Elgot algebra? For a somewhat circuitous path to that answer, let's recall the definition of strength from my post a couple of weeks ago. Flipping the arguments and direction of application for strength to simplify what is coming, we get:

 
> strength' :: Functor f => t -> f a -> f (t, a)
> strength' b fa = fmap ((,)b) fa
 

With that in hand we quickly find that we can rederive paramorphisms (and hence primitive recursion) from the novel notion of an Elgot coalgebra that we defined above by leaning on the strength of our functor.

 
> para :: Functor f => (f (Mu f, c) -> c) -> Mu f -> c
> para f = coelgot (f . uncurry strength') outF
 

This result tells us that the shiny new Elgot coalgebras we defined above are strong enough to encode primitive recursion when working in Haskell.
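
As a worked check (the Peano numerals and factorial are mine; coelgot, strength' and para are repeated so the snippet stands alone), primitive recursion on naturals drops out directly: the algebra sees both the predecessor n and the recursive result r:

```haskell
import Control.Arrow ((&&&))

newtype Mu f = InF { outF :: f (Mu f) }

data NatF r = Z | S r

instance Functor NatF where
  fmap _ Z     = Z
  fmap f (S r) = S (f r)

coelgot :: Functor f => ((a, f b) -> b) -> (a -> f a) -> a -> b
coelgot phi psi = h where h = phi . (id &&& fmap h . psi)

strength' :: Functor f => t -> f a -> f (t, a)
strength' b fa = fmap ((,) b) fa

para :: Functor f => (f (Mu f, c) -> c) -> Mu f -> c
para f = coelgot (f . uncurry strength') outF

toNat :: Int -> Mu NatF
toNat 0 = InF Z
toNat n = InF (S (toNat (n - 1)))

toInt :: Mu NatF -> Int
toInt = para phi
  where phi Z          = 0
        phi (S (_, r)) = r + 1

fact :: Mu NatF -> Int
fact = para phi
  where phi Z          = 1
        phi (S (n, r)) = (toInt n + 1) * r   -- uses the predecessor n directly
```
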

The Bad

This tempts us to try to derive apomorphisms from Elgot algebras using the dual case, costrength. However, if you'll recall from my previous post on comonadic costrength, we can't do that in general. The result is only defined for Traversable functors; not every functor is costrong in Haskell!

Consequently and counterintuitively, though we can define a paramorphism in terms of Elgot coalgebras, we can only define an apomorphism in terms of Elgot algebras for Traversable functors.

The Ugly

Now, worse news. Since the tower of functors we build up doesn't run off to infinity, we lose the ability to generalize Elgot (co)algebras using the same machinery we use to generalize the various traditional recursion schemes, namely parameterizing them by a (co)monad and a distributive law.

At least, the straightforward translation fails. For instance, in the case of an Elgot algebra the obvious addition would be to replace the algebra (f a -> a) with an f-w-comonadic algebra (f (w a) -> a) for some comonad w. However, attempts to do so run afoul of the fact that the coalgebra-like structure feeds us an 'a', not a 'w a'. We can of course change the signature of the coalgebra to supply the comonad, but the breakdown of modularity is unfortunate.

Similarly, parameterizing the coalgebra-like structure with a monad requires the ability to distribute the monad over Either b before we can apply the distributive law for the base functor f. Interestingly, the Either monad itself works, which gives us ways to compose Elgot (co)algebras, but that is a story for another day.

As usual there is a tradeoff in expressivity in one area to compensate for gains in another, but this manner of entangling provides us with a new set of possibilities to explore.

Code for Elgot algebras and Elgot coalgebras has been included in category-extras as of release 0.50.3 as Control.Functor.Algebra.Elgot.

Now available from hackage.

Zipping and Unzipping Functors
http://comonad.com/reader/2008/zipping-and-unzipping-functors/
Mon, 05 May 2008 03:00:52 +0000 Edward Kmett

Kefer asked a question in the comments of my post about (co)monadic (co)strength about Uustalu and Vene's ComonadZip class from p157 of The Essence of Dataflow Programming. The class in question is:

 
class Comonad w => ComonadZip w where
    czip :: w a -> w b -> w (a, b)
 

In response I added Control.Functor.Zip [Source] to my nascent rebundled version of category-extras, which was posted up to hackage earlier today.

Putting aside the dual operation for the moment, we can dispense with the inverse of zip quite simply: for much the same reason that every functor in Haskell is strong, every functor in Haskell is unzippable:

 
unfzip :: Functor f => f (a, b) -> (f a, f b)
unfzip = fmap fst &&& fmap snd
 

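As a quick usage sketch, this is just the definition above with its Control.Arrow import spelled out:

```haskell
import Control.Arrow ((&&&))

unfzip :: Functor f => f (a, b) -> (f a, f b)
unfzip = fmap fst &&& fmap snd
```

For example, unfzip [(1,'a'),(2,'b')] gives ([1,2],"ab").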
On the other hand, the question of which functors are zippable is a little trickier. Allowing for a circular definition between fzip and fzipWith, we can start with the following class, for which you have to implement at least one of fzip or fzipWith.

 
class Functor f => Zip f where
    fzip :: f a -> f b -> f (a, b)
    fzip = fzipWith (,)
    fzipWith :: (a -> b -> c) -> f a -> f b -> f c
    fzipWith f as bs = fmap (uncurry f) (fzip as bs)
 

Here we set aside the restriction that we only be able to zip a comonad, and simply require that if the functor in question is a comonad, then it is a "symmetric semi-monoidal comonad", which is to say that zipping and then extracting yields the same result as extracting from each separately. You may note a lot of similarity between the above and the definition of Control.Functor.Zap, the Dual functor machinery from the other day.
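To see that law concretely, consider streams, the canonical example from the dataflow paper. The following is a minimal standalone sketch (Stream, extractS, fzipS, iterateS and lawHolds are illustrative local names, not part of the class above):

```haskell
-- infinite streams, the canonical carrier for ComonadZip
data Stream a = Cons a (Stream a)

extractS :: Stream a -> a
extractS (Cons a _) = a

fzipS :: Stream a -> Stream b -> Stream (a, b)
fzipS (Cons a as) (Cons b bs) = Cons (a, b) (fzipS as bs)

iterateS :: (a -> a) -> a -> Stream a
iterateS f a = Cons a (iterateS f (f a))

-- zipping and then extracting agrees with extracting separately
lawHolds :: Stream Int -> Stream Int -> Bool
lawHolds s t = extractS (fzipS s t) == (extractS s, extractS t)
```

For instance, lawHolds (iterateS (+1) 0) (iterateS (+2) 10) is True: the head of the zipped stream is exactly the pair of heads.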

Now, we can throw ourselves with reckless abandon at the easy cases:

 
instance Zip Identity where
    fzipWith f (Identity a) (Identity b) = Identity (f a b)
 
instance Zip [] where
    fzip = zip
    fzipWith = zipWith
 
instance Zip Maybe where
    fzipWith f (Just a) (Just b) = Just (f a b)
    fzipWith _ _ _ = Nothing
 

But we note that Either causes us to break down: we can't handle the 'mixed' cases of Left and Right cleanly. We can, however, use the same 'cheat' that makes the Writer monad work, relying on a Monoid instance for the left-hand side of the bifunctor, which enables us to define:

 
instance Monoid a => Zip (Either a) where
    fzipWith _ (Left a)  (Left b)  = Left (mappend a b)
    fzipWith _ (Right _) (Left b)  = Left b
    fzipWith _ (Left a)  (Right _) = Left a
    fzipWith f (Right a) (Right b) = Right (f a b)
 

and similarly:

 
instance Monoid a => Zip ((,) a) where
    fzipWith f (a, c) (b, d) = (mappend a b, f c d)
 

Unfortunately the instance for ((,) a) is a little less than satisfying. What we really want to say is that we have a Bifunctor with two parameters that can be zipped together:

 
class Bifunctor p => Bizip p where
    bizip :: p a c -> p b d -> p (a, b) (c, d)
    bizip = bizipWith (,) (,)
    bizipWith :: (a -> b -> e) -> (c -> d -> f) -> p a c -> p b d -> p e f
    bizipWith f g as bs = bimap (uncurry f) (uncurry g) (bizip as bs)
 

Now, we can define a more satisfying instance for (,):

 
instance Bizip (,) where
    bizipWith f g (a, b) (c, d) = (f a c, g b d)
 

However, by its very nature, an instance for Either eludes us.

Now, we can define a "Bifunctor-Functor-Functor Bifunctor" transformer that takes a bifunctor and a pair of functors to wrap around it, derives a new bifunctor, and lifts the zippability of each of the parts to the zippability of the whole:

 
newtype BiffB p f g a b = BiffB { runBiffB :: p (f a) (g b) }
 
instance (Functor f, Bifunctor p, Functor g) => Bifunctor (BiffB p f g) where
    bimap f g = BiffB . bimap (fmap f) (fmap g) . runBiffB
 
instance (Zip f, Bizip p, Zip g) => Bizip (BiffB p f g) where
    bizipWith f g as bs = BiffB $ bizipWith (fzipWith f) (fzipWith g) (runBiffB as) (runBiffB bs)
 

What is interesting about this is that the cofree comonad and free monad can be defined in terms of BiffB given a definition for the fixed point of a bifunctor:

 
newtype FixB s a = InB { outB :: s a (FixB s a) }
 
instance Bifunctor s => Functor (FixB s) where
    fmap f = InB . bimap f (fmap f) . outB
 
type Cofree f a = FixB (BiffB (,) Identity f) a
type Free f a = FixB (BiffB Either Identity f) a
 

Then we can define that the fixed point of a zippable bifunctor is a zippable functor:

 
instance Bizip p => Zip (FixB p) where
    fzipWith f as bs = InB $ bizipWith f (fzipWith f) (outB as) (outB bs)
 

It immediately follows from the construction of BiffB that every cofree comonad of a zippable base functor is zippable, because it is the fixed point of BiffB (,) Identity f: since (,) is zippable and Identity is zippable, the base bifunctor is zippable whenever f is, and so Cofree f is zippable.
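To check this concretely, here is a standalone sketch that restates the classes and constructions above and instantiates the stream case, Cofree Identity; the cons, headS, tailS and from helpers are illustrative names of my own, not from category-extras:

```haskell
-- classes restated locally so the file stands alone
class Bifunctor p where
  bimap :: (a -> c) -> (b -> d) -> p a b -> p c d

class Functor f => Zip f where
  fzipWith :: (a -> b -> c) -> f a -> f b -> f c

class Bifunctor p => Bizip p where
  bizipWith :: (a -> b -> e) -> (c -> d -> f) -> p a c -> p b d -> p e f

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Zip Identity where
  fzipWith f (Identity a) (Identity b) = Identity (f a b)

instance Bifunctor (,) where
  bimap f g (a, b) = (f a, g b)

instance Bizip (,) where
  bizipWith f g (a, b) (c, d) = (f a c, g b d)

newtype BiffB p f g a b = BiffB { runBiffB :: p (f a) (g b) }

instance (Functor f, Bifunctor p, Functor g) => Bifunctor (BiffB p f g) where
  bimap f g = BiffB . bimap (fmap f) (fmap g) . runBiffB

instance (Zip f, Bizip p, Zip g) => Bizip (BiffB p f g) where
  bizipWith f g as bs =
    BiffB (bizipWith (fzipWith f) (fzipWith g) (runBiffB as) (runBiffB bs))

newtype FixB s a = InB { outB :: s a (FixB s a) }

instance Bifunctor s => Functor (FixB s) where
  fmap f = InB . bimap f (fmap f) . outB

instance Bizip p => Zip (FixB p) where
  fzipWith f as bs = InB (bizipWith f (fzipWith f) (outB as) (outB bs))

-- streams are the cofree comonad of Identity
type Stream = FixB (BiffB (,) Identity Identity)

cons :: a -> Stream a -> Stream a
cons a as = InB (BiffB (Identity a, Identity as))

headS :: Stream a -> a
headS = runIdentity . fst . runBiffB . outB

tailS :: Stream a -> Stream a
tailS = runIdentity . snd . runBiffB . outB

from :: Int -> Stream Int
from n = cons n (from (n + 1))
```

Zipping two infinite streams then works out of the box: the head of fzipWith (+) (from 0) (from 10) is 10, and the next element is 12.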

On the other hand, we do not get the same result for the Free Monad, because it is built over BiffB Either Identity f, and Either is not a zippable bifunctor.

We can define some other zippable functors and bifunctors. For instance, we can define a "functor-wrapped bifunctor" bifunctor:

 
 
newtype FunctorB f p a b = FunctorB { runFunctorB :: f (p a b) }
 
liftFunctorB :: Functor f => (p a b -> p c d) -> FunctorB f p a b -> FunctorB f p c d
liftFunctorB f = FunctorB . fmap f . runFunctorB
 
instance (Functor f, Bifunctor p) => Bifunctor (FunctorB f p) where
    bimap f g = liftFunctorB (bimap f g)
 
instance (Zip f, Bizip p) => Bizip (FunctorB f p) where
    bizipWith f g as bs = FunctorB $ fzipWith (bizipWith f g) (runFunctorB as) (runFunctorB bs)
 

But the general pattern was set by Either and Maybe: whenever your functor has a branch, you need a way to uniquely determine how the constant terms combine.

While I think the above yields a pleasingly generic version of zip, I do not believe that I have exhausted the set of possible instances; still, obtaining them automatically for cofree comonads of zippable functors, and hence for rose trees and streams, was rather nice.

If you have any other instances of note, I would welcome the insight.

[category-extras-0.44.1]
