Abstracting with Applicatives
Thu, 27 Dec 2012, Gershom Bazerman

Consider the humble Applicative. More than a functor, less than a monad. It gives us such lovely syntax. Who among us still prefers to write liftM2 foo a b when we could instead write foo <$> a <*> b? But we seldom use the Applicative as such — when Functor is too little and Monad is too much, a lax monoidal functor is just right. I noticed lately a spate of proper uses of Applicative — Formlets (and their later incarnation in the reform library), OptParse-Applicative (and its competitor library CmdTheLine), and a post by Gergo Erdi on applicatives for declaring dependencies of computations. I also ran into a very similar genuine use for applicatives in working on the Panels library (part of jmacro-rpc), where I wanted to determine the dependencies of a dynamically generated dataflow computation. And then, again, I stumbled into an applicative while cooking up a form validation library, which turned out to be a reinvention of the same ideas as formlets.

Given all this, it seems a post on thinking with applicatives is in order, showing how to build them up and reason about them. One nice thing about the approach we'll be taking is that it uses a "final" encoding of applicatives, rather than building up and then later interpreting a structure. This is in fact how we typically write monads (pace operational, free, etc.), but since we more often only determine that our data structures are applicative after the fact, we often end up with some extra junk lying around (OptParse-Applicative, for example, has a GADT that I think is entirely extraneous).

So the usual throat clearing:

{-# LANGUAGE TypeOperators, MultiParamTypeClasses, FlexibleInstances,
StandaloneDeriving, FlexibleContexts, UndecidableInstances,
GADTs, KindSignatures, RankNTypes #-}
 
module Main where
import Control.Applicative hiding (Const)
import Data.Monoid hiding (Sum, Product)
import Control.Monad.Identity
instance Show a => Show (Identity a) where
    show (Identity x) = "(Identity " ++ show x ++ ")"

And now, let's start with a classic applicative, going back to the Applicative Programming With Effects paper:

data Const mo a = Const mo deriving Show
 
instance Functor (Const mo) where
    fmap _ (Const mo) = Const mo
 
instance Monoid mo => Applicative (Const mo) where
    pure _ = Const mempty
    (Const f) <*> (Const x) = Const (f <> x)

(Const lives in transformers as the Constant functor, or in base as Const)

Note that Const is not a monad. We've defined it so that its structure is independent of the `a` type. Hence if we try to write (>>=) of type Const mo a -> (a -> Const mo b) -> Const mo b, we'll have no way to "get out" an `a` to feed to our second argument.
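For instance, the monoidal structure simply accumulates (a quick check of my own, in the style of the examples below):

-- Const [1] <*> Const [2,3]
-- > Const [1,2,3]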

One great thing about Applicatives is that there is no distinction between applicative transformers and applicatives themselves. This is to say that the composition of two applicatives is cleanly and naturally always also an applicative. We can capture this like so:

 
newtype Compose f g a = Compose (f (g a)) deriving Show
 
instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose $ (fmap . fmap) f x
 
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure = Compose . pure . pure
    (Compose f) <*> (Compose x) = Compose $ (<*>) <$> f <*> x

(Compose also lives in transformers)
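To see composition in action, we can layer the list applicative over Maybe (a quick check of my own, given the instances above):

-- Compose [Just (+1), Nothing] <*> Compose [Just 1, Just 2]
-- > Compose [Just 2,Just 3,Nothing,Nothing]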

Note that Applicatives compose in two ways. We can also write:

data Product f g a = Product (f a) (g a) deriving Show
 
instance (Functor f, Functor g) => Functor (Product f g) where
    fmap f (Product  x y) = Product (fmap f x) (fmap f y)
 
instance (Applicative f, Applicative g) => Applicative (Product f g) where
    pure x = Product (pure x) (pure x)
    (Product f g) <*> (Product x y) = Product (f <*> x) (g <*> y)

(Product lives in transformers as well)

This lets us now construct an extremely rich set of applicative structures from humble beginnings. For example, we can reconstruct the Writer Applicative.

type Writer mo = Product (Const mo) Identity
 
tell :: mo -> Writer mo ()
tell x = Product (Const x) (pure ())
-- tell [1] *> tell [2]
-- > Product (Const [1,2]) (Identity ())

Note that if we strip away the newtype noise, Writer turns into (mo, a), which is isomorphic to the Writer monad. However, we've learned something along the way, which is that the monoidal component of Writer (as long as we stay within the rules of applicative) is entirely independent from the "identity" component. If we went on to write the Monad instance for our writer (by defining >>=), we'd have to "reach in" to the identity component to grab a value to hand back to the function yielding our monoidal component. Which is to say, we would destroy this nice separation of "trace" and "computational content" afforded by simply taking the product of two Applicatives.
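To make that concrete, here is a sketch of the bind we'd be forced to write (bindWriter is my name for it, not something from a library):

bindWriter :: Monoid mo => Writer mo a -> (a -> Writer mo b) -> Writer mo b
bindWriter (Product (Const m) (Identity a)) f = case f a of
    Product (Const m') y -> Product (Const (m <> m')) y

The pattern match on Identity a is exactly the "reaching in" that entangles the trace with the computational content.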

Now let's make things more interesting. It turns out that just as the composition of two applicatives may be a monad, so too the composition of two monads may be no stronger than an applicative!

We'll see this by introducing Maybe into the picture, for possibly failing computations.

type FailingWriter mo = Compose (Writer mo) Maybe
 
tellFW :: Monoid mo => mo -> FailingWriter mo ()
tellFW x = Compose (tell x *> pure (Just ()))
 
failFW :: Monoid mo => FailingWriter mo a
failFW = Compose (pure Nothing)
-- tellFW [1] *> tellFW [2]
-- > Compose (Product (Const [1,2]) (Identity (Just ())))

-- tellFW [1] *> failFW *> tellFW [2]
-- > Compose (Product (Const [1,2]) (Identity Nothing))

Maybe over Writer gives us the same effects we'd get in a Monad — either the entire computation fails, or we get the result and the trace. But Writer over Maybe gives us new behavior. We get the entire trace, even if some computations have failed! This structure, just like Const, cannot be given a proper Monad instance. (In fact if we take Writer over Maybe as a Monad, we get only the trace until the first point of failure).
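We can check that last parenthetical against the monad our Writer-over-Maybe shape corresponds to, namely MaybeT over Writer from transformers (a sketch of my own, not part of the original development):

-- import Control.Monad.Trans.Class (lift)
-- import Control.Monad.Trans.Maybe (MaybeT (..))
-- import qualified Control.Monad.Trans.Writer as W
--
-- W.runWriter . runMaybeT $
--     lift (W.tell [1]) >> MaybeT (return Nothing) >> lift (W.tell [2])
-- > (Nothing,[1])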

This separation of a monoidal trace from computational effects (either entirely independent of a computation [via a product] or independent between parts of a computation [via Compose]) is the key to lots of neat tricks with applicative functors.

Next, let's look at Gergo Erdi's "Static Analysis with Applicatives", which is built using free applicatives. We can get essentially the same behavior directly from the product of the Const applicative with an arbitrary effectful applicative representing our ambient environment of information. As long as we constrain ourselves to querying it only with the takeEnv function, we can either read the left side of our product to statically determine dependencies, or the right side to actually make use of them.

type HasEnv k m = Product (Const [k]) m
takeEnv :: (k -> m a) -> k -> HasEnv k m a
takeEnv f x = Product (Const [x]) (f x)
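
For example, we can project the static dependencies out of a computation without running any of its effects (staticDeps is a helper name of my own, not from Erdi's post):

staticDeps :: HasEnv k m a -> [k]
staticDeps (Product (Const ks) _) = ks
-- staticDeps (takeEnv print 1 *> takeEnv print 2)
-- > [1,2]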

If we prefer, we can capture queries of a static environment directly with the standard Reader applicative, which is just a newtype over the function arrow. There are other variants of this that perhaps come closer to exactly how Erdi's post does things, but I think this is enough to demonstrate the general idea.

data Reader r a = Reader (r -> a)
instance Functor (Reader r) where
    fmap f (Reader x) = Reader (f . x)
instance Applicative (Reader r) where
    pure x = Reader $ pure x
    (Reader f) <*> (Reader x) = Reader (f <*> x)
 
runReader :: (Reader r a) -> r -> a
runReader (Reader f) = f
 
takeEnvNew :: (env -> k -> a) -> k -> HasEnv k (Reader env) a
takeEnvNew f x = Product (Const [x]) (Reader $ flip f x)

So, what then is a full formlet? It's something that can be executed in one context as a monoid that builds a form, and in another as a parser. So the top level must be a product.

type FormletOne mo a = Product (Const mo) Identity a

Below the product, we read from an environment and perhaps get an answer. So that's Reader with a Maybe.

type FormletTwo mo env a =
    Product (Const mo) (Compose (Reader env) Maybe) a

Now if we fail, we want to have a trace of errors. So we expand out the Maybe into a product as well to get the following, which adds monoidal errors:

type FormletThree mo err env a =
    Product (Const mo)
            (Compose (Reader env) (Product (Const err) Maybe)) a

But now we get errors whether or not the parse succeeds. We want to say either the parse succeeds or we get errors. For this, we can turn to the typical Sum functor, which currently lives as Coproduct in comonad-transformers, but will hopefully be moving as Sum to the transformers library in short order.

data Sum f g a = InL (f a) | InR (g a) deriving Show
 
instance (Functor f, Functor g) => Functor (Sum f g) where
    fmap f (InL x) = InL (fmap f x)
    fmap f (InR y) = InR (fmap f y)

The Functor instance is straightforward for Sum, but the Applicative instance is puzzling. What should "pure" do? It needs to inject into either the left or the right, so clearly we need some form of "bias" in the instance. What we really need is the capacity to "work in" one side of the sum until compelled to switch over to the other, at which point we're stuck there. If two functors F and G are in a relationship such that we can always send f x -> g x in a way that "respects" fmap (that is to say, such that fmap f . fToG == fToG . fmap f), then we call this a natural transformation. The action that sends f to g is typically called "eta". (We actually want something slightly stronger, called a "monoidal natural transformation", that respects not only the functorial action fmap but the applicative action <*>, but we can ignore that for now.)

Now we can assert that as long as there is a natural transformation from g to f, then Sum f g can be made an Applicative, like so:

class Natural f g where
    eta :: f a -> g a
 
instance (Applicative f, Applicative g, Natural g f) =>
  Applicative (Sum f g) where
    pure x = InR $ pure x
    (InL f) <*> (InL x) = InL (f <*> x)
    (InR g) <*> (InR y) = InR (g <*> y)
    (InL f) <*> (InR x) = InL (f <*> eta x)
    (InR g) <*> (InL x) = InL (eta g <*> x)

The natural transformation we'll tend to use simply sends any functor to Const.

instance Monoid mo => Natural f (Const mo) where
    eta = const (Const mempty)

However, there are plenty of other natural transformations that we could potentially make use of, like so:

instance Applicative f =>
  Natural g (Compose f g) where
     eta = Compose . pure
 
instance (Applicative g, Functor f) => Natural f (Compose f g) where
     eta = Compose . fmap pure
 
instance (Natural f g) => Natural f (Product f g) where
    eta x = Product x (eta x)
 
instance (Natural g f) => Natural g (Product f g) where
    eta x = Product (eta x) x
 
instance Natural (Product f g) f where
    eta (Product x _ ) = x
 
instance Natural (Product f g) g where
    eta (Product _ x) = x
 
instance Natural g f => Natural (Sum f g) f where
    eta (InL x) = x
    eta (InR y) = eta y
 
instance Natural Identity (Reader r) where
    eta (Identity x) = pure x

In theory, there should also be a natural transformation that can be built magically from the product of any other two natural transformations, but that would just confuse the Haskell typechecker hopelessly. This is because different "paths" of typeclass choices will often be isomorphic, but the compiler has to actually pick one "canonical" composition of natural transformations to compute with, even though multiple paths will typically be possible.
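Concretely, the instance we'd like is something like the following, which we must leave commented out:

-- instance (Natural f f', Natural g g') =>
--   Natural (Product f g) (Product f' g') where
--     eta (Product x y) = Product (eta x) (eta y)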

For similar reasons of avoiding overlap, we can't have both the terminal homomorphism that sends everything to "Const" and the initial homomorphism that sends "Identity" to anything, like so:

-- instance Applicative g => Natural Identity g where
--     eta (Identity x) = pure x
 

We choose to keep the terminal transformation around because it is more generally useful for our purposes. As the comments below point out, it turns out that a version of "Sum" with the initial transformation baked in now lives in transformers as Lift.

In any case we can now write a proper Validation applicative:

type Validation mo = Sum (Const mo) Identity
 
validationError :: Monoid mo => mo -> Validation mo a
validationError x = InL (Const x)

This applicative will yield either a single result, or an accumulation of monoidal errors. It exists on hackage in the Validation package.
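Its behavior, in the style of the examples above:

-- validationError ["a"] *> validationError ["b"] *> pure 1
-- > InL (Const ["a","b"])
-- pure 1 :: Validation [String] Int
-- > InR (Identity 1)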

Now, based on the same principles, we can produce a full Formlet.

type Formlet mo err env a =
    Product (Const mo)
            (Compose (Reader env)
                     (Sum (Const err) Identity))
    a

All the type and newtype noise looks a bit ugly, I'll grant. But the idea is to think with structures built with applicatives, which gives guarantees that we're building applicative structures, and furthermore, structures with certain guarantees in terms of which components can be interpreted independently of which others. So, for example, we can strip away the newtype noise and find the following:

type FormletClean mo err env a = (mo, env -> Either err a)

Because we built this up from our basic library of applicatives, we also know how to write its applicative instance directly.
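For instance, here is a sketch of what that hand-written instance amounts to, with pure and <*> rendered as standalone functions since FormletClean is a bare type synonym (pureF and apF are my names):

pureF :: Monoid mo => a -> FormletClean mo err env a
pureF a = (mempty, \_ -> Right a)

apF :: (Monoid mo, Monoid err)
    => FormletClean mo err env (a -> b)
    -> FormletClean mo err env a
    -> FormletClean mo err env b
apF (m1, f) (m2, x) = (m1 <> m2, \env -> case (f env, x env) of
    (Right g, Right a) -> Right (g a)
    (Left e1, Left e2) -> Left (e1 <> e2)
    (Left e1, _      ) -> Left e1
    (_      , Left e2) -> Left e2)

The form-building monoid on the left always accumulates, while errors accumulate only among failing branches: exactly the guarantees we assembled componentwise.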

Now that we've gotten a basic algebraic vocabulary of applicatives, and especially now that we've produced this nifty Sum applicative (which I haven't seen presented before), we've gotten to where I intended to stop.

But lots of other questions arise, on two axes. First, what other typeclasses beyond applicative do our constructions satisfy? Second, what basic pieces of vocabulary are missing from our constructions — what do we need to add to flesh out our universe of discourse? (Fixpoints come to mind).

Also, what statements can we make about "completeness" — what portion of the space of all applicatives can we enumerate and construct in this way? Finally, why is it that monoids seem to crop up so much in the course of working with Applicatives? I plan to tackle at least some of these questions in future blog posts.

Free Modules and Functional Linear Functionals
Mon, 11 Jul 2011, Edward Kmett

Today I hope to start a new series of posts exploring constructive abstract algebra in Haskell.

In particular, I want to talk about a novel encoding of linear functionals, polynomials and linear maps in Haskell, but first we're going to have to build up some common terminology.

Having obtained the blessing of Wolfgang Jeltsch, I replaced the algebra package on hackage with something... bigger, although still very much a work in progress.

(Infinite) Modules over Semirings

Recall that a vector space V over a field F is given by an additive Abelian group on V, and a scalar multiplication operator
(.*) :: F -> V -> V subject to distributivity laws

 
s .* (u + v) = s .* u + s .* v
(s + t) .* v = s .* v + t .* v
 

and associativity laws

 
   (s * t) .* v = s .* (t .* v)
 

and respect of the unit of the field.

 
   1 .* v = v
 

Since multiplication on a field is commutative, we can also add

 
  (*.) :: V -> F -> V
  v *. f = f .* v
 

with analogous rules.

But when F is only a Ring, we call the analogous structure a module, and since in a ring we can't rely on the commutativity of multiplication, we may have to deal with left-modules and right-modules, where only one of those products is available.

We can weaken the structure still further. If we lose the negation in our Ring, we go to a Rig (often called a Semiring), and now our module is an additive monoid.

If we get rid of the additive and multiplicative unit on our Rig we get down to what some authors call a Ringoid, but which we'll call a Semiring here, because it makes the connection between semiring and semigroup clearer, and the -oid suffix is dangerously overloaded due to category theory.

First we'll define additive semigroups, because I'm going to need both additive and multiplicative monoids over the same types, and Data.Monoid has simultaneously too much and too little structure.

 
-- (a + b) + c = a + (b + c)
class Additive m where
  (+) :: m -> m -> m
  replicate1p :: Whole n => n -> m -> m -- (ignore this for now)
  -- ...
 

their Abelian cousins

 
-- a + b = b + a
class Additive m => Abelian m
 

and Multiplicative semigroups

 
-- (a * b) * c = a * (b * c)
class Multiplicative m where
  (*) :: m -> m -> m
  pow1p :: Whole n => m -> n -> m
  -- ...
 

Then we can define semirings:

 
-- a*(b + c) = a*b + a*c
-- (a + b)*c = a*c + b*c
class (Additive m, Abelian m, Multiplicative m) => Semiring m
 

With that we can define modules over a semiring:

 
-- r .* (x + y) = r .* x + r .* y
-- (r + s) .* x = r .* x + s .* x
-- (r * s) .* x = r .* (s .* x)
class (Semiring r, Additive m) => LeftModule r m where
   (.*) :: r -> m -> m
 

and analogously:

 
class (Semiring r, Additive m) => RightModule r m where
   (*.) :: m -> r -> m
 

For instance, every additive semigroup forms a module over the semiring of positive natural numbers (1, 2, ...) using replicate1p.

If we know that our addition forms a monoid, then we can form a module over the naturals as well

 
-- | zero + a = a = a + zero
class
    (LeftModule Natural m,
    RightModule Natural m
    ) => AdditiveMonoid m where
   zero :: m
   replicate :: Whole n => n -> m -> m
 

and if our addition forms a group, then we can form a module over the integers

 
-- | a + negate a = zero = negate a + a
class
    (LeftModule Integer m
    , RightModule Integer m
    ) => AdditiveGroup m where
  negate :: m -> m
  times :: Integral n => n -> m -> m
  -- ...
 

Free Modules over Semirings

A free module on a set E is a module where the basis vectors are elements of E. Basically, it is |E| copies of some (semi)ring.

In Haskell we can represent the free module of a ring directly by defining the action of the (semi)group pointwise.

 
instance Additive m => Additive (e -> m) where
   f + g = \x -> f x + g x
 
instance Abelian m => Abelian (e -> m)
 
instance AdditiveMonoid m => AdditiveMonoid (e -> m) where
   zero = const zero
 
instance AdditiveGroup m => AdditiveGroup (e -> m) where
   f - g = \x -> f x - g x
 

We could define the following

 
instance Semiring r => LeftModule r (e -> r) where
   r .* f = \x -> r * f x
 

but then we'd have trouble dealing with the Natural and Integer constraints above, so instead we lift modules

 
instance LeftModule r m => LeftModule r (e -> m) where
   (.*) m f e = m .* f e
 
instance RightModule r m => RightModule r (e -> m) where
   (*.) f m e = f e *. m
 

We could go one step further and define multiplication pointwise, but while the direct product of |e| copies of a ring _does_ define a ring, and this ring is the one provided by Conal Elliott's vector-space package, it isn't the most general ring we could construct. But we'll need to take a detour first.

Linear Functionals

A linear functional f on a module M is a linear function from M to its scalars R.

That is to say, we have f : M -> R such that

 
f (a .* x + y) = a * f x + f y
 

Consequently linear functionals also form a module over R. We call this module the dual module M*.

Dan Piponi has blogged about these dual vectors (or covectors) in the context of trace diagrams.

If we limit our discussion to free modules, then M = E -> R, so a linear functional on M looks like (E -> R) -> R, subject to additional linearity constraints on the result arrow.

The main thing we're not allowed to do in our function is apply our function from E -> R to two different E's and then multiply the results together. Our pointwise definitions above satisfy those linearity constraints, but for example:

 
bad f = f 0 * f 0
 

does not.
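A finite sum of evaluations at fixed points, by contrast, is linear (a counterpart example of my own):

 
good f = f 0 + f 1
 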

We could capture this invariant in the type by saying that instead we want

 
newtype LinearM r e =
  LinearM {
    runLinearM :: forall m. LeftModule r m => (e -> m) -> m
  }
 

but we'd have to make a new such type every time we subclassed Semiring. I'll leave further exploration of this more exotic type to another time. (Using some technically illegal module instances, we can recover more structure than you'd expect.)

Now we can package up the type of covectors/linear functionals:

 
infixr 0 $*
newtype Linear r a = Linear { ($*) :: (a -> r) -> r }
 

The sufficiently observant may have already noticed that this type is the same as the Cont monad (subject to the linearity restriction on the result arrow).

In fact the Functor, Monad, Applicative instances for Cont all carry over, and preserve linearity.

(We lose callCC, but that is at least partially due to the fact that callCC has a less than ideal type signature.)
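For concreteness, here is the applicative structure transcribed from Cont (a sketch; the Functor and Monad instances appear further below):

 
instance Applicative (Linear r) where
  pure a = Linear $ \k -> k a
  mf <*> ma = Linear $ \k -> mf $* \f -> ma $* k . f
 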

In addition, we get a number of instances for classes such as Alternative and MonadPlus by exploiting the knowledge that r is ring-like:

 
instance AdditiveMonoid r => Alternative (Linear r) where
  Linear f <|> Linear g = Linear (f + g)
  empty = Linear zero
 

Note that the (+) and zero there are the ones defined on functions from our earlier free module construction!

Linear Maps

Since Linear r is a monad, Kleisli (Linear r) forms an Arrow:

 
b -> ((a -> r) ~> r)
 

where the ~> denotes the arrow that is constrained to be linear.

If we swap the order of the arguments so that

 
(a -> r) ~> (b -> r)
 

this arrow has a very nice meaning! (See Numeric.Map.Linear)

 
infixr 0 $#
newtype Map r b a = Map { ($#) :: (a -> r) -> (b -> r) }
 

Map r b a represents the type of linear maps from a -> b. Unfortunately due to contravariance the arguments wind up in the "wrong" order.

 
instance Category (Map r) where
  Map f . Map g = Map (g . f)
  id = Map id
 

So we can see that a linear map from a module A with basis a to a vector space with basis b effectively consists of |b| linear functionals on A.

Map r b a provides a lot of structure. It is a valid instance of an insanely large number of classes.

Vectors and Covectors

In physics, we sometimes call linear functionals covectors or covariant vectors, and if we're feeling particularly loquacious, we'll refer to vectors as contravariant vectors.

This has to do with the fact that when you change basis, you map the change over covariant vectors covariantly, and map the change over vectors contravariantly. (This distinction is beautifully captured by Einstein's summation notation.)

We also have a notion of covariance and contravariance in computer science!

Functions vary covariantly in their result, and contravariantly in their argument. E -> R is contravariant in E. But we chose this representation for our free modules, so the vectors in our free vector space (or module) are contravariant in E.

 
class Contravariant f where
  contramap :: (a -> b) -> f b -> f a
 
-- | Dual function arrows.
newtype Op a b = Op { getOp :: b -> a } 
 
instance Contravariant (Op a) where
  contramap f g = Op (getOp g . f)
 

On the other hand, (E -> R) ~> R varies covariantly with the change of E, as witnessed by the fact that it is a Functor.

 
instance Functor (Linear r) where
  fmap f m = Linear $ \k -> m $* k . f
 

We have lots of classes for manipulating covariant structures, and most of them apply to both (Linear r) and (Map r b).

Other Representations and Design Trade-offs

One common representation of vectors in a free vector space is as some kind of normalized list of scalars and basis vectors. In particular, David Amos's wonderful HaskellForMaths uses

 
newtype Vect r a = Vect { runVect :: [(r,a)] }
 

for free vector spaces, only considering them up to linearity, paying for normalization as it goes.

Given the insight above we can see that Vect isn't a representation of vectors in the free vector space, but instead represents the covectors of that space, quite simply because Vect r a varies covariantly with change of basis!

Now the price of using the Monad on Vect r is that the monad denormalizes the representation. In particular, you can have multiple copies of the same basis vector, so any function that uses a Vect r a has to merge them together.

On the other hand with the directly encoded linear functionals we've described here, we've placed no obligations on the consumer of a linear functional. They can feed the directly encoded linear functional any vector they want!

In fact, it'll even be quite a bit more efficient to compute. To see this, just consider:

 
instance MultiplicativeMonoid r => Monad (Vect r) where
   return a = Vect [(one, a)]
   Vect as >>= f = Vect
       [ (p * q, b) | (p, a) <- as, (q, b) <- runVect (f a) ]
 

Every >>= must pay for multiplication. Every return will multiply the element by one. On the other hand, the price of return and bind in Linear r is function application.

 
instance Monad (Linear r) where
  return a = Linear $ \k -> k a
  m >>= f = Linear $ \k -> m $* \a -> f a $* k
 

A Digression on Free Linear Functionals

To wax categorical for a moment, we can construct a forgetful functor U : Vect_F -> Set that takes a vector space over F to just its set of covectors, along with a functor F : Set -> Vect_F taking a set E to the free vector space

 
F E = (E -> F, \f g x -> f x + g x, \r f x -> r * f x)
 

using the pointwise constructions we built earlier.

Then in a classical setting, you can show that F is left adjoint to U.

In particular the witnesses of this adjunction provide the linear map from (E -> F) to V and the function E -> (V ~> F) giving a linear functional on V for each element of E.

In a classical setting you can go a lot farther, and show that all vector spaces (but not all modules) are free.

But in a constructive setting, such as Haskell, we need a fair bit of structure to go back and forth; in particular, we wind up needing E to be finitely enumerable to go one way, and to have decidable equality to go the other. The latter is fairly easy to see, because even going from E to E -> F requires that we can define and partially apply something like Kronecker's delta:

 
delta :: (Rig r, Eq e) => e -> e -> r
delta i j | i == j    = one
          | otherwise = zero
 

The Price of Power

The price we pay is that, given a Rig, we can go from Vect r a to Linear r a, but going back requires a to be finitely enumerable (or for our functional to satisfy other exotic side conditions).

 
vectMap :: Rig r => Vect r a -> Linear r a
vectMap (Vect as) = Linear $ \k -> sum [ r * k a | (r, a) <- as ]
 

You can still probe Linear r a for individual coefficients, or pass it a vector for polynomial evaluation, very easily, but, for instance, determining the degree of a polynomial efficiently requires attaching more structure to your semiring, because the only value you can get out of Linear r a is an r.
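Going back the other way, under the side conditions mentioned above, might look like this (a sketch, with Enum and Bounded standing in for finite enumerability and delta from above; linearVect is my name):

 
linearVect :: (Rig r, Eq a, Enum a, Bounded a) => Linear r a -> Vect r a
linearVect m = Vect [ (m $* delta a, a) | a <- [minBound .. maxBound] ]
 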

Optimizing Linear Functionals

In both the Vect r and Linear r cases, excessive use of (>>=) without somehow normalizing or tabulating your data will cause a lot of repeated work.

This is perhaps easiest to see from the fact that Vect r never used the addition of r, so it distributed everything into a kind of disjunctive normal form. Linear r does the same thing.

If you look at the Kleisli arrows of Vect r or Linear r as linear mappings, then you can see that Kleisli composition is going to explode the number of terms.

So how can we collapse back down?

In the Kleisli (Vect r) case, we usually build up a map as we walk through the list, then spit the list back out in order, having added up like terms.

In the Map r case, we can do better. My representable-tries package provides a readily instantiable HasTrie class, and the method:

 
memo :: HasTrie a => (a -> r) -> a -> r
 

which is responsible for providing a memoized version of the function from a -> r in a purely functional way. This is obviously a linear map!

 
memoMap :: HasTrie a => Map r a a
memoMap = Map memo
 

We can also flip memo around and memoize linear functionals.

 
memoLinear :: HasTrie a => a -> Linear r a
memoLinear = Linear . flip memo
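 

One way to put these to work is to bracket an expensive composed map with memoization on either side (a usage sketch of my own, with (.) taken from Control.Category and assuming a HasTrie instance for a):

 
optimize :: HasTrie a => Map r a a -> Map r a a
optimize f = memoMap . f . memoMap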
 

Next time, (co)associative (co)algebras and the myriad means of multiplying (co)vectors!

Brodal-Okasaki Heaps in Haskell
Sun, 16 May 2010, Edward Kmett

I've uploaded a package named heaps to Hackage that provides Brodal-Okasaki bootstrapped skew-binomial heaps in Haskell.

The main features of the library are that it provides a nice containers-like API with provably asymptotically optimal functional heap operations, including O(1) insert and O(1) union, and that the library design jumps through a number of hoops to provide implementations of common Haskell typeclasses such as Foldable, Data and Typeable.

Iteratees, Parsec, and Monoids, Oh My!
Wed, 16 Sep 2009, Edward Kmett

I'll be giving a talk tomorrow, Wednesday, September 16th, 2009, at the Boston Haskell User Group in the MIT CSAIL Reading Room (on the 8th floor of the William H. Gates tower of the Stata center), about mixing Oleg's iteratees with parsec and monoids to build practical parallel parsers and to cheaply reparse after local modifications are made to source code.

Ravi is trying to organize some time beforehand during which people can get together and work on Haskell projects, or spend some time learning Haskell, so it's not all scary academic stuff.

The meeting is scheduled from 7-9pm, and an ever growing number of us have been wandering down to the Cambridge Brewing Company afterwards to hang out and talk.

If you are curious about Haskell, or even an expert, or just happen to be interested in parallel programming and find yourself in the area, come on by.

Iteratees, Parsec and Monoids (Slides)
Thu, 20 Aug 2009, Edward Kmett

I was asked to give two talks at the Boston Area Haskell User Group for this past Tuesday. The first was pitched at a more introductory level and the second was to go deeper into what I have been using monoids for lately.

The first talk covers an introduction to the mathematical notion of a monoid, introduces some of the features of my Haskell monoids library on hackage, and starts to motivate the use of monoidal parallel/incremental parsing and the modification of compression algorithms to recycle monoidal results.

The second talk covers a way to generate a locally context-sensitive parallel/incremental parser by modifying Iteratees to enable them to drive a Parsec 3 lexer, and then wrapping that in a monoid based on error productions in the grammar, before recycling these techniques at a higher level to deal with parsing seemingly stateful structures, such as Haskell layout.

1. Introduction To Monoids (PDF)
2. Iteratees, Parsec and Monoids: A Parsing Trifecta (PDF)

Due to a late start, I was unable to give the second talk. However, I did give a quick run-through to a few die-hards who stayed late and came to the Cambridge Brewing Company afterwards. As I promised some people that I would post the slides after the talk, here they are.

The current plan is to possibly give the second talk in full at either the September or October Boston Haskell User Group sessions, depending on scheduling and availability.

[ Iteratee.hs ]

Clearer Reflections
Sat, 15 Aug 2009, Edward Kmett

I have updated the reflection package on hackage to use an idea for avoiding dummy arguments posted to the Haskell cafe mailing list by Bertram Felgenhauer, which adapts nicely to the case of handling Reflection. The reflection package implements the ideas from the Functional Pearl: Implicit Configurations paper by Oleg Kiselyov and Chung-chieh Shan.

Now, you no longer need to use big scary undefineds throughout your code and can instead program with implicit configurations more naturally, using Applicative and Monad sugar.

     
*Data.Reflection> reify (+)
    (reflect <*> pure 1 <*> (reflect <*> pure 2 <*> pure 3))
> 6
     

The Monad in question just replaces the lambda with a phantom type parameter, enabling the compiler to more readily notice that no instance can actually even try to use the value of the type parameter.

An example from the old API can be seen on the Haskell cafe.

This example can be made appreciably less scary now!

     
{-# LANGUAGE
     MultiParamTypeClasses,
     FlexibleInstances, Rank2Types,
     FlexibleContexts, UndecidableInstances #-}
import Control.Applicative
import Data.Reflection
import Data.Monoid
import Data.Tagged
 
newtype M s a = M a
 
instance Reifies s (a, a → a → a) ⇒ Monoid (M s a) where
    mempty = tagMonoid $ fst <$> reflect
    a `mappend` b = tagMonoid $
        snd <$> reflect <*> monoidTag a <*> monoidTag b
 
monoidTag :: M s a → Tagged s a
monoidTag (M a) = Tagged a
 
tagMonoid :: Tagged s a → M s a
tagMonoid (Tagged a) = M a
 
withMonoid :: a → (a → a → a) → (∀s. Reifies s (a, a → a → a) ⇒ M s w) → w
withMonoid e op m = reify (e,op) (monoidTag m)
     

And with that we can cram a Monoid dictionary -- or any other -- with whatever methods we want, and our safety is assured by parametricity due to the rank 2 type, just like with the ST monad.

     
*> withMonoid 0 (+) (M 5 `mappend` M 4 `mappend` mempty)
9
     

[Edit: factored Tagged out into Data.Tagged in a separate package, and modified reflection to use that instead, with an appropriate version bump to satisfy the package versioning policy]

Slides from Hac Phi: "All About Monoids"
Fri, 31 Jul 2009, Edward Kmett

Some people have requested my slides from the short talk I gave about monoids and monoidal parsing at Hac Phi. So, here they are.

There will be more to come at the next Boston Haskell User Group in August, where it looks like I'll be giving two short talks covering monoids. I may use the monoidal parsing engine from Kata as an example for the advanced talk if I have time, and will start to cover parsing larger classes of grammars in general (regular languages, CFGs/TIGs, TAGs, PEGs, LALR, attribute-grammars, etc.)
