Representing Applicatives
by Gershom Bazerman

In the previous two posts, we've built up a whole range of applicatives out of Const, Identity, Reader, Compose, Product, Sum, and Fix (and some higher-order analogues). Sum has given us the most trouble, but in some sense has been the most powerful, letting us write things like possibly eventually terminating lists, or trees, or in fact any sort of structure with branching alternatives. In this post, I want to think a bit more about why it is that Sum is the trickiest of the bunch, and more generally, what we can say about when two applicative structures are the "same". In the process of doing so, we'll invent something a lot like Traversable en passant.

Let's do some counting exercises. Product Identity Identity holds exactly two things. It is therefore isomorphic to ((->) Bool), or if we prefer, ((->) (Either () ())). That is to say that a pair that holds two values of type a is the same as a function that takes a two-valued type and yields a value of type a. A product of more functors in turn is isomorphic to the reader of the sum of each of the datatypes that "represent" them. E.g. Product (Product Identity Identity) (Product (Const ()) Identity) is iso to ((->) (Either (Either () ()) ())), i.e. a data type with three possible inhabitants. In making this move we took Product to Either -- multiplication to sum. We can pull a similar trick with Compose. Compose (Product Identity Identity) (Product Identity Identity) goes to ((->) (Either () (), Either () ())). So again we took Product to a sum type, but now we took Compose to a pair -- a product type! The intuition is that composition multiplies the possibilities of spaces in each nested functor.
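To make the first counting exercise concrete, here is a minimal, self-contained sketch of that isomorphism; the names toFn and fromFn are mine, just for illustration.

import Control.Monad.Identity (Identity(..))

data Product f g a = Product (f a) (g a) deriving Show

-- a pair of as is the same as a function out of Bool
toFn :: Product Identity Identity a -> (Bool -> a)
toFn (Product (Identity x) (Identity y)) b = if b then y else x

fromFn :: (Bool -> a) -> Product Identity Identity a
fromFn f = Product (Identity (f False)) (Identity (f True))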

Hmm.. products go to sums, composition goes to multiplication, etc. This should remind us of something -- these rules are exactly the rules for working with exponentials. x^n * x^m = x^(n + m). (x^n)^m = x^(n*m). x^0 = 1, x^1 = x.

Seen from the right standpoint, this isn't surprising at all, but almost inevitable. The functors we're describing are known as "representable," a term which derives from category theory. (See appendix on representable functors below).

In Haskell-land, a "representable functor" is just any functor isomorphic to the reader functor ((->) a) for some appropriate a. Now if we think back to our algebraic representations of data types, we call the arrow type constructor an exponential. We can "count" a -> x as x^a, since e.g. there are 3^2 distinct functions that inhabit the type 2 -> 3. The intuition for this is that for each input we pick one of the possible results, so as the number of inputs goes up by one, the number of functions goes up by multiplying through by the set of possible results. 1 -> 3 = 3, 2 -> 3 = 3 * 3, (n + 1) -> 3 = 3 * (n -> 3).

Hence, if we "represent" our functors by exponentials, then we can work with them directly as exponentials as well, with all the usual rules. Edward Kmett has a library encoding representable functors in Haskell.
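Here is a minimal sketch of the idea; the class and method names below are illustrative, and not necessarily the actual API of that library.

{-# LANGUAGE TypeFamilies #-}

class Functor f => Representable f where
    type Rep f
    tabulate :: (Rep f -> a) -> f a
    index    :: f a -> Rep f -> a

-- pairs are represented by Bool
data Pair a = Pair a a deriving Show

instance Functor Pair where
    fmap f (Pair x y) = Pair (f x) (f y)

instance Representable Pair where
    type Rep Pair = Bool
    tabulate f = Pair (f False) (f True)
    index (Pair x _) False = x
    index (Pair _ y) True  = y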

Meanwhile, Peter Hancock prefers to call such functors "Naperian" after John Napier, inventor of the logarithm (See also here). Why Naperian? Because if our functors are isomorphic to exponentials, then we can take their logs! And that brings us back to the initial discussion of type mathematics. We have some functor F, and claim that it is isomorphic to -^R for some concrete data type R. Well, this means that R is the logarithm of F. E.g. (R -> a, S -> a) =~ Either R S -> a, which is to say that if log F = R and log G = S, then log (F * G) = log F + log G. Similarly, for any other data type n, again with log F = R, we have n -> F a =~ n -> R -> a =~ (n * R) -> a, which is to say that log (F^n) = n * log F.

This gives us one intuition for why the sum functor is not generally representable -- it is very difficult to decompose log (F + G) into some simpler compound expression of logs.

So what functors are Representable? Anything that can be seen as a fixed shape with some index. Pairs, fixed-size vectors, fixed-size matrices, any nesting of fixed vectors and matrices. But also infinite structures of regular shape! However, not things whose shape can vary -- not lists, not sums. Trees of fixed depth or infinite binary trees qualify, therefore, but not trees of arbitrary depth or with ragged structure, etc.

Representable functors turn out to be extremely powerful tools. Once we know a functor is representable, we know exactly what its applicative instance must be, and that its applicative instance will be "zippy" -- i.e. acting pointwise across the structure. We also know that it has a monad instance! And, unfortunately, that this monad instance is typically fairly useless (in that it is also "zippy" -- i.e. the monad instance on a pair just acts on the two elements pointwise, without ever allowing anything in the first slot to affect anything in the second slot, etc.). But we know more than that. We know that a representable functor, by virtue of being a reader in disguise, cannot have effects that migrate outwards. So any two actions in a representable functor are commutative. And more than that, they are entirely independent.

This means that all representable functors are "distributive"! Given any functor f, and any data type r, then we have

 
distributeReader :: Functor f => f (r -> a) -> (r -> f a)
distributeReader fra = \r -> fmap ($r) fra
 

That is to say, given an arrow "inside" a functor, we can always pull the arrow out, and "distribute" application across the contents of the functor. A list of functions from Int -> Int becomes a single function from Int to a list of Int, etc. More generally, since all representable functors are isomorphic to reader, given g representable, and f any functor, then we have: distribute :: (Functor f, Representable g) => f (g a) -> g (f a).
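Using the Representable sketch from above, the general distribute falls out by tabulating on one side and indexing on the other; again, this is a sketch, not the library's exact definition.

distribute :: (Functor f, Representable g) => f (g a) -> g (f a)
distribute fga = tabulate (\r -> fmap (`index` r) fga)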

This is pretty powerful sauce! And if f and g are both representable, then we get the transposition isomorphism, witnessed by flip! That's just the beginning of the good stuff. If we take functions and "unrepresent" them back to functors (i.e. take their logs), then we can do things like move from ((->) Bool) to pairs, etc. Since we're in a pervasively lazy language, we've just created a library for memoization! This is because we've gone from a function to a data structure we can index into, representing each possible argument to this function as a "slot" in the structure. And the laziness pays off because we only need to evaluate the contents of each slot on demand (otherwise we'd have a precomputed lookup table rather than a dynamically-evaluated memo table).
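As a tiny illustration of the memoization point, a function out of Bool can be traded for a lazily built Pair (from the sketch above), so each branch of the function is evaluated at most once. The name memoBool is mine, just for illustration.

memoBool :: (Bool -> a) -> Bool -> a
memoBool f = index table
  where
    -- built lazily; each field is forced at most once, on demand
    table = Pair (f False) (f True)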

And now suppose we take our representable functor in the form s -> a and pair it with an "index" into that function, in the form of a concrete s. Then we'd be able to step that s forward or backwards and navigate around our structure of as. And this is precisely the Store Comonad! And this in turn gives a characterization of the lens laws.
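A minimal sketch of that pairing (the real thing lives, fully developed, as Store in the comonad package; these definitions are only meant to convey the idea):

-- a representable functor (here literally a reader) paired with a current index
data Store s a = Store (s -> a) s

-- read off the value at the current position
extract :: Store s a -> a
extract (Store f s) = f s

-- move the index to navigate around the structure
seek :: s -> Store s a -> Store s a
seek s (Store f _) = Store f s

-- the rest of the comonad structure
duplicate :: Store s a -> Store s (Store s a)
duplicate (Store f s) = Store (Store f) s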

What this all gives us a tiny taste of, in fact, is the tremendous power of the Yoneda lemma, which, in Haskell, is all about going between values and functions, and in fact captures the important universality and uniqueness properties that make working with representable functors tractable. A further tiny taste of Yoneda comes from a nice blog post by Conal Elliott on memoization.

Extra Credit on Sum Functors

There in fact is a log identity on sums. It goes like this:

log(a + c) = log a + log (1 + c/a)

Do you have a useful computational interpretation of this? I've got the inklings of one, but not much else.

Appendix: Notes on Representable Functors in Hask.

The way to think about this is to take some arbitrary category C, and some category that's basically Set (in our case, Hask; in fact, in our case, C is Hask too, and we're just talking about endofunctors on Hask). Now, we take some functor F : C -> Set, and some A which is an object of C. The set of morphisms originating at A (denoted by Hom(A,-)) constitutes a functor called the "hom functor." For any object X in C, we can "plug it in" to Hom(A,-), to then get the set of all arrows from A to X. And for any morphism X -> Y in C, we can derive a morphism from Hom(A,X) to Hom(A,Y), by composition. This is equivalent to, in Haskell-land, using a function f :: x -> y to send g :: a -> x to a -> y by writing "functionAToY = f . g".
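In Haskell notation, Hom(A,-) is just the reader functor ((->) a), and the action on morphisms described above is postcomposition. A small sketch:

-- the hom functor Hom(a,-), rendered in Haskell
newtype Hom a x = Hom (a -> x)

instance Functor (Hom a) where
    -- a morphism f :: x -> y sends g :: a -> x to f . g :: a -> y
    fmap f (Hom g) = Hom (f . g)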

So, for any A in C, we have a hom functor on C, which is C -> Set, where the elements of the resultant Set are homomorphisms in C. Now, we have this other arbitrary functor F, which is also C -> Set. Now, if there is an isomorphism of functors between F and Hom(A,-), then we say F is "representable". A representable functor is thus one that can be worked with entirely as an appropriate hom-functor.

Japanese “ekmett” Workshop (Part 1 of 2)
by Edward Kmett

A couple of weeks back one of my coworkers brought to my attention a several-hour-long workshop in Japan to go over and describe a number of my libraries, hosted by TANAKA Hideyuki — not the voice actor, I checked!

I was incredibly honored and I figured that if that many people (they had 30 or so registered attendees and 10 presentations) were going to spend that much time going over software that I had written, I should at least offer to show up!

I'd like to apologize for any errors in the romanization of people's names or misunderstandings I may have in the following text. My grasp of Japanese is very poor! Please feel free to send me corrections or additions!

Surprise!

Sadly, my boss's immediate reaction to hearing that there was a workshop in Japan about my work was to quip that "You're saying you're huge in Japan?" With him conspicuously not offering to fly me out here, I had to settle for surprising the organizers and attending via Google Hangout.

Commentary and Logs

@nushio was very helpful in getting me connected, and while the speakers gave their talks I sat on the irc.freenode.net #haskell-lens channel and Google Hangout and answered questions and provided a running commentary with more details and references. Per freenode policy the fact that we were logging the channel was announced -- well, at least before things got too far underway.

Here is the IRC session log as a gist. IKEGAMI Daisuke @ikegami__ (ikeg in the IRC log) tried to keep up a high-level running commentary about what was happening in the video to the log, which may be helpful if you are trying to follow along with each talk retroactively.

Other background chatter and material is strewn across Twitter under the #ekmett_conf hash tag and on a Japanese Twitter aggregator named Togetter.

Getting Started

The 1PM start time in Shibuya, Tokyo, Japan translates to midnight at the start of Easter here in Boston, which meant ~6 hours later when we reached the Q&A session, I was a bit loopy from lack of sleep, but they were incredibly polite and didn't seem to mind my long rambling responses.

Thanks to the organizers, we have video of the vast majority of the event! There was no audio for the first couple of minutes, and the recording machine lost power for the last talk and the Q&A session at the end as we ran somewhat longer than they had originally scheduled! -- And since I was attending remotely and a number of others flitted in and out over the course of the night, they were nice enough to put most of the slides and background material online.

profunctors by Liyang HU and HIBINO Kei

Liyang Hu (@liyanghu) started the session off with a nicely self-contained crash course on my profunctors package, since profunctors are used fairly heavily inside the implementation of lens and machines, with a couple of detours into contravariant and bifunctors.

His presentation materials are available interactively from the new FP Complete School of Haskell. You can also watch the video recording of his talk on ustream.

This talk was followed by a much more condensed version of very similar content in Japanese by Hibino Kei (@khibino). His talk was more focused on the relationship between arrows and profunctors, and the slides are available through slideshare.

lens by @its_out_of_tune

Once the necessary background material was out of the way, the talk on lens -- arguably the presentation that most of the people were there for -- came early.


@its_out_of_tune gave an incredibly dense overview of how to use the main parts of the lens package in Japanese. His slides are available online and here is a recording of his talk.

Over the course of a half hour, he was able to cram in a great cross-section of the library, including material that I hadn't been able to get to even with 4x the amount of time available during my New York talk: how to use the lens template-haskell code to automatically generate lenses for user data types, and how to use the lens Action machinery.

free and free-game by KINOSHITA Fumiaki

Next up was my free package and the neat free-game engine that Kinoshita Fumiaki (@fumieval) built on top.

The slides were in English, though the talk and humor were very Japanese. ^_^


That said, he had some amazingly nice demos, including a live demo of his tetris clone, Monaris, which is visible about 10 minutes into the video!

ad by @nebutalab

@nebutalab, like me, joined the session remotely through Google Hangout, and proceeded to give a tutorial on how forward mode automatic differentiation works through my AD package.

His slides were made available before the talk and the video is available in two parts due to a technical hiccup in the middle of the recording.


I'm currently working to drastically simplify the API for ad with Alex Lang. Fortunately almost all of the material in this presentation will still be relevant to the new design.

tables by MURAYAMA Shohei

Next up, Murayama Shohei (@yuga) gave an introduction to tables, which is a small in-memory data store that I wrote a few months back to sit on top of lens.

Video of @yuga's talk and his slides are available, which I think makes this the first public talk about this project. -_^

machines by YOSHIDA Sanshiro

Yoshida Sanshiro (@halcat0x15a) gave a nice overview of the currently released version of machines including a lot of examples! I think he may have actually written more code using machines just for demonstrations than I have written using it myself.

Video of his talk is available along with his slide deck -- just tap left or right to move through the slides. He has also written a blog post documenting his early explorations of the library, and some thoughts about using it with attoparsec.

I've recently been trying to redesign machines with coworker Paul CHIUSANO @pchiusano and we've begun greatly simplifying the design of machines based on some work he has been doing in Scala, so unfortunately many of the particulars of this talk will be soon outdated, but the overall 'feel' of working with machines should be preserved across the change-over. Some of these changes can be seen in the master branch on github now.

More to come

There were 4 more sessions, but alas, I'm out of time for the moment! I'll continue this write-up with more links to the source material and my thoughts as soon as I can tomorrow!

Algebras of Applicatives
by Gershom Bazerman

While the previous post in this series was relatively immediately applicable, this one has constructions I definitely wouldn't recommend in production code. However, they do take us further in exploring the universe of applicative functors, and, more broadly, exploring which data types provide which properties by construction.

It's well known that if you have any Functor F, you can take its "fixpoint", creating a structure of infinitely nested Fs, like so: F (F (F (...))). Since we can't have infinite types directly in Haskell, we introduce the Fix newtype:

 
newtype Fix f = Fix (f (Fix f))

This "wraps up" the recursion so that GHC accepts the type. Fix f is a Fix constructor, containing an "f" of Fix f inside. Each in turn expands out, and so forth. Fixpoints of functors have fixpoints of functors inside 'em. And so on, and so on, ad infinitum.
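A tiny example, just to have something concrete in hand (the names here are mine): natural numbers arise as the fixpoint of Maybe.

type Nat = Fix Maybe

zero :: Nat
zero = Fix Nothing

suc :: Nat -> Nat
suc = Fix . Just

-- two = suc (suc zero) = Fix (Just (Fix (Just (Fix Nothing))))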

(Digression: We speak of "algebraic data types" in Haskell. The "algebra" in question is an "F-algebra", and we can build up structures with fixpoints of functors, taking the initial object in the category of F-algebras or the terminal object in the category of F-coalgebras, generating initial algebras or terminal coalgebras respectively. These latter two concepts coincide in Haskell in the Fix description given above, as greatest and least fixpoints of data types in Haskell turn out to be the same thing. For more background, one can go to Wadler's "Recursive Types for Free," or Jacobs and Rutten's "Tutorial on (Co)Algebras and (Co)Induction" for starters.)

The family of functors built from our friends Const, Sum, Product, and Reader (exponentiation) are known as Polynomial Functors. If we take the closure of these under a proper fixpoint construct (that lets us build infinite structures), we get things that are variously known as Containers, Shapely Types, and Strictly Positive types.

One irritating thing is that the fixpoint of a functor as we've written it is no longer itself a functor. The type constructor Fix is of kind (* -> *) -> *, which says it takes an "f" which takes one argument (e.g. "Maybe" or "Identity" or etc.) and returns a proper type (i.e. a value at the type level of kind *).

We want a fixpoint construction that gives back something of kind * -> * — i.e. something that is a type constructor representing a functor, and not just a plain old type. The following does the trick.

 
newtype FixF f a = FixF (f (FixF f) a)
deriving instance (Show (f (FixF f) a)) => Show (FixF f a)

(I learned about FixF from a paper by Ralf Hinze, but I'm sure the origins go back much further).

FixF is of kind ((* -> *) -> * -> *) -> * -> *. It takes the fixpoint of a "second-order Functor" (a Functor that sends a Functor to another Functor, i.e. an endofunctor on the functor category of Hask), to recover a standard "first order Functor" back out. This sounds scary, but it isn't once you load it up in ghci and start playing with it. In fact, we've encountered second order functors just recently. Product, Sum, and Compose are all of kind (* -> *) -> (* -> *) -> * -> *. So they all send two functors to a third functor. That means that Product Const, Sum Identity, and Compose Maybe are all second-order functors, and things appropriate to take our "second-order fixpoint" of.

Conceptually, "Fix f" took a value with one hole, and we filled that hole with "Fix f" so there was no room for a type parameter. Now we've got an "f" with two holes, the first of which takes a functor, and the second of which is the hole of the resulting functor.

Unlike boring old "Fix", we can write Functor and Applicative instances for "FixF", and they're about as simple and compositional as we could possibly hope.

instance Functor (f (FixF f)) => Functor (FixF f) where
    fmap f (FixF x) = FixF $ fmap f x
 
instance Applicative (f (FixF f)) => Applicative (FixF f) where
    pure x = FixF $ pure x
    (FixF f) <*> (FixF x) = FixF (f <*> x)

But now we run into a new problem! It seems like this "a" parameter is just hanging out there, doing basically nothing. We take our classic functors and embed them in there, and they still only have "one hole" at the value level, so don't actually have any place to put the "a" type we now introduced. For example, we can write the following:

-- FixF . Compose . Just . FixF . Compose $ Nothing
-- > FixF (Compose (Just (FixF (Compose Nothing))))
-- :t FixF (Compose (Just (FixF (Compose Nothing))))
-- > FixF (Compose Maybe) a

We now introduce one new member of our basic constructions — a second order functor that acts like "const" on the type level, taking any functor and returning Identity.

 
newtype Embed (f :: * -> *) a = Embed a deriving (Show)
 
instance Functor (Embed f) where
    fmap f (Embed x) = Embed $ f x
 
instance Applicative (Embed f) where
    pure x = Embed x
    (Embed f) <*> (Embed x) = Embed (f x)

Now we can actually stick functorial values into our fixpoints:

-- FixF $ Embed "hi"
-- > FixF (Embed "hi")
 
-- fmap (++ " world") $ FixF (Embed "hi")
-- > FixF (Embed "hi world")
 
-- FixF . Product (Embed "hi") .
--        FixF . Product (Embed "there") . FixF $ undefined
-- > FixF (Product (Embed "hi")
--   (FixF (Product (Embed "there")
--   (FixF *** Exception: Prelude.undefined

You may have noticed that we seem to be able to use "product" to begin a chain of nested fixpoints, but we don't seem able to stick a "Maybe" in there to stop the chain. And it seems like we're not even "fixing" where we intend to be:

-- :t FixF . Product (Embed "hi") . FixF . Product (Embed "there") . FixF $ undefined
-- > FixF (Product (Embed f)) [Char]

That's because we're applying Product, which takes and yields arguments of kind * -> *, in a context where we really want to take and yield second-order functors as arguments -- things of kind (* -> *) -> * -> *. If we had proper kind polymorphism, "Product" and "ProductF" would be able to collapse (and maybe, soon, they will). But at least in the ghc 7.4.1 that I'm working with, we have to write the same darn thing, but "up a kind".

data ProductF f g (b :: * -> *) a =
      ProductF (f b a) (g b a) deriving Show
 
instance (Functor (f b), Functor (g b)) =>
         Functor (ProductF f g b) where
                fmap f (ProductF x y) = ProductF (fmap f x) (fmap f y)
 
instance (Applicative (f b), Applicative (g b)) =>
         Applicative (ProductF f g b) where
               pure x = ProductF (pure x) (pure x)
               (ProductF f g) <*> (ProductF x y) = ProductF (f <*> x) (g <*> y)

We can now do the following properly.

yy = FixF . ProductF (Embed "foo") $ InL $ Const ()
xx = FixF . ProductF (Embed "bar") . InR .
        FixF . ProductF (Embed "baz") . InL $ Const ()

So we've recovered proper lists in Haskell, as the "second-order fixpoint" of polynomial functors. And the types look right too:

-- :t yy
-- > FixF (ProductF Embed (Sum (Const ()))) [Char]

Because we've built our Applicative instances compositionally, we have an applicative for our list construction automatically:

-- (++) <$> yy <*> xx
-- > FixF (ProductF (Embed "foobar") (InL (Const ())))

This is precisely the "ZipList" applicative instance. In fact, our applicative instances from this "functor toolkit" are all "zippy" — matching up structure where possible, and "smashing it down" where not. This is because Sum, with its associated natural transformation logic, is the only way to introduce a disjoint choice of shape. Here are some simple examples with Sum to demonstrate:

-- liftA2 (++) (InL (Identity "hi")) $
--      InR (Product (Identity " there") (Const ([12::Int])))
-- > InL (Identity "hi there")
-- liftA2 (++) (InR (Identity "hi")) $ InL (Product (Identity " there") (Const ([12::Int])))
-- > InL (Product (Identity "hi there") (Const [12]))

We're always "smashing" towards the left. So in the first case, that means throwing away half of the pair. In the second case, it means injecting Const mempty into a pair, and then operating with that.

In any case, we now have infinite and possibly infinite branching structures. And not only are they Functors, but they're also Applicatives, and in a way that's uniform and straightforward to reason about.

In the next post, we'll stop building out our vocabulary of "base" Functors (though it's not quite complete) and instead push forward on what else these functors can provide "beyond" Applicative.

]]>
http://comonad.com/reader/2013/algebras-of-applicatives/feed/ 0
Abstracting with Applicatives
by Gershom Bazerman

Consider the humble Applicative. More than a functor, less than a monad. It gives us such lovely syntax. Who among us still prefers to write liftM2 foo a b when we could instead write foo <$> a <*> b? But we seldom use the Applicative as such — when Functor is too little, Monad is too much, but a lax monoidal functor is just right. I noticed lately a spate of proper uses of Applicative — Formlets (and their later incarnation in the reform library), OptParse-Applicative (and its competitor library CmdTheLine), and a post by Gergo Erdi on applicatives for declaring dependencies of computations. I also ran into a very similar genuine use for applicatives in working on the Panels library (part of jmacro-rpc), where I wanted to determine dependencies of a dynamically generated dataflow computation. And then, again, I stumbled into an applicative while cooking up a form validation library, which turned out to be a reinvention of the same ideas as formlets.

Given all this, it seems a post on thinking with applicatives is in order, showing how to build them up and reason about them. One nice thing about the approach we'll be taking is that it uses a "final" encoding of applicatives, rather than building up and then later interpreting a structure. This is in fact how we typically write monads (pace operational, free, etc.), but since we more often only determine our data structures are applicative after the fact, we often get some extra junk lying around (OptParse-Applicative, for example, has a GADT that I think is entirely extraneous).

So the usual throat clearing:

{-# LANGUAGE TypeOperators, MultiParamTypeClasses, FlexibleInstances,
StandaloneDeriving, FlexibleContexts, UndecidableInstances,
GADTs, KindSignatures, RankNTypes #-}
 
module Main where
import Control.Applicative hiding (Const)
import Data.Monoid hiding (Sum, Product)
import Control.Monad.Identity
instance Show a => Show (Identity a) where
    show (Identity x) = "(Identity " ++ show x ++ ")"

And now, let's start with a classic applicative, going back to the Applicative Programming With Effects paper:

data Const mo a = Const mo deriving Show
 
instance Functor (Const mo) where
    fmap _ (Const mo) = Const mo
 
instance Monoid mo => Applicative (Const mo) where
    pure _ = Const mempty
    (Const f) <*> (Const x) = Const (f <> x)

(Const lives in transformers as the Constant functor, or in base as Const)

Note that Const is not a monad. We've defined it so that its structure is independent of the `a` type. Hence if we try to write (>>=) of type Const mo a -> (a -> Const mo b) -> Const mo b, we'll have no way to "get out" the first `a` and feed it to our second argument.

One great thing about Applicatives is that there is no distinction between applicative transformers and applicatives themselves. This is to say that the composition of two applicatives is cleanly and naturally always also an applicative. We can capture this like so:

 
newtype Compose f g a = Compose (f (g a)) deriving Show
 
instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose $ (fmap . fmap) f x
 
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure = Compose . pure . pure
    (Compose f) <*> (Compose x) = Compose $ (<*>) <$> f <*> x

(Compose also lives in transformers)

Note that Applicatives compose two ways. We can also write:

data Product f g a = Product (f a) (g a) deriving Show
 
instance (Functor f, Functor g) => Functor (Product f g) where
    fmap f (Product  x y) = Product (fmap f x) (fmap f y)
 
instance (Applicative f, Applicative g) => Applicative (Product f g) where
    pure x = Product (pure x) (pure x)
    (Product f g) <*> (Product x y) = Product (f <*> x) (g <*> y)

(Product lives in transformers as well)

This lets us now construct an extremely rich set of applicative structures from humble beginnings. For example, we can reconstruct the Writer Applicative.

type Writer mo = Product (Const mo) Identity
 
tell :: mo -> Writer mo ()
tell x = Product (Const x) (pure ())
-- tell [1] *> tell [2]
-- > Product (Const [1,2]) (Identity ())

Note that if we strip away the newtype noise, Writer turns into (mo, a) which is isomorphic to the Writer monad. However, we've learned something along the way, which is that the monoidal component of Writer (as long as we stay within the rules of applicative) is entirely independent from the "identity" component. However, if we went on to write the Monad instance for our writer (by defining >>=), we'd have to "reach in" to the identity component to grab a value to hand back to the function yielding our monoidal component. Which is to say we would destroy this nice separation of "trace" and "computational content" afforded by simply taking the product of two Applicatives.
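To make the "reach in" concrete, here is a sketch of the Monad instance we would have to write (just for illustration; it relies on the FlexibleInstances pragma above):

instance Monoid mo => Monad (Product (Const mo) Identity) where
    return = pure
    Product (Const m) (Identity a) >>= f =
        -- we must inspect the Identity side to produce the rest of the trace
        case f a of
            Product (Const m') b -> Product (Const (m <> m')) b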

Now let's make things more interesting. It turns out that just as the composition of two applicatives may be a monad, so too the composition of two monads may be no stronger than an applicative!

We'll see this by introducing Maybe into the picture, for possibly failing computations.

type FailingWriter mo = Compose (Writer mo) Maybe
 
tellFW :: Monoid mo => mo -> FailingWriter mo ()
tellFW x = Compose (tell x *> pure (Just ()))
 
failFW :: Monoid mo => FailingWriter mo a
failFW = Compose (pure Nothing)
-- tellFW [1] *> tellFW [2]
-- > Compose (Product (Const [1,2]) (Identity (Just ())))

-- tellFW [1] *> failFW *> tellFW [2]
-- > Compose (Product (Const [1,2]) (Identity Nothing))

Maybe over Writer gives us the same effects we'd get in a Monad — either the entire computation fails, or we get the result and the trace. But Writer over Maybe gives us new behavior. We get the entire trace, even if some computations have failed! This structure, just like Const, cannot be given a proper Monad instance. (In fact if we take Writer over Maybe as a Monad, we get only the trace until the first point of failure).

This separation of a monoidal trace from computational effects (either entirely independent of a computation [via a product] or independent between parts of a computation [via Compose]) is the key to lots of neat tricks with applicative functors.

Next, let's look at Gergo Erdi's "Static Analysis with Applicatives" that is built using free applicatives. We can get essentially the same behavior directly from the product of the Const applicative with an arbitrary effectful monad representing our ambient environment of information. As long as we constrain ourselves to only querying it with the takeEnv function, then we can either read the left side of our product to statically read dependencies, or the right side to actually utilize them.

type HasEnv k m = Product (Const [k]) m
takeEnv :: (k -> m a) -> k -> HasEnv k m a
takeEnv f x = Product (Const [x]) (f x)
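For example, here is a sketch of the static side; the name getDependencies is mine, just for illustration:

-- read the statically gathered keys off the left of the product,
-- without ever running the effectful right-hand side
getDependencies :: HasEnv k m a -> [k]
getDependencies (Product (Const ks) _) = ks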

If we prefer, we can capture queries of a static environment directly with the standard Reader applicative, which is just a newtype over the function arrow. There are other variants of this that perhaps come closer to exactly how Erdi's post does things, but I think this is enough to demonstrate the general idea.

data Reader r a = Reader (r -> a)
instance Functor (Reader r) where
    fmap f (Reader x) = Reader (f . x)
instance Applicative (Reader r) where
    pure x = Reader $ pure x
    (Reader f) <*> (Reader x) = Reader (f <*> x)
 
runReader :: (Reader r a) -> r -> a
runReader (Reader f) = f
 
takeEnvNew :: (env -> k -> a) -> k -> HasEnv k (Reader env) a
takeEnvNew f x = Product (Const [x]) (Reader $ flip f x)

So, what then is a full formlet? It's something that can be executed in one context as a monoid that builds a form, and in another as a parser. So the top level must be a product.

type FormletOne mo a = Product (Const mo) Identity a

Below the product, we read from an environment and perhaps get an answer. So that's reader with a maybe.

type FormletTwo mo env a =
    Product (Const mo) (Compose (Reader env) Maybe) a

Now if we fail, we want to have a trace of errors. So we expand out the Maybe into a product as well to get the following, which adds monoidal errors:

type FormletThree mo err env a =
    Product (Const mo)
            (Compose (Reader env) (Product (Const err) Maybe)) a

But now we get errors whether or not the parse succeeds. We want to say either the parse succeeds or we get errors. For this, we can turn to the typical Sum functor, which currently lives as Coproduct in comonad-transformers, but will hopefully be moving as Sum to the transformers library in short order.

data Sum f g a = InL (f a) | InR (g a) deriving Show
 
instance (Functor f, Functor g) => Functor (Sum f g) where
    fmap f (InL x) = InL (fmap f x)
    fmap f (InR y) = InR (fmap f y)

The Functor instance is straightforward for Sum, but the applicative instance is puzzling. What should "pure" do? It needs to inject into either the left or the right, so clearly we need some form of "bias" in the instance. What we really need is the capacity to "work in" one side of the sum until compelled to switch over to the other, at which point we're stuck there. If two functors F and G are in a relationship such that we can always send f x -> g x in a way that "respects" fmap (that is to say, such that fmap f . fToG == fToG . fmap f), then we call this a natural transformation. The action that sends f to g is typically called "eta". (We actually want something slightly stronger called a "monoidal natural transformation" that respects not only the functorial action fmap but the applicative action <*>, but we can ignore that for now).

Now we can assert that as long as there is a natural transformation between g and f, then Sum f g can be made an Applicative, like so:

class Natural f g where
    eta :: f a -> g a
 
instance (Applicative f, Applicative g, Natural g f) =>
  Applicative (Sum f g) where
    pure x = InR $ pure x
    (InL f) <*> (InL x) = InL (f <*> x)
    (InR g) <*> (InR y) = InR (g <*> y)
    (InL f) <*> (InR x) = InL (f <*> eta x)
    (InR g) <*> (InL x) = InL (eta g <*> x)

The natural transformation we'll tend to use simply sends any functor to Const.

instance Monoid mo => Natural f (Const mo) where
    eta = const (Const mempty)

However, there are plenty of other natural transformations that we could potentially make use of, like so:

instance Applicative f =>
  Natural g (Compose f g) where
     eta = Compose . pure
 
instance (Applicative g, Functor f) => Natural f (Compose f g) where
     eta = Compose . fmap pure
 
instance (Natural f g) => Natural f (Product f g) where
    eta x = Product x (eta x)
 
instance (Natural g f) => Natural g (Product f g) where
    eta x = Product (eta x) x
 
instance Natural (Product f g) f where
    eta (Product x _ ) = x
 
instance Natural (Product f g) g where
    eta (Product _ x) = x
 
instance Natural g f => Natural (Sum f g) f where
    eta (InL x) = x
    eta (InR y) = eta y
 
instance Natural Identity (Reader r) where
    eta (Identity x) = pure x

In theory, there should also be a natural transformation that can be built magically from the product of any other two natural transformations, but that will just confuse the Haskell typechecker hopelessly. This is because we know that often different "paths" of typeclass choices will often be isomorphic, but the compiler has to actually pick one "canonical" composition of natural transformations to compute with, although multiple paths will typically be possible.

For similar reasons of avoiding overlap, we can't both have the terminal homomorphism that sends everything to "Const" and the initial homomorphism that sends "Identity" to anything like so:

-- instance Applicative g => Natural Identity g where
--     eta (Identity x) = pure x
 

We choose to keep the terminal transformation around because it is more generally useful for our purposes. As the comments below point out, it turns out that a version of "Sum" with the initial transformation baked in now lives in transformers as Lift.

In any case we can now write a proper Validation applicative:

type Validation mo = Sum (Const mo) Identity
 
validationError :: Monoid mo => mo -> Validation mo a
validationError x = InL (Const x)

This applicative will yield either a single result, or an accumulation of monoidal errors. It exists on hackage in the Validation package.
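A quick sketch of it in action, using the list monoid to accumulate error messages (ghci-style outputs, following the instances above):

-- two failures accumulate:
-- validationError ["bad name"] *> validationError ["bad age"]
-- > InL (Const ["bad name","bad age"])

-- a failure anywhere poisons the result, but we still collect every error
-- rather than stopping at the first one:
-- (,) <$> InR (Identity 5) <*> validationError ["bad age"]
-- > InL (Const ["bad age"])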

Now, based on the same principles, we can produce a full Formlet.

type Formlet mo err env a =
    Product (Const mo)
            (Compose (Reader env)
                     (Sum (Const err) Identity))
    a

All the type and newtype noise looks a bit ugly, I'll grant. But the idea is to think with structures built with applicatives, which gives guarantees that we're building applicative structures, and furthermore, structures with certain guarantees in terms of which components can be interpreted independently of which others. So, for example, we can strip away the newtype noise and find the following:

type FormletClean mo err env a = (mo, env -> Either err a)

Because we built this up from our basic library of applicatives, we also know how to write its applicative instance directly.

Now that we've gotten a basic algebraic vocabulary of applicatives, and especially now that we've produced this nifty Sum applicative (which I haven't seen presented before), we've gotten to where I intended to stop.

But lots of other questions arise, on two axes. First, what other typeclasses beyond applicative do our constructions satisfy? Second, what basic pieces of vocabulary are missing from our constructions — what do we need to add to flesh out our universe of discourse? (Fixpoints come to mind).

Also, what statements can we make about "completeness" — what portion of the space of all applicatives can we enumerate and construct in this way? Finally, why is it that monoids seem to crop up so much in the course of working with Applicatives? I plan to tackle at least some of these questions in future blog posts.

Natural Deduction, Sequent Calculus and Type Classes
by Dan Doel

By and large, there are two sorts of proof systems that people use (these days) when studying logic: natural deduction, and sequent calculus. I know of at least one other---Hilbert style---but it is older, and the above systems were invented due to dissatisfaction with Hilbert systems (for a programming analogy, Hilbert systems are like programming entirely with combinators (S, K, etc.), rather than a lambda calculus).

Natural Deduction

Probably the best way to categorize the difference, for the purpose of where we're eventually going, is that natural deduction focuses on the ways to build proof terms up from their constituent parts. This comes in the form of introduction and elimination rules for the various propositions. For instance, the rules for conjunction are:

 \inference{A \,\,\,\,\,\,\,\,\, B}{A \wedge B}[$\wedge$-I]

 \inference{A \wedge B}{A}[$\wedge$-E1] \,\,\,\,\,\, \inference{A \wedge B}{B}[$\wedge$-E2]

This spartan style gets a bit annoying (in my opinion) for the hypothetical premises of the implication introduction, but this can be solved by adding contexts:

 \inference{\Gamma, A \vdash B}{\Gamma \vdash A \rightarrow B}[$\rightarrow$-I]

 \inference{\Gamma \vdash A \rightarrow B \,\,\,\,\,\,\,\,\, \Gamma \vdash A}{\Gamma \vdash B}[$\rightarrow$-E]

This is the style most commonly adopted for presenting type theories, except we reason about terms with a type, rather than just propositions. The context we added for convenience above also becomes fairly essential for keeping track of variables:

 \inference{\Gamma \vdash M : A \,\,\,\,\,\,\,\,\, \Gamma \vdash N : B}{\Gamma \vdash (M, N) : A \times B}[$\times$-I]

 \inference{\Gamma \vdash M : A \times B}{\Gamma \vdash \mathsf{fst}\, M : A}[$\times$-E1]

 \inference{\Gamma \vdash M : A \times B}{\Gamma \vdash \mathsf{snd}\, M : B}[$\times$-E2]

 \inference{\Gamma, x : A \vdash M : B}{\Gamma \vdash (\lambda x:A. \,\, M) : A \rightarrow B}[$\rightarrow$-I]

 \inference{\Gamma \vdash M : A \rightarrow B \,\,\,\,\,\,\,\,\, \Gamma \vdash N : A}{\Gamma \vdash M \, N : B}[$\rightarrow$-E]

As can be seen, all the rules involve taking terms from the premise and building on them in the conclusion.

Sequent Calculi

The other type of system in question, sequent calculus, looks very similar, but represents a subtle shift in focus for our purposes (sequent calculi are a lot more obviously different when presenting classical logics). First, the inference rules relate sequents, which look a lot like our contextual judgments above, and I'll write them the same way. The difference is that not all rules operate on the conclusion side; some operate just on the context. Generally, introduction rules stay similar to natural deduction (and are called right rules), while elimination rules are replaced by manipulations of the context, and are called left rules. For pairs, we can use the rules:

 \inference{\Gamma \vdash A \,\,\,\,\,\,\,\,\, \Gamma \vdash B}{\Gamma \vdash A \wedge B}[$\wedge$-R]

 \inference{\Gamma, A, B \vdash C}{\Gamma, A \wedge B \vdash C}[$\wedge$-L]

We could also have two separate left rules:

\inference{\Gamma, A \vdash C}{\Gamma, A \wedge B \vdash C}[$\wedge$-L1]

\inference{\Gamma, B \vdash C}{\Gamma, A \wedge B \vdash C}[$\wedge$-L2]

But these two different sets are equivalent as long as we're not considering substructural logics. Do note, however, that we're moving from A on the top left to A \wedge B on the bottom left, using the fact that A \wedge B is sufficient to imply A. That is, projections apply contravariantly to the left.

It turns out that almost no type theory is done in this style; natural deduction is far and away more popular. There are, I think, a few reasons for this. The first is: how do we even extend the left rules to type theory (eliminations are obvious, by contrast)? I know of two ways. The first is to introduce pattern matching into the contexts, so our left rule becomes:

 \inference{\Gamma, x : A, y : B \vdash M : C}{\Gamma, (x, y) : A \times B \vdash M : C}[$\times$-L]

This is an acceptable choice (and may avoid some of the pitfalls in the next option), but it doesn't gel with your typical lambda calculus. It's probably more suited to a pattern calculus of some sort (although, even then, if you want to bend your brain, go look at the left rule for implication and try to figure out how it translates into such a theory; I think you probably need higher-order contexts of some sort). Anyhow, I'm not going to explore this further.

The other option (and one that I've seen in the literature) is that left rules actually involve a variable substitution. So we come up with the following rule:

 \inference{\Gamma, x : A, y : B \vdash M : C}{\Gamma, p : A \times B \vdash M[x := \mathsf{fst}\, p, y := \mathsf{snd}\, p] : C}[$\times$-L]

And with this rule, it becomes (I think) more obvious why natural deduction is preferred over sequent calculus, as implementing this rule in a type checker seems significantly harder. Checking the rules of natural deduction involves examining some outer-most structure of the term, and then checking the constituents of the term, possibly in an augmented context, and which rule we're dealing with is always syntax directed. But this left rule has no syntactic correspondent, so it seems as though we must nondeterministically try all left rules at each step, which is unlikely to result in a good algorithm. This is the same kind of problem that plagues extensional type theory, and ultimately results in only derivations being checkable, not terms.

The Type Class Connection

However, there are certain problems that I believe are well modeled by such a sequent calculus, and one of them is type class checking and associated dictionary translations. This is due mainly to the fact that the process is mainly context-directed term building, rather than term-directed type checking. As far as the type class algorithm goes, there are two interesting cases, having to do with the following two varieties of declaration:

 
  class Eq a => Ord a where ...
  instance (Eq a, Eq b) => Eq (a, b) where ...
 

It turns out that each of these leads to a left rule in a kind of type class sequent calculus:

 \inference{\Gamma, \mathbf{Eq} \, a \vdash M : T}{\Gamma, \mathbf{Ord} \,  a \vdash M : T}[Eq-pre-Ord]

 \inference{\Gamma, \mathbf{Eq} \, (a, b) \vdash M : T}{\Gamma, \mathbf{Eq} \, a, \mathbf{Eq} \, b \vdash M : T}[Eq-pair]

That is:

  1. if Eq a is a sufficient constraint for M : T, then the stronger constraint Ord a is also sufficient, so we can discharge the Eq a constraint and use Ord a instead.
  2. We can discharge an Eq (a, b) constraint using two constraints, Eq a, Eq b together with an instance telling us how to do so. This also works for instances without contexts, giving us rules like:

    \inference{\Gamma, \mathbf{Show\, Int} \vdash M : T}{\Gamma \vdash M : T}[Show-Int]

Importantly, the type inference algorithm for type classes specifies when we should use these rules based only on the contexts we're dealing with. Now, these look more like the logical sequent rules, but it turns out that they have corresponding type theory-like versions when we consider dictionary passing:

 \inference{\Gamma, eqd : \mathbf{Eq} \, a \vdash M : T}{\Gamma, ordd : \mathbf{Ord} \,  a \vdash M[eqd := \mathsf{eqOrdPrj}\, ordd] : T}[Eq-pre-Ord]

\inference{\Gamma, peq : \mathbf{Eq} \, (a, b) \vdash M : T}{\Gamma, aeq : \mathbf{Eq} \, a, beq : \mathbf{Eq} \, b \vdash M[peq := \mathsf{eqPair} \, aeq \, beq] : T}[Eq-pair]

And this kind of substituting into dictionary variables produces exactly the evidence passing translation we want.
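To ground eqOrdPrj and eqPair, here is roughly what the dictionaries and those two translations look like as plain Haskell values; the record and function names are mine, not what GHC actually generates.

-- explicit dictionaries for Eq and Ord (illustrative field names)
data EqD a = EqD { eqD :: a -> a -> Bool }

data OrdD a = OrdD { ordEqD :: EqD a, compareD :: a -> a -> Ordering }

-- Eq-pre-Ord: an Ord dictionary carries the evidence for Eq inside it
eqOrdPrj :: OrdD a -> EqD a
eqOrdPrj = ordEqD

-- Eq-pair: the instance (Eq a, Eq b) => Eq (a, b) is a function on dictionaries
eqPair :: EqD a -> EqD b -> EqD (a, b)
eqPair ea eb = EqD $ \(x1, y1) (x2, y2) -> eqD ea x1 x2 && eqD eb y1 y2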

Another way to look at the difference in feasibility is that type checking involves moving bottom-to-top across the rules; in natural deduction, this is always easy, and we need look only at the terms to figure out which we should do. Type class checking and dictionary translation moves from top-to-bottom, directed by the left hand context, and produces terms on the right via complex operations, and that is a perfect fit for the sequent calculus rules.

I believe this corresponds to the general opinion of those who have studied sequent calculi with regard to type theory. A quick search revealed mostly papers on proof search, rather than type checking, and type classes rather fall into that realm (they're a very limited form of proof search).

Unnatural Transformations and Quantifiers
by Dan Doel

Recently, a fellow in category land discovered a fact that we in Haskell land have actually known for a while (in addition to things most of us probably don't). Specifically, given two categories $\mathcal{C}$ and $\mathcal{D}$, a functor $G : \mathcal{C} \rightarrow \mathcal{D}$, and provided some conditions in $\mathcal{D}$ hold, there exists a monad $T^G$, the codensity monad of $G$.

In category theory, the codensity monad is given by the rather frightening expression:

$ T^G(a) = \int_r \left[\mathcal{D}(a, Gr), Gr\right] $

Where the integral notation denotes an end, and the square brackets denote a power, which allows us to take what is essentially an exponential of the objects of $\mathcal{D}$ by objects of $\mathcal{V}$, where $\mathcal{D}$ is enriched in $\mathcal{V}$. Provided the above end exists, $T^G$ is a monad regardless of whether $G$ has an adjoint, which is the usual way one thinks of functors (in general) giving rise to monads.

It also turns out that this construction is a sort of generalization of the adjunction case. If we do have $F \dashv G$, this gives rise to a monad $GF$. But, in such a case, $T^G \cong GF$, so the codensity monad is the same as the monad given by the adjunction when it exists, but codensity may exist when there is no adjunction.

In Haskell, this all becomes a bit simpler (to my eyes, at least). Our category $\mathcal{D}$ is always $\mathbf{Hask}$, which is enriched in itself, so powers are just function spaces. And all the functors we write will be rather like $\mathbf{Hask}$ (objects will come from kinds we can quantify over), so ends of functors will look like forall r. F r r where $F : \mathcal{C}^{op} \times \mathcal{C} \rightarrow \mathbf{Hask}$. Then:

newtype Codensity f a = Codensity (forall r. (a -> f r) -> f r)

As mentioned, we've known for a while that we can write a Monad instance for Codensity f without caring at all about f.
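For completeness, here is that instance, written directly against the newtype above; it is essentially the instance that ships in the kan-extensions package. Note that no constraint on f appears anywhere.

instance Functor (Codensity f) where
  fmap f (Codensity m) = Codensity (\k -> m (k . f))

instance Applicative (Codensity f) where
  pure a = Codensity (\k -> k a)
  Codensity mf <*> Codensity ma = Codensity (\k -> mf (\f -> ma (k . f)))

instance Monad (Codensity f) where
  return = pure
  Codensity m >>= f = Codensity (\k -> m (\a -> case f a of Codensity n -> n k))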

As for the adjunction correspondence, consider the adjunction between products and exponentials: $ - \times S \dashv S \rightarrow - $

This gives rise to the monad $S \rightarrow (- \times S)$, the state monad. According to the facts above, we should have that Codensity (s ->) (excuse the sectioning) is the same as state, and if we look, we see:

forall r. (a -> s -> r) -> s -> r

which is the continuation passing, or Church (or Boehm-Berarducci) encoding of the monad.

Now, it's also well known that for any monad, we can construct an adjunction that gives rise to it. There are multiple ways to do this, but the most accessible in Haskell is probably via the Kleisli category. So, given a monad $M$ on $\mathbf{Hask}$, there is a category $\mathbf{Hask}_M$ with the same objects, but where $\mathbf{Hask}_M(a, b) = \mathbf{Hask}(a, Mb)$. The identity for each object is return and composition of arrows is:

(f >=> g) x = f x >>= g

Our two functors are:

F a = a
F f = return . f

U a = M a
U f = (>>= f)

Verifying that $F \dashv U$ requires only that $\mathbf{Hask}_M(F-, =) \cong \mathbf{Hask}(-, U\!\!=)$, but this is just $\mathbf{Hask}(-, M\!\!=) \cong \mathbf{Hask}(-, M\!\!=)$, which is a triviality. Now we should have that $T^U = M$.

So, one of the simplest monads is reader, $(e \rightarrow)$. Now, $U$ just takes objects in the Kleisli category (which are objects in $\mathbf{Hask}$) and applies $M$ to them, so we should have Codensity (e ->) is reader. But earlier we had Codensity (e ->) was state. So reader is state, right?

We can actually arrive at this result another way. One of the most famous pieces of category theory is the Yoneda lemma, which states that the following correspondence holds for any functor $F : \mathcal{C} \rightarrow \mathbf{Set}$:

$ Fa \,\, \cong \, \mathbf{Set}^\mathcal{C}\left(\mathcal{C}(a,-), F\right) $

This also works for any functor into $\mathbf{Hask}$ and looks like:

F a ~= forall r. (a -> r) -> F r

for $F : \mathbf{Hask} \rightarrow \mathbf{Hask}$. But we also have our functor $U : \mathbf{Hask}_M \rightarrow \mathbf{Hask}$, which should look more like:

U a ~= forall r. (a -> M r) -> U r
M a ~= forall r. (a -> M r) -> M r

So, we fill in M = (e ->) and get that reader is isomorphic to state, right? What's going on?

To see, we have to take a closer look at natural transformations. Given two categories $\mathcal{C}$ and $\mathcal{D}$, and functors $F, G : \mathcal{C} \rightarrow \mathcal{D}$, a natural transformation $\phi : F \Rightarrow G$ is a family of maps $\phi_a : Fa \rightarrow Ga$ such that for every $f : a \rightarrow b$ the following diagram commutes:

$ \bfig \square<1000,1000>[Fa`Ga`Fb`Gb;\phi_a`Ff`Gf`\phi_b]\efig $

The key piece is what the morphisms look like. It's well known that parametricity ensures the naturality of t :: forall a. F a -> G a for $F, G : \mathbf{Hask} \rightarrow \mathbf{Hask}$, and it also works when the source is $\mathbf{Hask}^{op}$. It should also work for a category, call it $\mathbf{Hask}^{\sim}$, which has Haskell types as objects, but where $\mathbf{Hask}^{\sim}(a, b) = \mathbf{Hask}(a, b) \times \mathbf{Hask}(b, a)$, which is the sort of category that newtype Endo a = Endo (a -> a) is a functor from. So we should be at liberty to say:

Codensity Endo a = forall r. (a -> r -> r) -> r -> r ~= [a]
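The two directions of that isomorphism are easy to write down (a sketch; the nested forall needs RankNTypes):

{-# LANGUAGE RankNTypes #-}

toList :: (forall r. (a -> r -> r) -> r -> r) -> [a]
toList t = t (:) []

fromList :: [a] -> (forall r. (a -> r -> r) -> r -> r)
fromList xs cons nil = foldr cons nil xs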

However, hom types for $\mathbf{Hask}_M$ are not merely made up of $\mathbf{Hask}$ hom types on the same arguments, so naturality turns out not to be guaranteed. A functor $F : \mathbf{Hask}_M \rightarrow \mathbf{Hask}$ must take a Kleisli arrow $f : b \rightarrow Mc$ to an arrow $Ff : Fb \rightarrow Fc$, and transformations must commute with that mapping. So, if we look at our use of Yoneda, we are considering transformations $\phi : \mathbf{Hask}_M(a, -) \Rightarrow U$:

$ \bfig\square<1000,1000>[\mathbf{Hask}_M(a,b)`Ub`\mathbf{Hask}_M(a,c)`Uc;\phi_b`\mathbf{Hask}_M(a,f)`Uf`\phi_c]\efig $

Now, $\mathbf{Hask}_M(a,b) = \mathbf{Hask}(a, Mb)$ and $Ub = Mb$. So

t :: forall r. (a -> M r) -> M r

will get us the right type of maps. But, the above commutative square corresponds to the condition that for all f :: b -> M c:

t . (>=> f) = (>>= f) . t

So, if we have h :: a -> M b, Kleisli composing it with f and then feeding to t is the same as feeding h to t and then binding the result with f.

Now, if we go back to reader, we can consider the reader morphism:

f = const id :: a -> e -> e

For all relevant m and g, m >>= f = id and g >=> f = f. So the naturality condition here states that t f = id.

Now, t :: forall r. (a -> e -> r) -> e -> r. The general form of these is state actions (I've split e -> (a, e) into two pieces):

t f e = f (rd e) (st e)
  where
  rd :: e -> a
  st :: e -> e

If f = const id, then:

t (const id) e = st e
 where
 st :: e -> e

But our naturality condition states that this must be the identity, so we must have st = id. That is, the naturality condition selects t for which the corresponding state action does not change the state, meaning it is equivalent to a reader action! Presumably the definition of an end (which involves dinaturality) enforces a similar condition, although I won't work through it, as it'd be rather more complicated.

However, we have learned a lesson. Quantifiers do not necessarily enforce (di)naturality for every category with objects of the relevant kind. It is important to look at the hom types, not just the objects. In this case, the point of failure seems to be the common, extra s. Even though the type contains natural transformations for the similar functors over $\mathbf{Hask}$, they can (in general) still manipulate the shared parameter in ways that are not natural for the domain in question.

I am unsure of how exactly one could enforce the above condition in (Haskell's) types. For instance, if we consider:

forall r m. Monad m => (a -> m r) -> m r

This still contains transformations of the form:

t k = k a >> k a

And for this to be natural would require:

(k >=> f) a >> (k >=> f) a = (k a >> k a) >>= f

Which is not true for all possible instantiations of f. It seems as though leaving m unconstrained would be sufficient, as all that could happen is t feeding a value to k and yielding the result, but it seems likely to be over-restrictive.

Working around Hackage Outages
by Edward Kmett

Luite Stegeman has a mirror of the packages from Hackage.

He uses it to power his incredibly useful hdiff website.

During a Hackage outage, you can set up your local cabal configuration to point to it instead by (temporarily) replacing the remote-repo in your ~/.cabal/config file with:


remote-repo: hdiff.luite.com:http://hdiff.luite.com/packages/archive

and then running cabal update.

I have a ~/.cabal/config in my lens package that I use whenever hackage goes down.

If you use travis-ci, you can avoid build failures during hackage outages by first copying that config to ~/.cabal/config during before_install. You'll still be stuck waiting while it first tries to refresh from the real hackage server, but it only adds a few minutes to build times.
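
Concretely, the before_install section might look something like the following sketch (the file name here is a placeholder for wherever you keep the mirror config in your repository):

before_install:
  - cp .cabal.config.mirror ~/.cabal/config
  - cabal update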

Mirrored Lenses http://comonad.com/reader/2012/mirrored-lenses/ http://comonad.com/reader/2012/mirrored-lenses/#comments Mon, 25 Jun 2012 03:38:41 +0000 Edward Kmett http://comonad.com/reader/?p=600 Lenses are a great way to deal with functional references, but there are a few common issues that arise from their use.

  1. There is a long-standing folklore position that lenses do not support polymorphic updates. This has actually caused a fair bit of embarrassment for the folks who'd like to incorporate lenses in any Haskell record system improvement.
  2. Access control. It'd be nice to have read-only or write-only properties -- "one-way" or "mirrored" lenses, as it were. Moreover, lenses are commonly viewed as an all or nothing proposition, in that it is hard to mix them with arbitrary user functions.
  3. Finally, there is a bit of a cult around trying to generalize lenses by smashing a monad in the middle of them somewhere. It would be nice to be able to get into a list and work with each individual element in it without worrying about someone mucking up our lens laws, and perhaps to avoid the whole generalized lens issue entirely.

We'll take a whack at each of these concerns in turn today.

   {-# LANGUAGE Rank2Types #-}  -- we'll relax this later
   import Data.Complex -- for complex examples

First, let us consider the type of van Laarhoven lenses:

type Lens a b =
  forall f. Functor f =>
  (b -> f b) -> a -> f a

with a couple of examples:

 
realLens :: RealFloat a => Lens (Complex a) a
realLens f (r :+ i) = fmap (:+ i) (f r)
 
imagLens :: RealFloat a => Lens (Complex a) a
imagLens f (r :+ i) = fmap (r :+) (f i)
 

These lenses have some very nice properties that we're going to exploit. By far their nicest property is that you can compose them using just (.) and id from the Prelude rather than having to go off and write a Category.

Lens Families

Russell O'Connor recently noted that these lenses permit polymorphic update if you simply generalize their type signature to

 
type LensFamily a b c d =
  forall f. Functor f =>
  (c -> f d) -> a -> f b
 

I'd like to note that you can't just let these 4 arguments vary with complete impunity, so I'll be referring to these as "lens families" rather than polymorphic lenses, a point that I'll address further below. In short, we want the original lens laws to still hold in spite of the generalized type signature, and this forces some of these types to be related.

As an aside, each of the other lens types admits this same generalization! For instance, the Lens type in data-lens can be generalized using an indexed store comonad:

 
data Store c d b = Store (d -> b) c
 
instance Functor (Store c d) where
  fmap f (Store g c) = Store (f . g) c
 
newtype DataLensFamily a b c d = DataLensFamily (a -> Store c d b)
 

and we can freely convert back and forth to van Laarhoven lens families:

 
dlens :: LensFamily a b c d -> DataLensFamily a b c d
dlens l = DataLensFamily (l (Store id))
 
plens :: DataLensFamily a b c d -> LensFamily a b c d
plens (DataLensFamily l) f a = case l a of
  Store g c -> fmap g (f c)
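
As a tiny concrete instance (my own example, not from the post), here is the first-projection lens written directly in the data-lens style, together with its conversion back to a van Laarhoven lens family:

dataFstLens :: DataLensFamily (a,c) (b,c) a b
dataFstLens = DataLensFamily (\(x, y) -> Store (\x' -> (x', y)) x)

vanFstLens :: LensFamily (a,c) (b,c) a b
vanFstLens = plens dataFstLens  -- behaves like the fstLens defined below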
 

I leave it as an exercise to the reader to generalize the other lens types, but we'll stick to van Laarhoven lens families almost exclusively below.

As Russell noted, we can define functions to get, modify and set the target of a lens very easily. I'll create local names for Identity and Const, mostly to help give nicer error messages later.

We can read from a lens family:

 
infixl 8 ^.
newtype Getting b a = Getting { got :: b }
instance Functor (Getting b) where
    fmap _ (Getting b) = Getting b
(^.) :: a -> ((c -> Getting c d) -> a -> Getting c b) -> c
x ^. l = got (l Getting x)
 

We can modify the target of the lens:

 
newtype Setting a = Setting { unsetting :: a }
instance Functor Setting where
    fmap f (Setting a) = Setting (f a)
infixr 4 %=
(%=) :: ((c -> Setting d) -> a -> Setting b) -> (c -> d) -> a -> b
l %= f = unsetting . l (Setting . f)
 

We can set the target of the lens with impunity:

 
infixr 4 ^=
(^=) :: ((c -> Setting d) -> a -> Setting b) -> d -> a -> b
l ^= v = l %= const v
 

We can build a lens family from a getter/setter pair

 
lens :: (a -> c) -> (a -> d -> b) -> LensFamily a b c d
lens f g h a = fmap (g a) (h (f a))
 

or from a family of isomorphisms:

 
iso :: (a -> c) -> (d -> b) -> LensFamily a b c d
iso f g h a = fmap g (h (f a))
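
For example, the first-projection lens could just as well be built with lens (fstLens' is my name for this sketch; the direct definition of fstLens appears a little further below):

fstLens' :: LensFamily (a,c) (b,c) a b
fstLens' = lens fst (\(_,y) x -> (x,y))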
 

With these combinators in hand, we need some actual lens families to play with. Fortunately they are just as easy to construct as simple lenses. The only thing that changes is the type signature.

 
fstLens :: LensFamily (a,c) (b,c) a b
fstLens f (a,b) = fmap (\x -> (x,b)) (f a)
 
sndLens :: LensFamily (a,b) (a,c) b c
sndLens f (a,b) = fmap ((,) a) (f b)
 
swap :: (a,b) -> (b,a)
swap (a,b) = (b,a)
 
swapped :: LensFamily (a,b) (c,d) (b,a) (d,c)
swapped = iso swap swap
 

These can also build 'traditional' lenses:

 
negated :: Num a => Lens a a
negated = iso negate negate
 

And since Lens and LensFamily are both type aliases, we can freely mix and match lenses with lens families:

 
ghci> (1:+2,3) ^.fstLens.realLens
1.0
ghci> fstLens . realLens ^= 4 $ (1:+2,3)
(4.0 :+ 2.0,3)
 

But, we can now change types with our lens updates!

 
ghci> (fstLens . sndLens ^= "hello") ((1,()),3)
((1,"hello"),3)
 

We can even do things like use the combinator

 
traverseLens :: ((c -> c) -> a -> b) -> a -> b
traverseLens f = f id
 

to project a Functor out through an appropriate lens family:

 
ghci> :t traverseLens (fstLens . sndLens)
traverseLens (fstLens . sndLens)
  :: Functor f => ((a, f b), c) -> f ((a, b), c)
 

That takes care of polymorphic updates.

Why is it a Lens Family?

So, why do I use the term "lens family" rather than "polymorphic lens"?

In order for the lens laws to hold, the 4 types parameterizing our lens family must be interrelated.

In particular you need to be able to put back (with ^=) what you get out of the lens (with ^.) and put multiple times.
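
Spelled out with the combinators above, the laws in question look like this (a sketch of my own; they are stated for a simple Lens, and for a lens family they are read at the instantiation where a ~ b and c ~ d):

getPut :: Eq a => Lens a c -> a -> Bool
getPut l x = (l ^= (x ^. l) $ x) == x

putGet :: Eq c => Lens a c -> c -> a -> Bool
putGet l v x = ((l ^= v $ x) ^. l) == v

putPut :: Eq a => Lens a c -> c -> c -> a -> Bool
putPut l v v' x = (l ^= v' $ (l ^= v $ x)) == (l ^= v' $ x)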

This effectively constrains the space of possible legal lens families to those where there exists an index kind i, and two type families outer :: i -> *, and inner :: i -> *. If this were a viable type signature, then each lens family would actually have 2 parameters, yielding something like:

 
-- pseudo-Haskell
-- type LensFamily outer inner =
--    forall a b. LensFamily (outer a) (outer b) (inner a) (inner b)
 

but you can't pass in type families as arguments like that, and even if you could, their lack of injectivity doesn't give the type checker enough to work with to compose your lenses. By specifying all 4 type arguments independently, we give the compiler enough to work with. But since the arguments aren't just freely polymorphic and are instead related by these index types, I'm choosing to call them "lens families" rather than "polymorphic lenses".
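
As a concrete reading of that intuition (my gloss, not from the post), fstLens above fits the pattern with:

--   outer x = (x, c)        inner x = x
--
--   LensFamily (outer a) (outer b) (inner a) (inner b)
--     = LensFamily (a, c) (b, c) a b        -- exactly the type of fstLens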

Getters

Note, we didn't use the full polymorphism of the van Laarhoven lenses in the signatures of (^.), (%=) and (^=) above.

What happens when we restrict the type of Functor we're allowed to pass to our lens?

If we generalize the type of our getter ever so slightly from the type we pass to (^.) to permit composition, we get:

 
type Getter a c = forall r d b. (c -> Getting r d) -> a -> Getting r b
 

and we can make getters out of arbitrary Haskell functions that we have lying around with

 
-- | build a getting out of a function
getting :: (a -> b) -> Getter a b
getting g f = Getting . got . f . g
 

For example:

 
getFst :: Getter (a,b) a
getFst = getting fst
 
getSnd :: Getter (a,b) b
getSnd = getting snd
 

But this is particularly nice for things that can't be made into real lenses or lens families, because of loss of information:

 
getMagnitude, getPhase :: RealFloat a => Getter (Complex a) a
getMagnitude = getting magnitude
getPhase = getting phase
 
getAbs, getSignum  :: Num a => Getter a a
getAbs = getting abs
getSignum = getting signum
 

Notably, getMagnitude and getPhase can't be legal lenses because when the magnitude is 0, you lose phase information.

These can be mixed and matched with other lenses when dereferencing with (^.)

 
ghci> (0,(1:+2,3)) ^. getting snd . fstLens . getting magnitude
2.23606797749979
 

But we get a type error when we attempt to write to a Getter.

 
ghci> getting magnitude ^= 12
<interactive>:2:1:
    Couldn't match expected type `Setting d0'
                with actual type `Getting r0 d1'
    Expected type: (c0 -> Setting d0) -> a1 -> Setting b1
      Actual type: (c0 -> Getting r0 d1) -> a0 -> Getting r0 b0
    In the return type of a call of `getting'
    In the first argument of `(^=)', namely `getting magnitude'

Setters

So what about write-only properties?

These have a less satisfying solution. We have to break our lens family structure slightly to make something that can strictly only be written to, by disabling the ability to read our current value entirely.

 
type Setter a d b = (() -> Setting d) -> a -> Setting b
 
setting :: (a -> d -> b) -> Setter a d b
setting f g a = Setting (f a (unsetting (g ())))
 

Now we can make setters out of functions that take two arguments:

 
plus, times :: Num a => Setter a a a
plus = setting (+)
times = setting (*)
 
 
ghci> setting (+) ^= 12 $ 32
44
ghci> fstLens . setting (*) ^= 12 $ (2,3)
(24,3)
 

However, these lenses have the unsatisfying property that they can only be placed last in the chain of lenses we're setting.

 
ghci> (setting (+) . realLens ^= 12) 1
<interactive>:15:16:
    Couldn't match expected type `()' with actual type `Complex d0'
    Expected type: (d0 -> Setting d0) -> () -> Setting b0
      Actual type: (d0 -> Setting d0)
                   -> Complex d0 -> Setting (Complex d0)
    In the second argument of `(.)', namely `realLens'
    In the first argument of `(^=)', namely `setting (+) . realLens'

This isn't surprising, if you consider that to compose data-lens lenses you need to use %= to chain setters.

Modifiers

So what do we need to do to make a lens we can only modify but not read?

Lets restore the lens family structure!

 
type Modifier a b c d = (c -> Setting d) -> a -> Setting b
 
modifying :: ((c -> d) -> a -> b) -> Modifier a b c d
modifying f g a = Setting (f (unsetting . g) a)
 

modifying makes a modify-only lens family: you can modify the targets using local information, but you can't tell anyone about their contents.

This lets us work with a lens over a variable number of elements in a structure, without worrying about a user accidentally "putting back" too many or too few entries.

 
ghci> modifying map %= (+1) $ [1,2,3]
[2,3,4]
 

They can be composed with other lenses:

 
ghci> modifying map . sndLens %= (+1) $ [("hello",1),("goodbye",2)]
[("hello",2),("goodbye",3)]
 

and unlike with a Setter, you can compose a Modifier with a Modifier:

 
modifying fmap . modifying fmap
  :: (Functor g, Functor f) =>
     (c -> Setting d) -> f (g c) -> Setting (f (g d))
 

but they cannot be read from directly:

 
ghci> [1,2,3] ^. modifying fmap
<interactive>:18:12:
    Couldn't match expected type `Getting c0 d0'
                with actual type `Setting d1'
    Expected type: (c0 -> Getting c0 d0) -> [t0] -> Getting c0 b1
      Actual type: Modifier a0 b0 c0 d1
    In the return type of a call of `modifying'
    In the second argument of `(^.)', namely `modifying map'

We can map over restricted domains:

 
reals :: (RealFloat a, RealFloat b) => Modifier (Complex a) (Complex b) a b
reals = modifying (\f (r :+ i) -> f r :+ f i)
 

and everything still composes:

 
ghci> reals %= (+1) $  1 :+ 2
2 :+ 3
ghci> fstLens . reals %= (+1) $ (1 :+ 2, 4)
(2.0 :+ 3.0,4)
 

These aren't limited to actions that map over the entire structure, however!

 
ghci> :m + Data.Map
ghci> modifying (`adjust` "goodbye") %= (+1) $
      fromList [("hello",1),("goodbye",2)]
fromList [("goodbye",3),("hello",1)]
 

This lets us update potentially nested structures where something may or may not be present, which was fairly tedious to do with earlier lens representations.

Both the former map-like example and the latter update-like behavior were commonly used examples in calls for partial lenses or 'multi-lenses', but here they are able to be implemented using a restricted form of a more traditional lens type, and moreover they compose cleanly with other lenses and lens families.

Rank-1 Lens Families

At the very start I mentioned that you can dispense with the need for Rank-2 Types. Doing so requires much more tedious type signatures as the LensFamily, Getter, Setter and Lens aliases are no longer legal. Also, if you want to take a lens as an argument and use it in multiple contexts (e.g. as both a getter and a setter), you'll need to clone it to obtain a lens family. For example, this fails:

 
ghci> :t \l y -> l ^= y ^. l + 1 $ y
<interactive>:1:19:
    Couldn't match expected type `Getting d0 d1'
                with actual type `Setting d0'
    Expected type: (d0 -> Getting d0 d1) -> a1 -> Getting d0 b1
      Actual type: (d0 -> Setting d0) -> a0 -> Setting b0
    In the second argument of `(^.)', namely `l'
    In the first argument of `(+)', namely `y ^. l'

But we can clone the supplied monomorphic lens using the composition of dlens and plens above, since the DataLensFamily completely characterizes the LensFamily:

 
clone ::
  ((c -> Store c d d) -> (a -> Store c d b)) ->
  LensFamily a b c d
clone l f a = case l (Store id) a of
  Store g c -> fmap g (f c)
 

and then the following code type checks:

 
ghci> :t \l y -> clone l ^= y ^. clone l + 1 $ y
\l y -> clone l ^= y ^. clone l + 1 $ y
  :: Num d => ((c -> Store c d1 d1) -> a -> Store d d b) -> a -> b
 

This means you could implement an entire library to deal with lens families with restricted getters and setters and remain within the confines of Haskell 98. However, the type signatures are considerably less elegant than what becomes available when you simply add Rank2Types.

Conclusion

So, we've demonstrated that van Laarhoven lens families let you have lenses that permit polymorphic update, that can be restricted to getters, setters or modifiers only, and that still compose easily using the existing (.) and id from the Prelude.

I think the practical existence and power of these combinators make a strong case for their use in any serious record reform proposal.

My thanks go to Russell O'Connor. He first noticed that you can generalize van Laarhoven lenses and proposed the clone combinator as a path to Haskell 98/2010 compatibility, while retaining the nicer composition model.

Where’s Waldo? http://comonad.com/reader/2012/wheres-waldo/ http://comonad.com/reader/2012/wheres-waldo/#comments Sun, 10 Jun 2012 23:54:02 +0000 Edward Kmett http://comonad.com/reader/?p=592 No, I don't mean like this, but rather: if you spent any time trying to figure out xkcd's Umwelt April Fool comic this year, you may be interested in the Haskell source code. They used all sorts of information about you (the browser you were using, the resolution of your screen, the geocoding of the network address you came from, etc.) to serve up a custom web comic.

Today, davean posted to github the code for waldo, the engine he wrote to drive that comic.

Alas, he was not kind enough to actually supply the code for the umwelt comic strip itself, so you'll still be left wondering if the internet managed to find all of the Easter eggs. (Are they still Easter eggs when you release something a week before Easter?) You may find the list of links below useful if you want to get a feel for the different responses it gave people.

[ Article | xkcd's Forum | Hacker News | /r/haskell ]

[Update: Jun 10, 9:09pm] davean just posted a rather insightful post mortem of the development of waldo that talks a bit about why xkcd uses Haskell internally.

Catamorphism Knol http://comonad.com/reader/2012/catamorphism-knol/ http://comonad.com/reader/2012/catamorphism-knol/#comments Sun, 27 May 2012 18:17:24 +0000 Edward Kmett http://comonad.com/reader/?p=579 I was contacted by someone who wanted to read my old catamorphism knol, despite the fact that Google Knol is no more.

Fortunately, while it was rather inconvenient that they shut down Google Knol completely, and I'll forever remember a knol as a "unit of abandonment", Google did provide a nice way to download at least your own user content, and for that I am grateful.

I have fixed up the internal linkage as much as possible and have placed a copy of the original article below.

Catamorphisms: A Knol

Sadly, as I am not "Dark Magus", I am unable to download the Russian translation. If anyone knows how to contact him, I would love to obtain and preserve a copy of the translation as well.
