The Comonad.Reader » Haskell
types, (co)monads, substructural logic
http://comonad.com/reader

Japanese “ekmett” Workshop (Part 1 of 2)
Posted Mon, 01 Apr 2013 by Edward Kmett
http://comonad.com/reader/2013/japanese-workshop-1/

A couple of weeks back, one of my coworkers brought to my attention a several-hour-long workshop in Japan to go over and describe a number of my libraries, hosted by TANAKA Hideyuki — not the voice actor, I checked!

I was incredibly honored and I figured that if that many people (they had 30 or so registered attendees and 10 presentations) were going to spend that much time going over software that I had written, I should at least offer to show up!

I'd like to apologize for any errors in the romanization of people's names or misunderstandings I may have in the following text. My grasp of Japanese is very poor! Please feel free to send me corrections or additions!

Surprise!

Sadly, my boss's immediate reaction to hearing that there was a workshop in Japan about my work was to quip, "You're saying you're huge in Japan?" With him conspicuously not offering to fly me out, I had to settle for surprising the organizers and attending via Google Hangout.

Commentary and Logs

@nushio was very helpful in getting me connected, and while the speakers gave their talks I sat on the irc.freenode.net #haskell-lens channel and Google Hangout and answered questions and provided a running commentary with more details and references. Per freenode policy the fact that we were logging the channel was announced -- well, at least before things got too far underway.

Here is the IRC session log as a gist. IKEGAMI Daisuke @ikegami__ (ikeg in the IRC log) tried to keep up a high-level running commentary in the log about what was happening in the video, which may be helpful if you are trying to follow along with each talk retroactively.

Other background chatter and material is strewn across Twitter under the #ekmett_conf hashtag and on a Japanese Twitter aggregator named Togetter.

Getting Started

The 1 PM start time in Shibuya, Tokyo translates to midnight at the start of Easter here in Boston, which meant that by the time we reached the Q&A session roughly six hours later I was a bit loopy from lack of sleep, but they were incredibly polite and didn't seem to mind my long, rambling responses.

Thanks to the organizers, we have video of the vast majority of the event! There was no audio for the first couple of minutes, and the recording machine lost power for the last talk and the Q&A session at the end, as we ran somewhat longer than they had originally scheduled. Since I was attending remotely and a number of others flitted in and out over the course of the night, they were nice enough to put most of the slides and background material online.

profunctors by Liyang HU and HIBINO Kei

Liyang Hu (@liyanghu) started the session off with a nicely self-contained crash course on my profunctors package, since profunctors are used fairly heavily inside the implementation of lens and machines, with a couple of detours into contravariant and bifunctors.

His presentation materials are available interactively from the new FP Complete School of Haskell. You can also watch the video recording of his talk on ustream.

This talk was followed by a much more condensed version of very similar content in Japanese by Hibino Kei (@khibino). His talk was more focused on the relationship between arrows and profunctors, and the slides are available through SlideShare.

lens by @its_out_of_tune

Once the necessary background material was out of the way, the talk on lens -- arguably the presentation that most of the people were there for -- came early.


@its_out_of_tune gave an incredibly dense overview of how to use the main parts of the lens package in Japanese. His slides are available online and here is a recording of his talk.

Over the course of half an hour, he was able to cram in a great cross-section of the library, including material I hadn't been able to get to even with 4x the time available during my New York talk: how to use the lens template-haskell code to automatically generate lenses for user data types, and how to use the lens Action machinery.

free and free-game by KINOSHITA Fumiaki

Next up was my free package and the neat free-game engine that Kinoshita Fumiaki (@fumieval) built on top of it.

The slides were in English, though the talk and humor were very Japanese. ^_^


That said, he had some amazingly nice demos, including a live demo of his tetris clone, Monaris, which is visible about 10 minutes into the video!

ad by @nebutalab

@nebutalab, like me, joined the session remotely through Google Hangout, and proceeded to give a tutorial on how forward mode automatic differentiation works via my ad package.

His slides were made available before the talk, and the video is available in two parts due to a technical hiccup in the middle of the recording.


I'm currently working to drastically simplify the API for ad with Alex Lang. Fortunately almost all of the material in this presentation will still be relevant to the new design.
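As background for readers who haven't seen it before (this sketch is mine, not taken from the talk or from the ad package's actual internals): forward mode AD boils down to arithmetic on dual numbers, where every value carries its derivative along for the ride.

```haskell
-- Minimal forward-mode AD via dual numbers: an illustrative sketch,
-- not the ad package's real (far more general) representation.
data Dual = Dual { primal :: Double, tangent :: Double }

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- differentiate f at x by seeding the tangent with 1
diff' :: (Dual -> Dual) -> Double -> Double
diff' f x = tangent (f (Dual x 1))
```

For example, diff' (\x -> x*x + 3) 5 evaluates to 10.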

tables by MURAYAMA Shohei

Next up, Murayama Shohei (@yuga) gave an introduction to tables, a small in-memory data store that I wrote a few months back to sit on top of lens.

Video of @yuga's talk and his slides are available, which I think makes this the first public talk about this project. -_^

machines by YOSHIDA Sanshiro

Yoshida Sanshiro (@halcat0x15a) gave a nice overview of the currently released version of machines including a lot of examples! I think he may have actually written more code using machines just for demonstrations than I have written using it myself.

Video of his talk is available along with his slide deck -- just tap left or right to move through the slides. He has also written a blog post documenting his early explorations of the library, and some thoughts about using it with attoparsec.

I've recently been trying to redesign machines with my coworker Paul Chiusano (@pchiusano), and we've begun greatly simplifying the design of machines based on some work he has been doing in Scala. Unfortunately, this means many of the particulars of this talk will soon be outdated, but the overall 'feel' of working with machines should be preserved across the change-over. Some of these changes can be seen in the master branch on github now.

More to come

There were 4 more sessions, but alas, I'm out of time for the moment! I'll continue this write-up with more links to the source material and my thoughts as soon as I can tomorrow!

Unnatural Transformations and Quantifiers
Posted Sun, 23 Sep 2012 by Dan Doel
http://comonad.com/reader/2012/unnatural-transformations-and-quantifiers/

Recently, a fellow in category land discovered a fact that we in Haskell land have actually known for a while (in addition to things most of us probably don't). Specifically, given two categories $\mathcal{C}$ and $\mathcal{D}$, a functor $G : \mathcal{C} \rightarrow \mathcal{D}$, and provided some conditions in $\mathcal{D}$ hold, there exists a monad $T^G$, the codensity monad of $G$.

In category theory, the codensity monad is given by the rather frightening expression:

$ T^G(a) = \int_r \left[\mathcal{D}(a, Gr), Gr\right] $

Where the integral notation denotes an end, and the square brackets denote a power, which allows us to take what is essentially an exponential of the objects of $\mathcal{D}$ by objects of $\mathcal{V}$, where $\mathcal{D}$ is enriched in $\mathcal{V}$. Provided the above end exists, $T^G$ is a monad regardless of whether $G$ has an adjoint, which is the usual way one thinks of functors (in general) giving rise to monads.

It also turns out that this construction is a sort of generalization of the adjunction case. If we do have $F \dashv G$, this gives rise to a monad $GF$. But, in such a case, $T^G \cong GF$, so the codensity monad is the same as the monad given by the adjunction when it exists, but codensity may exist when there is no adjunction.

In Haskell, this all becomes a bit simpler (to my eyes, at least). Our category $\mathcal{D}$ is always $\mathbf{Hask}$, which is enriched in itself, so powers are just function spaces. And all the functors we write will be rather like $\mathbf{Hask}$ (objects will come from kinds we can quantify over), so ends of functors will look like forall r. F r r where $F : \mathcal{C}^{op} \times \mathcal{C} \rightarrow \mathbf{Hask}$. Then:

newtype Codensity f a = Codensity (forall r. (a -> f r) -> f r)

As mentioned, we've known for a while that we can write a Monad instance for Codensity f without caring at all about f.
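For reference, here is that instance, restating the newtype with a field selector so the block stands alone (the Applicative instance is included for modern GHC, where it is a superclass of Monad). Note that f appears with no constraint whatsoever:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Codensity f a = Codensity { runCodensity :: forall r. (a -> f r) -> f r }

instance Functor (Codensity f) where
  fmap f (Codensity m) = Codensity (\k -> m (k . f))

instance Applicative (Codensity f) where
  pure a = Codensity (\k -> k a)
  Codensity mf <*> Codensity ma = Codensity (\k -> mf (\f -> ma (k . f)))

instance Monad (Codensity f) where
  -- sequencing just threads the continuation; f is never used directly
  Codensity m >>= f = Codensity (\k -> m (\a -> runCodensity (f a) k))
```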

As for the adjunction correspondence, consider the adjunction between products and exponentials: $ - \times S \dashv S \rightarrow - $

This gives rise to the monad $S \rightarrow (- \times S)$, the state monad. According to the facts above, we should have that Codensity (s ->) (excuse the sectioning) is the same as state, and if we look, we see:

forall r. (a -> s -> r) -> s -> r

which is the continuation passing, or Church (or Boehm-Berarducci) encoding of the monad.
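We can witness that isomorphism directly. This is a quick sketch; the names toStateFn and fromStateFn are ad hoc, not from any package:

```haskell
{-# LANGUAGE RankNTypes #-}

-- the Church/CPS encoding of state from above
type CPSState s a = forall r. (a -> s -> r) -> s -> r

-- recover the ordinary state function by passing the pairing constructor
toStateFn :: CPSState s a -> s -> (a, s)
toStateFn m = m (,)

-- embed an ordinary state function into the CPS form
fromStateFn :: (s -> (a, s)) -> CPSState s a
fromStateFn st k s = case st s of (a, s') -> k a s'
```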

Now, it's also well known that for any monad, we can construct an adjunction that gives rise to it. There are multiple ways to do this, but the most accessible in Haskell is probably via the Kleisli category. So, given a monad $M$ on $\mathbf{Hask}$, there is a category $\mathbf{Hask}_M$ with the same objects, but where $\mathbf{Hask}_M(a, b) = \mathbf{Hask}(a, Mb)$. The identity for each object is return and composition of arrows is:

(f >=> g) x = f x >>= g

Our two functors are:

F a = a
F f = return . f

U a = M a
U f = (>>= f)

Verifying that $F \dashv U$ requires only that $\mathbf{Hask}_M(F-, =) \cong \mathbf{Hask}(-, U\!\!=)$, but this is just $\mathbf{Hask}(-, M\!\!=) \cong \mathbf{Hask}(-, M\!\!=)$, which is a triviality. Now we should have that $T^U = M$.

So, one of the simplest monads is reader, $(e \rightarrow)$. Now, $U$ just takes objects in the Kleisli category (which are objects in $\mathbf{Hask}$) and applies $M$ to them, so we should have Codensity (e ->) is reader. But earlier we had Codensity (e ->) was state. So reader is state, right?

We can actually arrive at this result another way. One of the most famous pieces of category theory is the Yoneda lemma, which states that the following correspondence holds for any functor $F : \mathcal{C} \rightarrow \mathbf{Set}$:

$ Fa \,\, \cong \, \mathbf{Set}^{\mathcal{C}}\left(\mathcal{C}(a,-), F\right) $

This also works for any functor into $\mathbf{Hask}$ and looks like:

F a ~= forall r. (a -> r) -> F r

for $F : \mathbf{Hask} \rightarrow \mathbf{Hask}$. But we also have our functor $U : \mathbf{Hask}_M \rightarrow \mathbf{Hask}$, which should look more like:

U a ~= forall r. (a -> M r) -> U r
M a ~= forall r. (a -> M r) -> M r

So, we fill in M = (e ->) and get that reader is isomorphic to state, right? What's going on?

To see, we have to take a closer look at natural transformations. Given two categories $\mathcal{C}$ and $\mathcal{D}$, and functors $F, G : \mathcal{C} \rightarrow \mathcal{D}$, a natural transformation $\phi : F \Rightarrow G$ is a family of maps $\phi_a : Fa \rightarrow Ga$ such that for every $f : a \rightarrow b$ the following diagram commutes:

$ \bfig \square<1000,1000>[Fa`Ga`Fb`Gb;\phi_a`Ff`Gf`\phi_b]\efig $

The key piece is what the morphisms look like. It's well known that parametricity ensures the naturality of t :: forall a. F a -> G a for $F, G : \mathbf{Hask} \rightarrow \mathbf{Hask}$, and it also works when the source is $\mathbf{Hask}^{op}$. It should also work for a category, call it $\mathbf{Hask}^{\sim}$, which has Haskell types as objects, but where $\mathbf{Hask}^{\sim}(a, b) = \mathbf{Hask}(a, b) \times \mathbf{Hask}(b, a)$, which is the sort of category that newtype Endo a = Endo (a -> a) is a functor from. So we should be at liberty to say:

Codensity Endo a = forall r. (a -> r -> r) -> r -> r ~= [a]
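Indeed, this is the familiar foldr/build (Church) encoding of lists, and the isomorphism can be exhibited directly (a small sketch; the names are ad hoc):

```haskell
{-# LANGUAGE RankNTypes #-}

type ChurchList a = forall r. (a -> r -> r) -> r -> r

-- instantiate the fold at the list constructors
churchToList :: ChurchList a -> [a]
churchToList l = l (:) []

-- a list becomes the function that folds over it
listToChurch :: [a] -> ChurchList a
listToChurch xs cons nil = foldr cons nil xs
```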

However, hom types for $\mathbf{Hask}_M$ are not merely made up of $\mathbf{Hask}$ hom types on the same arguments, so naturality turns out not to be guaranteed. A functor $F : \mathbf{Hask}_M \rightarrow \mathbf{Hask}$ must take a Kleisli arrow $f : b \rightarrow Mc$ to an arrow $Ff : Fb \rightarrow Fc$, and transformations must commute with that mapping. So, if we look at our use of Yoneda, we are considering transformations $\phi : \mathbf{Hask}_M(a, -) \Rightarrow U$:

$ \bfig\square<1000,1000>[\mathbf{Hask}_M(a,b)`Ub`\mathbf{Hask}_M(a,c)`Uc;\phi_b`\mathbf{Hask}_M(a,f)`Uf`\phi_c]\efig $

Now, $\mathbf{Hask}_M(a,b) = \mathbf{Hask}(a, Mb)$ and $Ub = Mb$. So

t :: forall r. (a -> M r) -> M r

will get us the right type of maps. But, the above commutative square corresponds to the condition that for all f :: b -> M c:

t . (>=> f) = (>>= f) . t

So, if we have h :: a -> M b, Kleisli composing it with f and then feeding to t is the same as feeding h to t and then binding the result with f.

Now, if we go back to reader, we can consider the reader morphism:

f = const id :: a -> e -> e

For all relevant m and g, m >>= f = id and g >=> f = f. So the naturality condition here states that t f = id.

Now, t :: forall r. (a -> e -> r) -> e -> r. The general form of these is state actions (I've split e -> (a, e) into two pieces):

t f e = f (rd e) (st e)
  where
  rd :: e -> a
  st :: e -> e

If f = const id, then:

t (const id) e = st e
 where
 st :: e -> e

But our naturality condition states that this must be the identity, so we must have st = id. That is, the naturality condition selects t for which the corresponding state action does not change the state, meaning it is equivalent to a reader action! Presumably the definition of an end (which involves dinaturality) enforces a similar condition, although I won't work through it, as it'd be rather more complicated.
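To make the failure tangible, here is a concrete inhabitant of the quantified type whose underlying state action bumps the state (a small sketch of my own, with e fixed to Int for simplicity):

```haskell
-- rd = id, st = (+1): a "state action" that modifies the state
t :: (Int -> Int -> r) -> Int -> r
t f e = f e (e + 1)

-- the reader morphism from the post
constId :: a -> Int -> Int
constId = const id

-- Naturality would require t constId = id, but t constId 5 evaluates
-- to 6, so t is not natural for the Kleisli category of reader, even
-- though the quantifier happily accepts it.
```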

However, we have learned a lesson. Quantifiers do not necessarily enforce (di)naturality for every category with objects of the relevant kind. It is important to look at the hom types, not just the objects. In this case, the point of failure seems to be the common, extra s. Even though the type contains natural transformations for the similar functors over $\mathbf{Hask}$, they can (in general) still manipulate the shared parameter in ways that are not natural for the domain in question.

I am unsure of how exactly one could enforce the above condition in (Haskell's) types. For instance, if we consider:

forall r m. Monad m => (a -> m r) -> m r

This still contains transformations of the form:

t k = k a >> k a

And for this to be natural would require:

(k >=> f) a >> (k >=> f) a = (k a >> k a) >>= f

Which is not true for all possible instantiations of f. It seems as though leaving m unconstrained would be sufficient, as all that could happen is t feeding a value to k and yielding the result, but it seems likely to be over-restrictive.
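We can check that this t really is unnatural by instantiating m to the list monad (a quick sketch; k0 and f0 are illustrative names of my own):

```haskell
-- the suspect transformation: runs its continuation twice
t :: Monad m => (() -> m r) -> m r
t k = k () >> k ()

-- Kleisli composition, written out to keep the block self-contained
comp :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
comp g h x = g x >>= h

k0 :: () -> [Int]
k0 () = [1, 2]

f0 :: Int -> [Int]
f0 x = [x, x]

-- naturality would require these two to coincide:
lhs, rhs :: [Int]
lhs = t (k0 `comp` f0)  -- 16 elements
rhs = t k0 >>= f0       -- 8 elements
```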

Working around Hackage Outages
Posted Wed, 29 Aug 2012 by Edward Kmett
http://comonad.com/reader/2012/hackage-mirror/

Luite Stegeman has a mirror of the packages from Hackage.

He uses it to power his incredibly useful hdiff website.

During a Hackage outage, you can set up your local cabal configuration to point to it instead by (temporarily) replacing the remote-repo in your ~/.cabal/config file with:


remote-repo: hdiff.luite.com:http://hdiff.luite.com/packages/archive

and then running cabal update.

I keep a copy of such a ~/.cabal/config in my lens package, which I use whenever Hackage goes down.

If you use travis-ci, you can avoid build failures during hackage outages by first copying that config to ~/.cabal/config during before_install. -- You'll still be stuck waiting while it first tries to refresh from the real hackage server, but it only adds a few minutes to buildbot times.
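Concretely, the .travis.yml fragment might look something like this (a sketch; the file name mirror-config for the checked-in copy is my own invention, so use whatever path you keep it at):

```yaml
# hypothetical .travis.yml fragment
before_install:
  - mkdir -p ~/.cabal
  - cp mirror-config ~/.cabal/config   # the config with the hdiff remote-repo
  - cabal update
```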

Mirrored Lenses
Posted Mon, 25 Jun 2012 by Edward Kmett
http://comonad.com/reader/2012/mirrored-lenses/

Lenses are a great way to deal with functional references, but a few common issues arise from their use.

  1. There is a long-standing folklore position that lenses do not support polymorphic updates. This has actually caused a fair bit of embarrassment for the folks who'd like to incorporate lenses in any Haskell record system improvement.
  2. Access control. It'd be nice to have read-only or write-only properties -- "one-way" or "mirrored" lenses, as it were. Moreover, lenses are commonly viewed as an all or nothing proposition, in that it is hard to mix them with arbitrary user functions.
  3. Finally, there is a bit of a cult around trying to generalize lenses by smashing a monad in the middle of them somewhere. It would be nice to be able to get into a list and work with each individual element in it without worrying about someone mucking up our lens laws, and perhaps to avoid the whole generalized-lens issue entirely.

We'll take a whack at each of these concerns in turn today.

   {-# LANGUAGE Rank2Types #-}  -- we'll relax this later
   import Data.Complex -- for complex examples

First, let us consider the type of van Laarhoven lenses:

type Lens a b =
  forall f. Functor f =>
  (b -> f b) -> a -> f a

with a couple of examples:

 
realLens :: RealFloat a => Lens (Complex a) a
realLens f (r :+ i) = fmap (:+ i) (f r)
 
imagLens :: RealFloat a => Lens (Complex a) a
imagLens f (r :+ i) = fmap (r :+) (f i)
 

These lenses have some very nice properties that we're going to exploit. By far their nicest property is that you can compose them using just (.) and id from the Prelude rather than having to go off and write a Category.

Lens Families

Russell O'Connor recently noted that these lenses permit polymorphic update if you simply generalize their type signature to

 
type LensFamily a b c d =
  forall f. Functor f =>
  (c -> f d) -> a -> f b
 

I'd like to note that you can't just let these 4 arguments vary with complete impunity, so I'll be referring to these as "lens families" rather than polymorphic lenses, a point that I'll address further below. In short, we want the original lens laws to still hold in spite of the generalized type signature, and this forces some of these types to be related.
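For the record, the constraint in question is the usual three lens laws. Here is a self-contained sketch that spot-checks them for a pair lens, using prefix versions (locally named view and set) of the (^.) and (^=) combinators this post defines:

```haskell
-- prefix restatements of the post's combinators, so this block compiles alone
newtype Getting c a = Getting { got :: c }
instance Functor (Getting c) where
  fmap _ (Getting c) = Getting c

newtype Setting a = Setting { unsetting :: a }
instance Functor Setting where
  fmap f (Setting a) = Setting (f a)

view :: ((c -> Getting c d) -> a -> Getting c b) -> a -> c
view l = got . l Getting

set :: ((c -> Setting d) -> a -> Setting b) -> d -> a -> b
set l v = unsetting . l (Setting . const v)

-- a lens family onto the first component of a pair
fstL :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
fstL f (a, c) = fmap (\b -> (b, c)) (f a)

-- put-get: view fstL (set fstL "x" (1, True))       == "x"
-- get-put: set fstL (view fstL (1, True)) (1, True) == (1, True)
-- put-put: set fstL 9 (set fstL 8 (1, True))        == set fstL 9 (1, True)
```

Note that the put-get law typechecks even though the update is polymorphic: setting "x" into (1, True) changes the type of the first component.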

As an aside, each of the other lens types admit this same generalization! For instance the Lens type in data-lens can be generalized using an indexed store comonad:

 
data Store c d b = Store (d -> b) c
 
instance Functor (Store c d) where
  fmap f (Store g c) = Store (f . g) c
 
newtype DataLensFamily a b c d = DataLensFamily (a -> Store c d b)
 

and we can freely convert back and forth to van Laarhoven lens families:

 
dlens :: LensFamily a b c d -> DataLensFamily a b c d
dlens l = DataLensFamily (l (Store id))
 
plens :: DataLensFamily a b c d -> LensFamily a b c d
plens (DataLensFamily l) f a = case l a of
  Store g c -> fmap g (f c)
 

I leave it as an exercise to the reader to generalize the other lens types, but we'll stick to van Laarhoven lens families almost exclusively below.

As Russell noted, we can define functions to get, modify and set the target of a lens very easily. I'll create local names for Identity and Const, mostly to help give nicer error messages later.

We can read from a lens family:

 
infixl 8 ^.
newtype Getting b a = Getting { got :: b }
instance Functor (Getting b) where
    fmap _ (Getting b) = Getting b
(^.) :: a -> ((c -> Getting c d) -> a -> Getting c b) -> c
x ^. l = got (l Getting x)
 

We can modify the target of the lens:

 
newtype Setting a = Setting { unsetting :: a }
instance Functor Setting where
    fmap f (Setting a) = Setting (f a)
infixr 4 %=
(%=) :: ((c -> Setting d) -> a -> Setting b) -> (c -> d) -> a -> b
l %= f = unsetting . l (Setting . f)
 

We can set the target of the lens with impunity:

 
infixr 4 ^=
(^=) :: ((c -> Setting d) -> a -> Setting b) -> d -> a -> b
l ^= v = l %= const v
 

We can build a lens family from a getter/setter pair

 
lens :: (a -> c) -> (a -> d -> b) -> LensFamily a b c d
lens f g h a = fmap (g a) (h (f a))
 

or from a family of isomorphisms:

 
iso :: (a -> c) -> (d -> b) -> LensFamily a b c d
iso f g h a = fmap g (h (f a))
 

With these combinators in hand, we need some actual lens families to play with. Fortunately they are just as easy to construct as simple lenses. The only thing that changes is the type signature.

 
fstLens :: LensFamily (a,c) (b,c) a b
fstLens f (a,b) = fmap (\x -> (x,b)) (f a)
 
sndLens :: LensFamily (a,b) (a,c) b c
sndLens f (a,b) = fmap ((,) a) (f b)
 
swap :: (a,b) -> (b,a)
swap (a,b) = (b,a)
 
swapped :: LensFamily (a,b) (c,d) (b,a) (d,c)
swapped = iso swap swap
 

These can also build 'traditional' lenses:

 
negated :: Num a => Lens a a
negated = iso negate negate
 

And since Lens and LensFamily are both type aliases, we can freely mix and match lenses with lens families:

 
ghci> (1:+2,3) ^.fstLens.realLens
1.0
ghci> fstLens . realLens ^= 4 $ (1:+2,3)
(4.0 :+ 2.0,3)
 

But, we can now change types with our lens updates!

 
ghci> (fstLens . sndLens ^= "hello") ((1,()),3)
((1,"hello"),3)
 

We can even do things like use the combinator

 
traverseLens :: ((c -> c) -> a -> b) -> a -> b
traverseLens f = f id
 

to project a Functor out through an appropriate lens family:

 
ghci> :t traverseLens (fstLens . sndLens)
traverseLens (fstLens . sndLens)
  :: Functor f => ((a, f b), c) -> f ((a, b), c)
 

That takes care of polymorphic updates.

Why is it a Lens Family?

So, why do I use the term "lens family" rather than "polymorphic lens"?

In order for the lens laws to hold, the 4 types parameterizing our lens family must be interrelated.

In particular you need to be able to put back (with ^=) what you get out of the lens (with ^.) and put multiple times.

This effectively constrains the space of possible legal lens families to those where there exists an index kind i, and two type families outer :: i -> *, and inner :: i -> *. If this were a viable type signature, then each lens family would actually have 2 parameters, yielding something like:

 
-- pseudo-Haskell
-- type LensFamily outer inner =
--    forall a b. LensFamily (outer a) (outer b) (inner a) (inner b)
 

but you can't pass in type families as arguments like that, and even if you could, their lack of injectivity doesn't give the type checker enough to work with to compose your lenses. By specifying all 4 type arguments independently, we give the compiler enough to work with. But since the arguments aren't just freely polymorphic and are instead related by these index types, I'm choosing to call them "lens families" rather than "polymorphic lenses".

Getters

Note, we didn't use the full polymorphism of the van Laarhoven lenses in the signatures of (^.), (%=) and (^=) above.

What happens when we restrict the type of Functor we're allowed to pass to our lens?

If we generalize the type of our getter ever so slightly from the type we pass to (^.) to permit composition, we get:

 
type Getter a c = forall r d b. (c -> Getting r d) -> a -> Getting r b
 

and we can make getters out of arbitrary Haskell functions that we have lying around with

 
-- | build a getter out of a function
getting :: (a -> b) -> Getter a b
getting g f = Getting . got . f . g
 

For example:

 
getFst :: Getter (a,b) a
getFst = getting fst
 
getSnd :: Getter (a,b) b
getSnd = getting snd
 

But this is particularly nice for things that can't be made into real lenses or lens families, because of loss of information:

 
getPhase :: RealFloat a => Getter (Complex a) a
getPhase = getting phase
 
getAbs, getSignum  :: Num a => Getter a a
getAbs = getting abs
getSignum = getting signum
 

Notably, magnitude and phase can't be made into legal lenses, because when the magnitude is 0, you lose the phase information.

These can be mixed and matched with other lenses when dereferencing with (^.)

 
ghci> (0,(1:+2,3)) ^. getting snd . fstLens . getting magnitude
2.23606797749979
 

But we get a type error when we attempt to write to a Getter.

 
ghci> getting magnitude ^= 12
<interactive>:2:1:
    Couldn't match expected type `Setting d0'
                with actual type `Getting r0 d1'
    Expected type: (c0 -> Setting d0) -> a1 -> Setting b1
      Actual type: (c0 -> Getting r0 d1) -> a0 -> Getting r0 b0
    In the return type of a call of `getting'
    In the first argument of `(^=)', namely `getting magnitude'
</interactive>

Setters

So what about write-only properties?

These have a less satisfying solution. We have to break our lens family structure slightly to make something that can strictly only be written to, by disabling the ability to read our current value entirely.

 
type Setter a d b = (() -> Setting d) -> a -> Setting b
 
setting :: (a -> d -> b) -> Setter a d b
setting f g a = Setting (f a (unsetting (g ())))
 

Now we can make setters out of functions that take two arguments:

 
plus, times :: Num a => Setter a a a
plus = setting (+)
times = setting (*)
 
 
ghci> setting (+) ^= 12 $ 32
44
ghci> fstLens . setting (*) ^= 12 $ (2,3)
(24,3)
 

However, these lenses have the unsatisfying property that they can only be placed last in the chain of lenses we're setting.

 
ghci> (setting (+) . realLens ^= 12) 1
<interactive>:15:16:
    Couldn't match expected type `()' with actual type `Complex d0'
    Expected type: (d0 -> Setting d0) -> () -> Setting b0
      Actual type: (d0 -> Setting d0)
                   -> Complex d0 -> Setting (Complex d0)
    In the second argument of `(.)', namely `realLens'
    In the first argument of `(^=)', namely `setting (+) . realLens'
</interactive>

This isn't surprising, if you consider that to compose data-lens lenses you need to use %= to chain setters.

Modifiers

So what do we need to do to make a lens we can only modify but not read?

Let's restore the lens family structure!

 
type Modifier a b c d = (c -> Setting d) -> a -> Setting b
 
modifying :: ((c -> d) -> a -> b) -> Modifier a b c d
modifying f g a = Setting (f (unsetting . g) a)
 

modifying makes a modify-only lens family you can modify using local information, but can't tell anyone about the contents of.

This lets us work with a lens over a variable number of elements in a structure, without worrying about a user accidentally "putting back" too many or too few entries.

 
ghci> modifying map %= (+1) $ [1,2,3]
[2,3,4]
 

They can be composed with other lenses:

 
ghci> modifying map . sndLens %= (+1) $ [("hello",1),("goodbye",2)]
[("hello",2),("goodbye",3)]
 

and unlike with a Setter, you can compose a Modifier with a Modifier:

 
modifying fmap . modifying fmap
  :: (Functor g, Functor f) =>
     (c -> Setting d) -> f (g c) -> Setting (f (g d))
 

but they cannot be read from directly:

 
ghci> [1,2,3] ^. modifying fmap
<interactive>:18:12:
    Couldn't match expected type `Getting c0 d0'
                with actual type `Setting d1'
    Expected type: (c0 -> Getting c0 d0) -> [t0] -> Getting c0 b1
      Actual type: Modifier a0 b0 c0 d1
    In the return type of a call of `modifying'
    In the second argument of `(^.)', namely `modifying map'
</interactive>

We can map over restricted domains:

 
reals :: (RealFloat a, RealFloat b) => Modifier (Complex a) (Complex b) a b
reals = modifying (\f (r :+ i) -> f r :+ f i)
 

and everything still composes:

 
ghci> reals %= (+1) $  1 :+ 2
2 :+ 3
ghci> fstLens . reals %= (+1) $ (1 :+ 2, 4)
(2.0 :+ 3.0,4)
 

These aren't limited to actions that map over the entire structure, however!

 
ghci> :m + Data.Map
ghci> modifying (`adjust` "goodbye") %= (+1) $
      fromList [("hello",1),("goodbye",2)]
fromList [("goodbye",3),("hello",1)]
 

This lets us update potentially nested structures where something may or may not be present, which was fairly tedious to do with earlier lens representations.

Both the former map-like example and the latter update-like behavior were commonly used examples in calls for partial lenses or 'multi-lenses', but here they are able to be implemented using a restricted form of a more traditional lens type, and moreover they compose cleanly with other lenses and lens families.

Rank-1 Lens Families

At the very start I mentioned that you can dispense with the need for Rank-2 Types. Doing so requires much more tedious type signatures as the LensFamily, Getter, Setter and Lens aliases are no longer legal. Also, if you want to take a lens as an argument and use it in multiple contexts (e.g. as both a getter and a setter), you'll need to clone it to obtain a lens family. For example, this fails:

 
ghci> :t \l y -> l ^= y ^. l + 1 $ y
<interactive>:1:19:
    Couldn't match expected type `Getting d0 d1'
                with actual type `Setting d0'
    Expected type: (d0 -> Getting d0 d1) -> a1 -> Getting d0 b1
      Actual type: (d0 -> Setting d0) -> a0 -> Setting b0
    In the second argument of `(^.)', namely `l'
    In the first argument of `(+)', namely `y ^. l'
</interactive>

But we can clone the supplied monomorphic lens using the composition of dlens and plens above, since the DataLensFamily completely characterizes the LensFamily with:

 
clone ::
  ((c -> Store c d d) -> (a -> Store c d b)) ->
  LensFamily a b c d
clone l f a = case l (Store id) a of
  Store g c -> fmap g (f c)
 

and then the following code type checks:

 
ghci> :t \l y -> clone l ^= y ^. clone l + 1 $ y
\l y -> clone l ^= y ^. clone l + 1 $ y
  :: Num d => ((c -> Store c d1 d1) -> a -> Store d d b) -> a -> b
 

This means you could implement an entire library to deal with lens families with restricted getters and setters and remain within the confines of Haskell 98. However, the type signatures are considerably less elegant than what becomes available when you simply add Rank2Types.

Conclusion

So, we've demonstrated that van Laarhoven lens families permit polymorphic update, let you offer lenses restricted to getters, setters, or modifiers only, and grant you easy composition with the existing (.) and id from the Prelude.

I think the practical existence and power of these combinators make a strong case for their use in any serious record reform proposal.

My thanks go to Russell O'Connor. He first noticed that you can generalize van Laarhoven lenses and proposed the clone combinator as a path to Haskell 98/2010 compatibility, while retaining the nicer composition model.

Where’s Waldo?
Posted Sun, 10 Jun 2012 by Edward Kmett
http://comonad.com/reader/2012/wheres-waldo/

No, I don't mean like this, but rather: if you spent any time trying to figure out xkcd's Umwelt April Fool comic this year, you may be interested in the Haskell source code. They used all sorts of information about you (the browser you were using, the resolution of your screen, the geocoding of the network address you came from, etc.) to serve up a custom web comic.

Today, davean posted to github the code for waldo, the engine he wrote to drive that comic.

Alas, he was not kind enough to actually supply the code for the umwelt comic strip itself, so you'll still be left wondering if the internet managed to find all of the Easter eggs. (Are they still Easter eggs when you release something a week before Easter?) You may find the list of links below useful if you want to get a feel for the different responses it gave people.

[ Article | xkcd's Forum | Hacker News | /r/haskell ]

[Update: Jun 10, 9:09pm] davean just posted a rather insightful post mortem of the development of waldo that talks a bit about why xkcd uses Haskell internally.

]]>
http://comonad.com/reader/2012/wheres-waldo/feed/ 0
Wadler’s Law Revisited http://comonad.com/reader/2012/wadlers-law-revisited/ http://comonad.com/reader/2012/wadlers-law-revisited/#comments Mon, 02 Apr 2012 19:40:47 +0000 Edward Kmett http://comonad.com/reader/?p=571 In light of the burgeoning length of the ongoing record discussion sparked off by Simon Peyton-Jones in October, I would like to propose that we recognize an extension to Wadler's law (supplied in bold), which I'll refer to as the "Weak Record Conjecture" below.

In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position.

  • 0. Semantics
  • 1. Syntax
  • 2. Lexical syntax
  • 3. Lexical syntax of comments
  • 4. Semantics of records

I base the Weak Record Conjecture on the stable of proposed record semantics, which now includes (among others) Simple Overloaded Record Fields (SORF), Agda-derived Records (ADR), Frege-derived Records (FDR), Type-Punning Declared Overloaded Record Fields (TPDORF), Syntax Directed Name Resolution, Type Indexed Records and the less seriously proposed Homotopy Extensional Record Proposal (HERP) and Dependent Extensional Record Proposal (DERP).

There is an additional option implied but not stated in all of this, which is the option to "Leave Well Enough Alone" (LWEA?), since you can always Man Up and Learn Lenses (MUALL). Given that every record proposal I've seen thus far breaks polymorphic field updates to some degree, and lenses are going to be compatible with whatever mess folks settle on, even preserving the status quo, this is the path I've chosen to take.

Now, based on the fact that discussions of syntax have already started, and the intuition supplied by the ordering already present in Wadler's insightful law, I would also like to conjecture that an even stronger version of Wadler's law might be stated, the "Strong Record Conjecture".

In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position.

  • 0. Semantics
  • 1. Syntax
  • 2. Lexical syntax
  • 3. Lexical syntax of comments
  • 4. Semantics of records
  • 5. Syntax of records
  • 6. Lexical syntax of records

Under the Strong Record Conjecture, even in the unlikely event that universal accord could be reached on record semantics today — 164 days into this discussion — we'd still be due for at least another 3 years (328 + 656 days) of backlogged complaining over the syntax before anything gets done.
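For concreteness, the arithmetic behind that estimate can be sketched in a couple of definitions (the 164-day calibration comes straight from the text; the function names are mine, and this is every bit as tongue-in-cheek as the law itself):

```haskell
-- Under the Strong Record Conjecture, discussion time for the item at
-- a given position is proportional to 2^position. Calibrating with the
-- observed 164 days at position 4 (record semantics):
discussionDays :: Int -> Double
discussionDays position = (164 / 2 ^ 4) * 2 ^ position

-- The remaining backlog: record syntax (position 5) and record
-- lexical syntax (position 6).
backlogDays :: Double
backlogDays = discussionDays 5 + discussionDays 6  -- 328 + 656 = 984
```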

The evidence thus far is pretty strong that at least the Weak Record Conjecture holds — if anything the exponent is too small and may require further calibration, but we don't have much data yet on the Strong Record Conjecture. Consequently, and in the name of science, I plan to check in again on the record debate in 3 years. Hopefully by then we will have resolved the remaining semantic issues, and will have a better feel for the necessary time commitment required to resolve items 5 and 6.

]]>
http://comonad.com/reader/2012/wadlers-law-revisited/feed/ 3
Searching Infinity Parametrically http://comonad.com/reader/2011/searching-infinity/ http://comonad.com/reader/2011/searching-infinity/#comments Sun, 25 Dec 2011 06:19:43 +0000 Edward Kmett http://comonad.com/reader/?p=510 Andrej Bauer recently gave a really nice talk on how you can exploit side-effects to make a faster version of Martin Escardo's pseudo-paradoxical combinators.

A video of his talk is available over on his blog, and his presentation is remarkably clear, and would serve as a good preamble to the code I'm going to present below.

Andrej gave a related invited talk back at MSFP 2008 in Iceland, and afterwards over lunch I cornered him (with Dan Piponi) and explained how you could use parametricity to close over the side-effects of monads (or arrows, etc) but I think that trick was lost in the chaos of the weekend, so I've chosen to resurrect it here, and improve it to handle some of his more recent performance enhancements, and show that you don't need side-effects to speed up the search after all!

First, we'll need to import a few things:

 
{-# LANGUAGE RankNTypes #-}
 
import Data.Maybe (fromMaybe)
import Control.Applicative
import Data.IntMap (IntMap)
import qualified Data.IntMap as IntMap
import Control.Monad
import Control.Monad.Trans.Class
import Data.Functor.Identity
 

What we are looking for is an implementation of Hilbert's epsilon.

This is a formal mechanism for eliminating existentials over some non-empty set X by defining a function

 
ε: (X -> Prop) -> X
 

such that if there exists an x in X such that p(x) holds, then p(ε(p)) holds.

As noted by Andrej, we could reify this constructively as a function "epsilon :: (X -> Bool) -> X" for some X.

Now, for some sets, Hilbert's epsilon is really easy to define. If X is a finite set, you can just exhaustively enumerate all of the options returning a member of X such that the property holds if any of them do, otherwise since X is non-empty, just return one of the elements that you tested.
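A minimal sketch of the finite case (epsilonFinite is a hypothetical helper of my own, representing the non-empty set as a list; it is not code from the post):

```haskell
-- Hilbert's epsilon for a finite, non-empty set represented as a list:
-- return a witness if one exists, otherwise any element at all.
epsilonFinite :: [x] -> (x -> Bool) -> x
epsilonFinite xs p = case filter p xs of
  (x : _) -> x       -- some element satisfies p; return it
  []      -> head xs -- none does; the set is non-empty, so pick one
```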

This would be a pretty boring article and I'd be back to eating Christmas dinner with my family if that was all there was to it. However, certain infinite spaces can also be searched.

Last year, Luke Palmer wrote a post on "Searchable Data Types" that might also serve as a good introduction. In that article he led off with the easiest infinite space to search, the lazy naturals, or the 'one point compactification of the naturals'. That is to say the natural numbers extended with infinity.

 
data LazyNat = Zero | Succ LazyNat
infinity :: LazyNat
infinity = Succ infinity
 

Now we can implement Palmer's epsilon (called lyingSearch in his article).

 
palmer :: (LazyNat -> Bool) -> LazyNat
palmer p
  | p Zero = Zero
  | otherwise = Succ $ palmer $ p . Succ
 

The trick to making this work is that we require the predicate to terminate in a bounded amount of time no matter what input it is given. Since we're working with the naturals extended with infinity, if no natural satisfies the predicate we'll just keep returning a longer and longer chain of Succs, effectively yielding infinity.

To check whether the returned number satisfies the predicate you can always use p (palmer p). The predicate is required to terminate in finite time, even when given infinity, so this will yield a Bool and not bottom out unless the user-supplied predicate takes an unbounded amount of time.
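As a worked check (mine, not from the article): a predicate that inspects only a bounded prefix of its argument terminates even on infinity, and palmer finds the least natural satisfying it.

```haskell
data LazyNat = Zero | Succ LazyNat

infinity :: LazyNat
infinity = Succ infinity

palmer :: (LazyNat -> Bool) -> LazyNat
palmer p
  | p Zero = Zero
  | otherwise = Succ $ palmer $ p . Succ

-- "Is n at least 3?" inspects at most three constructors, so it
-- terminates even when applied to infinity.
atLeastThree :: LazyNat -> Bool
atLeastThree (Succ (Succ (Succ _))) = True
atLeastThree _ = False

toInt :: LazyNat -> Int
toInt Zero = 0
toInt (Succ n) = 1 + toInt n
```

Here palmer atLeastThree evaluates to Succ (Succ (Succ Zero)), and p (palmer p) confirms the witness.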

I posted a reply to Luke's article when it came up on reddit which included a Hinze-style generic implementation of his lyingSearch predicate, which you can see now is just Hilbert's epsilon for arbitrary recursive polynomial data types.

http://www.reddit.com/r/haskell/comments/e7nij/searchable_data_types/c15zs6l

Another space we can search is the Cantor space 2^N.

 
type Cantor = Integer -> Bool
 

With that we jump clear from countable infinity to uncountable infinity, but it can still be searched in finite time!

This is the space we'll be paying attention to for the rest of this article.

First we'll define how to "book a room in Hilbert's Hotel."

 
infixr 0 #
(#) :: Bool -> Cantor -> Cantor
(x # a) 0 = x
(x # a) i = a (i - 1)
 

Then this can be used to obtain the following implementation of Hilbert's epsilon for the Cantor space, attributed by Andrej to Ulrich Berger.

 
berger :: (Cantor -> Bool) -> Cantor
berger p =
  if ex $ \a -> p $ False # a
  then False # berger $ \a -> p $ False # a
  else True  # berger $ \a -> p $ True # a
  where ex q = q (berger q)
 

This version is particularly close in structure to the one for searching the LazyNats, but it is dreadfully slow!

It would be nice to be able to search the space faster and that is just what Martin Escardo's improved version does, through a more sophisticated divide and conquer technique.

 
escardo :: (Cantor -> Bool) -> Cantor
escardo p = go x l r where
  go x l r n =  case divMod n 2 of
    (0, 0) -> x
    (q, 1) -> l q
    (q, 0) -> r $ q-1
  x = ex $ \l -> ex $ \r -> p $ go True l r
  l = escardo $ \l -> ex $ \r -> p $ go x l r
  r = escardo $ \r -> p $ go x l r
  ex q = q (escardo q)
 

To proceed from here I'll need a State monad:

 
newtype S s a = S { runS :: s -> (a, s) }
 
instance Functor (S s) where
  fmap f (S m) = S $ \s -> case m s of
    (a, s') -> (f a, s')
 
instance Applicative (S s) where
  pure = return
  (<*>) = ap
 
instance Monad (S s) where
  return a = S $ \s -> (a, s)
  S m >>= k = S $ \s -> case m s of
    (a, s') -> runS (k a) s'
 

And now we've reached the point. From here, Andrej's pure code ends, and his side-effecting OCaml and custom programming language start. The first thing he does is compute the modulus of continuity by using a side-effect that writes to a reference cell, which he very carefully ensures doesn't leak out of scope, so he doesn't have to concern himself with the proposition code editing the value of the reference.

 
let mu f a =
  let r = ref 0 in
  let b n = (r := max n !r; a n) in
    ignore (f b);
    !r
 

To obtain the same effect we'll instead make a predicate using the state monad to model the single reference cell.

 
-- bad
modulus :: (Num b, Ord s) =>
  ((s -> S s a) -> S b c) -> (s -> a) -> b
 

We can mash b and s together, and try to make the ordering and number agree by claiming that it is instead Real and we'd get the slightly more reasonable looking type:

 
-- still bad
modulus :: Real a =>
  ((a -> S n b) -> S n c) -> (a -> b) -> a
 

In the imperative code, lexical scoping had ensured that no other code could edit the reference cell, but with this type we don't have that. The predicate is allowed to use arbitrary state actions to muck with the modulus of continuity even though the only thing that should be editing it is the wrapper beta that we placed around alpha.

But how can we ensure that the end user couldn't gain access to any of the additional functionality from the monad? Parametricity!

 
-- getting better
modulus :: Real a =>
  (forall f. Monad f => (a -> f b) -> f c) ->
  (a -> b) ->
  a
 

Here the only thing you are allowed to assume about f is that it forms a monad. This gives you access to return and >>=, but the predicate can't do anything interesting with them. All it can do is work with what is effectively the identity monad, since it knows no additional properties!

We can have mercy on the end user and give them a little bit more syntactic sugar, since it doesn't cost us anything to let them also have access to the Applicative instance.

 
-- good
modulus :: Real a =>
  (forall f. (Monad f, Applicative f) => (a -> f b) -> f c) ->
  (a -> b) ->
  a
 

With that we can show that Andrej's modulus of continuity calculation does not need side-effects!
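The post only ever gives modulus's type; one way to fill in the body (my sketch, using the State monad from transformers in place of the hand-rolled S above, and not necessarily the author's definition) records the largest point at which the predicate samples its argument:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad.Trans.State (execState, modify)

-- Run the predicate in a state monad whose state tracks the largest
-- argument at which it sampled the function f.
modulus :: Real a
        => (forall m. (Monad m, Applicative m) => (a -> m b) -> m c)
        -> (a -> b) -> a
modulus q f = execState (q beta) 0
  where beta n = do modify (max n)  -- remember the largest sample point
                    return (f n)
```

With this definition, modulus (\a -> a 10 >>= a) (\n -> n * n) evaluates to 100, matching the session below.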

> modulus (\a -> a 10 >>= a) (\n -> n * n)
100
 

Admittedly plumbing around the monadic values in our proposition is a bit inconvenient.

His next example was written in a custom OCaml-like programming language. To translate his effect type into Haskell using parametricity, we'll need a CPS'd state monad, so we can retry from the current continuation while we track a map of assigned values.

 
newtype K r s a = K { runK :: (a -> s -> r) -> s -> r }
 
instance Functor (K r s) where
  fmap = liftM
 
instance Applicative (K r s) where
  pure = return
  (<*>) = ap
 
instance Monad (K r s) where
  return a = K $ \k -> k a
  K m >>= f = K $ \k -> m $ \a -> runK (f a) k
 

For those of you who have been paying attention to my previous posts, K r s is just a Codensity monad!

 
neighborhood ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  IntMap Bool
neighborhood phi = snd $ runK (phi beta) (,) IntMap.empty where
  beta n = K $ \k s -> case IntMap.lookup n s of
    Just b -> k b s
    Nothing -> case k True (IntMap.insert n True s) of
      (False, _) -> k False (IntMap.insert n False s)
      r -> r
 

With that we can adapt the final version of Hilbert's epsilon for the Cantor space that Andrej provided to run in pure Haskell.

 
bauer ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  Cantor
bauer p = \n -> fromMaybe True $ IntMap.lookup n m where
  m = neighborhood p
 

With a little work you can implement a version of an exists and forAll predicate on top of that by running them through the identity monad.

 
exists ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  Bool
forAll ::
  (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool) ->
  Bool
 

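The bodies are left unstated; one way to fill them in (a sketch of what "running them through the identity monad" might look like, not necessarily the code the author had in mind) re-evaluates the predicate at the Cantor point found by the search:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad (ap, liftM)
import Data.Functor.Identity (Identity (..))
import Data.IntMap (IntMap)
import qualified Data.IntMap as IntMap
import Data.Maybe (fromMaybe)

-- Restating the CPS'd state monad and the neighborhood search above.
newtype K r s a = K { runK :: (a -> s -> r) -> s -> r }

instance Functor (K r s) where fmap = liftM
instance Applicative (K r s) where
  pure a = K $ \k -> k a
  (<*>) = ap
instance Monad (K r s) where
  K m >>= f = K $ \k -> m $ \a -> runK (f a) k

neighborhood :: (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool)
             -> IntMap Bool
neighborhood phi = snd $ runK (phi beta) (,) IntMap.empty where
  beta n = K $ \k s -> case IntMap.lookup n s of
    Just b -> k b s
    Nothing -> case k True (IntMap.insert n True s) of
      (False, _) -> k False (IntMap.insert n False s)
      r -> r

-- exists evaluates the predicate at the Cantor point the search found,
-- instantiating f at Identity; forAll is its De Morgan dual.
exists, forAll :: (forall f. (Applicative f, Monad f) => (Int -> f Bool) -> f Bool)
               -> Bool
exists phi = runIdentity $ phi (Identity . c) where
  m = neighborhood phi
  c n = fromMaybe True (IntMap.lookup n m)
forAll phi = not $ exists $ \a -> not <$> phi a
```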
I've gone further in playing with this idea, using monad homomorphisms rather than simply relying on the canonical homomorphism from the identity monad. You can get the gist of it here:

https://gist.github.com/1518767

This permits the predicates themselves to embed some limited monadic side-effects, but then you get more extensional vs. intensional issues.

An obvious direction from here is to fiddle with a version of Martin Escardo's search monad that takes advantage of these techniques, but I'll leave the exploration of these ideas to the reader for now and go enjoy Christmas dinner.

Happy Holidays,
Edward Kmett

]]>
http://comonad.com/reader/2011/searching-infinity/feed/ 11
What Constraints Entail: Part 2 http://comonad.com/reader/2011/what-constraints-entail-part-2/ http://comonad.com/reader/2011/what-constraints-entail-part-2/#comments Thu, 03 Nov 2011 07:23:53 +0000 Edward Kmett http://comonad.com/reader/?p=461 Last time we derived an entailment relation for constraints, now let's get some use out of it.

Reflecting Classes and Instances

Most of the implications we use on a day to day basis come from our class and instance declarations, but last time we only really dealt with constraint products.

For example given:

 
#if 0
class Eq a => Ord a
instance Eq a => Eq [a]
#endif
 

we could provide the following witnesses

 
ordEq :: Ord a :- Eq a
ordEq = Sub Dict
 
eqList :: Eq a :- Eq [a]
eqList = Sub Dict
 

But this would require a lot of names and become remarkably tedious.

So let's define classes to reflect the entailment provided by class definitions and instance declarations, and then use them to reflect themselves.

 
class Class b h | h -> b where
  cls :: h :- b
 
infixr 9 :=>
class b :=> h | h -> b where
  ins :: b :- h
 
instance Class () (Class b a) where cls = Sub Dict
instance Class () (b :=> a) where cls = Sub Dict
 

Now we can reflect classes and instances as instances of Class and (:=>) respectively with:

 
-- class Eq a => Ord a where ...
instance Class (Eq a) (Ord a) where cls = Sub Dict
-- instance Eq a => Eq [a] where ...
instance Eq a :=> Eq [a] where ins = Sub Dict
 

That said, instances of Class and (:=>) should never require a context themselves, because the class and instance declarations they reflect can't take one, so we can define the following instances, which bootstrap the instances of (:=>) for Class and (:=>) once and for all.

 
#ifdef UNDECIDABLE
instance Class b a => () :=> Class b a where ins = Sub Dict
instance (b :=> a) => () :=> b :=> a where ins = Sub Dict
#endif
 

These two instances are both decidable, and following a recent bug fix, the current version of GHC HEAD supports them, but my local version isn't that recent, hence the #ifdef.

We can also give admissible-if-never-stated instances of Class and (:=>) for () as well.

 
instance Class () () where cls = Sub Dict
instance () :=> () where ins = Sub Dict
 

Reflecting the Prelude

So now that we've written a handful of instances, let's take the plunge and just reflect the entire Prelude, and (most of) the instances for the other modules we've loaded.

 
instance Class () (Eq a) where cls = Sub Dict
instance () :=> Eq () where ins = Sub Dict
instance () :=> Eq Int where ins = Sub Dict
instance () :=> Eq Bool where ins = Sub Dict
instance () :=> Eq Integer where ins = Sub Dict
instance () :=> Eq Float where ins = Sub Dict
instance () :=> Eq Double where ins = Sub Dict
instance Eq a :=> Eq [a] where ins = Sub Dict
instance Eq a :=> Eq (Maybe a) where ins = Sub Dict
instance Eq a :=> Eq (Complex a) where ins = Sub Dict
instance Eq a :=> Eq (Ratio a) where ins = Sub Dict
instance (Eq a, Eq b) :=> Eq (a, b) where ins = Sub Dict
instance (Eq a, Eq b) :=> Eq (Either a b) where ins = Sub Dict
instance () :=> Eq (Dict a) where ins = Sub Dict
instance () :=> Eq (a :- b) where ins = Sub Dict
 
 
instance Class (Eq a) (Ord a) where cls = Sub Dict
instance () :=> Ord () where ins = Sub Dict
instance () :=> Ord Bool where ins = Sub Dict
instance () :=> Ord Int where ins = Sub Dict
instance () :=> Ord Integer where ins = Sub Dict
instance () :=> Ord Float where ins = Sub Dict
instance () :=> Ord Double where ins = Sub Dict
instance () :=> Ord Char where ins = Sub Dict
instance Ord a :=> Ord (Maybe a) where ins = Sub Dict
instance Ord a :=> Ord [a] where ins = Sub Dict
instance (Ord a, Ord b) :=> Ord (a, b) where ins = Sub Dict
instance (Ord a, Ord b) :=> Ord (Either a b) where ins = Sub Dict
instance Integral a :=> Ord (Ratio a) where ins = Sub Dict
instance () :=> Ord (Dict a) where ins = Sub Dict
instance () :=> Ord (a :- b) where ins = Sub Dict
 
 
instance Class () (Show a) where cls = Sub Dict
instance () :=> Show () where ins = Sub Dict
instance () :=> Show Bool where ins = Sub Dict
instance () :=> Show Ordering where ins = Sub Dict
instance () :=> Show Char where ins = Sub Dict
instance Show a :=> Show (Complex a) where ins = Sub Dict
instance Show a :=> Show [a] where ins = Sub Dict
instance Show a :=> Show (Maybe a) where ins = Sub Dict
instance (Show a, Show b) :=> Show (a, b) where ins = Sub Dict
instance (Show a, Show b) :=> Show (Either a b) where ins = Sub Dict
instance (Integral a, Show a) :=> Show (Ratio a) where ins = Sub Dict
instance () :=> Show (Dict a) where ins = Sub Dict
instance () :=> Show (a :- b) where ins = Sub Dict
 
 
instance Class () (Read a) where cls = Sub Dict
instance () :=> Read () where ins = Sub Dict
instance () :=> Read Bool where ins = Sub Dict
instance () :=> Read Ordering where ins = Sub Dict
instance () :=> Read Char where ins = Sub Dict
instance Read a :=> Read (Complex a) where ins = Sub Dict
instance Read a :=> Read [a] where ins = Sub Dict
instance Read a :=> Read (Maybe a) where ins = Sub Dict
instance (Read a, Read b) :=> Read (a, b) where ins = Sub Dict
instance (Read a, Read b) :=> Read (Either a b) where ins = Sub Dict
instance (Integral a, Read a) :=> Read (Ratio a) where ins = Sub Dict
 
 
instance Class () (Enum a) where cls = Sub Dict
instance () :=> Enum () where ins = Sub Dict
instance () :=> Enum Bool where ins = Sub Dict
instance () :=> Enum Ordering where ins = Sub Dict
instance () :=> Enum Char where ins = Sub Dict
instance () :=> Enum Int where ins = Sub Dict
instance () :=> Enum Integer where ins = Sub Dict
instance () :=> Enum Float where ins = Sub Dict
instance () :=> Enum Double where ins = Sub Dict
instance Integral a :=> Enum (Ratio a) where ins = Sub Dict
 
 
instance Class () (Bounded a) where cls = Sub Dict
instance () :=> Bounded () where ins = Sub Dict
instance () :=> Bounded Ordering where ins = Sub Dict
instance () :=> Bounded Bool where ins = Sub Dict
instance () :=> Bounded Int where ins = Sub Dict
instance () :=> Bounded Char where ins = Sub Dict
instance (Bounded a, Bounded b) :=> Bounded (a,b) where ins = Sub Dict
 
 
instance Class () (Num a) where cls = Sub Dict
instance () :=> Num Int where ins = Sub Dict
instance () :=> Num Integer where ins = Sub Dict
instance () :=> Num Float where ins = Sub Dict
instance () :=> Num Double where ins = Sub Dict
instance RealFloat a :=> Num (Complex a) where ins = Sub Dict
instance Integral a :=> Num (Ratio a) where ins = Sub Dict
 
 
instance Class (Num a, Ord a) (Real a) where cls = Sub Dict
instance () :=> Real Int where ins = Sub Dict
instance () :=> Real Integer where ins = Sub Dict
instance () :=> Real Float where ins = Sub Dict
instance () :=> Real Double where ins = Sub Dict
instance Integral a :=> Real (Ratio a) where ins = Sub Dict
 
 
instance Class (Real a, Enum a) (Integral a) where cls = Sub Dict
instance () :=> Integral Int where ins = Sub Dict
instance () :=> Integral Integer where ins = Sub Dict
 
 
instance Class (Num a) (Fractional a) where cls = Sub Dict
instance () :=> Fractional Float where ins = Sub Dict
instance () :=> Fractional Double where ins = Sub Dict
instance RealFloat a :=> Fractional (Complex a) where ins = Sub Dict
instance Integral a :=> Fractional (Ratio a) where ins = Sub Dict
 
 
instance Class (Fractional a) (Floating a) where cls = Sub Dict
instance () :=> Floating Float where ins = Sub Dict
instance () :=> Floating Double where ins = Sub Dict
instance RealFloat a :=> Floating (Complex a) where ins = Sub Dict
 
 
instance Class (Real a, Fractional a) (RealFrac a) where cls = Sub Dict
instance () :=> RealFrac Float where ins = Sub Dict
instance () :=> RealFrac Double where ins = Sub Dict
instance Integral a :=> RealFrac (Ratio a) where ins = Sub Dict
 
 
instance Class (RealFrac a, Floating a) (RealFloat a) where cls = Sub Dict
instance () :=> RealFloat Float where ins = Sub Dict
instance () :=> RealFloat Double where ins = Sub Dict
 
 
instance Class () (Monoid a) where cls = Sub Dict
instance () :=> Monoid () where ins = Sub Dict
instance () :=> Monoid Ordering where ins = Sub Dict
instance () :=> Monoid [a] where ins = Sub Dict
instance Monoid a :=> Monoid (Maybe a) where ins = Sub Dict
instance (Monoid a, Monoid b) :=> Monoid (a, b) where ins = Sub Dict
 
 
instance Class () (Functor f) where cls = Sub Dict
instance () :=> Functor [] where ins = Sub Dict
instance () :=> Functor Maybe where ins = Sub Dict
instance () :=> Functor (Either a) where ins = Sub Dict
instance () :=> Functor ((->) a) where ins = Sub Dict
instance () :=> Functor ((,) a) where ins = Sub Dict
instance () :=> Functor IO where ins = Sub Dict
 
 
instance Class (Functor f) (Applicative f) where cls = Sub Dict
instance () :=> Applicative [] where ins = Sub Dict
instance () :=> Applicative Maybe where ins = Sub Dict
instance () :=> Applicative (Either a) where ins = Sub Dict
instance () :=> Applicative ((->) a) where ins = Sub Dict
instance () :=> Applicative IO where ins = Sub Dict
instance Monoid a :=> Applicative ((,) a) where ins = Sub Dict
 
 
instance Class (Applicative f) (Alternative f) where cls = Sub Dict
instance () :=> Alternative [] where ins = Sub Dict
instance () :=> Alternative Maybe where ins = Sub Dict
 
 
instance Class () (Monad f) where cls = Sub Dict
instance () :=> Monad [] where ins = Sub Dict
instance () :=> Monad ((->) a) where ins = Sub Dict
instance () :=> Monad (Either a) where ins = Sub Dict
instance () :=> Monad IO where ins = Sub Dict
 
 
instance Class (Monad f) (MonadPlus f) where cls = Sub Dict
instance () :=> MonadPlus [] where ins = Sub Dict
instance () :=> MonadPlus Maybe where ins = Sub Dict
 

Of course, the structure of these definitions is extremely formulaic, so when template-haskell builds against HEAD again, they should be able to be generated automatically using splicing and reify, which would reduce this from a wall of text to a handful of lines with better coverage!

An alternative using Default Signatures and Type Families

Many of the above definitions could have been streamlined by using default definitions. However, MPTCs do not currently support default signatures. We can, however, define Class and (:=>) using type families rather than functional dependencies. This enables us to use defaulting whenever the superclass or context is ().

 
#if 0
class Class h where
  type Sup h :: Constraint
  type Sup h = ()
  cls :: h :- Sup h
  default cls :: h :- ()
  cls = Sub Dict
 
class Instance h where
  type Ctx h :: Constraint
  type Ctx h = ()
  ins :: Ctx h :- h
  default ins :: h => Ctx h :- h
  ins = Sub Dict
 
instance Class (Class a)
instance Class (Instance a)
 
#ifdef UNDECIDABLE
instance Class a => Instance (Class a)
instance Instance a => Instance (Instance a)
#endif
 
instance Class ()
instance Instance ()
#endif
 

This seems at first to be a promising approach. Many instances are quite small:

 
#if 0
instance Class (Eq a)
instance Instance (Eq ())
instance Instance (Eq Int)
instance Instance (Eq Bool)
instance Instance (Eq Integer)
instance Instance (Eq Float)
instance Instance (Eq Double)
#endif
 

But those that aren't are considerably more verbose and are much harder to read off than the definitions using the MPTC based Class and (:=>).

 
#if 0
instance Instance (Eq [a]) where
  type Ctx (Eq [a]) = Eq a
  ins = Sub Dict
instance Instance (Eq (Maybe a)) where
  type Ctx (Eq (Maybe a)) = Eq a
  ins = Sub Dict
instance Instance (Eq (Complex a)) where
  type Ctx (Eq (Complex a)) = Eq a
  ins = Sub Dict
instance Instance (Eq (Ratio a)) where
  type Ctx (Eq (Ratio a)) = Eq a
  ins = Sub Dict
instance Instance (Eq (a, b)) where
  type Ctx (Eq (a,b)) = (Eq a, Eq b)
  ins = Sub Dict
instance Instance (Eq (Either a b)) where
  type Ctx (Eq (Either a b)) = (Eq a, Eq b)
  ins = Sub Dict
#endif
 

Having tested both approaches, the type family approach led to a ~10% larger file size, and was harder to read, so I remained with MPTCs even though it meant repeating "where ins = Sub Dict" over and over.

In a perfect world, we'd gain the ability to use default signatures with multiparameter type classes, and the result would be considerably shorter and easier to read!

Fake Superclasses

Now, that we have all this machinery, it'd be nice to get something useful out of it. Even if we could derive it by other means, it'd let us know we weren't completely wasting our time.

Let's define a rather horrid helper, which we'll only use where a and b are the same constraint being applied to a newtype wrapper of the same type, so we can rely on the fact that the dictionaries have the same representation.

 
evil :: a :- b
evil = unsafeCoerce refl
 

We often bemoan the fact that we can't use Applicative sugar given just a Monad, since Applicative wasn't made a superclass of Monad due to the inability of the Haskell 98 report to foresee the future invention of Applicative.

There are rather verbose options to get Applicative sugar for your Monad, or to pass it to something that expects an Applicative. For instance you can use WrappedMonad from Control.Applicative. We reflect the relevant instance here.

 
instance Monad m :=> Applicative (WrappedMonad m) where ins = Sub Dict
 

Using that instance and the combinators defined previously, we can obtain the following

 
applicative :: forall m a. Monad m => (Applicative m => m a) -> m a
applicative m =
  m \\ trans (evil :: Applicative (WrappedMonad m) :- Applicative m) ins
 

Here ins is instantiated to the instance of (:=>) above, so we use trans to compose ins :: Monad m :- Applicative (WrappedMonad m) with evil :: Applicative (WrappedMonad m) :- Applicative m to obtain an entailment of type Monad m :- Applicative m in local scope, and then apply that transformation to discharge the Applicative obligation on m.

Now, we can use this to write definitions.

 
(<&>) :: Monad m => m a -> m b -> m (a, b)
m <&> n = applicative $ (,) <$> m <*> n
 

Which compares rather favorably to the more correct

 
(<&>) :: Monad m => m a -> m b -> m (a, b)
m <&> n = unwrapMonad $ (,) <$> WrapMonad m <*> WrapMonad n
 

especially considering you still have access to any other instances on m you might want to bring into scope without having to use deriving to lift them onto the newtype!
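Collecting the machinery into one self-contained sketch (restating Dict, (:-), (\\), and refl from Part 1, and taking trans to be the evident composition of entailments), the whole trick compiles and runs; this is my assembly of the post's pieces rather than a verbatim excerpt:

```haskell
{-# LANGUAGE ConstraintKinds, FlexibleInstances, FunctionalDependencies,
             GADTs, MultiParamTypeClasses, RankNTypes, ScopedTypeVariables,
             TypeOperators #-}
import Control.Applicative (WrappedMonad (..))
import Unsafe.Coerce (unsafeCoerce)

data Dict a where Dict :: a => Dict a

infixr 9 :-
newtype a :- b = Sub (a => Dict b)

infixl 1 \\
(\\) :: a => (b => r) -> (a :- b) -> r
r \\ Sub Dict = r

refl :: a :- a
refl = Sub Dict

-- Entailment composes; this is the evident definition from Part 1.
trans :: (b :- c) -> (a :- b) -> a :- c
trans f g = Sub $ Dict \\ f \\ g

-- Only sound when the two constraints apply to representationally
-- identical types, e.g. a type and its newtype wrapper.
evil :: a :- b
evil = unsafeCoerce refl

class b :=> h | h -> b where ins :: b :- h
instance Monad m :=> Applicative (WrappedMonad m) where ins = Sub Dict

applicative :: forall m a. Monad m => (Applicative m => m a) -> m a
applicative m =
  m \\ trans (evil :: Applicative (WrappedMonad m) :- Applicative m) ins

(<&>) :: Monad m => m a -> m b -> m (a, b)
m <&> n = applicative $ (,) <$> m <*> n
```

With only a Monad constraint in scope, m <&> n now gets Applicative sugar; for instance, Just 1 <&> Just 'x' gives Just (1, 'x').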

Similarly you can borrow <|> and empty locally for use by your MonadPlus with:

 
instance MonadPlus m :=> Alternative (WrappedMonad m) where ins = Sub Dict
 
alternative :: forall m a. MonadPlus m => (Alternative m => m a) -> m a
alternative m =
  m \\ trans (evil :: Alternative (WrappedMonad m) :- Alternative m) ins
 

The correctness of this of course relies upon the convention that any Applicative and Alternative your Monad may have should agree with its Monad instance, so even if you use Alternative or Applicative in a context where the actual Applicative or Alternative instance for your particular type m is in scope, it shouldn't matter beyond a little bit of efficiency which instance the compiler picks to discharge the Applicative or Alternative obligation.

Note: It isn't that the Constraint kind is invalid, but rather that using unsafeCoerce judiciously we can bring into scope instances that don't exist for a given type by substituting those from a different type which have the right representation.

[Source]

]]>
http://comonad.com/reader/2011/what-constraints-entail-part-2/feed/ 13
What Constraints Entail: Part 1 http://comonad.com/reader/2011/what-constraints-entail-part-1/ http://comonad.com/reader/2011/what-constraints-entail-part-1/#comments Thu, 03 Nov 2011 05:46:11 +0000 Edward Kmett http://comonad.com/reader/?p=430 Max Bolingbroke has done a wonderful job on adding Constraint kinds to GHC.

Constraint Kinds adds a new kind Constraint, such that Eq :: * -> Constraint, Monad :: (* -> *) -> Constraint, but since it is a kind, we can make type families for constraints, and even parameterize constraints on constraints.

So, let's play with them and see what we can come up with!

A Few Extensions

First, we'll need a few language features:

 
{-# LANGUAGE
  CPP,
  ScopedTypeVariables,
  FlexibleInstances,
  FlexibleContexts,
  ConstraintKinds,
  KindSignatures,
  TypeOperators,
  FunctionalDependencies,
  Rank2Types,
  StandaloneDeriving,
  GADTs
  #-}
 

Because of the particular version of GHC I'm using I'll also need

 
{-# LANGUAGE UndecidableInstances #-}
#define UNDECIDABLE
 

but this bug has been fixed in the current version of GHC Head. I'll be explicit about any instances that need UndecidableInstances by surrounding them in an #ifdef UNDECIDABLE block.

Explicit Dictionaries

So with that out of the way, let's import some definitions

 
import Control.Monad
import Control.Monad.Instances
import Control.Applicative
import Data.Monoid
import Data.Complex
import Data.Ratio
import Unsafe.Coerce
 

and make one of our own that shows what we get out of making Constraints into a kind we can manipulate like any other.

 
data Dict a where
  Dict :: a => Dict a
 

Previously, we could make a Dict-like data type for any one particular class constraint that we wanted to capture, but now we can write this type once and for all. The act of pattern matching on the Dict constructor will bring the constraint 'a' into scope.

Of course, in the absence of incoherent and overlapping instances there is at most one dictionary of a given type, so we could make instances, like we can for any other data type, but standalone deriving is smart enough to figure these out for me. (Thanks copumpkin!)

 
deriving instance Eq (Dict a)
deriving instance Ord (Dict a)
deriving instance Show (Dict a)
 

If we're willing to turn on UndecidableInstances to enable the polymorphic constraint we can even add:

 
#ifdef UNDECIDABLE
deriving instance a => Read (Dict a)
instance a => Monoid (Dict a) where
  mappend Dict Dict = Dict
  mempty = Dict
#endif
 

and similar polymorphically constrained instances for Enum, Bounded, etc.

Entailment

For that we'll need a notion of entailment.

 
infixr 9 :-
newtype a :- b = Sub (a => Dict b)
 
instance Eq (a :- b) where
  _ == _ = True
 
instance Ord (a :- b) where
  compare _ _ = EQ
 
instance Show (a :- b) where
  showsPrec d _ = showParen (d > 10) $
    showString "Sub Dict"
 

Here we're saying that Sub takes one argument, which is a computation that, when implicitly given a constraint of type a, can give back a dictionary for the constraint b. Moreover, as a newtype it adds no overhead that isn't already present in manipulating terms of type (a => Dict b) directly.

The simplest thing we can define with this is that entailment is reflexive.

 
refl :: a :- a
refl = Sub Dict
 

Max has already written up a nice restricted monad example using these. What I want to play with today is the category of substitutability of constraints, but first there are a few observations I need to make.

ConstraintKinds overloads () and (a,b) to represent the trivial constraint and the product of two constraints respectively.

The latter is done with a bit of a hack, which we'll talk about in a minute, but we can use the former as a terminal object for our category of entailments.
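
That claim about the terminal object can be made concrete. A minimal sketch, repeating the Dict and (:-) definitions so it stands alone (the name top is mine, not part of the development above): every constraint entails the trivial constraint.

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes, TypeOperators #-}

data Dict a where
  Dict :: a => Dict a

newtype a :- b = Sub (a => Dict b)

-- () is the trivial constraint: every constraint entails it,
-- making it a terminal object in the category of entailments.
top :: a :- ()
top = Sub Dict
```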

 
weaken1 :: (a, b) :- a
weaken1 = Sub Dict
 
weaken2 :: (a, b) :- b
weaken2 = Sub Dict
 

Constraints are idempotent, so we can duplicate one, perhaps as a prelude to transforming one of them into something else.

 
contract :: a :- (a, a)
contract = Sub Dict
 

But to do anything much more complicated, we're going to need a notion of substitution, letting us use our entailment relation to satisfy obligations.

 
infixl 1 \\ -- required comment
(\\) :: a => (b => r) -> (a :- b) -> r
r \\ Sub Dict = r
 

The type says that given that a constraint a can be satisfied, a computation that needs a constraint of type b to be satisfied in order to obtain a result, and the fact that a entails b, we can compute the result.

The constraint a is satisfied by the type signature and by the fact that we quietly get passed whatever dictionary is needed. Pattern matching on Sub brings into scope a computation of type (a => Dict b), and we are able to discharge the a obligation using the dictionary we were passed. Pattern matching on Dict forces that computation to happen and brings b into scope, allowing us to meet the obligation of the computation of r. All of this happens for us behind the scenes just by pattern matching.
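
Here is a small self-contained sketch of \\ in action (the definitions above are repeated, and ordToEq and same are names of my own for illustration): because Eq is a superclass of Ord, GHC can build the entailment Ord a :- Eq a for us, and \\ then lets a computation that needs Eq run under only an Ord constraint.

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes, TypeOperators,
             ScopedTypeVariables #-}

data Dict a where
  Dict :: a => Dict a

newtype a :- b = Sub (a => Dict b)

infixl 1 \\
(\\) :: a => (b => r) -> (a :- b) -> r
r \\ Sub Dict = r

-- Eq is a superclass of Ord, so this dictionary comes for free.
ordToEq :: Ord a :- Eq a
ordToEq = Sub Dict

-- We demand only Ord, and use the entailment to discharge the
-- Eq obligation that (==) imposes.
same :: forall a. Ord a => a -> a -> Bool
same x y = (x == y) \\ (ordToEq :: Ord a :- Eq a)
```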

So what can we do with this?

We can use \\ to compose constraints.

 
trans :: (b :- c) -> (a :- b) -> a :- c
trans f g = Sub $ Dict \\ f \\ g
 

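As a quick self-contained illustration (the needed definitions are repeated, and the name inner is mine), trans lets us chain weakenings to project out of a nested constraint product:

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes, TypeOperators #-}

data Dict a where
  Dict :: a => Dict a

newtype a :- b = Sub (a => Dict b)

infixl 1 \\
(\\) :: a => (b => r) -> (a :- b) -> r
r \\ Sub Dict = r

weaken1 :: (a, b) :- a
weaken1 = Sub Dict

weaken2 :: (a, b) :- b
weaken2 = Sub Dict

trans :: (b :- c) -> (a :- b) -> a :- c
trans f g = Sub (Dict \\ f \\ g)

-- Project the middle constraint out of a nested product:
-- first weaken to (b, c), then weaken again to b.
inner :: (a, (b, c)) :- b
inner = trans weaken1 weaken2
```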
In fact, the way the dictionaries get plumbed around inside the argument to Sub is rather nice, because we can give that same definition different type signatures, letting us make (,) more product-like, giving us the canonical product morphism to go with the weakenings/projections we defined above.

 
(&&&) :: (a :- b) -> (a :- c) -> a :- (b, c)
f &&& g = Sub $ Dict \\ f \\ g
 

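For example (a self-contained sketch with the definitions repeated; swap is my name for it), combining &&& with the two weakenings gives us the symmetry of the product:

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes, TypeOperators #-}

data Dict a where
  Dict :: a => Dict a

newtype a :- b = Sub (a => Dict b)

infixl 1 \\
(\\) :: a => (b => r) -> (a :- b) -> r
r \\ Sub Dict = r

weaken1 :: (a, b) :- a
weaken1 = Sub Dict

weaken2 :: (a, b) :- b
weaken2 = Sub Dict

(&&&) :: (a :- b) -> (a :- c) -> a :- (b, c)
f &&& g = Sub (Dict \\ f \\ g)

-- Constraint products are symmetric: project each side, repackage.
swap :: (a, b) :- (b, a)
swap = weaken2 &&& weaken1
```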
And since we're using it as a product, we can make it act like a bifunctor also using the same definition.

 
(***) :: (a :- b) -> (c :- d) -> (a, c) :- (b, d)
f *** g = Sub $ Dict \\ f \\ g
 

Limited Sub-Superkinding?

Ideally we'd be able to capture something like that bifunctoriality using a type like

 
#if 0
class BifunctorS (p :: Constraint -> Constraint -> Constraint) where
  bimapS :: (a :- b) -> (c :- d) -> p a c :- p b d
#endif
 

In an even more ideal world, it would be enriched using something like

 
#ifdef POLYMORPHIC_KINDS
class Category (k :: x -> x -> *) where
  id :: k a a
  (.) :: k b c -> k a b -> k a c
instance Category (:-) where
  id = refl
  (.) = trans
#endif
 

where x is a kind variable; then we could obtain a more baroque and admittedly far less thought-out bifunctor class like:

 
#if 0
class Bifunctor (p :: x -> y -> z) where
  type Left p :: x -> x -> *
  type Left p = (->)
  type Right p :: y -> y -> *
  type Right p = (->)
  type Cod p :: z -> z -> *
  type Cod p = (->)
  bimap :: Left p a b -> Right p c d -> Cod p (p a c) (p b d)
#endif
 

Or, even more ideally, you could use the fact that we can directly define product categories!

Since they are talking about kind-indexing for classes and type families, we could have separate bifunctors for (,) for both kinds * and Constraint.

The current constraint kind code uses a hack to let (a,b) be used as a type inhabiting * and as the syntax for constraints. This hack is limited however. It only works when the type (,) is fully applied to its arguments. Otherwise you'd wind up with the fact that the type (,) needs to have both of these kinds:

 
-- (,) :: Constraint -> Constraint -> Constraint and
-- (,) :: * -> * -> *
 

What is currently done is that the kind magically switches for () and (,) in certain circumstances. GHC already had some support for this because it parses (Foo a, Bar b) as a type in (Foo a, Bar b) => Baz a b before transforming it into a bunch of constraints.

Since we already have a notion of sub-kinding at the kind level, we could solve this for () by making up a new kind, say ???, which is a subkind of both * and Constraint, but this would break the nice join lattice properties of the current system.

[Edit: in the initial draft, I had said superkind]

 
--    ?
--   / \
-- (#)  ??
--     /  \
--    #    *  Constraint
--          \ /
--          ???
 

But this doesn't address the kind of (,) above. With the new polymorphic kinds that Brent Yorgey and company have been working on and a limited notion of sub-superkinding, this could be resolved by making a new super-kind @ that is the super-kind of both * and Constraint, and which is a sub-superkind of the usual unnamed Box superkind.

 
-- Box
--  |
--  @
 

Then we can have:

 
-- (,) :: forall (k :: @). k -> k -> k
-- () :: forall (k :: @). k
 

and kind checking/inference will do the right thing about keeping the kind ambiguous for types like (,) () :: forall (k :: @). k -> k

This would get rid of the hack and let me make a proper bifunctor for (,) in the category of entailments.

The version of GHC head I'm working with right now doesn't support polymorphic kinds, so I've only been playing with these in a toy type checker, but I'm really looking forward to being able to have product categories!

Stay Tuned

Next, we'll go over how to reflect the class and instance declarations so we can derive entailment of a superclass for a class, and the entailment of instances.

[Source]

Homotopy and Directed Type Theory Slides http://comonad.com/reader/2011/homotopy-and-directed-type-theory-slides/ http://comonad.com/reader/2011/homotopy-and-directed-type-theory-slides/#comments Fri, 28 Oct 2011 02:38:34 +0000 Edward Kmett http://comonad.com/reader/?p=425 As requested, here are the slides from Dan Doel's excellent presentation on Homotopy and Directed Type Theory from this past Monday's Boston Haskell.
