Type Theory


As requested, here are the slides from Dan Doel's excellent presentation on Homotopy and Directed Type Theory from this past Monday's Boston Haskell.

Ok, I decided to take a step back from my flawed approach in the last post and play with the idea of power series of functors from a different perspective.

I dusted off my copy of Herbert Wilf's generatingfunctionology and switched goals to try to see some well-known recursive functors or species as formal power series. It appears that we can pick out a few things about the generating functions of polynomial functors.

As an example:

Maybe x = 1 + x

Ok. We're done. Thank you very much. I'll be here all week. Try the veal...
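Spelled out (my gloss, not the post's), the reading is the standard one: alternative constructors add, and each field contributes a factor of x:

-- The Prelude type, primed to avoid a name clash, annotated
-- with its generating-function reading:
data Maybe' a
  = Nothing'   -- nullary constructor: contributes the term 1
  | Just' a    -- one field of type a: contributes the term x
-- Summing the constructors gives Maybe x = 1 + x.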

For a more serious example, the formal power series for the list [x] is just a geometric series:
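The usual derivation (sketched here by me; the post's own continuation is behind the link) starts from the observation that a list is either empty or an element consed onto a list, giving a recursive equation we can solve:

 L(x) = 1 + x \cdot L(x) \quad\Longrightarrow\quad L(x) = \frac{1}{1 - x} = 1 + x + x^2 + x^3 + \cdots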

(more...)

The post below will only compile on GHC >= 6.9, since it uses type families.
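(Purely as my illustration of what trips older compilers, not code from the post: a declaration along these lines is the sort of thing that requires type-family support.)

{-# LANGUAGE TypeFamilies #-}

-- A type-level function from container types to their element types.
type family Elem c
type instance Elem [e] = e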

(more...)

Recently, Eric Kidd and Dan Piponi have used a bit of type hackery by Oleg Kiselyov, together with -fno-implicit-prelude, to build some interesting restricted monads, such as Wadler's Set and Bag monads.
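For concreteness, here is a minimal sketch of one common encoding of a restricted monad - a monad-like class whose operations carry an Ord constraint, so that Data.Set can be an instance. The names RMonad, rreturn, and rbind are illustrative, not taken from those posts:

import qualified Data.Set as Set

-- A monad-like class whose operations are constrained,
-- admitting containers like Set that demand Ord on their elements.
class RMonad m where
  rreturn :: Ord a => a -> m a
  rbind   :: (Ord a, Ord b) => m a -> (a -> m b) -> m b

instance RMonad Set.Set where
  rreturn   = Set.singleton
  rbind s f = Set.unions (map f (Set.toList s))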

There is another interesting monad variation - a parameterized monad - where the monad carries around an additional parameter at the type level, such as a type-level set of effects. One really good example of this is the separation logic monad in Hoare Type Theory, where the pre- and post-conditions can be viewed as the parameter carried around on that monad. Wadler and Thiemann, Jean-Christophe Filliâtre, and others have explored this notion for encoding effects.
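A minimal sketch of the shape involved, assuming the common two-index presentation (the names PMonad, preturn, pbind, and PState are mine, not from the cited work):

-- A computation indexed by a pre-condition i and post-condition j at
-- the type level; bind composes only when the indices line up.
class PMonad m where
  preturn :: a -> m i i a
  pbind   :: m i j a -> (a -> m j k b) -> m i k b

-- Example instance: a state monad whose state type may change, with
-- the pre- and post-state types standing in for pre-/post-conditions.
newtype PState i j a = PState { runPState :: i -> (a, j) }

instance PMonad PState where
  preturn a          = PState (\s -> (a, s))
  pbind (PState m) f = PState (\i -> let (a, j) = m i
                                     in runPState (f a) j)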

(more...)

Was reading Castagna, Ghelli, and Longo's 1995 paper "A Calculus for Overloaded Functions with Subtyping" today, and in it they have to jump through some hoops to index their '&' types to keep them well behaved under β-reduction.

It seems to me, at least from my back-of-the-envelope scribblings, that if you CPS-transform the calculus first, the main technical innovation (overloaded functions using the tighter run-time type information) remains intact, but the need for this technical trick goes away. In that case you know what the reduction will evaluate to regardless of call-by-value or call-by-need (just bottom), and if the specification changes during evaluation it is still sound, so there is no need for an index.

 \inference{\Gamma \vdash M : W_1 \leq \lbrace \neg U_i \rbrace_{i \leq n-1} & \Gamma \vdash N : W_2 \leq \neg U_n}{\Gamma \vdash (M \binampersand N) : \lbrace \neg U_i \rbrace_{i \leq n}}[$\lbrace\rbrace$-I]

 \inference{\Gamma \vdash M : \lbrace \neg U_i \rbrace_{i \in I} & \Gamma \vdash N : U & U_j = \min_{i \in I} \lbrace U_i \vert U \leq U_i \rbrace}{\Gamma \vdash M \bullet N : \perp}[$\lbrace\rbrace$-E]

The above would then require explicit continuations and might interfere with rederiving tupling from the overloading mechanism alone, but it seems to eliminate some of the barriers they mention to the higher-order case. However, I'm not convinced it is a net win regardless, because it would require a notion of typecase.