Coeffects: Context-aware programming languages

Presented on April 28, 2017
Presenter: Mika

Preview

Hello all!

Next Thursday, we’ll be learning about Tomas Petricek’s PhD thesis on coeffects, the dual of type effects: if you imagine type effects as representing what your program does to the world while it’s running, coeffects are what your program requires from the world.

Petricek seems to be a master presenter, so if you want to get a head-start on all of this, go right ahead, but I’ll try to go over it slowly, as the topic carries the prefix “co-”, which somehow makes our minds think it must be very complex (when actually, it’s quite mplex, y’all).

Hoping to see you all then,

Miroslav

PS. I’m attaching Petricek’s thesis and the two most influential papers on the topic, which make up most of his thesis. The first one (calculus of context-dependent computation) is a good intro; the other is a bit more heavy-duty.

Summary

The author is trying to convince us that coeffects, the dual of type effects, help us create more readable, efficient and secure code. Effects represent what the program does to the world, while coeffects represent what the program requires from the world. Where effects are modeled through monads, coeffects live on the side of the environment and are modeled by comonads.

I’ll try to explain this as simply as possible, through some example typing rules. An effect system can define a judgment roughly like Γ ⊢ e : unit & {IO}, where the & {IO} part marks that this expression returns unit and performs an IO effect. Coeffects are, in a sense, the opposite.

A coeffect system (as an addition to the above effect system) could define something roughly like Γ @ {Audio} ⊢ tone : unit, which now declares that the tone expression requires a set of resources called Audio as a coeffect (note that the annotation sits on the context side of the turnstile, not on the result type). What it does with them, we do not know from the declaration, but if Audio isn’t available in the context, our expression can’t type-check.

Where effects are always delayed to the call site of the expression which uses them, coeffects are a bit more complicated: an expression can carry both coeffects required at declaration time and coeffects required at evaluation time (thanks, Mehmet, for these names; they are much better than the ones Petricek himself uses). To mark both of these, we track them separately in the inference rules:

Roughly, the abstraction rule of the flat calculus splits the requirements between the declaration site and the function’s type:

  Γ, x : τ₁ @ r ⊛ s ⊢ e : τ₂
  ─────────────────────────────
  Γ @ r ⊢ λx.e : τ₁ -{s}-> τ₂

In this notation, r and s mark different kinds of coeffects: r is the set of declaration-time required coeffects, and s is the set of evaluation-time required coeffects, attached to the function type so that the caller must supply them. In a sense, I see these as static and dynamic assertions, embedded in the type system.

The motivating examples for coeffects in the dissertation are great: basically, what if we could skip having to write lots of hardware-, platform- or system-dependent #if directives and have these requirements be coeffects? What if we could encode the locality of values in a matrix, so that a function requiring the neighborhood of a cell would have to ask for it in its type, or otherwise not compile?

The actual examination of coeffect systems in Petricek’s work, however, goes in a much more confusing direction, explaining them instead through two languages: a toy language implementing Haskell’s implicit parameters, and a dataflow language in which every variable is a stream of values and there is a prev operator that accesses previous values.

In the implicit parameters example, coeffects track which implicit parameters (named, typed values) a computation needs in order to work; in the dataflow language, they make sure that the stream has enough members in it, so that a prev (prev ... (prev x)) operation doesn’t go out of bounds.
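To make the first example concrete: GHC really does ship implicit parameters (the ImplicitParams extension), and their types read a lot like flat coeffect annotations — the constraint records what a definition requires from its context. A small sketch of my own (not from the thesis):

```haskell
{-# LANGUAGE ImplicitParams #-}

-- The constraint (?name :: String) is a context requirement: greet can
-- only be used where a ?name is available in scope -- otherwise the
-- program won't type-check, which is exactly the behaviour the coeffect
-- judgment describes.
greet :: (?name :: String) => String
greet = "Hello, " ++ ?name

main :: IO ()
main = let ?name = "Mika" in putStrLn greet
```

If you delete the `let ?name = ...` binding, GHC rejects the program with an “unbound implicit parameter” error — the closest everyday analogue of a missing coeffect.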

This second language showed two kinds of coeffect algebras: flat and structural. This is much like the idea of a collecting semantics from program analysis, where (correct me if I’m terribly wrong) the collected information changes with how we define the trace. With flat coeffects, the resulting coeffect is a single value for the whole expression, while with structural coeffects, every computation builds a larger structure, stacking per-variable coeffects together and showing the whole process of requirement accumulation. We went over several examples of how these actually work, with some success.
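A tiny simulation of the dataflow side (my own sketch in plain Haskell, not Petricek’s language): model a variable as its finite history and make prev partial. The number of history elements a program needs — here, 2 for diff — is precisely what the structural coeffect system tracks statically, so the runtime error below could never be reached in a well-typed coeffectful program.

```haskell
-- A "stream" here is just the finite history of a variable, newest first.
type Stream a = [a]

-- The current value.
now :: Stream a -> a
now = head

-- prev steps back one tick; it needs at least one older value. In the
-- coeffect calculus that requirement lives in the type; here it can only
-- fail at runtime, which is exactly what coeffects would let us rule out.
prev :: Stream a -> Stream a
prev (_ : older) = older
prev []          = error "prev: not enough history (a coeffect violation)"

-- diff needs two values, so its (structural) coeffect would be 2.
diff :: Stream Int -> Int
diff xs = now xs - now (prev xs)

main :: IO ()
main = print (diff [5, 3])  -- prints 2
```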

Papers aside, Petricek’s presentation is outstanding! I don’t know what his site was made with, but it is the most modern approach to a scientific presentation that I have ever experienced: there are live computations, recompilations, live code examples and real-time typechecks; the dataflow example can be tried out by using the mouse pointer to create two streams of data, the (x, y) coordinates, and then binding those in the programs we run live. The typing rules and inferences have comments and can be clicked for more detail and further inference steps, etc. To think that he not only wrote his dissertation, but also took the time to make this…

Mehmet promised me some offline talk about coeffects, so I’m looking forward to that!

Attachments