Tuesday, November 25, 2014

Reactive programming through explicit effects


This paper is a pause from my spree about ash, the shading language library I’m working on. I’m also working on a 3D engine, photon, in which I plan to use ash. The two projects are closely related ;).

The purpose of this paper is to find a nice way of dealing with the reaction problem. There’re common and elegant solutions that address it, like FRP1, but FRP is still very experimental. I wanted to explore on my own. This is what I came up with.

Side effect

As a programmer, you might already have come across side effects. You haven’t? Let’s have a look at these nasty things.

A side effect is a big word for an effect a function has that mutates something out of its scope. For instance, you could picture a function modifying a global state, an environment or any kind of “remote” value.

Another way to understand what a side effect is is to look at referential transparency. If you need assumptions to be able to say what a piece of code does, you might be in the presence of a side effect. For instance, look at this C++ snippet:

void update(int rx, int ry) {
  _x += rx;
  _y += ry;
}

Do you know what those lines do?

Yeah, sure! It’s a method in a class that updates _x and _y fields by applying them offsets rx and ry!

That could be. Let me add some more code to complete the example:

int _x = 0;
int _y = 0;

void update(int rx, int ry) {
  _x += rx;
  _y += ry;
}

int main() {
  return 0;
}
A class, you said? As you can see, we can’t say what that code does, because we’d have to look at every line of code it might touch. It could be a class method, it could be a simple function; _x and _y could be ints just as well as they could be Foo objects with operator+= overloaded. The list is long.

In that snippet, _x += rx and _y += ry are side effects, because they alter “something” elsewhere. That might sound fine, but it’s not. The more side effects you have, the harder your code base is to debug, maintain, compose, understand and make evolve. As a good programmer, you should care about avoiding side effects.


In pure functional languages like Haskell, we can’t do that kind of assignment, at least not directly. We would write this function:

update :: (Int,Int) -> (Int,Int) -> (Int,Int)
update (x,y) (rx,ry) = (x + rx,y + ry)

Because everything is immutable in Haskell, we are sure that (x,y) and (rx,ry) are constant. They can’t change. So we can say what that function does:

It takes a pair of ints and applies an offset to it!

Yeah, exactly. And it can’t be anything else. We don’t need assumptions, because such a function is transparent. That’s a great property you can rely on – do it, your compiler does ;)
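To convince yourself, here is the function in a tiny runnable module (the main wrapper is only there to print the results). Whatever the rest of the program does, the output depends on the arguments alone:

```haskell
-- The pure update function from above: no hidden state, no side effects.
update :: (Int, Int) -> (Int, Int) -> (Int, Int)
update (x, y) (rx, ry) = (x + rx, y + ry)

main :: IO ()
main = do
  print (update (1, 2) (10, 20)) -- (11,22)
  print (update (1, 2) (10, 20)) -- (11,22) again: same arguments, same result
```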

However, we still need side effects

If Haskell rejected all side effects, we couldn’t have any programs. We would have theorems, properties and purity, but nothing on screen, no inputs, nothing actually computed. That would be a pity. But why? Well, we still need side effects. When you print something on screen, you actually want a side effect – there’s nowadays no other way to go. You pass a String to putStrLn for instance, and the function generates a side effect to put the String on screen. That’s a side effect because it’s outside the scope of your program: you touch something elsewhere.

That’s the same for inputs, reading files, creating files and so on and so forth. Haskell uses a type to deal with those cases: IO a. It’s a polymorphic type that can look at the real world. You can’t, but IO a can.

We have IO a to explicitly deal with side effects, that’s cool. However, IO represents any kind of side effects. We’d like to be able to explicitly say “Hey, I can have that effect”.

Reaction: Part 1 – Explicit effects in imperative languages

Reacting to something requires being notified of it. Do you know the observer design pattern? It’s an interesting design pattern everyone should know. It’s very powerful since it enables you to run actions when an observed event is generated. You have to explicitly describe what actions and/or objects you want to observe. That is commonly done via two important things:

  • implementing an interface that exports the event handlers interface;
  • explicitly interleaving your observed code with calls to the abstract event handlers.

Let’s take an example, still in C++:

// this is the interface used to react to events
class FooObserver {
public:
  FooObserver(void) {}
  virtual ~FooObserver(void) {}
  virtual void on_fire(Direction const &dir) = 0;
  virtual void on_set_a(int oldValue, int newValue) = 0;
  virtual void on_set_b(std::string const &oldValue, std::string const &newValue) = 0;
};

class Foo {
  int _a;
  std::string _b;
  std::list<FooObserver*> _observers; // all observers that will react to events

public:
  Foo(void) : _a(0), _b("") {}
  ~Foo(void) {}

  void addObserver(FooObserver &observer) {
    _observers.push_back(&observer);
  }

  void fire(Direction const &dir) {
    // use dir
    for (auto observer : _observers)
      observer->on_fire(dir); // notify all observers something has happened
  }

  void setA(int a) {
    for (auto observer : _observers)
      observer->on_set_a(_a, a); // notify before assigning, so observers see the old value
    _a = a;
  }

  void setB(std::string const &b) {
    for (auto observer : _observers)
      observer->on_set_b(_b, b); // notify before assigning, so observers see the old value
    _b = b;
  }
};

If we want to react to events emitted by an object of type Foo, we just have to create a new type that inherits from FooObserver, implement its abstract methods and register an object of our type, so that the observed value can call it when it emits events. That’s pretty great, but it involves a lot of side effects, and we’re gonna try to abstract that away.

Reaction: Part 2 – Explicit effects in Haskell

I’ve been wandering around for a while. Some folks advise to use FRP. It addresses the issue in another way though – I won’t talk about FRP in this post, maybe later, since it’s a very interesting concept. For my part, I wanted something like pipes: being able to compose my functions while having effects.

In photon, my 3D engine, I use explicit effects to implement reactions. That is done via a typeclass called Effect:

class (Monad m) => Effect e m where
  react :: e -> m ()

Pretty straight-forward eh? We call react e to react to an event of type e. Let’s have a look at a few examples.

react, examples – Part 1

Let’s start with a simple example:

-- This type represents all effects we want to observe.
data IntChanged
  = IntSucc
  | IntPred
  | IntConst Int
    deriving (Eq,Show)

-- This instance enables us to react in a State Int.
instance Effect IntChanged (State Int) where
  react e = case e of
    IntSucc -> modify succ
    IntPred -> modify pred
    IntConst x -> put x

foo :: (Effect IntChanged m) => m String
foo = do
  react (IntConst 314)
  return "foo"

bar :: (Effect IntChanged m) => Float -> m Float
bar a = do
  when (sqrt a < 10) . replicateM_ 3 $ react IntSucc
  return (a + pi)

Let’s use that. I use explicit type annotations because I’m in ghci:

flip runState 0 (foo :: State Int String)
-- ("foo",314)

flip runState 0 (bar 0 :: State Int Float)
-- final state: 3; sqrt 0 < 10, so IntSucc was fired three times

flip runState 0 (bar 10 :: State Int Float)
-- final state: 3 again

flip runState 0 (bar 99 :: State Int Float)
-- final state: 3; sqrt 99 is still below 10

flip runState 0 (bar 100 :: State Int Float)
-- final state: 0; sqrt 100 is not < 10, so nothing happened

flip runState 0 (bar 314 :: State Int Float)
-- final state: 0
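For the record, here is a self-contained module reproducing that session. It assumes the mtl package; the two LANGUAGE pragmas are needed for the multi-parameter typeclass and the State Int instance head:

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Control.Monad (replicateM_, when)
import Control.Monad.State

-- the Effect typeclass from the post
class (Monad m) => Effect e m where
  react :: e -> m ()

data IntChanged
  = IntSucc
  | IntPred
  | IntConst Int
    deriving (Eq, Show)

-- interpretation of IntChanged in State Int
instance Effect IntChanged (State Int) where
  react e = case e of
    IntSucc -> modify succ
    IntPred -> modify pred
    IntConst x -> put x

foo :: (Effect IntChanged m) => m String
foo = do
  react (IntConst 314)
  return "foo"

bar :: (Effect IntChanged m) => Float -> m Float
bar a = do
  when (sqrt a < 10) . replicateM_ 3 $ react IntSucc
  return (a + pi)

main :: IO ()
main = do
  print (runState (foo :: State Int String) 0)     -- ("foo",314)
  print (execState (bar 0 :: State Int Float) 0)   -- 3
  print (execState (bar 100 :: State Int Float) 0) -- 0
```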


As you can see, we can have effects without IO. In that case, it was pretty simple. But since the typeclass is abstract over any Monad, we could also implement specific effects in IO.

react, examples – Part 2

Let’s see an example in IO.


We need to give IntChanged an interpretation in IO. A simple backend just prints what happens – the exact messages are up to you:

instance Effect IntChanged IO where
  react e = case e of
    IntSucc -> putStrLn "succ!"
    IntPred -> putStrLn "pred!"
    IntConst x -> putStrLn $ "set to " ++ show x

With that instance, foo :: IO String prints “set to 314”, while bar 0, bar 10 and bar 99 each print “succ!” three times – their square roots are below 10 – and bar 100 and bar 314 print nothing at all.


Because our foo and bar functions are polymorphic, we can use them with any type implementing the wished effects! That’s pretty great, because it enables us to write our code in an abstract way and interpret it with backends.

Extra – handles

Because all of this was firstly designed for my photon engine, I had to deal with an important question. Having effects is great, but how could we make an effect like:

“Draw the mesh with ID=486.”


I use handles to deal with that. I use a type to represent handles (H). Each object that can be managed (i.e. that can have a handle) can be wrapped up in Managed a. Basically:

type H = Int

data Managed a = Managed {
    handle :: H
  , managed :: a
  } deriving (Eq,Show)

Now, because we want to react to the fact that an object is being managed – or not managed anymore – we have to introduce special effects.

Effectful managing

Hence two new typeclasses: Manager m and EffectfulManage a s l.


A manager is a monad that can generate new handles to manage any kind of value and recycle managed values:

class (Monad m) => Manager m where
  manage :: a -> m (Managed a)
  drop   :: Managed a -> m ()

manage a will turn a into a managed version you can use for whatever you want. In theory, you shouldn’t have access to the constructor of Managed nor the handle field.

If a type implements both Monad and Manager, we can manage values and recycle them very easily:

import Prelude hiding ( drop )

foo :: (Manager m) => m ()
foo = do
  x <- manage 3
  y <- manage "hey!"
  drop x
  drop y

Notice: if you want to be able to use the drop function, you’ll have to hide Prelude’s drop.


However, we’d like to be able to react to the fact a value is now tracked by our monad, or recycled. That’s done through the following typeclass:

class EffectfulManage a s l | a -> s l where
  spawned :: Managed a -> s
  lost    :: Managed a -> l

EffectfulManage a s l provides a spawn event s and a loss event l for a. If you’re not comfortable with functional dependencies, a -> s l means a given type a determines a single pair of event types – you can’t have two pairs of events for the same type.

Let’s take an example.

data IntSpawned = IntSpawned (Managed Int)
data IntLost = IntLost (Managed Int)

instance EffectfulManage Int IntSpawned IntLost where
  spawned = IntSpawned
  lost = IntLost

Pretty simple. Now, there’re two functions to react to such events:

spawn :: (Manager m,EffectfulManage a s l,Effect s m) => a -> m (Managed a)
lose :: (Manager m,EffectfulManage a s l,Effect l m) => Managed a -> m ()

spawn a manages the value a, returning its managed version and, as you can see in the type signature, has an effect of type s, the spawned event. lose a takes a managed value, drops it, and emits the corresponding l event.

In our case, with our Int, we can specialize both the functions this way:

spawn :: (Manager m,EffectfulManage Int IntSpawned IntLost,Effect IntSpawned m) => Int -> m (Managed Int)
lose :: (Manager m,EffectfulManage Int IntSpawned IntLost,Effect IntLost m) => Managed Int -> m ()

Let’s write a simple example:

instance (Functor m,Monad m) => Manager (StateT [H] m) where
  manage x = fmap (flip Managed x) (gets head <* modify tail)
  drop (Managed h _) = modify $ (:) h

-- a possible backend...
instance Effect IntSpawned (StateT [H] IO) where
  react (IntSpawned (Managed h i)) = liftIO . putStrLn $ "int has spawned:" ++ show i

instance Effect IntLost (StateT [H] IO) where
  react (IntLost (Managed h i)) = liftIO . putStrLn $ "int lost :" ++ show i
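
Putting all the pieces together, here is a self-contained sketch of the whole machinery. It assumes mtl; note that the bodies of spawn and lose are my own guesses, reconstructed from their type signatures – manage or drop first, then react with the corresponding event:

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Control.Monad.State
import Prelude hiding (drop)

type H = Int

data Managed a = Managed {
    handle :: H
  , managed :: a
  } deriving (Eq, Show)

class (Monad m) => Effect e m where
  react :: e -> m ()

class (Monad m) => Manager m where
  manage :: a -> m (Managed a)
  drop   :: Managed a -> m ()

class EffectfulManage a s l | a -> s l where
  spawned :: Managed a -> s
  lost    :: Managed a -> l

-- guessed implementation: manage, then emit the spawn event
spawn :: (Manager m, EffectfulManage a s l, Effect s m) => a -> m (Managed a)
spawn a = do
  m <- manage a
  react (spawned m)
  return m

-- guessed implementation: drop, then emit the loss event
lose :: (Manager m, EffectfulManage a s l, Effect l m) => Managed a -> m ()
lose m = do
  drop m
  react (lost m)

data IntSpawned = IntSpawned (Managed Int)
data IntLost = IntLost (Managed Int)

instance EffectfulManage Int IntSpawned IntLost where
  spawned = IntSpawned
  lost = IntLost

-- handles are served from a free list kept in the state
instance (Functor m, Monad m) => Manager (StateT [H] m) where
  manage x = fmap (flip Managed x) (gets head <* modify tail)
  drop (Managed h _) = modify ((:) h)

instance Effect IntSpawned (StateT [H] IO) where
  react (IntSpawned (Managed _ i)) = liftIO . putStrLn $ "int has spawned: " ++ show i

instance Effect IntLost (StateT [H] IO) where
  react (IntLost (Managed _ i)) = liftIO . putStrLn $ "int lost: " ++ show i

main :: IO ()
main = flip evalStateT [0 ..] $ do
  x <- spawn (3 :: Int) -- prints “int has spawned: 3”
  lose x                -- prints “int lost: 3”
```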

In the end

This is being implemented in photon, and I think it’s a good start. I once wanted to use pipes, but a 3D engine is not a streaming problem: it’s a reactive problem. Maybe FRP could be more elegant, I don’t know – the H type is not the most elegant thing ever, but it works fine.

What do you think folks?

  1. Functional Reactive Programming

Monday, November 17, 2014

Abstracting Over Shader – Environment

In the previous episode…

This blog entry directly follows the one in which I introduced Ash, a shading language embedded in Haskell. Feel free to read it here before going on.

Controlling behavior

A shader is commonly a function. However, it’s a bit more than a simple function. If you’re a haskeller, you might already know the MonadReader typeclass or simply Reader (or its transformer version ReaderT). Well, a shader is kind of a function in a reader monad.

So… that implies a shader runs in… an environment?

Yeah, exactly! And you define that environment. The environment is guaranteed not to change between two invocations of a shader for the same render (e.g. between two vertices in the same render). This is interesting, because it enables you to use nice variables, such as time, screen resolution, matrices and whatever your imagination brings on ;)

The environment, however, can be changed between two renders, so that you can update the time value passed to the shader, the new resolution if the window resizes, the updated matrices since your camera’s moved, and so on and so forth.

Let’s see a few examples in GLSL first.

Shader environment in GLSL

To control the environment of a shader in GLSL, we use uniform variables. Those are special, global variables shared between all stages of a shader chain1.

Let’s see how to introduce a few uniforms in our shader:

uniform float time;       // time of the host application
uniform vec2 resolution;  // (w,h)
uniform vec2 iresolution; // (1 / w, 1 / h), really useful in a lot of cases ;)
uniform mat4 proj;        // projection matrix
uniform int seed;         // a seed for whatever purpose (perlin noise?)
uniform ivec2 gridSize;   // size of a perlin noise grid!

You got it. Nothing fancy. Those uniforms are shared between all stages so that we can use time in all our shaders, which is pretty cool. You use them as any kind of other variables.

Ok, let’s write an expression that takes a time and a bias value and multiplies them together:

uniform float time;
uniform float bias;

// this is not a valid shader, just the expression using it
time * bias;

Shader environment in HLSL

HLSL uses the term constant buffers to introduce the concept of environment. I don’t have any examples right now, sorry for the inconvenience.

Shader environment in Ash

In Ash, environment variables are not called uniforms nor constant buffers. They’re called… CPU variables. That might be weird at first, but let’s think about it. Those values are handled by your application, which lives CPU-side. The environment is like a bridge between the CPU world and the GPU one. A CPU variable refers to a value that is constant GPU-side, but varying CPU-side.

Creating a CPU variable is pretty straight-forward. You have to use a function called cpu. That function is a monadic function working in the EDSL monad. I won’t describe that type since it’s still a work in progress, but it’s a monad for sure.

Note: If you’ve read the previous blog entry, you might have come across the Ash type, describing a HOAST. That type is no longer a HOAST. The “new Ash” – the type describing the HOAST – is now Expr.

This is cpu:

cpu :: (CPU a) => Chain (Expr a)

CPU is a typeclass that enables a type to be injected into the environment of a shader chain. The instances are provided by Ash and you can’t make your own – do you really want to write instance CPU String, or instance (CPU a) => CPU (Maybe a)? I don’t think so ;)

Let’s implement the same time–bias example as the GLSL one:

foo :: Chain (Expr Float)
foo = liftA2 (*) cpu cpu

That example is contrived: in real code, you’d actually want to pass the CPU variables to nested expressions, in shaders. So you could do that:

foo :: Chain ()
foo = do
  time <- cpu
  bias <- cpu

  -- do whatever you want with time and bias
  return ()

You said Chain?

Chain is a new type I introduce in this paper. The idea came up from a discussion I had with Edward Kmett when I discovered that the EDSL needed a way to bind the CPU variables. I spotted two ways to go:

  • using a name, like String, passed to cpu; that would result in writing the name in every shader using it, so that’s not ideal;
  • introducing the environment and providing a monad instance so that we could bind the CPU variables and use them in shaders inside the monad.

The latter also provides a nice hidden feature. A chain of shaders might imply varying2 values. Those varying values have information attached. If you mix them up, you end up in catastrophic situations. Using a higher monadic type to capture that information – along with the environment – is in my opinion pretty great, because it can prevent you from going into the wall ;).

To sum up, Chain provides a clear way to describe the relation between shaders.

What’s next?

I’m still building and enhancing Ash. In the next post, I’ll try to detail the interface used to build functions, but I still need to find the best way to represent them.

  1. You can imagine a shader chain as an explicit composition of functions (i.e. shaders). For instance, a vertex shader followed by geometry shader, itself followed by a fragment shader.

  2. Varying values are values that travel between shaders. When a shader outputs a value, it can go to the input of another shader. That is called a varying value.

Friday, November 14, 2014

Abstracting shader – introducing the ash Haskell library


Abstracting what?

Shaders are very common units in the world of graphics. Even though we’re used to using them for shading1 purposes, they’re not limited to that. Popularisation has stretched the meaning so much that nowadays, a shader might have nothing to do with shading. If you’re already doing some graphics, you may know OpenGL and its compute shaders. They have nothing to do with shading.

You might also already know Shadertoy. That’s a great place to host cool and fancy OpenGL shaders2. You write your shader in GLSL3, a GLSL compiler is invoked, and your shader runs on the GPU.

The problem with source based shaders

So you write your shader as a source code in a host language, for instance in C/C++, Java, Haskell, whatever, and you end up with a shader running on GPU.

There’re two nasty issues with that way of doing though:

  • the shader is compiled at runtime, so if it contains errors, you’ll only know after your application starts;
  • you have to learn a new language for each target shader compiler.

They’re both serious issues I’m going to explain further.

Issue n°1: compiled at runtime

This is problematic for a good reason: a lot of shaders are application dependent. Shadertoy is a nice exception, just like modeling tools or material editors, but seriously, in most applications, end users are not asked which shaders to run. In a game for instance, you write all the shaders while writing the game, then release the whole package.

Yeah… What about additional content? Per-map shaders, or that kind of stuff?

Those shaders are like resources. That doesn’t imply using them as is though. We could use dynamic relocatable objects (.so or .dll) for instance.

What does compile-time compilation give you?

It gives you something hyper cool: host language features. If you have a strongly-typed language, you’ll benefit from it. And that’s a huge benefit you can’t walk away from. If you write an incorrectly typed shader, your application / library won’t compile, so the application won’t react in weird ways at run-time. That’s pretty badass.

Issue n°2: languages, languages…

This issue is not as important as the first one, but still. If you’re working on a project and you target several platforms (among ones using OpenGL, OpenGL ES, DirectX and a soft renderer), you’ll have to learn several shading languages as well (GLSL, HLSL4).

In order to solve that, there’re two ways to go:

  • a DSL5 ;
  • an EDSL6.

A DSL is appealing. You have a standalone language for writing shaders, and backends for a compiler/language. However, that sounds a bit overwhelming for such an aim.

An EDSL is pretty cool as well. Take a host language (we’ll be using Haskell) and provide structure and construction idioms borrowed from such a language to create a small embedded one. That is the solution I’m going to introduce.


Ash stands for Abstract Shader. It’s a Haskell package I’ve been working on for a few weeks now. The main idea is:

  • to provide a typesafe shading language compiled at compile-time;
  • to provide backends;
  • to provide a nice and friendly haskellish interface.

I guessed it’d be a good idea to share my thoughts about the whole concept, since I reckon several people will be interested in such a topic. However, keep in mind that Ash is still a big work in progress. I’m gonna use several blog entries to write down my thoughts, share it with you, possibly enhance Ash, and finally release a decent and powerful library.

If you’re curious, you can find Ash here.


Ash is a library that provides useful tools to build up shaders in Haskell. In Ash, a shader is commonly a function. For instance, a vertex shader is a function that folds vertex components down to other ones – possibly maps them, but it could add/remove components as well – and yields extra values for the next stages to work with.

You write a shader with the Ash EDSL then you pass it along to a backend compiler.

Here are two examples. In order for you to understand how Ash works, I’ll first write the GLSL (330 core) shader, then the Ash one.

First example: a simple vertex shader

Let’s write a vertex shader that takes a position and a color, and projects the vertex using a perspective matrix, a view matrix and the object matrix of the object currently being rendered and passes the color to the next stage:

#version 330 core

in vec3 pos;
in vec4 col;

out vec4 vcol;

uniform mat4 projViewModel;

void main() {
  vcol = col;
  gl_Position = projViewModel * vec4(pos, 1.);
}

And now, the Ash one:

vertexShader :: Ash (M44 Float -> V3 Float :. V4 Float -> V4 Float :. V4 Float)
vertexShader = lam $ \proj -> lam $ \v ->
  let pos :. col = v
  in proj #* v3v4 pos 1 :. col

Ash is the type used to lift the shading expression up to Haskell. You use it to use the EDSL. It actually represents some kind of HOAST7.

Then, you can find M44, V3, V4 and (:.).

M44 is the type of 4x4 matrices. Since projection matrix, view matrix and model matrix are all 4x4 floating matrix, M44 Float makes sense.

V3 and V4 represent 3D and 4D vectors, respectively. V3 Int is three ints packed in a vector, just as V4 Float is four floats packed in a vector. You’ll also meet V2, which is… the 2D version.

(:.) is a type operator used to build tuples. You can see (:.) as a generalized (,) – the default Haskell pair type – but (:.) is more powerful since it can flatten expressions:

a :. (b :. c) = a :. b :. c

(:.) has a lot of uses in Ash. In our case, a chain of (:.) represents a vertex’s components.
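To see why a chain of (:.) flattens, here is a hypothetical, stripped-down definition – not ash’s actual one, just the usual way such a right-associative type operator is declared:

```haskell
{-# LANGUAGE TypeOperators #-}

-- A right-associative pair-like type operator; ash’s (:.) presumably has the
-- same shape at both the type and value levels.
data a :. b = a :. b deriving (Eq, Show)
infixr 6 :.

-- Thanks to infixr, 1 :. True :. 'x' parses as 1 :. (True :. 'x'), so a chain
-- of (:.) reads as a flat list of components even though it nests on the right.
example :: Int :. (Bool :. Char)
example = 1 :. True :. 'x'

main :: IO ()
main = print (example == (1 :. (True :. 'x'))) -- True
```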

So our vertexShader value is just a function that takes a matrix and a vertex (two components) and outputs two values: the new position of the vertex, and its color. Let’s look at the body of the function.

lam $ \proj -> lam $ \v ->

This is a pretty weird expression, but I haven’t found – yet? – a better way to go. lam is a combinator used to introduce lambdas in the EDSL. This expression introduces a lambda that takes two values, proj and v. You can read it as:

\proj v ->


let pos :. col = v

This is the tricky part. That let expression extracts the components out of the vertex and binds them to pos and col for later use.

in proj #* v3v4 pos 1 :. col

(#*) is a cool operator used to multiply a matrix by a vector, yielding a new vector.

(v3v4) is a shortcut used to build a V4 from a V3 by providing the missing value – here, 1. You’ll find similar functions, like v2v3 and v2v4, to respectively build a V3 from a V2 by providing the missing value, and a V4 from a V2 by providing the two missing values.

We finally wrap the result in a tuple (:.), and we’re done.


Ash embeds regular linear expressions (vectors, matrices), texture manipulation, tuple creation, let-bindings, lambda functions (they represent shader stages up to now), and a lot of other features.

Each feature is supposed to have an implementation in a given backend. For instance, in the GLSL backend, a lambda function is often turned into the well-known main function. Its parameters are expanded into in values, and control parameters become uniform variables.

Each backend is supposed to export a compile function – the name may vary though. However, each backend is free to compile to whatever it deems smart. For instance, compiling an Ash shader down to a GLuint (shader stage) is not very smart, since it would use IO and handle errors in a specific way we don’t want. So the GLSL compiler is a function like glslCompile :: Ash … -> Either CompilationError String, and the String can be used as a regular GLSL source string you’ll pass to whatever shader implementation you’ve written.

What’s next?

I need to finish the implementation of the EDSL and write the whole GLSL 330 compiler. If it’s a success, I’ll accept pull-requests for other famous compilers (other GLSL versions, HLSL, and so on and so forth).

Once that’s done, I’ll write a few other blog entries with examples as a proof-of-concept :)

  1. Shading is the process in which primitives (sets of vertices) are turned into colors (i.e. fragments, a.k.a. pixels or texels).

  2. Actually, they’re fragment shaders.

  3. OpenGL Shading Language.

  4. High Level Shading Language.

  5. Domain-Specific Language.

  6. Embedded Domain-Specific Language.

  7. Higher-Order Abstract Syntax Tree; for the purpose of this paper, you don’t have to fully understand it to get your feet wet with Ash (which is cool, right? :) ).