Graphics and FP: a blog about graphics, demoscene, functional programming languages and some other IT material.

State of luminance (2017-09-14)

<p>I’ve been a bit surprised for a few days now. Some rustaceans posted two links about my <a href="https://crates.io/crates/spectra">spectra</a> and <a href="https://crates.io/crates/luminance">luminance</a> crates on reddit – links <a href="https://www.reddit.com/r/rust/comments/6z58n3/spectra_a_demoscene_engine_in_rust">here</a> and <a href="https://www.reddit.com/r/rust_gamedev/comments/6zmtb4/luminance_typesafe_typelevel_and_stateless_rust">here</a>. I didn’t really expect that: the code is public, on GitHub, but I don’t communicate about them <em>that much</em>.</p>
<p>However, I saw people getting interested, and I think it’s the right time to write a blog post about some design choices. I’ll start off with luminance and I’ll cover spectra in another blog post, because I truly think spectra is starting to become very interesting (I have my own shading language bundled within, which is basically GLSL on steroids).</p>
<h1 id="luminance-and-how-it-copes-with-fast-and-as-stateless-as-possible-graphics">luminance and how it copes with fast and as-stateless-as-possible graphics</h1>
<h2 id="origins">Origins</h2>
<p>luminance is a library that I historically wrote in Haskell. You can find the package <a href="https://hackage.haskell.org/package/luminance">here</a> if you’re interested – careful though, it’s dying and the Rust version has completely drifted away from it. Nevertheless, when I ported the library to Rust, I imported the same “way of programming” that I had – and still have – in Haskell, besides the allocation scheme; remember, I’m a demoscener, I <strong>care a lot about performance, CPU usage, cache friendliness and runtime size</strong>. So the Rust luminance crate was made to be a hybrid: it has the cool functional side that I imported from my Haskell codebase, and the runtime performance I wanted and had when I wrote my first two 64k intros in C++11. I had to remove or work around some features that only Haskell could provide, such as higher-kinded types, type families, functional dependencies, GADTs and a few other things such as existential quantification (trait objects saved me here, even though I don’t use them that much in luminance now).</p>
<p>I have to admit, I dithered a lot about the scope of luminance — both in Haskell and Rust. At first, I thought that it’d be great to have a <em>“base”</em> crate, hosting common and abstracted code, and <em>“backend”</em> crates, implementing the abstract interface. That would enable me to have several backends – OpenGL, Vulkan, Metal, a software implementation, something for Amiigaaaaaaa, etc. Though, time has passed, and now, I think it’s:</p>
<ul class="incremental">
<li>Overkill.</li>
<li>A waste of effort.</li>
</ul>
<p>The idea is that if you need to be very low-level on the graphics stack of your application, you’re likely to know what you are doing. And then, your needs will be very precise and well-defined. You might want a very specific piece of code to be available, related to a very specific technology. That’s the reason why abstracting over very low-level code is not a good path to me: you need to expose as much of the low-level interface as possible. That’s the goal of luminance: exposing OpenGL’s interface in a stateless, bindless and typesafe way, with no – or as minimal as possible – runtime overhead.</p>
<p>More reading <a href="http://phaazon.blogspot.fr/2016/08/luminance-designs.html">here</a>.</p>
<h2 id="today">Today</h2>
<p>Today, luminance is almost stable – it still receives massive architecture redesign from time to time, but it’ll hit the <code>1.0.0</code> release soon. As discussed with <a href="https://github.com/kvark">kvark</a> lately, luminance is not about the same scope as <a href="https://crates.io/crates/gfx">gfx</a>’s one. The goals of luminance are:</p>
<ul class="incremental">
<li>To be a typesafe, stateless and bindless OpenGL framework.</li>
<li>To provide a friendly experience and expose as many of the OpenGL features as possible.</li>
<li>To be very lightweight (the target is to be able to use it without <code>std</code> nor <code>core</code>).</li>
</ul>
<p>To achieve that, luminance is written with several aspects in mind:</p>
<ul class="incremental">
<li>Allocation must be explicitly stated by the user: luminance must avoid allocating as much as possible, since allocations might become both a bottleneck and an issue for the lightweight aspect.</li>
<li>Performance is the first priority; safety comes second. If a feature can be either performant or safe – but not both – it must be performant. Most of the current code is, for our joy, both performant and safe. However, some invariants are left around the place and you might shoot yourself in the foot. This is an issue and some reworking must be done (along with tagging some functions and traits <code>unsafe</code>).</li>
<li>No concept of backends will ever end up in luminance. If it’s decided to switch to <a href="https://www.khronos.org/vulkan">Vulkan</a>, the whole luminance API will and <strong>must</strong> be impacted, so that people can use <a href="https://www.khronos.org/vulkan">Vulkan</a> the best possible way.</li>
<li>A bit like the first point, the code must be written in a way that the generated binary is as small as possible. Generics are not forbidden – they’re actually recommended – but things like crate dependencies are likely to be forbidden (exception for the <code>gl</code> dependency, of course).</li>
<li>Windowing <strong>must not be addressed by luminance</strong>. This is crucial. As a demoscener, if I want to write a 64k with luminance, I must be able to use a library over X11 or the Windows API to setup the OpenGL context myself, set the OpenGL pointers myself, etc. This is not the typical usecase – who cares besides demosceners?! – but it’s still a good advantage since you end up with loose coupling for free.</li>
</ul>
<h2 id="the-new-luminance">The new luminance</h2>
<p>luminance has received more attention lately, and I think it’s a good thing to talk about how to use it. I’ll add examples on GitHub and in its <a href="https://docs.rs/luminance">docs.rs</a> online documentation.</p>
<p>I’m going to do that like a tutorial. It’s easier to read and you can test the code at the same time. Let’s render a triangle!</p>
<blockquote>
<p>Note: keep in mind that you need a nightly compiler to compile luminance.</p>
</blockquote>
<h3 id="getting-your-feet-wet">Getting your feet wet</h3>
<p>I’ll do everything from scratch with you. I’ll work in <code>/tmp</code>:</p>
<pre><code>$ cd /tmp</code></pre>
<p>First things first, let’s set up a <code>lumitest</code> Rust binary project:</p>
<pre><code>$ cargo init --bin lumitest
Created binary (application) project
$ cd lumitest</code></pre>
<p>Let’s edit our <code>Cargo.toml</code> to use luminance. We’ll need two crates:</p>
<ul class="incremental">
<li>The <a href="https://crates.io/crates/luminance">luminance</a> crate.</li>
<li>A way to open the OpenGL context; we’ll use GLFW, so the <a href="https://crates.io/crates/luminance-glfw">luminance-glfw</a> crate.</li>
</ul>
<p>At the time of writing, corresponding versions are <a href="https://crates.io/crates/luminance/0.23.0">luminance-0.23.0</a> and <a href="https://crates.io/crates/luminance-glfw/0.3.2">luminance-glfw-0.3.2</a>.</p>
<p>Add the following <code>[dependencies]</code> section:</p>
<pre class="toml"><code>[dependencies]
luminance = "0.23.0"
luminance-glfw = "0.3.2"</code></pre>
<pre><code>$ cargo check</code></pre>
<p>Everything should be fine at this point. Now, let’s start writing some code.</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">extern</span> <span class="kw">crate</span> luminance;
<span class="kw">extern</span> <span class="kw">crate</span> luminance_glfw;
<span class="kw">use</span> luminance_glfw::<span class="op">{</span>Device, WindowDim, WindowOpt<span class="op">}</span>;
<span class="kw">const</span> SCREEN_WIDTH: <span class="dt">u32</span> = <span class="dv">960</span>;
<span class="kw">const</span> SCREEN_HEIGHT: <span class="dt">u32</span> = <span class="dv">540</span>;
<span class="kw">fn</span> main() <span class="op">{</span>
<span class="kw">let</span> rdev = Device::new(WindowDim::Windowed(SCREEN_WIDTH, SCREEN_HEIGHT), <span class="st">"lumitest"</span>, WindowOpt::default());
<span class="op">}</span></code></pre></div>
<p>The <code>main</code> function creates a <a href="https://docs.rs/luminance-glfw/0.3.2/luminance_glfw/struct.Device.html">Device</a> that is responsible for holding the windowing stuff for us.</p>
<p>Let’s go on:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">match</span> rdev <span class="op">{</span>
<span class="cn">Err</span>(e) => <span class="op">{</span>
<span class="pp">eprintln!</span>(<span class="st">"{:#?}"</span>, e);
::std::process::exit(<span class="dv">1</span>);
<span class="op">}</span>
<span class="cn">Ok</span>(<span class="kw">mut</span> dev) => <span class="op">{</span>
<span class="pp">println!</span>(<span class="st">"let’s go!"</span>);
<span class="op">}</span>
<span class="op">}</span></code></pre></div>
<p>This block will catch any <code>Device</code> errors and will print them to <code>stderr</code> if there are any.</p>
<p>Let’s write the main loop:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="ot">'app</span>: <span class="kw">loop</span> <span class="op">{</span>
<span class="kw">for</span> (_, ev) <span class="kw">in</span> dev.events() <span class="op">{</span> <span class="co">// the pair is an interface mistake; it’ll be removed</span>
<span class="kw">match</span> ev <span class="op">{</span>
WindowEvent::Close | WindowEvent::Key(Key::Escape, _, Action::Release, _) => <span class="kw">break</span> <span class="ot">'app</span>,
_ => ()
<span class="op">}</span>
<span class="op">}</span>
<span class="op">}</span></code></pre></div>
<p>This loop runs forever and will exit if you hit the escape key or quit the application.</p>
<h3 id="setting-up-the-resources">Setting up the resources</h3>
<p>Now, the most interesting thing: rendering the actual triangle! You will need a few things:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">type</span> Position = <span class="op">[</span><span class="dt">f32</span>; <span class="dv">2</span><span class="op">]</span>;
<span class="kw">type</span> RGB = <span class="op">[</span><span class="dt">f32</span>; <span class="dv">3</span><span class="op">]</span>;
<span class="kw">type</span> Vertex = (Position, RGB);
<span class="kw">const</span> TRIANGLE_VERTS: <span class="op">[</span>Vertex; <span class="dv">3</span><span class="op">]</span> = <span class="op">[</span>
(<span class="op">[</span>-<span class="dv">0.5</span>, -<span class="dv">0.5</span><span class="op">]</span>, <span class="op">[</span><span class="dv">0.8</span>, <span class="dv">0.5</span>, <span class="dv">0.5</span><span class="op">]</span>), <span class="co">// red bottom leftmost</span>
(<span class="op">[</span>-<span class="dv">0.</span>, <span class="dv">0.5</span><span class="op">]</span>, <span class="op">[</span><span class="dv">0.5</span>, <span class="dv">0.8</span>, <span class="dv">0.5</span><span class="op">]</span>), <span class="co">// green top</span>
(<span class="op">[</span><span class="dv">0.5</span>, -<span class="dv">0.5</span><span class="op">]</span>, <span class="op">[</span><span class="dv">0.5</span>, <span class="dv">0.5</span>, <span class="dv">0.8</span><span class="op">]</span>) <span class="co">// blue bottom rightmost</span>
<span class="op">]</span>;</code></pre></div>
<p><code>Position</code>, <code>RGB</code> and <code>Vertex</code> define what a vertex is. In our case, we use a 2D position and an RGB color.</p>
<blockquote>
<p>You have a lot of choices here to define the type of your vertices. In theory, you can choose any type you want. However, it must implement the <a href="https://docs.rs/luminance/0.23.0/luminance/vertex/trait.Vertex.html"><code>Vertex</code></a> trait. Have a look at the implementors that already exist for a faster start off!</p>
</blockquote>
<blockquote>
<p><strong>Important</strong>: do not confuse <code>[f32; 2]</code> with <code>(f32, f32)</code>. The former is a single 2D vertex component. The latter is two 1D components. It’ll make a huge difference when writing shaders.</p>
</blockquote>
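<p>To make that concrete, here’s a tiny standalone sketch (plain Rust, no luminance involved): both types share the same memory layout, yet they declare a different number of vertex components, which is what your shader’s <code>layout (location = …)</code> declarations must match.</p>

```rust
use std::mem::size_of;

// A single 2D component: one `vec2` attribute in GLSL.
type Position = [f32; 2];

// Two 1D components: two separate `float` attributes.
type TwoScalars = (f32, f32);

fn main() {
    // The memory layouts are identical…
    assert_eq!(size_of::<Position>(), size_of::<TwoScalars>());

    // …but the types are not interchangeable: one is a single array
    // component, the other a pair of independent components.
    let pos: Position = [-0.5, -0.5];
    let pair: TwoScalars = (-0.5, -0.5);
    println!("one vec2: {:?}, two floats: {:?}", pos, pair);
}
```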
<p><code>TRIANGLE_VERTS</code> is a constant array with three vertices defined in it: the three vertices of our triangle. Let’s pass those vertices to the GPU with the <a href="https://docs.rs/luminance/0.23.0/luminance/tess/struct.Tess.html"><code>Tess</code></a> type:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="co">// at the top location</span>
<span class="kw">use</span> luminance::tess::<span class="op">{</span>Mode, Tess, TessVertices<span class="op">}</span>;
<span class="co">// just above the main loop</span>
<span class="kw">let</span> triangle = Tess::new(Mode::Triangle, TessVertices::Fill(&TRIANGLE_VERTS), <span class="cn">None</span>);</code></pre></div>
<p>This will pass the <code>TRIANGLE_VERTS</code> vertices to the GPU. You’re given back a <code>triangle</code> object. The <a href="https://docs.rs/luminance/0.23.0/luminance/tess/enum.Mode.html"><code>Mode</code></a> is a hint object that states how vertices must be connected to each other. <code>TessVertices</code> lets you slice your vertices – this is typically useful when you use a mapped buffer that contains a dynamic number of vertices.</p>
<p>We’ll need a <em>shader</em> to render that triangle. First, we’ll place its source code in <code>data</code>:</p>
<pre><code>$ mkdir data</code></pre>
<p>Paste this in <code>data/vs.glsl</code>:</p>
<div class="sourceCode"><pre class="sourceCode glsl"><code class="sourceCode glsl"><span class="kw">layout</span> (<span class="dt">location</span> = <span class="dv">0</span>) <span class="dt">in</span> <span class="dt">vec2</span> co;
<span class="kw">layout</span> (<span class="dt">location</span> = <span class="dv">1</span>) <span class="dt">in</span> <span class="dt">vec3</span> color;
<span class="dt">out</span> <span class="dt">vec3</span> v_color;
<span class="dt">void</span> <span class="fu">main</span>() {
<span class="bu">gl_Position</span> = <span class="dt">vec4</span>(co, <span class="dv">0</span>., <span class="dv">1</span>.);
v_color = color;
}</code></pre></div>
<p>Paste this in <code>data/fs.glsl</code>:</p>
<div class="sourceCode"><pre class="sourceCode glsl"><code class="sourceCode glsl"><span class="dt">in</span> <span class="dt">vec3</span> v_color;
<span class="dt">out</span> <span class="dt">vec4</span> frag;
<span class="dt">void</span> <span class="fu">main</span>() {
frag = <span class="dt">vec4</span>(v_color, <span class="dv">1</span>.);
}</code></pre></div>
<p>And add this to your <code>main.rs</code>:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">const</span> SHADER_VS: &<span class="dt">str</span> = <span class="pp">include_str!</span>(<span class="st">"../data/vs.glsl"</span>);
<span class="kw">const</span> SHADER_FS: &<span class="dt">str</span> = <span class="pp">include_str!</span>(<span class="st">"../data/fs.glsl"</span>);</code></pre></div>
<blockquote>
<p>Note: this is not a typical workflow. If you’re interested in shaders, have a look at how I do it in <a href="https://crates.io/crates/spectra">spectra</a>. That is, hot reloading them via SPSL (Spectra Shading Language), which enables you to write GLSL modules and compose them in a single file by just writing functions. The functional programming style!</p>
</blockquote>
<p>As with the tessellation, we need to pass the source to the GPU’s compiler to end up with a shader object:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="co">// add this at the top of your main.rs</span>
<span class="kw">use</span> luminance::shader::program::Program;
<span class="co">// below declaring triangle</span>
<span class="kw">let</span> (shader, warnings) = Program::<Vertex, (), ()>::from_strings(<span class="cn">None</span>, SHADER_VS, <span class="cn">None</span>, SHADER_FS).unwrap();
<span class="kw">for</span> warning <span class="kw">in</span> &warnings <span class="op">{</span>
<span class="pp">eprintln!</span>(<span class="st">"{:#?}"</span>, warning);
<span class="op">}</span></code></pre></div>
<p>Finally, we need to tell luminance which framebuffer we want to render into. It’s simple: the default framebuffer, which ends up being… your screen’s back buffer! This is done this way with luminance:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">use</span> luminance::framebuffer::Framebuffer;
<span class="kw">let</span> screen = Framebuffer::default(<span class="op">[</span>SCREEN_WIDTH, SCREEN_HEIGHT<span class="op">]</span>);</code></pre></div>
<p>And we’re done for the resources. Let’s step into the actual render now.</p>
<h3 id="the-actual-render-the-pipeline">The actual render: the pipeline</h3>
<p>luminance’s approach to rendering is somewhat unintuitive, yet very simple and very efficient: the render pipeline is explicitly defined by the programmer in Rust, on the fly. That means that you must express the actual state the GPU must have for the whole pipeline. Because of the nature of the pipeline, which is an AST (Abstract Syntax Tree), you can batch sub-parts of the pipeline (we call such parts <em>nodes</em>) and you end up with minimal GPU state switches. The theory is as follows:</p>
<ul class="incremental">
<li>At the topmost level, you have the <a href="https://docs.rs/luminance/0.23.0/luminance/pipeline/fn.pipeline.html"><code>pipeline</code></a> function that introduces the concept of <em>shading things to a framebuffer</em>.</li>
<li>Nested, you find the concept of a <em>shader gate</em>. That is, an object linked to its parent (the pipeline) that gives you the concept of <em>shading things with a shader</em>.
<ul class="incremental">
<li>Nested, you find the concept of <em>rendering things</em>. That is, on such nodes you can set GPU state, such as whether you want a depth test, blending, etc.</li>
<li>Nested, you find the concept of a <em>tessellation gate</em>, enabling you to render actual <code>Tess</code> objects.</li>
</ul></li>
</ul>
<p>That deep nesting enables you to batch your objects at a very fine granularity. Also, notice that the functions don’t take slices of <code>Tess</code> or hashmaps of <code>Program</code>. The allocation scheme is completely ignorant of how the data is traversed, which is good: you decide. If you need to borrow things on the fly in a shading gate, you can.</p>
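<p>A toy model in plain Rust can show why that batching matters. The types and names below are hypothetical – they only mimic the shape of the AST, not luminance’s actual API:</p>

```rust
// Toy model of the pipeline AST: a shader node batches all the
// tessellations rendered with that shader (hypothetical types).
struct ShaderNode<'a> {
    shader: &'a str,
    tessellations: Vec<&'a str>,
}

/// Count the GPU shader switches a pipeline would cost: one per shader
/// node, regardless of how many draws that node batches.
fn shader_switches(nodes: &[ShaderNode]) -> usize {
    nodes.len()
}

fn main() {
    let pipeline = vec![
        ShaderNode { shader: "color", tessellations: vec!["triangle", "quad"] },
        ShaderNode { shader: "blur", tessellations: vec!["fullscreen quad"] },
    ];

    // Three draws, but only two shader switches thanks to batching.
    let draws: usize = pipeline.iter().map(|n| n.tessellations.len()).sum();
    assert_eq!(draws, 3);
    assert_eq!(shader_switches(&pipeline), 2);

    for node in &pipeline {
        println!("switch to {:?}, then draw {:?}", node.shader, node.tessellations);
    }
}
```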
<p>Let’s get things started:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">use</span> luminance::pipeline::<span class="op">{</span>entry, pipeline<span class="op">}</span>;
entry(|_| <span class="op">{</span>
pipeline(&screen, <span class="op">[</span><span class="dv">0.</span>, <span class="dv">0.</span>, <span class="dv">0.</span>, <span class="dv">1.</span><span class="op">]</span>, |shd_gate| <span class="op">{</span>
shd_gate.shade(&shader, |rdr_gate, _| <span class="op">{</span>
rdr_gate.render(<span class="cn">None</span>, <span class="cn">true</span>, |tess_gate| <span class="op">{</span>
<span class="kw">let</span> t = &triangle;
tess_gate.render(t.into());
<span class="op">}</span>);
<span class="op">}</span>);
<span class="op">}</span>);
<span class="op">}</span>);</code></pre></div>
<p>We just need one final thing now: since we render to the back buffer of the screen, if we want to see anything appear, we need to <em>swap the buffer chain</em> so that the back buffer becomes the front buffer and the front buffer becomes the back buffer. This is done by wrapping our render code in the <a href="https://docs.rs/luminance-glfw/0.3.2/luminance_glfw/struct.Device.html#method.draw"><code>Device::draw</code></a> function:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust">dev.draw(|| <span class="op">{</span>
entry(|_| <span class="op">{</span>
pipeline(&screen, <span class="op">[</span><span class="dv">0.</span>, <span class="dv">0.</span>, <span class="dv">0.</span>, <span class="dv">1.</span><span class="op">]</span>, |shd_gate| <span class="op">{</span>
shd_gate.shade(&shader, |rdr_gate, _| <span class="op">{</span>
rdr_gate.render(<span class="cn">None</span>, <span class="cn">true</span>, |tess_gate| <span class="op">{</span>
<span class="kw">let</span> t = &triangle;
tess_gate.render(t.into());
<span class="op">}</span>);
<span class="op">}</span>);
<span class="op">}</span>);
<span class="op">}</span>);
<span class="op">}</span>);</code></pre></div>
<p>You should see this:</p>
<div class="figure">
<img src="https://i.imgur.com/x3jC9qm.png" />
</div>
<p>As you can see, the code is pretty straightforward. Let’s get deeper, and let’s kick some time in!</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">use</span> std::time::Instant;
<span class="co">// before the main loop</span>
<span class="kw">let</span> t_start = Instant::now();
<span class="co">// in your main loop</span>
<span class="kw">let</span> t_dur = t_start.elapsed();
<span class="kw">let</span> t = (t_dur.as_secs() <span class="kw">as</span> <span class="dt">f64</span> + t_dur.subsec_nanos() <span class="kw">as</span> <span class="dt">f64</span> * <span class="dv">1</span>e-<span class="dv">9</span>) <span class="kw">as</span> <span class="dt">f32</span>;</code></pre></div>
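<p>As a side note, if you’re on a more recent toolchain than the one this post targets, <code>Duration</code> can do that conversion directly: <code>as_secs_f32</code> was stabilized in Rust 1.38, well after this post was written.</p>

```rust
use std::time::Instant;

fn main() {
    let t_start = Instant::now();
    let t_dur = t_start.elapsed();

    // Manual conversion, as in the snippet above:
    let t_manual = (t_dur.as_secs() as f64 + t_dur.subsec_nanos() as f64 * 1e-9) as f32;

    // Direct conversion, available since Rust 1.38:
    let t_direct = t_dur.as_secs_f32();

    // Both conversions agree (up to float rounding).
    assert!((t_manual - t_direct).abs() < 1e-3);
    println!("t = {}", t_direct);
}
```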
<p>We have the time. Now, we need to pass it down to the GPU (i.e. the shader). luminance handles that kind of thing with two concepts:</p>
<ul class="incremental">
<li>Uniforms.</li>
<li>Buffers.</li>
</ul>
<p>Uniforms are a good match when you want to send data to a specific shader, like a value that customizes the behavior of a shading algorithm.</p>
<p>Because buffers are shared, you can use them to share data between shaders, removing the need to pass the data to every shader by hand – you only pass the index of the buffer that contains the data.</p>
<p>We won’t cover buffers this time.</p>
<p>Because of type safety, luminance requires you to state the types of the uniforms the shader contains. We only need the time, so let’s get this done:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="co">// you need to alter this import</span>
<span class="kw">use</span> luminance::shader::program::<span class="op">{</span>Program, ProgramError, Uniform, UniformBuilder, UniformInterface, UniformWarning<span class="op">}</span>;
<span class="kw">struct</span> TimeUniform(Uniform<<span class="dt">f32</span>>);
<span class="kw">impl</span> UniformInterface <span class="kw">for</span> TimeUniform <span class="op">{</span>
<span class="kw">fn</span> uniform_interface(builder: UniformBuilder) -> <span class="dt">Result</span><(<span class="kw">Self</span>, <span class="dt">Vec</span><UniformWarning>), ProgramError> <span class="op">{</span>
<span class="co">// this will fail if the "t" variable is not used in the shader</span>
<span class="co">//let t = builder.ask("t").map_err(ProgramError::UniformWarning)?;</span>
<span class="co">// I rather like this one: we just forward up the warning and use the special unbound uniform</span>
<span class="kw">match</span> builder.ask(<span class="st">"t"</span>) <span class="op">{</span>
<span class="cn">Ok</span>(t) => <span class="cn">Ok</span>((TimeUniform(t), <span class="dt">Vec</span>::new())),
<span class="cn">Err</span>(e) => <span class="cn">Ok</span>((TimeUniform(builder.unbound()), <span class="pp">vec!</span><span class="op">[</span>e<span class="op">]</span>))
<span class="op">}</span>
<span class="op">}</span>
<span class="op">}</span></code></pre></div>
<p>The <a href="https://docs.rs/luminance/0.23.0/luminance/shader/program/struct.UniformBuilder.html#method.unbound"><code>UniformBuilder::unbound</code></a> function is a simple one that gives you any uniform you want: the resulting uniform object will just do nothing when you pass values in. It’s a way to say <em>“Okay, I don’t use that in the shader yet, but don’t fail, it’s not really an error.”</em> Handy.</p>
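<p>The trick is easy to picture with a toy model in plain Rust – again, these are hypothetical types, not luminance’s actual implementation: an unbound uniform simply has no GPU location attached, so updating it is a silent no-op.</p>

```rust
// Toy model of a bound vs. unbound uniform (hypothetical types).
enum Uniform {
    /// A real uniform, with its (pretend) GPU location.
    Bound { location: i32, value: f32 },
    /// The "unbound" uniform: updates are silently ignored.
    Unbound,
}

impl Uniform {
    fn update(&mut self, x: f32) {
        match self {
            Uniform::Bound { value, .. } => *value = x,
            Uniform::Unbound => (), // no-op: "don’t fail, it’s not really an error"
        }
    }
}

fn main() {
    let mut t_bound = Uniform::Bound { location: 0, value: 0. };
    let mut t_unbound = Uniform::Unbound;

    // The same calling code works for both; only the bound one stores the value.
    t_bound.update(3.14);
    t_unbound.update(3.14);

    if let Uniform::Bound { location, value } = t_bound {
        println!("uniform at location {} = {}", location, value);
        assert_eq!(value, 3.14);
    }
}
```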
<p>And now, all the magic: how do we access that uniform value? It’s simple: via types! Have you noticed the type of our <code>Program</code>? For the record:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> (shader, warnings) = Program::<Vertex, (), ()>::from_strings(<span class="cn">None</span>, SHADER_VS, <span class="cn">None</span>, SHADER_FS).unwrap();</code></pre></div>
<p>See how the type is parameterized with three type variables:</p>
<ul class="incremental">
<li>The first one – here <code>Vertex</code>, our own type – is for the <em>input</em> of the shader program.</li>
<li>The second one is for the <em>output</em> of the shader program. It’s currently not used at all by luminance but is reserved, as it will be used later to enforce even further type safety.</li>
<li>The third and last is for the <em>uniform interface</em>.</li>
</ul>
<p>You guessed it: we need to change the third parameter from <code>()</code> to <code>TimeUniform</code>:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> (shader, warnings) = Program::<Vertex, (), TimeUniform>::from_strings(<span class="cn">None</span>, SHADER_VS, <span class="cn">None</span>, SHADER_FS).unwrap();</code></pre></div>
<p>And <em>that’s all</em>. Whenever you <code>shade</code> with a <code>ShaderGate</code>, the type of the shader object is inspected, and you’re handed the uniform interface:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust">shd_gate.shade(&shader, |rdr_gate, uniforms| <span class="op">{</span>
uniforms.<span class="dv">0.</span>update(t);
rdr_gate.render(<span class="cn">None</span>, <span class="cn">true</span>, |tess_gate| <span class="op">{</span>
<span class="kw">let</span> t = &triangle;
tess_gate.render(t.into());
<span class="op">}</span>);
<span class="op">}</span>);</code></pre></div>
<p>Now, change your fragment shader to this:</p>
<div class="sourceCode"><pre class="sourceCode glsl"><code class="sourceCode glsl"><span class="dt">in</span> <span class="dt">vec3</span> v_color;
<span class="dt">out</span> <span class="dt">vec4</span> frag;
<span class="kw">uniform</span> <span class="dt">float</span> t;
<span class="dt">void</span> <span class="fu">main</span>() {
frag = <span class="dt">vec4</span>(v_color * <span class="dt">vec3</span>(<span class="bu">cos</span>(t * .<span class="fu">25</span>), <span class="bu">sin</span>(t + <span class="dv">1</span>.), <span class="bu">cos</span>(<span class="fl">1.25</span> * t)), <span class="dv">1</span>.);
}</code></pre></div>
<p>And enjoy the result! Here’s the <a href="https://gist.github.com/phaazon/268d5c0285c6c7cba90d5cea1b99db76">gist</a> that contains the whole <code>main.rs</code>.</p>
Rust GLSL crate (2017-07-30)

<p>I’ve been working on my <a href="https://crates.io/crates/glsl">glsl</a> crate for almost two months now. This crate exposes a GLSL450 compiler that enables you to parse GLSL-formatted sources into memory in the form of an AST. Currently, that AST is everything you get from the parsing process – you’re free to do whatever you want with it. In the coming days, I’ll write a GLSL writer (so that I can check that I can parse GLSL to GLSL…). I’d love to see contributions from Vulkan people to write a SPIR-V backend, though!</p>
<p>Just for the record, the initial goal I had in mind was to parse a subset of GLSL for my <a href="https://github.com/phaazon/spectra">spectra</a> demoscene framework. I’ve been planning to write my own GLSL-based shading language (with modules, composability, etc. etc.) and hence I need to be able to parse GLSL sources. Because I wanted to share my effort, I decided to create a dedicated project and here we are with a GLSL crate.</p>
<p>Currently, you can successfully parse a GLSL450-formatted source (or part of it, I expose all the intermediary parsers as well as it’s required by my needs to create another shading language over GLSL). See for instance <a href="https://gist.github.com/phaazon/fbe7a9c26bdea4a7262d2ea028c578ce">this shading snippet</a> parsed to an AST.</p>
<p>However, because the project is still very young, there are a lot of missing features:</p>
<ul class="incremental">
<li>I followed the <a href="https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.4.50.pdf">official GLSL450 specifications</a>, which is great, but the types are not very intuitive – some refactoring must be done here;</li>
<li>I use <a href="https://crates.io/crates/nom">nom</a> as a parser library. It’s not perfect but it does its job very well. However, error reporting is pretty much absent right now (if you have an error in your source, you’re basically left with an <code>Error(SomeVariant)</code> flag, which is completely unusable);</li>
<li>I wrote about 110 unit tests to ensure the parsing process is correct. However, there are, for now, zero semantic checks.</li>
<li>About those semantic checks: the crate is also missing a semantic analysis. I don’t really know what to do here, because it’s a lot of work (like ensuring that the value assigned to a float is a real float and not a boolean, or that a function that must return a vec3 returns a vec3 and not something else, etc.). This is not a trivial task, and because this is already done by the OpenGL drivers, I won’t provide this feature yet. However, to me, it’d be of great value.</li>
</ul>
<p>If you’re curious, you can start using the crate with <code>glsl = "0.2.1"</code> in your <code>Cargo.toml</code>. You probably want to use the <code>translation_unit</code> parser, which is the most external parser (it parses an actual shader). If you’re looking for something similar to my need and want to parse subparts of GLSL, feel free to dig in the <a href="https://docs.rs/glsl">documentation</a>, and especially the <code>parsers</code> module, that exports all the parsers available to parse GLSL’s parts.</p>
<p>Either way, you have to pass a byte slice. If your source’s type is <code>String</code> or <code>&str</code>, you can use the <code>str::as_bytes()</code> function to feed the input of the parsers.</p>
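<p>For instance, the conversion itself is plain <code>std</code> and independent of the glsl crate:</p>

```rust
fn main() {
    // A GLSL source held as a &str…
    let src = "void main() { gl_FragColor = vec4(1.); }";

    // …converted to the &[u8] the nom-based parsers expect.
    let bytes: &[u8] = src.as_bytes();

    assert_eq!(bytes.len(), src.len());
    assert_eq!(&bytes[..4], &b"void"[..]);
    println!("{} bytes ready to feed to a parser", bytes.len());
}
```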
<blockquote>
<p>Not all parsers are exported, only the most important ones. You will not find an octal parser, for instance, while it’s defined and used in the crate internally.</p>
</blockquote>
<h1 id="call-to-contribution">Call to contribution</h1>
<p>If you think it’s worth it, I’m looking for people who would like to:</p>
<ul class="incremental">
<li>write a GLSL-to-SPIR-V writer: it’s an easy task, because, pragmatically speaking, you just have to write a function of the form <code>AST -> Result<SPIRV, _></code>;</li>
<li>test the crate and provide feedback about any error you’d find! Please do not open an issue if you have an error in your source and find <em>“the error from the parsers is not useful”</em>, because it’s already well established this is a problem and I’m doing my best to solve it;</li>
<li>any kind of contributions you think could be interesting for the crate.</li>
</ul>
<p>Happy coding, and keep the vibe!</p>
2017-07-23 On programming workflows<p>For the last month, I had technical talks with plenty of programmers from all around the globe – thank you IRC! Something interesting that often showed up in the discussions was the actual <em>workflow</em> we have when writing code. Some people are used to <a href="https://en.wikipedia.org/wiki/Integrated_development_environment">IDEs</a> and won’t change their tools for anything else. Others use very basic text editors (I even know someone using <a href="https://www.barebones.com/products/bbedit">BBEdit</a> for his day job! crazy, isn’t it?). I think it’s always a good thing to discuss that kind of topic, because it might give you more insight into your own workflow, help you improve it, share it or just show off your nice color scheme.</p>
<p>I’ll be talking about my editor, the way I’ve set it up and my typical workflows.</p>
<h1 id="my-editor">My editor</h1>
<p>I’ve tried a lot of editors over the last ten years. I spent a year using emacs but eventually discovered – erm, learned! – vim and completely fell in love with the <em>modal</em> editor; that was a bit less than ten years ago. I then tried a lot of other editors (out of curiosity) but failed to find anything that would serve me better than vim for writing code. I don’t want to start an editor war; this is just my very personal point of view on editing. The concept of modes in vim enables me to use very few keystrokes to perform what I want to do (moving around, commands, etc.) and I feel very productive that way.</p>
<blockquote>
<p>A year ago, a friend advised me to switch to <a href="https://neovim.io">neovim</a>, which I did. My editor of choice is thus neovim, but it’s so close to vim that I tend to just write (neo)vim. :)</p>
</blockquote>
<p>I don’t use any other editing tool. I even use neovim for taking notes while in a meeting or when I need to format something in Markdown. I just use it everywhere.</p>
<h1 id="my-neovim-setup">My neovim setup</h1>
<p>I use several plugins:</p>
<pre><code>Plugin 'VundleVim/Vundle.vim'
Plugin 'ctrlpvim/ctrlp.vim'
Plugin 'gruvbox'
Plugin 'neovimhaskell/haskell-vim'
Plugin 'itchyny/lightline.vim'
Plugin 'rust-lang/rust.vim'
Plugin 'jacoborus/tender.vim'
Plugin 'airblade/vim-gitgutter'
Plugin 'tikhomirov/vim-glsl'
Plugin 'plasticboy/vim-markdown'
Plugin 'cespare/vim-toml'
Plugin 'mzlogin/vim-markdown-toc'
Plugin 'ElmCast/elm-vim'
Plugin 'raichoo/purescript-vim'
Plugin 'easymotion'
Plugin 'scrooloose/nerdtree'
Plugin 'ryanoasis/vim-devicons'
Plugin 'tiagofumo/vim-nerdtree-syntax-highlight'
Plugin 'mhinz/vim-startify'
Plugin 'Xuyuanp/nerdtree-git-plugin'
Plugin 'tpope/vim-fugitive'
Plugin 'MattesGroeger/vim-bookmarks'
Plugin 'luochen1990/rainbow'</code></pre>
<h2 id="vundle">Vundle</h2>
<p>A package manager. It just takes the list you read above and clones / keeps updated the git repositories of the given vim packages. It’s a must-have. There are alternatives like Pathogen, but Vundle is very simple to set up and you don’t have to care about the file system: it takes care of it for you.</p>
<h2 id="ctrlp">ctrlp</h2>
<p>This one is a must-have. It gives you a fuzzy search buffer that traverses files, MRU, tags, bookmarks, etc.</p>
<div class="figure">
<img src="http://phaazon.net/pub/ctrlp.png" />
</div>
<p>I mapped the file search to the <code>, f</code> keys combination and the tag fuzzy search to <code>, t</code>.</p>
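<p>For what it’s worth, the corresponding lines in my configuration look something like this – a sketch based on ctrlp’s documented options; my actual mappings may differ slightly:</p>

```vim
" use , f as the main file fuzzy-search trigger
let g:ctrlp_map = ',f'
" map , t to the tag fuzzy search
nnoremap ,t :CtrlPTag<CR>
```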
<h2 id="gruvbox">gruvbox</h2>
<p>The colorscheme I use. I don’t put an image here since you can find several examples online.</p>
<blockquote>
<p>This colorscheme also exists for a lot of other applications, among terminals and window managers.</p>
</blockquote>
<h2 id="haskell-vim">haskell-vim</h2>
<p>Because I write a lot of Haskell, I need that plugin for syntax highlighting and linting… mostly.</p>
<h2 id="lightline">lightline</h2>
<p>The <a href="https://github.com/itchyny/lightline.vim">Lightline</a> vim statusline. A popular alternative is <a href="https://github.com/powerline/powerline">Powerline</a> for instance. I like Lightline because it’s lightweight and has everything I want.</p>
<h2 id="rust.vim">rust.vim</h2>
<p>Same as for Haskell: I write a lot of Rust code, so I need the language support in vim.</p>
<h2 id="tender">tender</h2>
<p>A colorscheme for statusline.</p>
<h2 id="vim-gitgutter">vim-gitgutter</h2>
<p>I highly recommend this one. It shows the diff as icons in the sign column (next to the line numbers).</p>
<div class="figure">
<img src="http://phaazon.net/pub/vim-gitgutter.png" />
</div>
<h2 id="vim-glsl">vim-glsl</h2>
<p>GLSL support in vim.</p>
<h2 id="vim-markdown">vim-markdown</h2>
<p>Markdown support in vim.</p>
<h2 id="vim-toml">vim-toml</h2>
<p>TOML support in vim.</p>
<blockquote>
<p>TOML is used in Cargo.toml, the configuration file for Rust projects.</p>
</blockquote>
<h2 id="vim-markdown-toc">vim-markdown-toc</h2>
<p>You’re gonna love this one. It enables you to insert a table of contents wherever you want in a Markdown document. As soon as you save the buffer, the plugin automatically refreshes the TOC for you, keeping it up to date. A must-have if you want tables of contents in your specifications or RFC documents.</p>
<h2 id="elm-vim">elm-vim</h2>
<p>Elm support in vim.</p>
<h2 id="purescript-vim">purescript-vim</h2>
<p>Purescript support in vim.</p>
<h2 id="easymotion">easymotion</h2>
<p>A must have. As soon as you hit the corresponding keys, it will replace all words in your visible buffer by a set of letters (typically one or two), enabling you to just press the associated characters to jump to that word. This is the <em>vim motion</em> on steroids.</p>
<h2 id="nerdtree">nerdtree</h2>
<p>A file browser that appears at the left part of the current buffer you’re in. I use it to discover files in a file tree I don’t know – I use it very often at work when I work on a legacy project, for instance.</p>
<h2 id="vim-devicons">vim-devicons</h2>
<p>A neat package that adds icons to vim and several plugins (nerdtree, ctrlp, etc.). It’s not a must-have but I find it gorgeous so… just install it! :)</p>
<h2 id="vim-nerdtree-syntax-highlight">vim-nerdtree-syntax-highlight</h2>
<p>Adds support for more formats and file types to nerdtree.</p>
<h2 id="vim-startify">vim-startify</h2>
<p>A <em>cute</em> plugin that will turn the start page of vim into a MRU page, enabling you to jump to the given file by just pressing its associated number.</p>
<blockquote>
<p>It also features a cow that gives you fortune catchphrases. Me gusta.</p>
</blockquote>
<h2 id="nerdtree-git-plugin">nerdtree-git-plugin</h2>
<p>Git support in nerdtree: it adds markers in front of files that have changed, been added, etc.</p>
<h2 id="vim-fugitive">vim-fugitive</h2>
<p>A good Git integration package for vim. I use it mostly for its <code>Gdiff</code> diff tooling directly in vim, even though I like using the command line directly for that. The best feature of this plugin is the integrated blaming function, giving you the author of each line directly in a read-only vim buffer.</p>
<h2 id="vim-bookmarks">vim-bookmarks</h2>
<p>Marks on steroids.</p>
<h2 id="rainbow">rainbow</h2>
<p>This little plugin is very neat, as it adds colors to matching symbols so that I can easily see where I am.</p>
<div class="figure">
<img src="http://phaazon.net/pub/rainbow_vim.png" />
</div>
<h1 id="workflow-in-rust">Workflow in Rust</h1>
<p>I’m a rustacean. I do a lot of Rust in my spare time. My typical workflow is the following:</p>
<ol class="incremental" style="list-style-type: decimal">
<li>I edit code in neovim</li>
<li>Depending on the project (whether I have a robust unit tests base or not or whether I’m writing a library or an app), I use several <code>cargo</code> commands:</li>
</ol>
<ul class="incremental">
<li>for a library project, I split my screen in two and have a <code>cargo watch -x test</code> running; this command is a file watcher that will run all the tests suites in the project as soon as a file changes;</li>
<li>for an app project, I split my screen in two and have a <code>cargo watch</code> – equivalent to <code>cargo watch -x check</code> – running; this command is a file watcher that type-checks the whole project but doesn’t compile it down to an actual binary; it’s just a check, it doesn’t produce anything you can run. You can manually run that command with <code>cargo check</code>. See more <a href="https://github.com/rust-lang/cargo/pull/3296#event-893283611">here</a>;</li>
<li>for other use cases, I tend to run <code>cargo check</code> by hand and run <code>cargo build</code> to test the actual binary.</li>
</ul>
<ol class="incremental" start="3" style="list-style-type: decimal">
<li>Because I’m a unix lover, I couldn’t work correctly without a terminal, so I use neovim in a terminal and switch from the console to the editor with the keybinding <code>C-z</code> and the <code>jobs</code> and <code>fg</code> commands. I do that especially to create directories, issue git commands, deploy things, etc. It’s especially ultra cool to run a <code>rg</code> search or even a <code>find</code> / <code>fd</code>. I sometimes do that directly in a neovim buffer – with the <code>:r!</code> command – when I know I need to edit things from the results. Bonus: refactoring with <code>tr</code>, <code>sed</code> and <code>find -exec</code> is <strong>ultra</strong> neat.</li>
</ol>
<h1 id="workflow-in-haskell">Workflow in Haskell</h1>
<p>The workflow is almost exactly the same, except that I use the <code>stack build --fast --file-watch</code> command to get a file watcher.</p>
<blockquote>
<p>Haskell’s stack doesn’t currently have the awesome <code>check</code> command Rust’s cargo has. Duh :(</p>
</blockquote>
<p>I also have a similar workflow for other languages I work in, like Elm, even though I use standard unix tools for the file watching process.</p>
<h1 id="git-workflow">Git workflow</h1>
<p>Aaaah… <code>git</code>. What could I do without it? Pretty much nothing. <code>git</code> is such an important piece of software, and it brings with it an important project philosophy and set of behaviors to adopt.</p>
<p>What I find very cool in git – besides the tool itself – is everything around it. For instance, on GitHub, you have the concept of the Pull Request (PR) – called a Merge Request (MR) in GitLab. Associated with a good set of options – like disallowing pushes to the <code>master</code> branch, hooks, forcing people to address all comments on their MR – this allows for better code reviews and overall a much better quality assurance of the produced code than you could have without all this infrastructure. Add a good set of DevOps practices for deployment and production-related issues and you have a working team that has no excuse to produce bad code!</p>
<h2 id="some-git-candies-i-love-working-with">Some git candies I love working with</h2>
<p>My neovim fugitive plugin lets me open a special buffer with the <code>:Gblame</code> command that gives me a <code>git blame</code> annotation of the file. This might seem trivial, but it’s very useful, especially at work when I have my colleagues next to me – it’s always better to ask them directly about something than to guess.</p>
<div class="figure">
<img src="http://phaazon.net/pub/vim_fugitive_blame.png" />
</div>
<p>Another one that I love is <code>:Gdiff</code>, which gives you a <code>git diff</code> of the modifications you’re about to stage. I often just do a <code>git diff</code> in my terminal, but I also like how this plugin nails it. Very practical!</p>
<div class="figure">
<img src="http://phaazon.net/pub/vim_fugitive_diff.png" />
</div>
<h1 id="general-note-on-workflow">General note on workflow</h1>
<p>It’s always funny to actually witness differences in workflows, especially at work. People who mostly use IDEs are completely overwhelmed by my workflow. I was astonished that some people at work hadn’t even heard of <code>sed</code> before – they even had to make a Google search! I’m a supporter of the philosophy that one should use the tools they feel comfortable with and that there’s no “ultimate” tool for everyone. However, for my very own person, I really can’t stand IDEs, with all the buttons and required clicks you have to perform all over the screen. I really think it’s a waste of time, while using a modal editor like neovim with a bépo keyboard layout (French dvorak) and going back and forth to the terminal is incredibly simple, yet powerful.</p>
<p>I had a pretty good experience with <a href="https://atom.io">Atom</a>, a modern editor. But between the fact it’s written with web technologies, the fact it’s slow as f*ck as soon as you start adding your own tooling (extensions), its pretty bad and incomprehensible “Hey, I do whatever the f*ck I want and I’ll just reset your precious keybindings!” behavior, and all the weird bugs – some of my extensions just won’t work if I have an empty pane open, wtf?! – well, I became completely convinced that GUI interfaces, at least for coding and being productive, are clearly not for me. With my current setup, my hands <em>never</em> move from the keyboard – my wrists are completely static. With all the candies like easymotion, ctrlp, etc., I just can’t find any other setup faster and comfier than this one.</p>
<p>There’s even an extra bonus to my setup: because I mostly use unix tools and neovim, it’s pretty straightforward to remote-edit something via ssh, because everything happens in a terminal. That’s not something you can do easily with Atom, Sublime Text or any other editors / IDEs – and you even pay for that shit! No offence!</p>
<p>However, there’s a caveat: because pretty much everything I do is linked to my terminal, the user experience relies mostly on the performance of the terminal. Using a bad terminal will result in an overall pretty bad experience, be it editing, compiling, git or ssh. That’s why I keep lurking at new terminal emulators – <a href="https://github.com/jwilm/alacritty">alacritty</a> seems very promising, yet it’s still too buggy and lacks too many features to be production-ready for me – but it’s written in Rust and is GPU-accelerated, hells yeah!</p>
<h1 id="conclusion">Conclusion</h1>
<p>Whoever you are, whatever you do, whomever you work with, I think the most important thing about workflow is to find the one that fits your needs the most. I have a profound, deep disgust for proprietary and closed software like Sublime Text, and for IDEs that use GUIs where keyboard shortcuts are just better. To me, the problem is the learning curve and actually wanting to climb it – because yes, learning (neo)vim in detail and mastering all of its subtleties is not something you’ll do in two weeks; it might take months or years, but it’s worth it. However, as I said, if you just feel good with your IDE, I will not try to convert you to a modal editor or a unix-based workflow… because you wouldn’t be as productive as you already are.</p>
<p>Keep the vibe!</p>
2017-04-20 Postmortem #1 – Revision 2017<p>On the weekend of the 14th – 17th of April 2017, I attended the <a href="https://2017.revision-party.net">easter demoparty Revision 2017</a> for the fourth time. This demoparty is the biggest in the world so far and gathers around a thousand people coming from all over the globe. If you’re a demomaker, a demoscene enthusiast or just curious about it, that’s the party to go to. It hosts plenty of competitions, including <em>photos</em>, <em>modern graphics</em>, <em>oldschool graphics</em>, <em>games</em>, <em>size-limited demos</em> (what we call <em>intros</em>), <em>demos</em>, <em>tracked and streamed music</em>, <em>wild</em>, <em>live compo</em>, etc. It’s massive.</p>
<p>So, as always, once a year, I attended Revision. But this year, it was a bit different for me. Revision is <em>very</em> impressive and most of the <em>“big demogroups”</em> release productions they’ve been working on for months or even years. I tend to think <em>“If I release something here, I’ll just be kind of muted by all those massive productions.”</em> Well, less than two weeks before Revision 2017, I was contacted by another demogroup. They asked me to write an <em>invitro</em> – a kind of <em>intro</em> or <em>demo</em> acting as a communication production to invite people to go to another party. In my case, I was asked to make the <a href="http://outlinedemoparty.nl">Outline 2017</a> invitro. Outline was the first party I attended years ago, so I immediately accepted and started to work on something. That was something like 12 days before the Revision deadline.</p>
<p>I have to say: it was a challenge. All the productions I had written before were made in about a month and a half and featured less content than the Outline invitation. I had to write a lot of code from scratch. <em>A lot</em>. But it was also a very nice project to test my demoscene framework, written in Rust – you can find <a href="https://github.com/phaazon/spectra">spectra here</a> or <a href="https://crates.io/crates/spectra">here</a>.</p>
<p>An hour before hitting the deadline, the beam team told me their Ubuntu compo machine died and that it would be neat if I could port the demo to Windows. I rushed like a fool to make a port – I even forked and modified my OpenAL dependency! – and I did it in 35 minutes. I’m still a bit surprised yet proud that I made it through!</p>
<p>Anyway, this post is not about bragging. It’s about hindsight. It’s a postmortem. I did the same for <a href="http://www.pouet.net/prod.php?which=67966">Céleri Rémoulade</a>, as I was the only one working on it – music, gfx, direction and obviously the Rust code. I want to draw up a list of <em>what went wrong</em> and <em>what went right</em>. First of all, for me, so that I have axes of improvement for the next set of demos I’ll make. And also to share those thoughts so that people can have a sneak peek into the internals of what I mostly do – I do a lot of things! :D – as a hobby in my spare time.</p>
<blockquote>
<p>You can find the link to the production <a href="http://www.pouet.net/prod.php?which=69698">here</a> (there are YouTube links if you need them).</p>
</blockquote>
<h1 id="what-went-wrong">What went wrong</h1>
<p>Sooooooooo… What went wrong. Well, a lot of things! <strong>spectra</strong> was designed to build demo productions in the first place, and it’s pretty good at it. But something that I have to enhance is the <em>user interaction</em>. Here’s a list of what went wrong in a concrete way.</p>
<h2 id="hot-reloading-went-wiiiiiiiiiiild">Hot-reloading went wiiiiiiiiiiild²</h2>
<p>With that version of <strong>spectra</strong>, I added the possibility to <em>hot-reload</em> almost everything I use as a resource: shaders, textures, meshes, objects, cameras, animation splines, etc. I edit the file, and as soon as I save it, it gets hot-reloaded in realtime, without having to interact with the program (for the curious ones, I use the straightforward <a href="https://crates.io/crates/notify">notify</a> crate for registering callbacks to handle file system changes). This is great and it saves a <strong>lot</strong> of time – Rust compilation is slow, and that’s a lesson I learned from Céleri Rémoulade: repeatedly closing the program, making a change, compiling and then running it again is a waste of time.</p>
<p>So what’s the issue with that? Well, the main problem is that in order to implement hot-reloading, I wanted performance and something very simple. So I decided to use <em>shared mutable state</em>. As a <strong>Haskeller</strong>, I kind of offended myself there – laughter! Yeah, in the Haskell world, we try hard to avoid shared state – <code>IORef</code> – because it’s not referentially transparent and reasoning about it is difficult. However, I strongly tend to think that in some very specific cases, you need such side effects. I’m torn, but I think it’s the way to go here.</p>
<p>Well, in Rust, shared mutable state is implemented via two types: <code>Rc/Arc</code> and <code>Cell/RefCell</code>.</p>
<p>The former is a runtime implementation of the Rust <em>borrowing rules</em> and enables you to share a value. The borrowing rules are not enforced at compile-time anymore but dynamically checked. It’s great because in some cases, you can’t know how long your values will be borrowed for or live. It’s also dangerous because you have to pay extra attention to how you borrow your data – since it’s checked at runtime, you can literally crash your program if you’re not extra careful.</p>
<blockquote>
<p><code>Rc</code> means <em>ref counted</em> and <code>Arc</code> means <em>atomic-ref counted</em>. The former is for values that stay on the same and single thread; the latter is for sharing between threads.</p>
</blockquote>
<p><code>Cell/RefCell</code> are very interesting types that provide <em>interior mutation</em>. By default, Rust gives you <em>exterior mutation</em>: if you have a value and its address, you can mutate what you have at that address. <em>Interior mutation</em>, on the other hand, is introduced by the <code>Cell</code> and <code>RefCell</code> types. Those types enable you to mutate the content of an object stored at a given address without having the exterior mutation property. It’s a bit technical and Rust-specific, but it’s often used to mutate the content of a value via a function taking an immutable value. Imagine an immutable value that only holds a pointer. Exterior mutation would give you the power to change what this pointer points to. Interior mutation would give you the power to change the object pointed to by this pointer.</p>
<blockquote>
<p><code>Cell</code> only accepts values that can be copied bit-wise, while <code>RefCell</code> works with any type and hands out references to its content.</p>
</blockquote>
<p>Now, if you combine both – <code>Rc<RefCell<_>></code>, you end up with a single-thread shareable – <code>Rc<_></code> – mutable – <code>RefCell<_></code> – value. If you have a value of type <code>Rc<RefCell<u32>></code> for instance, that means you can clone that integer and store it everywhere in the same thread, and at any time, borrow it and inspect and/or mutate it. All copies of the value will observe the change. It’s a bit like C++’s <code>shared_ptr</code>, but it’s safer – thank you Rust!</p>
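<p>To make that concrete, here’s a tiny, self-contained illustration of the sharing behavior described above (the names are mine, not spectra’s):</p>

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // a single-threaded, shareable, mutable integer
    let counter: Rc<RefCell<u32>> = Rc::new(RefCell::new(0));

    // cloning the Rc only bumps a reference count; both handles
    // point to the very same RefCell
    let alias = counter.clone();

    // interior mutation through an immutable handle
    *alias.borrow_mut() += 10;

    // every clone observes the change
    assert_eq!(*counter.borrow(), 10);
}
```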
<p>So what went wrong with that? Well, the borrow part. Because Rust is about safety, you still need to tell it how you want to borrow at runtime. This is done with the <a href="https://doc.rust-lang.org/std/cell/struct.RefCell.html#method.borrow"><code>RefCell::borrow()</code></a> and <a href="https://doc.rust-lang.org/std/cell/struct.RefCell.html#method.borrow_mut"><code>RefCell::borrow_mut()</code></a> functions. Those functions return special objects that hold the borrow as long as they live. When they go out of scope, the borrow is released.</p>
<p>So any time you want to use an object that is hot-reloadable with my framework, you have to call one of the borrow functions presented above. You end up with a lot of borrows, and you have to keep in mind that you can literally crash your program if you violate the borrowing rules. This is a nasty issue. For instance, consider:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> cool_object = …; <span class="co">// Rc<RefCell<Object>>, for instance</span>
<span class="kw">let</span> cool_object_ref = cool_object.borrow_mut();
<span class="co">// mutate cool_object</span>
just_read(&cool_object.borrow()); <span class="co">// borrowing rule violated here because a mut borrow is in scope</span></code></pre></div>
<p>As you can see, it’s pretty simple to fuck up the program if you don’t pay extra attention to what you’re doing with your borrow. To solve the problem above, you’d need a smaller scope for the mutable borrow:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> cool_object = …; <span class="co">// Rc<RefCell<Object>>, for instance</span>
{
<span class="kw">let</span> cool_object_ref = cool_object.borrow_mut();
<span class="co">// mutate cool_object</span>
}
just_read(&cool_object.borrow()); <span class="co">// fine: the mutable borrow was released at the end of the scope above</span></code></pre></div>
<p>So far, I haven’t really spent time trying to fix that, but that’s something I have to figure out.</p>
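<p>One mitigation I could look into – just an idea, not what spectra currently does – is the non-panicking API of <code>RefCell</code>, which turns a borrow violation into a recoverable <code>Result</code>:</p>

```rust
use std::cell::RefCell;

fn main() {
    let cool_object = RefCell::new(42u32);

    let mut_borrow = cool_object.borrow_mut();

    // try_borrow() reports the conflict instead of panicking while
    // the mutable borrow is still alive
    assert!(cool_object.try_borrow().is_err());

    drop(mut_borrow);

    // once the mutable borrow is released, borrowing succeeds again
    assert!(cool_object.try_borrow().is_ok());
}
```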
<h2 id="resources-declaration-in-code">Resources declaration in code</h2>
<p>This is a bit tricky. As a programmer, I’m used to writing algorithms and smart architectures to transform data and solve problems. I’m given inputs and I provide the outputs – the solutions. However, a demoscene production is special: you don’t have inputs. You create artsy audiovisual outputs from nothing but time. So you don’t really write code to solve a problem. You write code to create something that will be shown on a screen or played in headphones. This aspect of demo coding has an impact on the style and the way you code – especially in crunch time. I have to say, I was pretty bad on that front with this demo. To me, code should only be about transformations – that’s why I love Haskell so much. But my code is clearly not.</p>
<p>If you know the <code>let</code> keyword in Rust, well, imagine hundreds and hundreds of lines starting with <code>let</code> in a single function. That’s most of my demo. In rush time, I had to declare a <em>lot</em> of things so that I can use them and transform them. I’m not really happy with that, because those were data only. Something like:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> outline_emblem = cache.get::<Model>(<span class="st">"Outline-Logo-final.obj"</span>, ()).unwrap();
<span class="kw">let</span> hedra_01 = cache.get::<Model>(<span class="st">"DSR_OTLINV_Hedra01_Hi.obj"</span>, ()).unwrap();
<span class="kw">let</span> hedra_02 = cache.get::<Model>(<span class="st">"DSR_OTLINV_Hedra02_Hi.obj"</span>, ()).unwrap();
<span class="kw">let</span> hedra_04 = cache.get::<Model>(<span class="st">"DSR_OTLINV_Hedra04_Hi.obj"</span>, ()).unwrap();
<span class="kw">let</span> hedra_04b = cache.get::<Model>(<span class="st">"DSR_OTLINV_Hedra04b_Hi.obj"</span>, ()).unwrap();
<span class="kw">let</span> twist_01 = cache.get::<Object>(<span class="st">"DSR_OTLINV_Twist01_Hi.json"</span>, ()).unwrap();</code></pre></div>
<p>It’s not that bad. As you can see, <strong>spectra</strong> features a <em>resource cache</em> that provides several candies – hot-reloading, resource dependency resolution and resource caching. However, having to declare those resources directly in the code is nasty boilerplate to me. If you want to add a new object to the demo, you have to shut it down, add the Rust line, re-compile the whole thing, then run it once again. It defeats the advantage of having hot-reloading and it pollutes the rest of the code, making it harder to spot the actual transformations going on.</p>
<p>This is even worse with the way I handle texts. It’s all <code>&'static str</code> declared in a specific file called <code>script.rs</code> with the same insane load of <code>let</code>. Then I rasterize them in a function and use them in a very specific way regarding the time they appear. Not fun.</p>
<h2 id="still-not-enough-data-driven">Still not enough data-driven</h2>
<p>As said above, the cache is a great help and enables some data-driven development, but that’s not enough. The <code>main.rs</code> file is more than 600 lines long, and 500 of those lines are just declarations of clips (editing), all very alike. I intentionally didn’t use the runtime version of the timeline – even though it’s already implemented – because I was editing a lot of code at that moment, but that’s not a good excuse. And the timeline is just a small part of it (the cuts are something like 10 lines long) and it annoyed me in the very last part of the development, when I was synchronizing the demo with the soundtrack.</p>
<p>I think the real problem is that the clips are way too abstract to be a really helpful abstraction. Clips are just lambdas that consume time and output a node. This also has implications (you cannot borrow something for the node in your clip because of the borrowing rules; duh!).</p>
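<p>To give an idea of what I mean, here’s a stripped-down sketch of the clip concept – the type names are hypothetical, not spectra’s actual API:</p>

```rust
// a stand-in for whatever the compositor consumes
#[derive(Debug, PartialEq)]
struct Node(&'static str);

// a clip is nothing more than a function from time to a node
type Clip = Box<dyn Fn(f32) -> Node>;

fn main() {
    // hundreds of such declarations quickly become unmanageable, and
    // the closure cannot borrow short-lived data for its Node
    let intro: Clip = Box::new(|t| {
        if t < 10.0 { Node("logo") } else { Node("tunnel") }
    });

    assert_eq!(intro(3.0), Node("logo"));
    assert_eq!(intro(12.0), Node("tunnel"));
}
```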
<h2 id="animation-edition">Animation edition</h2>
<p>Most of the varying things you can see in my demos are driven by animation curves – splines. The bare concept is very interesting: an animation contains control points for which you know a specific value at a given time. Values in between are interpolated using an interpolation mode that can change at each control point if needed. So, I use splines to animate pretty much everything: camera movements, object rotations, color masking, flickering, fade in / fade out effects, etc.</p>
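<p>Reduced to its bare concept, such a spline could be sketched like this – a minimal linear-interpolation version, not spectra’s actual implementation (which supports several interpolation modes):</p>

```rust
// a control point: a value known to hold at a given time
struct Key {
    t: f32,
    value: f32,
}

// sample the spline at time `t`, interpolating linearly between keys
fn sample(keys: &[Key], t: f32) -> f32 {
    // clamp outside the covered time range
    if t <= keys[0].t {
        return keys[0].value;
    }
    if t >= keys[keys.len() - 1].t {
        return keys[keys.len() - 1].value;
    }

    // find the two control points surrounding `t` and interpolate
    let i = keys.iter().position(|k| k.t > t).unwrap();
    let (a, b) = (&keys[i - 1], &keys[i]);
    let x = (t - a.t) / (b.t - a.t);
    a.value + (b.value - a.value) * x
}

fn main() {
    let fade = [Key { t: 0.0, value: 0.0 }, Key { t: 2.0, value: 10.0 }];
    assert_eq!(sample(&fade, 1.0), 5.0); // halfway between the two keys
    assert_eq!(sample(&fade, 5.0), 10.0); // clamped past the last key
}
```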
<p>Because I wanted to be able to edit the animations in a comfy way – a lesson learned from Céleri Rémoulade – splines can be edited in realtime, because they’re hot-reloadable. They live in JSON files, so you just have to edit the JSON objects in each file and, as soon as you save it, the animation changes. I have to say, this was very ace to do. I’m so happy to have coded such a feature.</p>
<p>However, it’s JSON. That’s already an issue in itself. And I hit a serious problem when editing the orientation data. In <strong>spectra</strong>, an orientation is encoded as a <a href="https://en.wikipedia.org/wiki/Quaternion#Unit_quaternion">unit quaternion</a>. This is a four-component floating-point number – a hypercomplex number. Editing those numbers in a plain JSON file is… challenging! I think I really need some kind of animation editor to edit the splines.</p>
<h2 id="video-capture">Video capture</h2>
<p>The YouTube <a href="https://www.youtube.com/watch?v=OemyLQbDTSk">capture</a> was made directly in the demo. At the end of each frame, I dump the frame into a .png image (with a name including the frame number). Then I simply use ffmpeg to build the video.</p>
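<p>The ffmpeg step is a one-liner along these lines – the frame naming pattern here is illustrative, not necessarily the one I actually used:</p>

```shell
# stitch the numbered .png frames into an H.264 video at 60 fps
ffmpeg -framerate 60 -i frame-%05d.png -c:v libx264 -pix_fmt yuv420p capture.mp4
```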
<p>Even though this is not very important, I had to add some code into the production code of the demo and I think I could just refactor that into <strong>spectra</strong>. I’m talking about three or four lines of code. Not a big mess.</p>
<h2 id="compositing">Compositing</h2>
<p>This will appear in both the pros and the cons. Compositing, in spectra, is implemented via the concept of <em>nodes</em>. A node is just an algebraic data structure that contains <em>something</em> that can be connected to <em>another thing</em> to compose a render. For instance, you can find nodes of type <em>render</em>, <em>color</em>, <em>texture</em>, <em>fullscreen effects</em> and <em>composite</em> – the latter is used to mix nodes together.</p>
<p>Using the nodes, you can build a tree. And the cool thing is that I implemented the most common operators from <code>std::ops</code>. I can then apply a simple color mask to a render by doing something like</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust">render_node * RGBA::new(r, g, b, a).into()</code></pre></div>
<p>This is extremely user-friendly and helped me a lot to tweak the render (the actual ASTs are more complex than that and react to time, but the idea is similar). However, there’s a problem. In the current implementation, the <em>composite</em> node is not smart enough: it blends two nodes by rendering each into a separate framebuffer (hence two framebuffers), then samples the left framebuffer via a fullscreen quad, then the right one – and applies the appropriate blending.</p>
<p>I’m not sure about performance here, but I feel like this is the wrong way to go – bandwidth! I need to profile.</p>
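<p>To make the operator trick concrete, here’s a minimal sketch of how <code>std::ops</code> overloading can build such a node tree. The <code>Node</code> type and its variants are hypothetical – spectra’s actual AST is richer and reacts to time:</p>

```rust
use std::ops::Mul;

// Hypothetical node AST – spectra's real nodes carry more data than this.
#[derive(Debug)]
enum Node {
    Render(&'static str),            // a render pass, identified by name here
    Color([f32; 4]),                 // a constant RGBA color
    Composite(Box<Node>, Box<Node>), // a blend of two sub-nodes
}

impl Mul for Node {
    type Output = Node;

    fn mul(self, rhs: Node) -> Node {
        // `*` does not render anything: it just grows the tree, which is
        // evaluated later, when the frame is composited
        Node::Composite(Box::new(self), Box::new(rhs))
    }
}

fn main() {
    // apply a red color mask to a render node
    let masked = Node::Render("scene") * Node::Color([1.0, 0.0, 0.0, 1.0]);
    assert!(matches!(masked, Node::Composite(_, _)));
    println!("{:?}", masked);
}
```

<p>The key design point is that <code>*</code> builds the tree instead of computing a result; evaluation is deferred to the compositor.</p>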
<h2 id="on-a-general-note-data-vs.-transformations">On a general note: data vs. transformations</h2>
<p>My philosophy is that code should be about transformation, not data. That’s why I love Haskell. However, in the demoscene world, it’s very common to bake data directly into functions – think of all those fancy shaders you see everywhere, especially on shadertoy. As soon as I see a data constant in my code, I think “Wait; isn’t there a way to remove that from the function and have access to it as an input?”</p>
<p>This is important, and that’s the direction I’ll take from now on for the future versions of my frameworks.</p>
<h1 id="what-went-right">What went right!</h1>
<p>A lot as well!</p>
<h2 id="hot-reloading">Hot reloading</h2>
<p>Hot reloading was <em>the</em> feature I needed. I hot-reload everything. I even hot-reload the tessellation of the objects (.obj), so that I can change the shape / resolution of a native object without relaunching the application. I saved a lot of precious time thanks to that feature.</p>
<h2 id="live-edition-in-json">Live edition in JSON</h2>
<p>I had that idea pretty quickly as well. A lot of objects – among them, splines – live in JSON files. You edit the file, save it and tada: the object has changed in the application – hot reloading! The JSON was especially neat for handling splines of positions, colors and masks – it went pretty badly with orientations, but I already told you that.</p>
<h2 id="compositing-1">Compositing</h2>
<p>As said before, compositing was also a win, because I lifted the concept up to the Rust AST, enabling me to express interesting rendering pipelines just by using operators like <code>*</code>, <code>+</code> and some combinators of my own (like <code>over</code>).</p>
<h2 id="editing">Editing</h2>
<p>Editing was done with a cascade of types and objects:</p>
<ul class="incremental">
<li>a <code>Timeline</code> holds several <code>Track</code>s and <code>Overlap</code>s;</li>
<li>a <code>Track</code> holds several <code>Cuts</code>;</li>
<li>a <code>Cut</code> holds information about a <code>Clip</code>: when the cut starts and ends in the clip and when such a cut should be placed in the track;</li>
<li>a <code>Clip</code> contains code defining a part of the scene (Rust code, so it can’t live in JSON for obvious reasons);</li>
<li>an <code>Overlap</code> is a special object used to fold several nodes if several cuts are triggered at the same time; it’s used for transitions mostly;</li>
<li>alternatively, a <code>TimelineManifest</code> can be used to live-edit all of this (the JSON for the cuts has a string reference for the clip, and a map to actual code must be provided when folding the manifest into an object of type <code>Timeline</code>).</li>
</ul>
<p>I think such a system is very neat and helped me a lot to remove naive conditions (like timed if-else if-else if-else if…-else nightmare). With that system, there’s only one test per frame to determine which cuts must be rendered (well, actually, one per track), and it’s all declarative. Kudos.</p>
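<p>Assuming heavily simplified fields (the real types carry more information), the per-frame cut lookup described above can be sketched like this:</p>

```rust
// Simplified sketch of the editing types described above; the field names
// and shapes are hypothetical, much leaner than spectra's actual ones.
struct Cut {
    start: f32,     // when the cut starts in the track, in seconds
    end: f32,       // when it ends
    clip_id: usize, // which clip this cut slices into
}

struct Track {
    cuts: Vec<Cut>, // assumed non-overlapping within a track
}

impl Track {
    // One test per frame (per track): find the active cut, if any.
    fn active_cut(&self, t: f32) -> Option<&Cut> {
        self.cuts.iter().find(|c| c.start <= t && t < c.end)
    }
}

fn main() {
    let track = Track {
        cuts: vec![
            Cut { start: 0.0, end: 2.0, clip_id: 0 },
            Cut { start: 2.0, end: 5.0, clip_id: 1 },
        ],
    };
    assert_eq!(track.active_cut(3.0).map(|c| c.clip_id), Some(1));
    assert!(track.active_cut(6.0).is_none()); // nothing to render
}
```

<p>The declarative win is visible here: there is no timed if-else cascade, just one lookup per track per frame.</p>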
<h2 id="resources-loading-was-insanely-fast">Resources loading was insanely fast</h2>
<p>I thought I’d need some kind of loading bars, but everything loaded so quickly that I decided it’d be a waste of time. Even though I might end up modifying the way resources are loaded, I’m pretty happy with it.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Writing that demo in such a short period of time – I have a job, a social life, other things I enjoy, etc. – was such a challenge! But it was also the perfect project to stress-test my framework. I saw a lot of issues while building my demo with spectra and a lot of “woah, that’s actually pretty great doing this that way!”. I’ll have to enhance a few things, but I’ll always do that with a demo as a <em>work-in-progress</em> because I target pragmatism. As I released spectra on crates.io, I might end up writing another blog entry about it sooner or later, and even speak about it at a Rust meeting!</p>
<p>Keep the vibe!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com0tag:blogger.com,1999:blog-8976038770606708499.post-55204824877858613252017-02-07T02:40:00.001+01:002017-02-07T03:04:59.053+01:00Lifetimes limits – self borrowing and dropchecker<p>Lately, I’ve been playing around with <a href="https://crates.io/crates/alto">alto</a> in my demoscene framework. This crate is the <em>replacement for <a href="https://crates.io/crates/openal-rs">openal-rs</a></em>, as <strong>openal-rs</strong> has been deprecated for being <em>unsound</em>. It’s a wrapper over <a href="https://www.openal.org">OpenAL</a>, which enables you to play 3D sounds and gives you several physical properties and effects you can apply.</p>
<h1 id="the-problem">The problem</h1>
<p>Just to let you fully understand the problem, let me introduce a few principles from alto. As a wrapper over OpenAL, it exposes roughly the same interface, but adds several safety-related types. In order to use the API, you need three objects:</p>
<ul class="incremental">
<li>an <code>Alto</code> object, which represents the <em>API object</em> (it holds dynamic library handles, function pointers, etc.; we don’t need to know about that)</li>
<li>a <code>Device</code> object, a regular device (a sound card, for example)</li>
<li>a <code>Context</code> object, used to create audio resources, handle the audio context, etc.</li>
</ul>
<p>There are well-defined relationships between those objects that constrain their lifetimes. An <code>Alto</code> object must outlive the <code>Device</code> and the <code>Device</code> must outlive the <code>Context</code>. Basically:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> alto = Alto::load_default(<span class="cn">None</span>).unwrap(); <span class="co">// bring the default OpenAL implementation in</span>
<span class="kw">let</span> dev = alto.open(<span class="cn">None</span>).unwrap(); <span class="co">// open the default device</span>
<span class="kw">let</span> ctx = dev.new_context(<span class="cn">None</span>).unwrap(); <span class="co">// create a default context with no OpenAL extension</span></code></pre></div>
<p>As you can see here, the lifetimes are not violated, because <code>alto</code> outlives <code>dev</code> which outlives <code>ctx</code>. Let’s dig in the type and function signatures to get the lifetimes right (documentation <a href="https://docs.rs/alto/1.0.5/alto">here</a>).</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">fn</span> Alto::open<<span class="ot">'s</span>, S: Into<<span class="dt">Option</span><&<span class="ot">'s</span> CStr>>>(&<span class="kw">self</span>, spec: S) -> AltoResult<Device></code></pre></div>
<p>The <code>S</code> type is just a convenient type to select a specific implementation. We need the default one, so just pass <code>None</code>. However, have a look at the result. <code>AltoResult<Device></code>. I told you about lifetime relationships. This one might be tricky, but you always have to wonder “is there an elided lifetime here?”. Look at the <code>Device</code> type:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">pub</span> <span class="kw">struct</span> Device<<span class="ot">'a</span>> { <span class="co">/* fields omitted */</span> }</code></pre></div>
<p>Yep! So, what’s the lifetime of the <code>Device</code> in <code>AltoResult<Device></code>? Well, that’s simple: the lifetime elision rule in action is one of the simplest:</p>
<blockquote>
<p>If there are multiple input lifetime positions, but one of them is &self or &mut self, the lifetime of self is assigned to all elided output lifetimes. (<a href="https://doc.rust-lang.org/beta/nomicon/lifetime-elision.html">source</a>)</p>
</blockquote>
<p>So let’s rewrite the <code>Alto::open</code> function to make it clearer:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">fn</span> Alto::open<<span class="ot">'a</span>, <span class="ot">'s</span>, S: Into<<span class="dt">Option</span><&<span class="ot">'s</span> CStr>>>(&<span class="ot">'a</span> <span class="kw">self</span>, spec: S) -> AltoResult<Device<<span class="ot">'a</span>>> <span class="co">// exact same thing as above</span></code></pre></div>
<p>So, what you can see here is that the <code>Device</code> must be valid for the same lifetime as the reference we pass in. Which means that <code>Device</code> cannot outlive the reference. Hence, it cannot outlive the <code>Alto</code> object.</p>
<hr />
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">impl</span><<span class="ot">'a</span>> Device<<span class="ot">'a</span>> {
<span class="co">// …</span>
<span class="kw">fn</span> new_context<A: Into<<span class="dt">Option</span><ContextAttrs>>>(&<span class="kw">self</span>, attrs: A) -> AltoResult<Context>
<span class="co">// …</span>
}</code></pre></div>
<p>That looks a bit similar. Let’s have a look at <code>Context</code>:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">pub</span> <span class="kw">struct</span> Context<<span class="ot">'d</span>> { <span class="co">/* fields omitted */</span> }</code></pre></div>
<p>Yep, same thing! Let’s rewrite the whole thing:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">impl</span><<span class="ot">'a</span>> Device<<span class="ot">'a</span>> {
<span class="co">// …</span>
<span class="kw">fn</span> new_context<<span class="ot">'b</span>, A: Into<<span class="dt">Option</span><ContextAttrs>>>(&<span class="ot">'b</span> <span class="kw">self</span>, attrs: A) -> AltoResult<Context<<span class="ot">'b</span>>>
<span class="co">// …</span>
}</code></pre></div>
<p>Plus, keep in mind that <code>self</code> is actually <code>Device<'a></code>. The first argument of this function then expects a <code>&'b Device<'a></code> object!</p>
<blockquote>
<p>rustc is smart enough to automatically insert the <code>'a: 'b</code> lifetime bound here – i.e. the 'a lifetime outlives 'b. Which makes sense: the reference will die before the <code>Device<'a></code> is dropped.</p>
</blockquote>
<p>Ok, ok. So, what’s the problem then?!</p>
<h1 id="the-real-problem">The (real) problem</h1>
<p>The snippet of code above about how to create the three objects is straightforward (though we don’t take errors into account – that’s another topic). However, in my demoscene framework, I really don’t want people to use those kinds of types. The framework should be completely agnostic about which technology or API is used internally. For my purposes, I just need a single type with a few methods to work with.</p>
<p>Something like that:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">struct</span> Audio {}
<span class="kw">impl</span> Audio {
<span class="kw">pub</span> <span class="kw">fn</span> new<P>(track_path: P) -> <span class="dt">Result</span><<span class="kw">Self</span>> <span class="kw">where</span> P: AsRef<Path> {}
<span class="kw">pub</span> <span class="kw">fn</span> toggle(&<span class="kw">mut</span> <span class="kw">self</span>) -> <span class="dt">bool</span> {}
<span class="kw">pub</span> <span class="kw">fn</span> playback_cursor(&<span class="kw">self</span>) -> <span class="dt">f32</span> {}
<span class="kw">pub</span> <span class="kw">fn</span> set_playback_cursor(&<span class="kw">self</span>, t: <span class="dt">f32</span>) {}
}
<span class="kw">impl</span> <span class="bu">Drop</span> <span class="kw">for</span> Audio {
<span class="kw">fn</span> drop(&<span class="kw">mut</span> <span class="kw">self</span>) {
<span class="co">// stop the music if playing; do additional audio cleanup</span>
}
}</code></pre></div>
<p>This is a very simple interface, yet I don’t need more. <code>Audio::set_playback_cursor</code> is cool when I debug my demos in realtime by clicking a time panel to quickly jump to a part of the music. <code>Audio::toggle()</code> enables me to pause the demo to inspect an effect in the demo. Etc.</p>
<p>However, how can I implement <code>Audio::new</code>?</p>
<h1 id="the-current-limits-of-borrowing">The (current) limits of borrowing</h1>
<p>The problem kicks in as we need to wrap the three types – <code>Alto</code>, <code>Device</code> and <code>Context</code> – as the fields of <code>Audio</code>:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">struct</span> Audio<<span class="ot">'a</span>> {
alto: Alto,
dev: Device<<span class="ot">'a</span>>,
context: Context<<span class="ot">'a</span>>
}</code></pre></div>
<p>We have a problem if we do this. Even though the type is correct, we cannot correctly implement <code>Audio::new</code>. Let’s try:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">impl</span><<span class="ot">'a</span>> Audio<<span class="ot">'a</span>> {
<span class="kw">pub</span> <span class="kw">fn</span> new<P>(_: P) -> <span class="dt">Result</span><<span class="kw">Self</span>> <span class="kw">where</span> P: AsRef<Path> {
<span class="kw">let</span> alto = Alto::load_default(<span class="cn">None</span>).unwrap();
<span class="kw">let</span> dev = alto.open(<span class="cn">None</span>).unwrap();
<span class="kw">let</span> ctx = dev.new_context(<span class="cn">None</span>).unwrap();
<span class="cn">Ok</span>(Audio {
alto: alto,
dev: dev,
context: ctx
})
}
}</code></pre></div>
<p>As you can see, that cannot work:</p>
<pre><code>error: `alto` does not live long enough
--> /tmp/alto/src/main.rs:14:15
|
14 | let dev = alto.open(None).unwrap();
| ^^^^ does not live long enough
...
22 | }
| - borrowed value only lives until here
|
note: borrowed value must be valid for the lifetime 'a as defined on the body at 12:19...
--> /tmp/alto/src/main.rs:12:20
|
12 | fn new() -> Self {
| ^
error: `dev` does not live long enough
--> /tmp/alto/src/main.rs:15:15
|
15 | let ctx = dev.new_context(None).unwrap();
| ^^^ does not live long enough
...
22 | }
| - borrowed value only lives until here
|
note: borrowed value must be valid for the lifetime 'a as defined on the body at 12:19...
--> /tmp/alto/src/main.rs:12:20
|
12 | fn new() -> Self {
| ^
error: aborting due to 2 previous errors</code></pre>
<p>What’s going on here? Well, we’re hitting a problem called the problem of <strong>self-borrowing</strong>. Look at the first two lines of our implementation of <code>Audio::new</code>:</p>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> alto = Alto::load_default(<span class="cn">None</span>).unwrap();
<span class="kw">let</span> dev = alto.open(<span class="cn">None</span>).unwrap();</code></pre></div>
<p>As you can see, the call to <code>Alto::open</code> borrows <code>alto</code> – via a <code>&Alto</code> reference. And of course, you cannot move a value that is borrowed – that would invalidate all the references pointing to it. We also have another problem: imagine we could do that. All those types implement <code>Drop</code>. Because they basically all have the same lifetime, there’s no way to know which one borrows information from whom. The <em>dropchecker</em> has no way to know that. It will then refuse to compile code creating objects of this type, because dropping might be unsafe in that case.</p>
<h1 id="what-can-we-do-about-it">What can we do about it?</h1>
<p>Currently, this problem is linked to the fact that the lifetime system is a bit too restrictive and doesn’t allow for <strong>self-borrowing</strong>. Plus, you also have the <em>dropchecker</em> issue to figure out. Even though we were able to bring in <code>alto</code> and <code>device</code> altogether, how do you handle <code>context</code>? The <em>dropchecker</em> doesn’t know which one must be dropped first – there’s no obvious link at this stage between <code>alto</code> and all the others anymore, because that link was made with a reference to <code>alto</code> that died – we’re moving out of the scope of the <code>Audio::new</code> function.</p>
<div class="figure">
<img src="http://phaazon.net/pub/rust_self_borrowing.jpg" />
</div>
<p>That’s a bit tough. The current solution I implemented to fix the issue is ok-ish, but I dislike it because it adds a significant performance overhead: I just moved the initialization code into a thread that stays alive until the <code>Audio</code> object dies, and I use a synchronized channel to communicate with the objects in that thread. That works because the thread provides its own stack, which is what lifetimes are tied to – think of scopes.</p>
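<p>Here’s a rough sketch of that workaround – the command set and state are hypothetical stand-ins, not alto’s actual API. The worker thread owns the whole object graph on its own stack, and the handle only talks to it through a channel:</p>

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread::{self, JoinHandle};

// Hypothetical commands the Audio handle can send to the worker thread.
enum AudioCmd {
    Toggle,
    SetCursor(f32),
    Quit,
}

struct Audio {
    sender: Sender<AudioCmd>,
    handle: Option<JoinHandle<()>>,
}

impl Audio {
    fn new() -> Audio {
        let (sender, receiver) = channel();
        // In the real code, the worker would create the Alto -> Device ->
        // Context chain here, on its own stack, where the lifetimes hold;
        // a plain boolean stands in for that state in this sketch.
        let handle = thread::spawn(move || {
            let mut playing = false;
            for cmd in receiver {
                match cmd {
                    AudioCmd::Toggle => playing = !playing,
                    AudioCmd::SetCursor(_t) => { /* seek in the real impl */ }
                    AudioCmd::Quit => break,
                }
            }
            // Alto / Device / Context would be dropped here, in stack order.
        });
        Audio { sender, handle: Some(handle) }
    }

    fn toggle(&self) {
        let _ = self.sender.send(AudioCmd::Toggle);
    }

    fn set_playback_cursor(&self, t: f32) {
        let _ = self.sender.send(AudioCmd::SetCursor(t));
    }
}

impl Drop for Audio {
    fn drop(&mut self) {
        // Ask the worker to quit, then join it so cleanup happens in order.
        let _ = self.sender.send(AudioCmd::Quit);
        if let Some(h) = self.handle.take() {
            let _ = h.join();
        }
    }
}

fn main() {
    let audio = Audio::new();
    audio.toggle();
    audio.set_playback_cursor(3.0);
} // dropping `audio` joins the worker thread
```

<p>The <code>Drop</code> implementation joins the thread, so the self-borrowing objects die in the right order on the worker’s stack – at the cost of a dedicated thread and a channel round-trip per call.</p>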
<p>Another solution would be to move that initialization code in a function that would accept a closure – your application. Once everything is initialized, the closure is called with a few callbacks to toggle / set the cursor of the object living “behind” on the stack. I don’t like that solution because it modifies the main design – having an <code>Audio</code> object was the goal.</p>
<p>Other solutions are:</p>
<ul class="incremental">
<li><code>std::mem::transmute</code> to remove the lifetimes (replace them with <code>'static</code>). That’s <strong>hyper dangerous</strong> and we are just breaking Rust’s lifetimes… <em>not okay</em> :(</li>
<li>change our design to meet the same as alto’s (in a word: use the same three objects)</li>
<li>cry deeply</li>
</ul>
<p>I don’t have a satisfying solution to that problem yet. My thread solution works and lets me have a single type abstracting all of that, but having a thread for such a thing is a waste of resources to me. I think I’ll implement the closure solution since, currently, it’s not possible to embed that kind of lifetime semantics / logic in a struct. I guess it’s okay; the problem is also linked to the fact that the concept is pretty young and we’re still experimenting with it. But clearly, lifetimes hit a hard problem here that they cannot solve correctly. Keep in mind that even if unsafe solutions exist, we’re talking about a library designed to work with Rust lifetimes at a pretty high level of abstraction. Firing <code>transmute</code> is very symptomatic of something being wrong. I’m open to suggestions, because I’ve been thinking about the problem all day long without finding a proper solution.</p>
<p>Keep the vibe!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com0tag:blogger.com,1999:blog-8976038770606708499.post-8118679522869564912016-08-28T20:59:00.000+02:002016-08-29T01:46:51.529+02:00luminance designs<p><a href="https://crates.io/crates/luminance/0.7.0">luminance-0.7.0</a> was released a few days ago and I decided it was time to explain exactly what luminance is and what were the design choices I made. After a very interesting talk with <a href="https://github.com/nical">nical</a> about other rust graphics frameworks (e.g. <a href="https://crates.io/crates/gfx">gfx</a>, <a href="https://crates.io/crates/glium">glium</a>, <a href="https://crates.io/crates/vulkano">vulkano</a>, etc.), I thought it was time to give people some more information about luminance and how to compare it to other frameworks.</p>
<h1 id="origin">Origin</h1>
<p>luminance started as <a href="https://hackage.haskell.org/package/luminance">a Haskell package</a>, extracted from a <em>“3D engine”</em> I had been working on for a while called <a href="https://github.com/phaazon/quaazar">quaazar</a>. I came to the realization that I wasn’t using the Haskell garbage collector at all and that I could benefit from using a language without GC. Rust is a very famous language and well appreciated in the Haskell community, so I decided to jump in and learn Rust. I migrated luminance in a month or two. The mapping is described in <a href="http://phaazon.blogspot.fr/2016/04/porting-haskell-graphics-framework-to.html">this blog entry</a>.</p>
<h1 id="what-is-luminance-for">What is luminance for?</h1>
<p>I’ve been writing 3D applications for a while and I was always frustrated by how badly OpenGL is designed. Let’s sum up OpenGL’s design flaws:</p>
<ul class="incremental">
<li><em>weakly typed</em>: OpenGL has types, but… it actually does not. <code>GLint</code>, <code>GLuint</code> or <code>GLbitfield</code> are all defined as <em>aliases</em> to primitive types (i.e. something like <code>typedef float GLfloat</code>). Try it with <code>grep -Rn "typedef [a-zA-Z]* GLfloat" /usr/include/GL</code>. This leads to the fact that <em>framebuffers</em>, <em>textures</em>, <em>shader stages</em>, <em>shader programs</em> or even <em>uniforms</em>, etc. have the same type (<code>GLuint</code>, i.e. <code>unsigned int</code>). Thus, a function like <code>glCompileShader</code> expects a <code>GLuint</code> as argument, yet you can pass a framebuffer, because it’s also represented as a <code>GLuint</code> – very bad for us. It’s better to consider those as untyped – :( – handles.</li>
<li><em>runtime overhead</em>: Because of the point above, functions cannot assume you’re passing a value of the expected type – e.g. the example just above with <code>glCompileShader</code> and a framebuffer. That means OpenGL implementations have to check <em>all</em> the values you’re passing as arguments to be sure they match the type. That’s basically <strong>several tests for each call</strong> of an OpenGL function. If the type doesn’t match, you’re screwed – see the next point.</li>
<li><em>error handling</em>: This is catastrophic. Because of the runtime overhead, almost all functions might set the <em>error flag</em>. You have to check the error flag with the <code>glGetError</code> function, adding a side-effect, preventing parallelism, etc.</li>
<li><em>global state</em>: OpenGL works on the concept of global mutation. You have a state, wrapped in a <em>context</em>, and each time you want to do something with the GPU, you have to change something in the context. Such a context is important; however, some mutations shouldn’t be required. For instance, when you want to change the value of an object or use a texture, OpenGL requires you to <em>bind</em> the object. If you forget to <em>bind</em> the next object, the mutation will occur on the first one. Side effects, side effects…</li>
</ul>
<p>The goal of luminance is to fix most of those issues by providing a safe, stateless and elegant graphics framework. It should be as low-level as possible, but shouldn’t sacrifice runtime performance – CPU load as well as memory bandwidth. That is why, if you know how to program with OpenGL, you won’t feel lost when getting your feet wet with luminance.</p>
<p>Because of the many OpenGL versions and other technologies (among them, vulkan), luminance has an extra aim: abstract over the trending graphics API.</p>
<h1 id="types-in-luminance">Types in luminance</h1>
<p>In luminance, all graphics resources – and even concepts – have their own respective type. For instance, instead of <code>GLuint</code> for both shader programs and textures, luminance has <code>Program</code> and <code>Texture</code>. That ensures you don’t pass values with the wrong types.</p>
<blockquote>
<p>Because of the static guarantees provided at compile time by such strong typing, the runtime <strong>shouldn’t have to check for type safety</strong>. Unfortunately, because luminance <em>wraps over</em> OpenGL in the luminance-gl backend, we can only add static guarantees; we cannot remove the runtime overhead.</p>
</blockquote>
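<p>A tiny sketch of the idea with hypothetical newtype wrappers – luminance’s real types are richer than this, but the principle is the same: one distinct type per kind of GL handle.</p>

```rust
// Hypothetical newtype wrappers illustrating the strong-typing idea;
// these are NOT luminance's actual definitions.
struct Texture(u32); // wraps what OpenGL sees as a bare GLuint
struct Program(u32);

fn use_program(program: &Program) -> u32 {
    // a real backend would call glUseProgram(program.0) here
    program.0
}

fn main() {
    let _texture = Texture(1);
    let program = Program(2);
    assert_eq!(use_program(&program), 2);
    // use_program(&_texture); // rejected at compile time: &Texture is not &Program
}
```

<p>Mixing up handles becomes a compile-time error instead of a silent runtime misuse.</p>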
<h1 id="error-handling">Error handling</h1>
<p>luminance follows the Rust conventions and uses the famous <code>Option</code> and <code>Result</code> types to specify errors. You will never have to check against a global error flag, because this is just all wrong. Keep in mind, you have the <code>try!</code> macro in your Rust prelude; use it as often as possible!</p>
<blockquote>
<p>Even though Rust provides a panic mechanism, there’s no such thing as exceptions in Rust. The <code>try!</code> macro is just syntactic sugar for:</p>
</blockquote>
<blockquote>
<div class="sourceCode"><pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">match</span> result {
<span class="cn">Ok</span>(x) => x,
<span class="cn">Err</span>(e) => <span class="kw">return</span> <span class="cn">Err</span>(e)
}</code></pre></div>
</blockquote>
<h1 id="stateless">Stateless</h1>
<p>luminance is stateless. That means you don’t have to bind an object to be able to use it. luminance takes care of that for you in a very simple way. To achieve this while keeping performance up, a bit of higher-level structure has to be added on top of the OpenGL API by constraining how binds happen.</p>
<p>Whatever task, computation or problem you’re trying to tackle, it’s always better to gather / group the work into batches. A good example is how magnetic hard disk drives or your RAM work. If you spread your data across the disk surface (fragmented data) or across several non-contiguous addresses in your RAM, you end up with unnecessary moves. The hard drive’s head will have to travel all over the disk to gather the information, and it’s very likely you’ll destroy RAM performance (and your CPU caches) if you don’t put the data in a contiguous area.</p>
<p>If you don’t group your OpenGL resources – for instance, you render 400 objects with shader A, 10 objects with shader B, then 20 objects with shader A, 32 objects with shader C, 348 objects with shader A and finally 439 objects with shader B – you’ll add more OpenGL calls to the equation – hence more global state mutations, and those are costly.</p>
<p>Instead of this:</p>
<ol class="incremental" style="list-style-type: decimal">
<li>400 objects with shader A</li>
<li>10 objects with shader B</li>
<li>20 objects with shader A</li>
<li>32 objects with shader C</li>
<li>348 objects with shader A</li>
<li>439 objects with shader B</li>
</ol>
<p>luminance <strong>forces</strong> you to group your resources like this:</p>
<ol class="incremental" style="list-style-type: decimal">
<li>400 + 20 + 348 objects with shader A</li>
<li>10 + 439 objects with shader B</li>
<li>32 objects with shader C</li>
</ol>
<p>This is done via types called <code>Pipeline</code>, <code>ShadingCommand</code> and <code>RenderCommand</code>.</p>
<h2 id="pipelines">Pipelines</h2>
<p>A <code>Pipeline</code> gathers shading commands under a <code>Framebuffer</code>. That means that all <code>ShadingCommand</code>s embedded in the <code>Pipeline</code> will output to the embedded <code>Framebuffer</code>. Simple, yet powerful, because we can <em>bind</em> the framebuffer when executing the pipeline and don’t have to worry about the framebuffer until the next execution of another <code>Pipeline</code>.</p>
<h2 id="shadingcommand">ShadingCommand</h2>
<p>A <code>ShadingCommand</code> gathers render commands under a shader <code>Program</code> along with an update function. The update function is used to customize the <code>Program</code> by providing <em>uniforms</em> – i.e. <code>Uniform</code>. If you want to change a <code>Program</code>’s <code>Uniform</code> once per frame – and only if the <code>Program</code> is called only once in the frame – it’s the right place to do it.</p>
<p>All <code>RenderCommand</code>s embedded in the <code>ShadingCommand</code> will be rendered using the embedded shader <code>Program</code>. As with the <code>Pipeline</code>, we don’t have to worry about binding: we just use the embedded shader program when executing the <code>ShadingCommand</code>, and we’ll bind another program the next time a <code>ShadingCommand</code> is run!</p>
<h2 id="rendercommand">RenderCommand</h2>
<p>A <code>RenderCommand</code> gathers all the information required to render a <code>Tessellation</code>, that is:</p>
<ul class="incremental">
<li>the blending equation, source and destination blending factors</li>
<li>whether the depth test should be performed</li>
<li>an update function to update the <code>Program</code> being in use – so that each object can have different properties used in the shader program</li>
<li>a reference to the <code>Tessellation</code> to render</li>
<li>the number of instances of the <code>Tessellation</code> to render</li>
<li>the size of the rasterized points (if the <code>Tessellation</code> contains any)</li>
</ul>
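<p>Putting the three levels together, here’s a heavily simplified sketch of the grouping – the field names and shapes are hypothetical, and the real API carries update functions, blending state, etc.:</p>

```rust
// Hypothetical, stripped-down shapes of the three types described above;
// NOT luminance's actual definitions.
struct Tessellation; // placeholder for vertex data

struct RenderCommand<'a> {
    tess: &'a Tessellation,
    instances: u32,
}

struct ShadingCommand<'a> {
    program_handle: u32, // bound once for all nested render commands
    render_commands: Vec<RenderCommand<'a>>,
}

struct Pipeline<'a> {
    framebuffer_handle: u32, // bound once for the whole pipeline
    shading_commands: Vec<ShadingCommand<'a>>,
}

// Count the draw calls a pipeline would issue: one per render command,
// while framebuffer and program binds stay at one per grouping level.
fn draw_calls(pipeline: &Pipeline) -> usize {
    pipeline
        .shading_commands
        .iter()
        .map(|sc| sc.render_commands.len())
        .sum()
}

fn main() {
    let tess = Tessellation;
    let pipeline = Pipeline {
        framebuffer_handle: 0,
        shading_commands: vec![
            ShadingCommand {
                program_handle: 1, // shader A: all its objects grouped together
                render_commands: vec![
                    RenderCommand { tess: &tess, instances: 400 },
                    RenderCommand { tess: &tess, instances: 20 },
                ],
            },
            ShadingCommand {
                program_handle: 2, // shader B
                render_commands: vec![RenderCommand { tess: &tess, instances: 10 }],
            },
        ],
    };
    assert_eq!(draw_calls(&pipeline), 3);
}
```

<p>The nesting itself is what forces the grouping: a shader program can only be bound once per <code>ShadingCommand</code>, so scattered draw calls with the same shader naturally collapse into one group.</p>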
<h1 id="what-about-shaders">What about shaders?</h1>
<p>Shaders are written in… the backend’s expected format. For OpenGL, you’ll have to write <strong>GLSL</strong>. The backend automatically inserts the version pragma (<code>#version 330 core</code> for OpenGL 3.3, for instance). In the first place, I wanted to migrate <strong>cheddar</strong>, my Haskell shader EDSL. But… the sad part of the story is that Rust is – yet – unable to handle that kind of stuff correctly. I started to implement an EDSL for luminance with macros. Even though it was usable, the error handling was seriously terrible – macros shouldn’t be used for such an important purpose. Then some rustaceans pointed out I could implement a (rustc) compiler plugin. That enables the use of new constructs directly in Rust by extending its syntax. This is great.</p>
<p>However, with hindsight, I will not do that. For a very simple reason: luminance is, currently, simple, stateless and, most of all, it works! I released a PC demo in Köln, Germany using luminance and a demoscene graphics framework I’m working on:</p>
<p><a href="http://www.pouet.net/prod.php?which=67966">pouët.net link</a></p>
<p><a href="https://www.youtube.com/watch?v=pYqZS1C_7PU">youtube capture</a></p>
<p><a href="https://github.com/phaazon/ion">ion demoscene framework</a></p>
<p>While developing Céleri Rémoulade, I decided to bake the shaders directly into Rust – to get used to what I had wanted to build, i.e., a shader EDSL. So there are a bunch of constant <code>&'static str</code> everywhere. Each time I wanted to make a fix to a shader, I had to leave the application, make the change, recompile, rerun… I’m not sure it’s a good thing. Interactive programming is a very good thing we can enjoy – yes, even in strongly typed languages ;).</p>
<p>I saw that <a href="https://crates.io/crates/gfx">gfx</a> doesn’t have its own shader EDSL either and requires you to provide <strong>several shader implementations (one per backend)</strong>. I don’t know; I think it’s not that bad if you only target a single backend (i.e. OpenGL 3.3 or Vulkan). Transpiling shaders is a thing, I’ve been told…</p>
<blockquote>
<p><em>sneaking out…</em></p>
</blockquote>
<p>Feel free to dig in the code of Céleri Rémoulade <a href="https://github.com/phaazon/celeri-remoulade">here</a>. It’s demoscene code, so it had been rushed on before the release – read: it’s not as clean as I wanted it to be.</p>
<p>I’ll provide you with more information in the next weeks, but I prefer spending my spare time writing code than explaining what I’m gonna do – and missing the time to actually do it. ;)</p>
<p>Keep the vibe!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com2tag:blogger.com,1999:blog-8976038770606708499.post-30813856467373976562016-07-25T11:40:00.000+02:002016-07-25T11:40:37.227+02:00luminance-0.6.0 sample<p>It’s been two weeks <a href="https://crates.io/crates/luminance">luminance</a> is at version 0.6.0. I’m very busy currently but I decided to put some effort into making a very minimalistic yet usable sample. The sample uses <a href="https://crates.io/crates/luminance">luminance</a> and <a href="https://crates.io/crates/luminance-gl">luminance-gl</a> (the <strong>OpenGL 3.3</strong> backend being the single one available for now).</p>
<p>You’ll find it <a href="https://github.com/phaazon/luminance-samples-rs/blob/master/src/main.rs">here</a>. The code is heavily commented, and you can of course clone the repository and run the executable with cargo.</p>
<p>I’ll post a more detailed blog post about the application I’m building with luminance right now later on.</p>
<p>Keep the vibe! :)</p>
<h1>Porting a Haskell graphics framework to Rust (luminance) (2016-04-29)</h1><p>I wanted to write this new article to discuss something important I’ve been doing for several weeks. It’s actually been <em>a month</em> that I’ve been working on <em>luminance</em>, but not in the usual way. Yeah, I’ve put my <em>Haskell</em> experience aside to… port <em>luminance</em> to <em>Rust</em>! There are numerous reasons why I decided to jump in, and I think it could be interesting for people to know about the differences I’ve been facing while porting the graphics library.</p>
<h1 id="you-said-rust">You said Rust?</h1>
<p>Yeah, <a href="https://www.rust-lang.org">Rust</a>. It’s a strongly and statically typed language aimed at systems programming. Although it’s an <em>imperative</em> language, it has interesting <em>functional</em> conventions that caught my attention. Because I’m a <em>haskeller</em> and because <em>Rust</em> takes <strong>a lot</strong> from <em>Haskell</em>, learning it was a piece of cake, even though there are a few concepts I needed a few days to wrap my mind around. Having solid C++11/14 experience, it wasn’t that hard though.</p>
<h2 id="how-does-it-compare-to-haskell">How does it compare to Haskell?</h2>
<p>The first thing that amazed me is the fact that it’s actually not that different from <em>Haskell</em>! <em>Rust</em> has a powerful type system – not as good as <em>Haskell</em>’s but still – and uses immutability as a default semantic for bindings, which is great. For instance, the following is forbidden in <em>Rust</em> and would make <code>rustc</code> – the <em>Rust</em> compiler – freak out:</p>
<pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> a = <span class="st">"foo"</span>;
a = <span class="st">"bar"</span>; <span class="co">// wrong; forbidden</span></code></pre>
<p><em>Haskell</em> works like that as well. However, you can introduce mutation with the <code>mut</code> keyword:</p>
<pre class="sourceCode rust"><code class="sourceCode rust"><span class="kw">let</span> <span class="kw">mut</span> a = <span class="st">"foo"</span>;
a = <span class="st">"bar"</span>; <span class="co">// ok</span></code></pre>
<p>Mutation should be used only when needed. In <em>Haskell</em>, we have the <code>ST</code> monad, used to introduce local mutation, or more drastically the <code>IO</code> monad. Under the hood, those two monads are actually almost the same type – with different guarantees though.</p>
<p><em>Rust</em> is strict by default while <em>Haskell</em> is lazy. That means <em>Rust</em> has no concept of <em>memory suspensions</em>, or <em>thunks</em> – even though you can create them by hand if you want to. Thus, some algorithms are easier to implement in <em>Haskell</em> thanks to laziness, but some others will destroy your memory if you’re not careful enough – that’s a very common problem in <em>Haskell</em> due to thunks piling up in your stack / heap / whatever as you do extensive lazy computations. While it’s possible to remove those thunks by optimizing a <em>Haskell</em> program – profiling, strictness annotations, etc. – <em>Rust</em> doesn’t have that problem because it gives you full access to the memory. And that’s a good thing <strong>if you need it</strong>. <em>Rust</em> exposes <strong>a lot</strong> of primitives to work with memory. In contrast with <em>Haskell</em>, it doesn’t have a <em>garbage collector</em>, so you have to handle memory on your own. Well, not really. <em>Rust</em> has several very interesting concepts to handle memory in a very nice way. For instance, objects’ memory is held by <em>scopes</em> – which have <em>lifetimes</em>. <a href="https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization">RAII</a> is a very well-known use of that concept and is important in <em>Rust</em>. You can glue code to your type that will be run when an instance of that type dies, so that you can clean up memory and scarce resources.</p>
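<p>That scope-based cleanup is what Rust’s <code>Drop</code> trait provides. A small sketch with a made-up resource type (the global counter is only there so we can observe the cleanup happening):</p>

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Global count of live buffers, so we can observe RAII in action.
static LIVE_BUFFERS: AtomicU32 = AtomicU32::new(0);

// A toy resource type: its cleanup code runs automatically
// when a value of this type goes out of scope.
struct GpuBuffer {
  id: u32,
}

impl GpuBuffer {
  fn new(id: u32) -> Self {
    LIVE_BUFFERS.fetch_add(1, Ordering::SeqCst);
    GpuBuffer { id }
  }
}

impl Drop for GpuBuffer {
  fn drop(&mut self) {
    // In a real backend this would release the GPU-side object.
    LIVE_BUFFERS.fetch_sub(1, Ordering::SeqCst);
    println!("releasing buffer {}", self.id);
  }
}

fn live_buffers() -> u32 {
  LIVE_BUFFERS.load(Ordering::SeqCst)
}

fn main() {
  {
    let _buffer = GpuBuffer::new(42);
    assert_eq!(live_buffers(), 1);
  } // _buffer dies here: drop() runs, no manual free needed

  assert_eq!(live_buffers(), 0);
}
```

<p>No explicit <code>free</code> anywhere: the compiler inserts the call to <code>drop</code> at the end of the owning scope.</p>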
<p><em>Rust</em> has the concept of <em>lifetimes</em>, used to give names to scopes and specify how long an object reference should live. This is very powerful yet a bit complex to understand in the first place.</p>
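<p>A quick sketch of what lifetimes buy you – the compiler rejects any attempt to keep a reference alive longer than the scope that owns the data:</p>

```rust
// The signature ties the returned reference to the lifetime 'a of the
// input: the result cannot outlive the string it points into.
fn first_word<'a>(s: &'a str) -> &'a str {
  s.split_whitespace().next().unwrap_or("")
}

fn main() {
  let sentence = String::from("luminance is stateless");
  let word = first_word(&sentence);
  assert_eq!(word, "luminance");

  // The following would not compile, because `word` would point into
  // freed memory:
  //
  //   let word;
  //   {
  //     let s = String::from("temporary");
  //     word = first_word(&s);
  //   } // `s` is dropped here while still borrowed
  //   println!("{}", word);
}
```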
<p>I won’t go into comparing the two languages because it would require several articles and a lot of spare time I don’t really have. I’ll stick to what I’d like to tell you: the <em>Rust</em> implementation of <em>luminance</em>.</p>
<h1 id="porting-luminance-from-haskell-to-rust">Porting luminance from Haskell to Rust</h1>
<p>The first very interesting aspect of that port is the fact that it originated from a realization while refactoring some of my <em>luminance</em> <em>Haskell</em> code. Although it’s functional, stateless and type-safe, a typical use of <em>luminance</em> doesn’t <em>really</em> require laziness nor a garbage collector. And I don’t like using a tool – read: a language – like a bazooka. <em>Haskell</em> is the most powerful language ever in terms of abstraction and expressivity-over-speed ratio, but all of that power comes with an overhead. Even though you’ll find folks around stating that <em>Haskell</em> is pretty okay to code a video game, I think it will never compete with languages that are <strong>made</strong> to solve real-time computations or reactive programming. And don’t get me wrong: I’m sure you can write a decent video game in <em>Haskell</em> – I qualify myself as a <em>Haskeller</em> and I’ve not been writing <em>luminance</em> just for the joy of writing it. However, the way I use <em>Haskell</em> with <em>luminance</em> shouldn’t require all the overhead – and profiling proved me right: almost no GC was involved.</p>
<p>So… I looked into <em>Rust</em> and discovered and learned the language in only three days. I think it’s due to the fact that <em>Rust</em>, which is simpler than <em>Haskell</em> in terms of type system features and has almost everything taken from <em>Haskell</em>, is, to me, an <strong>imperative Haskell</strong>. It’s like having a <em>Haskell</em> minus a few abstraction tools – HKT (but they’ll come soon), GADTs, fundeps, kinds, constraints, etc. – plus a total control of what’s happening. And I like that. A lot. A <em>fucking</em> lot.</p>
<p>Porting <em>luminance</em> to <em>Rust</em> wasn’t hard, as a <em>Haskell</em> codebase maps almost directly to <em>Rust</em>. I had to change a few things – for instance, <em>Rust</em> doesn’t have the concept of existential quantification as-is, which is used intensively in the <em>Haskell</em> version of <em>luminance</em>. But most <em>Haskell</em> modules map directly to their respective <em>Rust</em> modules. I changed the architecture of the files to have something clearer. I was working on <em>loose coupling</em> in <em>Haskell</em> for <em>luminance</em>. So I decided to directly introduce loose coupling into the <em>Rust</em> version. And it works like a charm.</p>
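<p>Rust’s closest analogue to a value hidden behind <code>forall a. SomeClass a => …</code> is a boxed trait object, which erases the concrete type behind a trait. A schematic example (not actual luminance code):</p>

```rust
use std::fmt::Display;

// Existential-style container: all we know about the value is that it
// implements Display; its concrete type is erased behind the trait object.
struct Showable {
  value: Box<dyn Display>,
}

fn main() {
  // Heterogeneous values, all hidden behind the same wrapper type.
  let items: Vec<Showable> = vec![
    Showable { value: Box::new(42) },
    Showable { value: Box::new("hello") },
    Showable { value: Box::new(3.25) },
  ];

  for item in &items {
    println!("{}", item.value);
  }
}
```

<p>The trade-off is a virtual call and a heap allocation per value, which is why trait objects are used sparingly in performance-critical paths.</p>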
<p>So there are, currently, two packages available: <code>luminance</code>, which is the core API, exporting the whole general interface, and <code>luminance-gl</code>, an <strong>OpenGL 3.3</strong> backend – though it will contain more backends as development goes on. The idea is that you need both dependencies to have access to <em>luminance</em>’s features.</p>
<p>I won’t say much today because I’m working on a <a href="https://en.wikipedia.org/wiki/Demoscene">demoscene</a> production using <em>luminance</em>. I want it to be a proof that the framework is usable, works and acts as a first true example. Of course, the code will be open-source.</p>
<p>The documentation is not complete yet but I put some effort documenting almost everything. You’ll find both the packages here:</p>
<p><a href="https://crates.io/crates/luminance">luminance-0.1.0</a></p>
<p><a href="https://crates.io/crates/luminance-gl">luminance-gl-0.1.0</a></p>
<p>I’ll write another article on how to use <em>luminance</em> as soon as possible!</p>
<p>Keep the vibe!</p>
<h1>Pure API vs. IO-bound API for graphics frameworks (2016-02-18)</h1><p>Hi! It’s been a while since I last posted here. I haven’t been around for a few weeks and I’ve been missing writing here. I didn’t work that much on <a href="https://www.stackage.org/package/luminance">luminance</a> or other projects, even though I still provided stackage updates for my packages. I worked a bit on <a href="https://github.com/phaazon/cheddar">cheddar</a> and I hope to be able to release it soon!</p>
<p>Although I didn’t add things nor write code at home, I thought a lot about graphics API designs.</p>
<h1 id="graphics-api-designs">Graphics API designs</h1>
<h2 id="pure-api">Pure API</h2>
<p>APIs such as <a href="http://www.lambdacube3d.com/">lambdacube</a> or <a href="https://hackage.haskell.org/package/GPipe">GPipe</a> are known as <em>pure</em> graphics APIs. That means you don’t have to use functions bound to <code>IO</code>. You use some kind of <a href="http://www.haskellforall.com/2012/06/you-could-have-invented-free-monads.html">free monad</a> or an <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">AST</a> to represent the code that will run on the target GPU. That pure design brings us numerous advantages:</p>
<ul class="incremental">
<li>it’s possible to write the GPU code in a declarative, free and elegant way;</li>
<li>because of not being <code>IO</code>-bound, side-effects are reduced, which is good and improves the code safety;</li>
<li>one can write GPU code once and have several interpreters, possibly targeting different technologies, hence a loosely coupled graphics API;</li>
<li>we could even imagine serializing GPU code to send it through network, store it in a file or whatever.</li>
</ul>
<p>Those advantages are nice, but there are also drawbacks:</p>
<ul class="incremental">
<li>acquiring resources might not be explicit anymore, and no one knows exactly when scarce resources will be loaded into memory; even though we could use a <em>warmup</em> phase to solve that, <em>warmup</em> doesn’t help much with dynamic scenes where some resources are loaded only if the context requires it (like the player being in a given area of the map, in a game, for instance);</li>
<li>interfacing! people will have to learn how to use free monads, AST and all that stuff and how to interface it with their own code (<em>FRP</em>, render loops, etc.);</li>
<li>performance; even though it should be okay, you’ll still hit a performance overhead by using pure graphics frameworks, because of the internal mechanisms used to make them work.</li>
</ul>
<p>So yeah, a pure graphics framework is very neat, but keep in mind there’s – so far – no proof it actually works, scales or delivers decent performance for end-users. It’s the same dilemma as with Conal’s <em>FRP</em>: it’s nice, but we don’t really know whether it works <em>“at large scale and in the real world”</em>.</p>
<h2 id="io-bound-api">IO-bound API</h2>
<p>Most of the APIs out there are IO-bound. <em>OpenGL</em> is a famous C API known to be one of the worst offenders in terms of side-effects and global mutation. Trust me, it truly is. However, the pure APIs mentioned above are built on top of those impure IO-bound APIs, so we couldn’t do much without them.</p>
<p>There are side-effects that are not that bad. For instance, in OpenGL, creating a new buffer is a side-effect: it requires the CPU to tell the GPU <em>“Hey buddy, please create a buffer with that data, and please give me back a handle to it!”</em>. Then the GPU replies <em>“No problem pal, here’s your handle!”</em>. This side-effect doesn’t harm anyone, so we shouldn’t worry about it too much.</p>
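<p>In code terms, that benign kind of side-effect looks like the following sketch – a toy model of a GPU handing back handles, not an actual OpenGL binding:</p>

```rust
// A toy "GPU" that allocates handles, mimicking the harmless
// create-a-buffer side-effect described above.
struct Gpu {
  next_handle: u32,
}

impl Gpu {
  fn new() -> Self {
    Gpu { next_handle: 1 }
  }

  // Side-effectful: it mutates the GPU state, but only to allocate a
  // fresh handle; nothing else in the program can observe the change.
  fn create_buffer(&mut self, data: &[f32]) -> u32 {
    let handle = self.next_handle;
    self.next_handle += 1;
    println!("uploading {} floats into buffer {}", data.len(), handle);
    handle
  }
}

fn main() {
  let mut gpu = Gpu::new();
  let buf = gpu.create_buffer(&[0., 1., 2.]);
  assert_eq!(buf, 1);
}
```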
<p>However, there are nasty side-effects, like binding resources to the OpenGL context.</p>
<p>So what are advantages of IO-bound designs? Well:</p>
<ul class="incremental">
<li>simplicity: you have handles and side-effects, and a (perhaps too) fine control of the instruction flow;</li>
<li>performance, because <code>IO</code> is the naked real-world monad;</li>
<li>because <code>IO</code> is the type every application ultimately lives in (think of the <code>main</code> function), an <code>IO</code> API is simple to use in any kind of application;</li>
<li>we can use <code>(MonadIO m) => m</code> to add extra flexibility and create interesting constraints.</li>
</ul>
<p>And drawbacks:</p>
<ul class="incremental">
<li><code>IO</code> is very opaque and is not referentially transparent;</li>
<li><code>IO</code> is a dangerous type in which no one has any guarantee about what’s going on;</li>
<li>one can fuck up everything if they aren’t careful;</li>
<li>safety is not enforced as in pure code.</li>
</ul>
<h2 id="what-about-luminances-design">What about luminance’s design?</h2>
<p>Since the beginning, luminance has been an API built to be simple, <em>type-safe</em> and <em>stateless</em>.</p>
<p><em>Type-safe</em> means that all objects you use belong to different type sets and cannot be mixed with each other implicitly – you have to use explicit functions to do so, and it has to be meaningful. For instance, you cannot create a buffer and state that the returned handle is a texture: the type system forbids it, while in OpenGL, almost all objects live in the <code>GLuint</code> set. It’s very confusing and you might end up passing a texture (<code>GLuint</code>) to a function expecting a framebuffer (<code>GLuint</code>). Pretty bad, right?</p>
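<p>In the Rust world, the standard fix for that is one zero-cost newtype per object kind; a sketch of the idea (illustrative, not the actual luminance types):</p>

```rust
// Distinct wrapper types around the same raw GLuint-like integer:
// mixing them up becomes a compile-time error instead of a runtime bug.
struct Texture(u32);
struct Framebuffer(u32);

fn bind_framebuffer(fb: &Framebuffer) -> u32 {
  // In a real backend this would issue the GL bind call with fb.0.
  fb.0
}

fn main() {
  let tex = Texture(1);
  let fb = Framebuffer(2);

  assert_eq!(bind_framebuffer(&fb), 2);

  // bind_framebuffer(&tex); // rejected by the type system
  let _ = tex; // silence the unused-variable warning
}
```

<p>The wrappers compile away entirely, so the safety costs nothing at runtime.</p>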
<p><em>Stateless</em> means that luminance has no state. You don’t have a huge context you have to bind stuff against to make it work. Everything is stored in the objects you use directly, and specific context operations are translated into a different workflow so that performance is not destroyed – for instance, luminance uses batch rendering so that it performs smart resource bindings.</p>
<p>Lately, I’ve been thinking of all of that. Either turn the API pure or leave it the way it is. I started to implement a pure API using <em>self-recursion</em>. The idea is actually simple. Imagine this <code>GPU</code> type and the <code>once</code> function:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">import </span><span class="dt">Control.Arrow</span> ( (***), (&&&) )
<span class="kw">import </span><span class="dt">Data.Function</span> ( fix )
<span class="kw">newtype</span> <span class="dt">GPU</span> f a <span class="fu">=</span> <span class="dt">GPU</span> {<span class="ot"> runGPU ::</span> f (a,<span class="dt">GPU</span> f a) }
<span class="kw">instance</span> (<span class="dt">Functor</span> f) <span class="ot">=></span> <span class="dt">Functor</span> (<span class="dt">GPU</span> f) <span class="kw">where</span>
  fmap f <span class="fu">=</span> <span class="dt">GPU</span> <span class="fu">.</span> fmap (f <span class="fu">***</span> fmap f) <span class="fu">.</span> runGPU
<span class="kw">instance</span> (<span class="dt">Applicative</span> f) <span class="ot">=></span> <span class="dt">Applicative</span> (<span class="dt">GPU</span> f) <span class="kw">where</span>
  pure x <span class="fu">=</span> fix <span class="fu">$</span> \g <span class="ot">-></span> <span class="dt">GPU</span> <span class="fu">$</span> pure (x,g)
  f <span class="fu"><*></span> a <span class="fu">=</span> <span class="dt">GPU</span> <span class="fu">$</span> (\(f',fn) (a',an) <span class="ot">-></span> (f' a',fn <span class="fu"><*></span> an)) <span class="fu"><$></span> runGPU f <span class="fu"><*></span> runGPU a
<span class="kw">instance</span> (<span class="dt">Monad</span> m) <span class="ot">=></span> <span class="dt">Monad</span> (<span class="dt">GPU</span> m) <span class="kw">where</span>
  return <span class="fu">=</span> pure
  x <span class="fu">>>=</span> f <span class="fu">=</span> <span class="dt">GPU</span> <span class="fu">$</span> runGPU x <span class="fu">>>=</span> runGPU <span class="fu">.</span> f <span class="fu">.</span> fst
<span class="ot">once ::</span> (<span class="dt">Applicative</span> f) <span class="ot">=></span> f a <span class="ot">-></span> <span class="dt">GPU</span> f a
once <span class="fu">=</span> <span class="dt">GPU</span> <span class="fu">.</span> fmap (id <span class="fu">&&&</span> pure)</code></pre>
<p>We can then build pure values that will have a side-effect for resource acquisition and then hold the same value for ever with the <code>once</code> function:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">let</span> buffer <span class="fu">=</span> once createBufferIO
<span class="co">-- later, in IO</span>
(_,buffer2) <span class="ot"><-</span> runGPU buffer</code></pre>
<p>Above, the type of <code>buffer</code> and <code>buffer2</code> is <code>GPU IO Buffer</code>. The first call <code>runGPU buffer</code> will execute the <code>once</code> function, calling the <code>createBufferIO</code> IO function and will return <code>buffer2</code>, which just stores a pure <code>Buffer</code>.</p>
<p>Self-recursion is great to implement local states like that and I advise having a look at the <code>Auto</code> type. You can also read <a href="http://phaazon.blogspot.fr/2015/03/getting-into-netwire.html">my article on netwire</a>, which uses self-recursion a lot.</p>
<p>However, I kinda think that a library should have well-defined responsibilities, and building such a pure interface is not the responsibility of luminance, because we can have type-safety and a stateless API without wrapping everything in that <code>GPU</code> type. I think that if we want such a pure type, we should add it later on, in a 3D engine or a dedicated framework – and that’s actually what I do for demoscene purposes in another, ultra secret project. ;)</p>
<p>The cool thing with luminance using <code>MonadIO</code> is that it’s very easy to use with any kind of type that developers want in their applications. I really don’t like frameworks whose purpose is clearly not flow control yet that enforce flow control and wrapping types! I don’t want to end up with a <code>Luminance</code> type or <code>LuminanceApplication</code> type. It should be simple to use and seamless.</p>
<p>I actually start to think that I dwelt too much on that pure API design idea. The most important parts of luminance should be type-safety and statelessness. If one wants a pure API, then they should use FRP frameworks or write their own stuff – with free monads for instance, which is actually fun to build!</p>
<p>The next big step for luminance will be to clean up the uniform interface, which is a bit inconsistent and unfriendly to use with render commands. I’ll let you know.</p>
<h1 id="luminance-0.8-and-existential-quantification">luminance 0.8 and existential quantification (2015-12-09)</h1>
<p>It’s been a while since I last released anything on my blog. I just wrote a few changes for the latest version of luminance, <a href="https://hackage.haskell.org/package/luminance-0.8.2">luminance-0.8.2</a>, and I decided to write about it because I think those changes are interesting on a Haskell level.</p>
<h2 id="the-problem">The problem</h2>
<p>If you haven’t read the <a href="https://hackage.haskell.org/package/luminance-0.8.2/changelog">changelog</a> yet, I changed the <code>createProgram</code> function and the way it handles uniform interfaces. In luminance < 0.8, you were provided with as many functions as there are uniform kinds. Up to now, luminance supports two uniform kinds:</p>
<ul class="incremental">
<li>simple uniforms;</li>
<li>uniform blocks (UBO).</li>
</ul>
<p>So you had two rank-2 functions like <code>forall a. (Uniform a) => Either String Natural -> UniformInterface m (U a)</code> and <code>forall a. (UniformBlock a) => String -> UniformInterface m (U (Region rw (UB a)))</code> to map whichever uniforms you wanted to.</p>
<p>The issue with that is that it requires breaking the interface of <code>createProgram</code> each time we want to add a new kind of uniform, and it’s also <a href="https://hackage.haskell.org/package/luminance-0.7.2/docs/Graphics-Luminance-Shader-Program.html#v:createProgram">a pretty hard-to-read function signature</a>!</p>
<p>So… how does luminance-0.8 solve that?</p>
<h2 id="generalized-algebraic-data-types-rank-2-and-existential-quantification">(Generalized) Algebraic data types, rank-2 and existential quantification</h2>
<p>What is the only way we have to select uniforms? Names. Names can either be a <code>String</code> or a <code>Natural</code> for explicit semantics. We could encode such a name using an algebraic data type:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">UniformName</span>
  <span class="fu">=</span> <span class="dt">UniformName</span> <span class="dt">String</span>
  <span class="fu">|</span> <span class="dt">UniformSemantic</span> <span class="dt">Natural</span>
    <span class="kw">deriving</span> (<span class="dt">Eq</span>,<span class="dt">Show</span>)</code></pre>
<p>That’s a good start. Though, we still have the problem of choosing the kind of uniform, because we still have several functions – one per kind. We could encode the kind of the uniform directly into the name. After all, when we ask for a uniform mapping through a name, we need to know the kind, so that kind of makes sense. Let’s change our <code>UniformName</code> type:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">UniformName</span><span class="ot"> ::</span> <span class="fu">*</span> <span class="ot">-></span> <span class="fu">*</span> <span class="kw">where</span>
  <span class="dt">UniformName</span><span class="ot"> ::</span> <span class="dt">String</span> <span class="ot">-></span> <span class="dt">UniformName</span> a
  <span class="dt">UniformSemantic</span><span class="ot"> ::</span> <span class="dt">Natural</span> <span class="ot">-></span> <span class="dt">UniformName</span> a
  <span class="dt">UniformBlockName</span><span class="ot"> ::</span> <span class="dt">String</span> <span class="ot">-></span> <span class="dt">UniformName</span> (<span class="dt">Region</span> rw (<span class="dt">UB</span> a))</code></pre>
<p>That’s neat, but with that definition, we won’t go anywhere, because we’re too polymorphic. Indeed, <code>UniformName "foo" :: UniformName a</code> can have any <code>a</code>. We need to put constraints on <code>a</code>. And that’s where <em>GADTs</em> come in so handy! We can hide the constraints in the constructors and bring them into scope when pattern matching. That’s a very neat feature of <em>GADTs</em>. So now, let’s add some constraints to our constructors:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">UniformName</span><span class="ot"> ::</span> <span class="fu">*</span> <span class="ot">-></span> <span class="fu">*</span> <span class="kw">where</span>
  <span class="dt">UniformName</span><span class="ot"> ::</span> (<span class="dt">Uniform</span> a) <span class="ot">=></span> <span class="dt">String</span> <span class="ot">-></span> <span class="dt">UniformName</span> a
  <span class="dt">UniformSemantic</span><span class="ot"> ::</span> (<span class="dt">Uniform</span> a) <span class="ot">=></span> <span class="dt">Natural</span> <span class="ot">-></span> <span class="dt">UniformName</span> a
  <span class="dt">UniformBlockName</span><span class="ot"> ::</span> (<span class="dt">UniformBlock</span> a) <span class="ot">=></span> <span class="dt">String</span> <span class="ot">-></span> <span class="dt">UniformName</span> (<span class="dt">Region</span> rw (<span class="dt">UB</span> a))</code></pre>
<p>Yes! Now we can write a function that takes a <code>UniformName a</code>, pattern-matches on it and calls the appropriate function depending on the inferred shape of <code>a</code>!</p>
<p>However, how do we forward errors? In older versions of luminance, we were using <code>ProgramError</code> and more especially two of its constructors: <code>InactiveUniform</code> and <code>InactiveUniformBlock</code>. We need to shrink that to a single <code>InactiveUniform</code> constructor and find a way to store our <code>UniformName</code>… But we can’t yet, because of the <code>a</code> parameter! So the idea is to hide it through existential quantification!</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">SomeUniformName</span> <span class="fu">=</span> forall a<span class="fu">.</span> <span class="dt">SomeUniformName</span> (<span class="dt">UniformName</span> a)
<span class="kw">instance</span> <span class="dt">Eq</span> <span class="dt">SomeUniformName</span> <span class="kw">where</span>
  <span class="co">-- …</span>
<span class="kw">instance</span> <span class="dt">Show</span> <span class="dt">SomeUniformName</span> <span class="kw">where</span>
  <span class="co">-- …</span></code></pre>
<p>And now we can store a <code>SomeUniformName</code> in <code>InactiveUniform</code>. We won’t need to recover the type; we just need the constructor and the carried name. By pattern matching, we can recover both pieces of information!</p>
<h2 id="conclusion">Conclusion</h2>
<p>Feel free to have a look at the new <a href="https://hackage.haskell.org/package/luminance-0.8.1/docs/Graphics-Luminance-Shader-Program.html#v:createProgram"><code>createProgram</code> function</a>. As you will see, the type signature is easier to read and to work with! :)</p>
<p>Have fun, and keep the vibe!</p>
<h1 id="luminance-0.7">OpenGL 3.2 support for luminance! (luminance-0.7, 2015-11-13)</h1>
<p>You can’t even imagine how hard it was to release <a href="http://hackage.haskell.org/package/luminance-0.7">luminance-0.7</a>. I came across several difficulties I had to spend a lot of time on, but finally, here it is. I made a lot of changes for that very special release, and I have a lot to say about it!</p>
<h2 id="overview">Overview</h2>
<p>As for all my projects, I always provide people with a <em>changelog</em>. The 0.7 release is a major release (read as: it was a major increment). I think it’s good to tell people what’s new, but it should be <strong>mandatory</strong> to warn them about what has changed so that they can directly jump to their code and spot the uses of the deprecated / changed interface.</p>
<p>Anyway, you’ll find patch, minor and major changes in luminance-0.7. I’ll describe them in order.</p>
<h2 id="patch-changes">Patch changes</h2>
<h3 id="internal-architecture-and-debugging">Internal architecture and debugging</h3>
<p>A lot of code was reviewed internally. You don’t have to worry about that. However, there’s a new cool thing that was added internally. It could have been marked as a minor change but it’s not <em>supposed</em> to be used by common people – you can use it via a flag if you use <code>cabal</code> or <code>stack</code> though. It’s about debugging the <em>OpenGL</em> part used in luminance. You shouldn’t have to use it but it could be interesting if you spot a bug someday. Anyway, you can enable it with the flag <code>debug-gl</code>.</p>
<h3 id="uniform-block-uniform-buffer-objects">Uniform Block / Uniform Buffer Objects</h3>
<p>The <a href="https://www.opengl.org/wiki/Uniform_Buffer_Object">UBO</a> system was buggy and was fixed. You might still experience issues with it though. I spotted a bug and reported it – you can find the bug report <a href="https://bugs.freedesktop.org/show_bug.cgi?id=92909">here</a>. That bug is not Haskell-related and concerns the i915 Intel driver.</p>
<h2 id="minor-changes">Minor changes</h2>
<p>The minor changes were the most important part of luminance-0.7. luminance now officially supports <em>OpenGL 3.2</em>! When installing luminance, you default to the <code>gl32</code> backend. You can select the backend you want with flags – <code>gl45</code> and <code>gl45-bindless-textures</code> – but keep in mind you need the appropriate hardware to be able to use it. Because you need to use flags, you won’t be able to switch to the backend you want at runtime – that’s not the purpose of such a change though.</p>
<p>The performance gap should be minor between <code>gl32</code> and <code>gl45</code>, but still. Basically, OpenGL 4.5 adds support for <a href="https://www.opengl.org/wiki/Direct_State_Access">DSA</a>, which is very handy and less ill-designed than previous iterations of OpenGL. So a lot of code had to be rewritten to implement luminance’s stateless interface without hurting performance.</p>
<p>I <em>might</em> add support for other backends later on – like an <em>OpenGL ES</em> backend and a <em>WebGL</em> one – but that won’t ship that soon, because I have a ton of work to do and I still need to provide you with a concrete, beautiful, fast, appealing and eye-blowing demo with luminance! ;)</p>
<p>Feel free to test the <code>gl32</code> backend and give me back feedback!</p>
<p>However, if I spent so much time on that 0.7 version, it’s because I ran into issues while writing the <code>gl32</code> backend. Indeed, I spotted several bugs on my Intel HD card. This is the OpenGL version string for my Intel IGP card:</p>
<blockquote>
<p>OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.0.4</p>
</blockquote>
<p>The architecture is Haswell. And on such a card (<code>i915</code> Linux driver) I found two bugs while trying the <code>gl32</code> backend with <a href="https://hackage.haskell.org/package/luminance-samples-0.7">luminance-samples-0.7</a>.</p>
<h3 id="usampler2d"><code>usampler2D</code></h3>
<p>For an unknown reason, the <code>Texture</code> sample failed on my Intel IGP but ran smoothly and flawlessly on my nVidia GPU. I spent a lot of time trying to figure out what I was missing, but eventually changed the sampler type – it’s now a <code>sampler2D</code> – and… it worked. I reported the issue to the Intel dev team. So if you hit that error too, please leave a message here so that I can get more hindsight about it and see what I can do.</p>
<h3 id="uniform-block-and-vec3">Uniform block and <code>vec3</code></h3>
<p>This is a very nasty issue that kept me awake for days trying to fix <strong>my</strong> code while it was a driver bug. It’s a bit technical, so I’ll just leave a <a href="https://bugs.freedesktop.org/show_bug.cgi?id=92909">link to the bug tracker</a> so that you can read it if you want to.</p>
<h2 id="breaking-changes">Breaking changes</h2>
<p>Ok, let’s talk.</p>
<p>When creating a new shader stage, you now have to use the function <code>createStage</code> – instead of several functions like <code>createVertexShader</code>. That change is important so that I can add new shader types without changing the interface, and because some shaders can fail to be created. For instance, on the <code>gl32</code> backend, trying to build a tessellation shader will raise an error.</p>
<p>When a shader stage creation fails, the <code>UnsupportedStage</code> error is raised and holds the type of the stage that failed.</p>
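<p>To make the shape of that change concrete, here’s a hedged sketch. The names and signatures below (<code>StageType</code>, the exact type of <code>createStage</code>) are illustrative assumptions, not luminance’s actual definitions:</p>

```haskell
-- Hypothetical, simplified model of the 0.7 stage API; names are guesses,
-- only the overall shape follows the post.
data StageType = VertexShader | FragmentShader | TessellationShader
  deriving (Eq, Show)

-- The error holds the type of the stage that failed.
data StageError = UnsupportedStage StageType
  deriving (Eq, Show)

-- One creation function for every stage kind, instead of one function per
-- kind (createVertexShader, createFragmentShader, …); creation can now fail.
createStage :: StageType -> String -> Either StageError String
createStage TessellationShader _ =
  Left (UnsupportedStage TessellationShader)  -- e.g. on the gl32 backend
createStage _ src = Right src  -- stand-in for the real compilation step
```

The point of the single entry point is that adding a new stage kind only adds a constructor, not a new function in the public interface.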
<p>Finally, the interface for cubemaps changed a bit – you don’t have access to <em>width</em> and <em>height</em> anymore, which was error-prone and useless; you’re stuck with a single <em>size</em> parameter.</p>
<p>I’d like to thank all the people supporting me and luminance. I’ll be watching reactions to that major and important release, as it will reach more people with cheaper and widespread GPUs.</p>
<p>Happy hacking! :)</p>
luminance, episode 0.6: UBO, SSBO, Stackage (published 2015-10-25)<p>Up to now, <a href="https://hackage.haskell.org/package/luminance">luminance</a> has been lacking two cool features: <a href="https://www.opengl.org/wiki/Uniform_Buffer_Object">UBO</a> and <a href="https://www.opengl.org/wiki/Shader_Storage_Buffer_Object">SSBO</a>. Both are <em>buffer-backed</em> uniform techniques. That is, a way to pass uniforms to shader stages through buffers.</p>
<p>The <a href="https://hackage.haskell.org/package/luminance-0.6">latest version of luminance</a> has one of the two features. <em>UBO</em> were added and <em>SSBO</em> will follow in the next version, I guess.</p>
<h1 id="what-is-ubo">What is UBO?</h1>
<p>UBO stands for <strong>U</strong>niform <strong>B</strong>uffer <strong>O</strong>bject. Basically, it enables you to create <em>uniform blocks</em> in GLSL and feed them with <em>buffers</em>. Instead of passing values directly to the uniform interface, you just write whatever values you want to buffers, and then pass the buffer as a source for the uniform block.</p>
<p>Such a technique has a lot of advantages. Among them, you can pass <strong>a lot of values</strong>. It’s also cool when you want to pass instances of a structure (in the GLSL source code). You can also use them to share uniforms between several shader programs, as well as quickly switch between whole sets of uniforms.</p>
<p>In luminance, you need several things. First things first, you need… a buffer! More specifically, you need a buffer <a href="https://hackage.haskell.org/package/luminance-0.6/docs/Graphics-Luminance-Buffer.html#t:Region"><code>Region</code></a> to store values in. However, you cannot use any kind of region. You have to use a region that can hold values that will be fetched from shaders. This is done with a type called <code>UB a</code>. A buffer of <code>UB a</code> can be used as <em>UBO</em>.</p>
<p>Let’s say you want to store colors in a buffer, so that you can use them in your fragment shader. We’ll want three colors to shade a triangle. We need to create the buffer and get the region:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">colorBuffer ::</span> <span class="dt">Region</span> <span class="dt">RW</span> (<span class="dt">UB</span> (<span class="dt">V3</span> <span class="dt">Float</span>)) <span class="ot"><-</span> createBuffer (newRegion <span class="dv">3</span>)</code></pre>
<p>The explicit type is there so that GHC can infer the correct types for the <code>Region</code>. As you can see, nothing fancy, except that we just don’t want a <code>Region RW (V3 Float)</code> but a <code>Region RW (UB (V3 Float))</code>. Why <code>RW</code>? Because we need <em>write</em> access to fill the region with our colors – and the GPU will read from it.</p>
<p>Then, we’ll want to store colors in the buffer. Easy peasy:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell">writeWhole colorBuffer (map <span class="dt">UB</span> colors)
<span class="ot">colors ::</span> [<span class="dt">V3</span> <span class="dt">Float</span>]
colors <span class="fu">=</span> [<span class="dt">V3</span> <span class="dv">1</span> <span class="dv">0</span> <span class="dv">0</span>,<span class="dt">V3</span> <span class="dv">0</span> <span class="dv">1</span> <span class="dv">0</span>,<span class="dt">V3</span> <span class="dv">0</span> <span class="dv">0</span> <span class="dv">1</span>] <span class="co">-- red, green, blue</span></code></pre>
<p>At this point, <code>colorBuffer</code> represents a GPU buffer that holds three colors: red, green and blue. The next part is to get the uniform interface. That part is experimental in terms of exposed interface, but the core idea will remain the same. You’re given a function to build <em>UBO</em> uniforms as you also have a function to build simple and plain uniforms in <a href="https://hackage.haskell.org/package/luminance-0.6/docs/Graphics-Luminance-Shader-Program.html#v:createProgram"><code>createProgram</code></a>:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell">createProgram shaderList <span class="fu">$</span> \uni uniBlock <span class="ot">-></span> <span class="co">{- … -}</span></code></pre>
<p>Don’t spend too much time reading the signature of that function. You just have to know that <code>uni</code> is a function that takes an <code>Either String Natural</code> – either a uniform’s name or its integral semantic – and gives you a mapped <code>U</code> in return, and that <code>uniBlock</code> does the same thing, but for uniform blocks.</p>
<p>Here’s our vertex shader:</p>
<pre class="sourceCode c"><code class="sourceCode c">in vec2 co;
out vec4 vertexColor;
<span class="co">// This is the uniform block, called "Colors" and storing three colors</span>
<span class="co">// as an array of three vec3 (RGB).</span>
uniform Colors {
vec3 colors[<span class="dv">3</span>];
};
<span class="dt">void</span> main() {
gl_Position = vec4(co, <span class="dv">0</span>., <span class="dv">1</span>.);
vertexColor = vec4(colors[gl_VertexID], <span class="dv">1</span>.);
}</code></pre>
<p>So we want to get a <code>U a</code> mapped to that <code>"Colors"</code> uniform block. Easy!</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell">(program,colorsU) <span class="ot"><-</span> createProgram shaderStages <span class="fu">$</span> \_ uniBlock <span class="ot">-></span> uniBlock <span class="st">"Colors"</span></code></pre>
<p>And that’s all! The type of <code>colorsU</code> is <code>U (Region rw (UB (V3 Float)))</code>. You can then gather <code>colorBuffer</code> and <code>colorsU</code> in a uniform interface to send <code>colorBuffer</code> to <code>colorsU</code>!</p>
<p>You can find the complete sample <a href="https://github.com/phaazon/luminance-samples/blob/c4f428dc68ed38f0b80032aa9de90df01bf8ab15/src/UBO.hs">here</a>.</p>
<p>Finally, you can extend the set of types you can use <code>UB</code> with by implementing the <code>UniformBlock</code> typeclass. You can derive the <code>Generic</code> typeclass and then use a default instance:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">MyType</span> <span class="fu">=</span> <span class="co">{- … -}</span> <span class="kw">deriving</span> (<span class="dt">Generic</span>)
<span class="kw">instance</span> <span class="dt">UniformBlock</span> <span class="dt">MyTpe</span> <span class="co">-- we’re good to go with buffer of MyType!</span></code></pre>
<h1 id="luminance-luminance-samples-and-stackage">luminance, luminance-samples and Stackage</h1>
<p>I added <em>luminance</em> and <em>luminance-samples</em> into Stackage. You can then find them in the nightly snapshots and the future LTS ones.</p>
<h1 id="whats-next">What’s next?</h1>
<p>I plan to add stencil support to the framebuffer, because it’s missing and people might like it included. I will of course add support for <strong>SSBO</strong> as soon as I can. I also need to work on cheddar, but that project is complex and I’m still stuck with design decisions.</p>
<p>Thanks for reading me and for your feedback. Have a great week!</p>
luminance-0.5.1 and wavefront-0.4.0.1 (published 2015-10-18)<p>It’s been a few days since I last talked about <a href="http://hackage.haskell.org/package/luminance">luminance</a>. I’ve been working on it a lot those days, along with <a href="http://hackage.haskell.org/package/wavefront">wavefront</a>. To keep you up to date, I’ll describe the changes I made in those packages and talk about their future directions.</p>
<p>I’ll also give a snippet you can use to load geometries with wavefront and adapt them to luminance so that you can actually render them! A package might come up from that kind of snippet – <code>luminance-wavefront</code>? We’ll see!</p>
<h1 id="wavefront">wavefront</h1>
<p>This package has received several changes among two major increments and several fixes. In the first place, I removed some code from the interface that was useless and used only for test purposes. I removed the <code>Ctxt</code> object – it’s a type used by the internal lexer anyway, so you don’t have to know about it – and exposed a type called <a href="http://hackage.haskell.org/package/wavefront-0.4.0.1/docs/Codec-Wavefront.html#t:WavefrontOBJ">WavefrontOBJ</a>. That type represents the parsed Wavefront data and is the <em>main</em> type used by the library in the interface.</p>
<p>Then, I also removed most of the modules, because they’re re-exported by the main module – <a href="http://hackage.haskell.org/package/wavefront-0.4.0.1/docs/Codec-Wavefront.html">Codec.Wavefront</a>. I think the documentation is pretty straightforward, but if you think something is missing, please shoot me a PM or an email! ;)</p>
<p>On the bugs level, I fixed a few things. Among them, there was a nasty bug in the implementation of an internal recursive parser that caused the last wavefront statement to be silently ignored.</p>
<p>I’d also like to point out that I performed some benchmarks – I will provide the data later on, with a heap profile and graphs – and I’m pretty astonished by the results! The parser/lexer is insanely fast! It only takes a few milliseconds (between 7ms and 8ms) to load 50k faces (a 2MB .obj file). The code is not yet optimized, so I guess the package could go even faster!</p>
<p>You can find the changelog <a href="http://hackage.haskell.org/package/wavefront-0.4.0.1/changelog">here</a>.</p>
<h1 id="luminance">luminance</h1>
<p>I made a <em>lot of work</em> on luminance lately. First, the <code>V</code> type – used to represent <em>vertex components</em> – is not anymore defined by luminance but by <a href="http://hackage.haskell.org/package/linear">linear</a>. You can find the type <a href="http://hackage.haskell.org/package/linear-1.20.2/docs/Linear-V.html">here</a>. You’ll need the <code>DataKinds</code> extension to write types like <code>V 3 Float</code>.</p>
<p>That change is due to the fact that linear is a mature library with a lot of interesting functions and types everyone might use when doing graphics. Its <a href="http://hackage.haskell.org/package/linear-1.20.2/docs/Linear-V.html"><code>V</code></a> type has several interesting instances – <code>Storable</code>, <code>Ord</code>, etc. – that are required in luminance. Because it’s not simple to build such a <code>V</code>, luminance provides you with three functions to build the 2D, 3D and 4D versions – <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Vertex.html#v:vec2"><code>vec2</code></a>, <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Vertex.html#v:vec3"><code>vec3</code></a> and <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Vertex.html#v:vec4"><code>vec4</code></a>. Currently, that type is the only one you can use to build <em>vertex components</em>. I might add <code>V2</code>, <code>V3</code> and <code>V4</code> as well later.</p>
<p>An interesting change: the <code>Uniform</code> typeclass has <strong>a lot</strong> of new instances! Basically, all vector types from linear, their array version and the 4x4 floating matrix – <code>M44 Float</code>. You can find the list of all instances <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Shader-Uniform.html#t:Uniform">here</a>.</p>
<p>A new function was added to the <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Geometry.html"><code>Graphics.Lumimance.Geometry</code></a> module called <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Geometry.html#v:nubDirect"><code>nubDirect</code></a>. That function runs in linearithmic time and is used to turn a <em>direct</em> representation of vertices into a pair of data used to represent <em>indexed</em> vertices. The new list of vertices stores only unique vertices, and the list of integral values stores the indices. You can then use both pieces of information to build <em>indexed geometries</em> – see <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Geometry.html#v:createGeometry"><code>createGeometry</code></a> for further details.</p>
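<p>To give an intuition of what <code>nubDirect</code> computes, here’s a naive model of the deduplication. This is an illustrative reimplementation under assumed behavior – <code>nubDirectModel</code> is a made-up name, not luminance’s actual code:</p>

```haskell
import qualified Data.Map.Strict as M
import Data.Word (Word32)

-- Each unique vertex is kept once; every occurrence is replaced by the index
-- of its first appearance. The Map lookups keep the whole pass in O(n log n).
nubDirectModel :: Ord v => [v] -> ([v], [Word32])
nubDirectModel = go M.empty [] []
  where
    go _ verts indices [] = (reverse verts, reverse indices)
    go seen verts indices (v:vs) = case M.lookup v seen of
      Just i  -> go seen verts (i : indices) vs
      Nothing ->
        let i = fromIntegral (M.size seen)
        in  go (M.insert v i seen) (v : verts) (i : indices) vs
```

For instance, <code>nubDirectModel "abca"</code> yields <code>("abc", [0,1,2,0])</code>: three unique vertices and one index per original vertex.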
<p>The interface to transfer texels to textures has changed. It doesn’t depend on <code>Foldable</code> anymore but on <code>Data.Vector.Storable.Vector</code>. That change is due to the fact that the <code>Foldable</code> solution uses <code>toList</code> under the hood, which causes bad performance for the simple reason that we send the list to the GPU through the FFI. It’s then more efficient to use a <code>Storable</code> version. Furthermore, the best-known package for texture loading – <a href="http://hackage.haskell.org/package/JuicyPixels">JuicyPixels</a> – already uses that type of <code>Vector</code>. So you just have to enjoy the new performance boost! ;)</p>
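<p>The reason the <code>Storable</code> vector wins is that its buffer can be handed to the FFI directly. As a hedged sketch – <code>withTexels</code> is a made-up helper; only <code>unsafeWith</code> and <code>length</code> come from the vector package:</p>

```haskell
import qualified Data.Vector.Storable as VS
import Foreign.Ptr (Ptr)

-- Expose the vector’s raw buffer to a C-side upload function in O(1):
-- no intermediate list, no per-element marshalling.
withTexels :: VS.Storable a => VS.Vector a -> (Ptr a -> Int -> IO r) -> IO r
withTexels v k = VS.unsafeWith v (\p -> k p (VS.length v))
```

A <code>Foldable</code>-based upload would instead walk the structure element by element to rebuild a contiguous buffer first.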
<p>About bugs… I fixed a few ones. First, the implementation of the <code>Storable</code> instance for <a href="http://hackage.haskell.org/package/luminance-0.5.1/docs/Graphics-Luminance-Core-Tuple.html#t::."><code>(:.)</code></a> had an error for <code>sizeOf</code>. The implementation must be lazy in its argument, and the old one was not, causing <code>undefined</code> crashes when using that type. The strictness was removed and now everything works just fine!</p>
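<p>The laziness requirement can be illustrated with a toy version of <code>(:.)</code> – a simplified, padding-free sketch, not luminance’s actual instance:</p>

```haskell
{-# LANGUAGE TypeOperators #-}

import Foreign.Ptr (castPtr, plusPtr)
import Foreign.Storable (Storable (..))

data a :. b = a :. b

instance (Storable a, Storable b) => Storable (a :. b) where
  -- The lazy pattern (~) is the crucial part: sizeOf is commonly called as
  -- sizeOf (undefined :: a :. b), so the constructor must NOT be forced.
  sizeOf ~(a :. b) = sizeOf a + sizeOf b
  alignment ~(a :. b) = max (alignment a) (alignment b)
  peek p = do
    a <- peek (castPtr p)
    b <- peek (castPtr p `plusPtr` sizeOf a)
    pure (a :. b)
  poke p (a :. b) = do
    poke (castPtr p) a
    poke (castPtr p `plusPtr` sizeOf a) b
```

With a strict pattern, <code>sizeOf (undefined :: Float :. Float)</code> would crash on the <code>undefined</code>; with the lazy one it never evaluates its argument.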
<p>Two bugs that were also fixed: the indexed render and the render of geometries with several vertex components. Those bugs were easy to fix and now you won’t experience those issues anymore.</p>
<h1 id="interfacing-luminance-with-wavefront-to-render-geometries-from-artists">Interfacing luminance with wavefront to render geometries from artists!</h1>
<p>I thought it would be a hard task, but I’m pretty proud of how easy it was to interface the two packages! The idea was to provide a function that would turn a <code>WavefrontOBJ</code> into a <em>direct representation</em> of luminance vertices. Here’s the function that implements such a conversion:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">type</span> <span class="dt">Vtx</span> <span class="fu">=</span> <span class="dt">V</span> <span class="dv">3</span> <span class="dt">Float</span> <span class="fu">:.</span> <span class="dt">V</span> <span class="dv">3</span> <span class="dt">Float</span> <span class="co">-- location :. normal</span>
<span class="ot">objToDirect ::</span> <span class="dt">WavefrontOBJ</span> <span class="ot">-></span> <span class="dt">Maybe</span> [<span class="dt">Vtx</span>]
objToDirect obj <span class="fu">=</span> traverse faceToVtx (toList faces)
<span class="kw">where</span>
locations <span class="fu">=</span> objLocations obj
normals <span class="fu">=</span> objNormals obj
faces <span class="fu">=</span> objFaces obj
faceToVtx face <span class="fu">=</span> <span class="kw">do</span>
<span class="kw">let</span> face' <span class="fu">=</span> elValue face
vni <span class="ot"><-</span> faceNorIndex face'
v <span class="ot"><-</span> locations <span class="fu">!?</span> (faceLocIndex face' <span class="fu">-</span> <span class="dv">1</span>)
vn <span class="ot"><-</span> normals <span class="fu">!?</span> (vni <span class="fu">-</span> <span class="dv">1</span>)
<span class="kw">let</span> loc <span class="fu">=</span> vec3 (locX v) (locY v) (locZ v)
nor <span class="fu">=</span> vec3 (norX vn) (norY vn) (norZ vn)
pure (loc <span class="fu">:.</span> nor)</code></pre>
<p>As you can see, that function is pure and will eventually turn a <code>WavefrontOBJ</code> into a list of <code>Vtx</code>. <code>Vtx</code> is our own vertex type, encoding the location and the normal of the vertex. You can add texture coordinates if you want to. The function fails if a face’s index has no normal associated with it or if an index is out of bounds.</p>
<p>And… and that’s all! You can already have your <code>Geometry</code> with that – <em>direct</em> one:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"> x <span class="ot"><-</span> fmap (fmap objToDirect) (fromFile <span class="st">"./ubercool-mesh.obj"</span>)
<span class="kw">case</span> x <span class="kw">of</span>
<span class="dt">Right</span> (<span class="dt">Just</span> vertices) <span class="ot">-></span> createGeometry vertices <span class="dt">Nothing</span> <span class="dt">Triangle</span>
_ <span class="ot">-></span> throwError <span class="co">{- whatever you need as error there -}</span></code></pre>
<p>You want an indexed version? Well, you already have everything to do that:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"> x <span class="ot"><-</span> fmap (fmap (nubDirect <span class="fu">.</span> objToDirect) (fromFile <span class="st">"./ubercool-mesh.obj"</span>)
<span class="kw">case</span> x <span class="kw">of</span>
<span class="dt">Right</span> (<span class="dt">Just</span> (vertices,indices)) <span class="ot">-></span> createGeometry vertices (<span class="dt">Just</span> indices) <span class="dt">Triangle</span>
_ <span class="ot">-></span> throwError <span class="co">{- whatever you need as error there -}</span></code></pre>
<p>Even though <code>nubDirect</code> has a pretty good complexity, it still takes time. Don’t be surprised if the “loading” time gets longer, then.</p>
<p>I might package those snippets and helpers around them into a <code>luminance-wavefront</code> package, but that’s not trivial as the vertex format should be free.</p>
<h1 id="future-directions-and-thank-you">Future directions and thank you</h1>
<p>I received a lot of warm feedback from people about what I do in the Haskell community, and I’m just amazed. I’d like to thank each and every one of you for your support – I even got support from non-Haskellers!</p>
<p>What’s next then… Well, I need to add a few more texture types to luminance – texture arrays are not supported yet, and the framebuffers have to be altered to support all kinds of textures. I will also try to write a <a href="">cheddar</a> interpreter directly into luminance to ditch the <code>String</code> type of shader stages and replace it with whatever cheddar ends up being. For the long term, I’ll add UBO and SSBO to luminance, and… compatibility with older OpenGL versions.</p>
<p>Once again, thank you, and keep the vibe!</p>
Load geometries with wavefront-0.1! (published 2015-10-11)<p>I’ve been away from <a href="https://hackage.haskell.org/package/luminance">luminance</a> for a few days because I wanted to enhance the graphics world of Haskell. luminance might be interesting, but if you can’t use your artists’ works, you won’t go far in a real-world application. So I decided to write a parser/lexer to load 3D geometries from files. The <a href="https://en.wikipedia.org/wiki/Wavefront_.obj_file">Wavefront OBJ</a> format is an old yet simple and efficient way of encoding such objects. It supports materials, surfaces and a lot of other cool stuff – I don’t cover them yet, though.</p>
<p>There’s a <a href="http://hackage.haskell.org/package/obj">package</a> out there to do that, but it hasn’t been updated since 2008 and has a lot of dependencies I don’t like (InfixApplicative, OpenGL, OpenGLCheck, graphicsFormats, Codec-Image-Devil, and so on…). I like to keep things ultra simple and lightweight. So here we go. <a href="http://hackage.haskell.org/package/wavefront">wavefront</a>.</p>
<p>Currently, my package only builds up a pure value you can do whatever you want with. Upload it to the GPU, modify it, pretty print it, perform some physics on it. Whatever you want. The interface is not frozen yet and I need to perform some benchmarks to see if I have to improve the performance – the lexer is very simple and naive, I’d be amazed if the performance were that good yet.</p>
<p>As always, feel free to contribute, and keep in mind that the package will move quickly along the performance axis.</p>
luminance-0.3 – Adding more texture kinds to the equation… (published 2015-10-06)<h1 id="unleashing-the-power-of-textures">Unleashing the power of textures!</h1>
<p>From <a href="https://hackage.haskell.org/package/luminance-0.1">luminance-0.1</a> to <a href="https://hackage.haskell.org/package/luminance-0.2">luminance-0.2</a> included, it was not possible to use texture types other than two-dimensional textures. This blog entry tags the new release, <a href="https://hackage.haskell.org/package/luminance-0.3">luminance-0.3</a>, which adds support for several kinds of textures.</p>
<h2 id="a-bit-more-dimensions">A bit more dimensions</h2>
<p><a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#t:Texture1D"><code>Texture1D</code></a>, <a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#t:Texture2D"><code>Texture2D</code></a> and <a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#t:Texture3D"><code>Texture3D</code></a> are all part of the new release. The interface has changed – hence the breaking changes yield a major version increment – and I’ll explain how it has.</p>
<p>Basically, textures are now fully polymorphic and are constrained by a typeclass: <a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#t:Texture"><code>Texture</code></a>. That typeclass enables ad hoc polymorphism. It is then possible to add more texture types without having to change the interface, which is cool. Like everything else in luminance, you just have to ask the type system which kind of texture you want, and everything will be taken care of for you.</p>
<p>Basically, you have three functions to know:</p>
<ul class="incremental">
<li><a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#v:createTexture"><code>createTexture</code></a>, which is used to create a new texture;</li>
<li><a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#v:uploadSub"><code>uploadSub</code></a>, used to upload texels to a subpart of the texture;</li>
<li><a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#v:fillSub"><code>fillSub</code></a>, used to fill – <em>clear</em> – a subpart of the texture with a given value.</li>
</ul>
<p>All those functions work on <code>(Texture t) => t</code>, so they work with all kinds of textures.</p>
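<p>The typeclass trick can be sketched with a toy example. The class below is hypothetical and much simpler than luminance’s real <code>Texture</code> class; it only shows how an associated type lets each texture kind pick its own size representation while sharing one interface:</p>

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Toy stand-in for a Texture-like class: each instance chooses its size type.
class ToyTexture t where
  type TexSize t
  describe :: t -> TexSize t -> String

data Tex1D = Tex1D
data Tex2D = Tex2D

instance ToyTexture Tex1D where
  type TexSize Tex1D = Int  -- a 1D texture only has a width
  describe _ w = "1D texture, width " ++ show w

instance ToyTexture Tex2D where
  type TexSize Tex2D = (Int, Int)  -- a 2D texture has a width and a height
  describe _ (w, h) = "2D texture, " ++ show w ++ "x" ++ show h
```

Code written against <code>ToyTexture t</code> then works for any dimension, which is the same reason <code>createTexture</code>, <code>uploadSub</code> and <code>fillSub</code> can stay texture-kind-agnostic.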
<h2 id="cubemaps">Cubemaps</h2>
<p><a href="https://hackage.haskell.org/package/luminance-0.3/docs/Graphics-Luminance-Texture.html#t:Cubemap"><code>Cubemap</code>s</a> are also included. They work like other textures but add the concept of <em>faces</em>. Feel free to dig in the documentation for further details.</p>
<h1 id="whats-next">What’s next?</h1>
<p>I need to find a way to wrap <em>texture arrays</em>, which are very nice and useful for <em>layered rendering</em>. After that, I’ll try to expose the changes to the framebuffers so that we can create framebuffers backed by cubemaps and that kind of cool feature.</p>
<p>In the waiting, have a good week!</p>
luminance first tutorial (published 2015-09-24)<h1 id="woah">Woah!</h1>
<p>I’m very happy about people getting interested about my <a href="http://hackage.haskell.org/package/luminance">luminance</a> graphics framework. I haven’t received use case feedback yet, but I’m pretty confident I will sooner or later.</p>
<p>In the waiting, I decided to write an <em>embedded tutorial</em>. It can be found <a href="http://hackage.haskell.org/package/luminance-0.1.1/docs/Graphics-Luminance.html">here</a>.</p>
<p>That tutorial explains all the basic types of luminance – well, not all of them; you’ll have to dig in the documentation ;) – and describes how you should use them. I will try to add more documentation for each module in order to end up with a very well documented piece of software!</p>
<h1 id="lets-sum-up-what-you-need">Let’s sum up what you need</h1>
<p>People on <a href="https://www.reddit.com/r/haskell/comments/3lyxzc/luminance_01_released">reddit</a> complained – and they were right to – about the fact that the samples just <em>“didn’t work”</em>. They actually did, but the errors were muted. I released <a href="http://hackage.haskell.org/package/luminance-samples-0.1.1">luminance-samples-0.1.1</a> to fix that issue. Now you’ll get the proper error messages.</p>
<p>The most common issue is when you try to run a sample without having the required hardware implementation. luminance requires <strong>OpenGL 4.5</strong>. On Linux, you might need to use <code>primusrun</code> or <code>optirun</code> if you have the <strong>Optimus</strong> technology. On Windows, I guess you have to allow the samples to run on the dedicated <em>GPU</em>. And on Mac OSX… I have no idea; <code>primusrun</code> / <code>optirun</code>, I’d go.</p>
<p>Anyways, I’d like to thank all the people who have tried – or will try – the package. As always, I’ll keep you informed about all the big steps I take with luminance. Keep the vibe!</p>
luminance 0.1 released! (published 2015-09-22)<h1 id="here-we-are">Here we are</h1>
<p><a href="http://hackage.haskell.org/package/luminance-0.1">luminance-0.1</a> was released yesterday night, along with <a href="http://hackage.haskell.org/package/luminance-samples-0.1">luminance-samples-0.1</a>! I’ll need to enhance the documentation and add directions so that people don’t feel too overwhelmed.</p>
<p>I’m also going to write a wiki to help people get their mind wrapped around <strong>luminance</strong>.</p>
<p>If you think something is missing; if you think something could be enhanced; or if you’ve found a bug, please feel free to file an issue on the <a href="https://github.com/phaazon/luminance/issues">issue tracker</a>.</p>
<h1 id="next-big-steps">Next big steps</h1>
<p>I need to test the framework. I need a <em>lot</em> of tests. I’ll write a demoscene production with it so that I can give a good feedback to the community and prove that <strong>luminance</strong> can be used and works.</p>
<p>In the waiting, keep the vibe!</p>
Thoughts about software meta-design (published 2015-09-13)<p>I’ve been thinking of writing such an article for a while. A few weeks ago, I got contacted by people who wanted to know more about my experience with <a href="https://github.com/phaazon/luminance">luminance</a> so that they could get more hindsight for their own APIs and products.</p>
<p>I came to the realization that I could write a blog entry to discuss design decisions and, to some extent, what a good design entails. Keep in mind that these are only personal thoughts and that I don’t speak for anyone else.</p>
<h1 id="elegancy">Elegancy</h1>
<p>I love mathematics because they’re elegant. Elegancy implies several traits, among <em>simplicity</em>, <em>flexibility</em> and <em>transparency</em>. They solve problems with very nice abstractions. In mathematics, we have a concept that is – astonishingly – not very spread and barely known outside of math geeks circles: <em>free</em> objects.</p>
<p>The concept of <em>free</em> is a bit overwhelming at first, because people are used to putting labels and examples on everything. For instance, if I say that an object is <em>free</em>, you might already have associated some kind of <em>lock</em> with that object, so that you can get why it’s <em>free</em>. But we’d be mistaken. We don’t need <em>locks</em> to define what <em>free</em> implies. In mathematics, a <em>free</em> object is an object that can’t be defined in terms of others. It’s a bit like a <em>core</em> object. It’s <em>free</em> because it can be there no matter what other objects are around. It has no dependency and requires no other interaction. You can also say that such an object is <em>free</em> of extra features that wouldn’t be linked to its nature.</p>
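<p>A concrete example familiar to Haskellers: lists are the <em>free</em> monoid. <code>[a]</code> gives you exactly the monoid structure – an identity and an associative append – and nothing else, and any interpretation of the elements into another monoid extends uniquely over it (this is just <code>foldMap</code>, re-derived here for illustration):</p>

```haskell
import Data.Monoid (Sum (..))

-- Universal property of the free monoid: any function a -> m into a monoid m
-- extends uniquely to a monoid morphism [a] -> m.
interpret :: Monoid m => (a -> m) -> [a] -> m
interpret f = mconcat . map f

-- Example interpretation: read the elements as sums.
total :: [Int] -> Int
total = getSum . interpret Sum
```

<code>total [1,2,3]</code> is 6, and nothing about lists beyond their bare monoid structure was needed to get there.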
<p>This <em>free</em> property is a very interesting one in mathematics, because it’s surprisingly simple! We can leverage that mathematical abstraction in software design. I like keeping my software as <em>free</em> as possible. That is – to say it in a more human language – constraining it to keep low responsibilities about what it’s doing.</p>
<h1 id="responsibility-domains">Responsibility domains</h1>
<p>The important thing to keep in mind is that you should, first, define what the responsibility domain is all about. Let’s say you’d like to create a library to implement audio effects, like the <a href="https://en.wikipedia.org/wiki/Doppler_effect">Doppler effect</a> – that effect actually exists for any kind of wave, but it’s interesting to synthesize it for a sound-related application. If you end up writing functions or routines to play sound or to load audio samples, you’re already doing it wrong! You’d have violated your responsibility domain, which is <em>“audio effects”</em>. Unfortunately, <strong>a lot</strong> of libraries do that. Adding extra stuff – and sometimes, worse: relying on it!</p>
<p>A lot of people tend to disagree with that – or they just <em>ignore</em> / <em>don’t know</em>. There’re plenty of examples of libraries and softwares that can do everything and nothing. For instance, take <a href="http://www.qt.io/">Qt</a> – pronounce <em>cute</em> or <em>cutie</em>. At first, <em>Qt</em> is a library and an API to build up <em>GUIs</em> – Graphical User Interfaces – and handle windows, events and so on. Let’s have a look at the documentation of modules, <a href="http://doc.qt.io/qt-5/qtmodules.html">here</a>.</p>
<p>You can see how the responsibility domain is <strong>huge</strong>! GUI, radio, audio, video, camera, network, database, printing, concurrency and multithreading… <em>Qt</em> isn’t a library anymore; it’s a whole new language!</p>
<p>People tend to like that. <em>“Yeah, I just have to use Qt, and I can do everything!”</em>. Well, that’s a point. But you can also think of it another way. Qt is a very massive “library”: you’ll spend hours reading the documentation and will use a lot of different classes / functions from different aspects. That doesn’t compose at all. What happens when you want to use something else – or when you don’t have the choice? For instance, if you want to use a smaller but dedicated threading library? What happens if you want to use a database service you wrote, or one you know is great? Do you wipe out your Qt use? Do you… try to make both work in harmony? If so, do you have to write a lot of boilerplate code? Do you forget about those technologies and fall back on Qt? Do the concepts map to each other?</p>
<p>The problem with massive libraries is the tight coupling they create between themselves and the developers. With such libraries, it’s very hard to claim you can use them whenever you want because you know them perfectly. You might even need only a few things from one – like the <em>SQL</em> part – yet you’ll have to install a lot of code of which you’ll perhaps use 10%.</p>
<h1 id="kiss">KISS</h1>
<p>I love how the <em>free</em> objects from mathematics can be leveraged to build simpler libraries here. The good part about <em>free</em> objects is the fact that they don’t have any extra features embedded. That’s very cool, because thanks to that, you can reason in terms of such objects <em>as-is</em>. For instance, <a href="http://www.openal.org">OpenAL</a> is a very <em>free</em> audio library. Its responsibility domain is to be able to play sound and apply simple effects on them – raw and primary effects. You won’t find anything to load music from files nor samples. And that’s very nice, because the API is <strong>small</strong>, <strong>simple</strong> and <strong>straight-forward</strong>.</p>
<p>Those adjectives are the base of the <a href="https://en.wikipedia.org/wiki/KISS_principle">KISS principle</a>. The idea behind <em>KISS</em> is simple: keep it simple, stupid. Keep it simple, because the simpler, the better. An overly complex architecture is bloated and ends up unmaintainable. Simplicity implies elegance, and then flexibility and composability.</p>
<p>That’s why I think a good architecture is a small one – in terms of responsibility – and a simple one. If you need complexity, that’s because your responsibility domain is already more complex than the common ones. And even though the design may look complex to someone outside of the domain, within the domain itself, it should stay simple and as straightforward as possible.</p>
<h1 id="api">API</h1>
<p>I think good API design is to pick a domain, and stick to it. Whatever extra features you don’t provide, you’ll be able to create other libraries to add them. Because those features will also be <em>free</em>, they will be useful in other projects that you don’t even know exist! That’s a very cool aspect of <em>free</em> objects!</p>
<p>There’s also a case in which you have to make sacrifices – and crucial choices. For instance, event-driven programming can be implemented via several techniques. A popular one in the functional programming world nowadays is <a href="https://wiki.haskell.org/Functional_Reactive_Programming">FRP</a>. Such a library is an <em>architectural codebase</em>. If you end up adding <em>FRP</em>-related code lines in your networking-oriented library, you might be doing it wrong. Because, eh, what if I just want to use imperative event-driven idioms, like <a href="https://en.wikipedia.org/wiki/Observer_pattern">observers</a>? You shouldn’t integrate such architectural design choices in specific libraries. Keep them <em>free</em>, so that everyone can quickly learn them, and enjoy them!</p>
<p>I like to see well-designed libraries as a set of very powerful, tiny tools I can compose and move around freely. If a tool gets broken or performs poorly, I can switch to a new one or write my very own. Achieving such flexibility without following the <em>KISS principle</em> is harder, or may even be impossible.</p>
<p>So, in my opinion, we should keep things simple and stupid. They’re simpler to reason about, they compose and scale well, and they’re of course easier to maintain. Compose them with architectural or other designs in the actual final executable project. Don’t make premature important choices!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com7tag:blogger.com,1999:blog-8976038770606708499.post-39124260053538676142015-09-08T11:59:00.000+02:002015-09-08T11:59:49.513+02:00Luminance – ASAP<h2 id="were-almost-there">We’re almost there!</h2>
<p><a href="https://github.com/phaazon/luminance">luminance</a>, the Haskell graphics framework I’ve been working on for a month and a half, will be released very soon as <strong>0.1</strong> on <a href="https://hackage.haskell.org">hackage</a>. I’m still working actively on several parts of it, especially the embedded documentation, wikis and main interface.</p>
<p>Keep in mind that the internal design is 80% done, but the end-user interface might change a lot in the future. Because I’m a demoscener, I’ll be using <strong>luminance</strong> for the next months to release a demoscene production in Germany, and I’ll provide you with nice feedback about how usable it is, so that I can make it more mature later on.</p>
<h2 id="what-to-expect">What to expect?</h2>
<p>Currently, <strong>luminance</strong> works. You can create <em>buffers</em>, <em>shaders</em>, <em>framebuffers</em>, <em>textures</em> and blend the whole thing to create nice (animated) images. Everything is strongly and (almost) dependently typed, so that you get extra type safety.</p>
<p>As I was developing the interface, I also wrote a new package that will be released on <strong>hackage</strong> as well: <a href="https://github.com/phaazon/luminance-samples">luminance-samples</a>. As you might have guessed, that package contains several executables you can launch to test <strong>luminance</strong>. Those are just <em>feature sets</em>. There’s a <em>Hello, World!</em> executable, a <em>depth test</em> executable, a <em>blending</em> one, a <em>texture</em> one, and so on and so forth. I’ll refactor them to make the code cleaner, but you should have a look to see what using <strong>luminance</strong> entails! ;)</p>
<p>I’ll be very open-minded about what you guys think of <strong>luminance</strong> once it gets released. Even though I started writing it for my own purposes, I clearly understand that a lot of people are interested in the project. I’ve been contacted by the developers of <strong>waylandmonad</strong> to explain the choices I made with <strong>luminance</strong>, so that they can do the same when migrating <strong>xmonad</strong> from the <a href="http://www.x.org">Xorg</a> technology to <a href="http://wayland.freedesktop.org">Wayland</a>. If I can help in any way, even if it’s not about <strong>luminance</strong> directly, don’t hesitate to contact me!</p>
<p>I can’t give you a 0.1 release milestone yet, but you should be able to install it from hackage and <a href="https://www.stackage.org">stackage</a> very soon! I’ll write an article when it gets released, I promise.</p>
<p>In the meantime, keep the vibe. Happy hacking around!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com1tag:blogger.com,1999:blog-8976038770606708499.post-69836527568334445842015-08-23T18:28:00.000+02:002015-08-25T17:02:12.588+02:00Contravariance and luminance to add safety to uniforms<p>It’s been a few days since I last posted about <a href="https://github.com/phaazon/luminance">luminance</a>. I’m on holidays, thus I can’t be as involved in the development of the graphics framework as I usually am on a daily basis. Although I’ve been producing less in the past few days, I’ve been actively thinking about something very important: <a href="https://www.opengl.org/wiki/Uniform_%28GLSL%29">uniforms</a>.</p>
<h1 id="what-people-usually-do">What people usually do</h1>
<p>Uniforms are a way to pass data to shaders. I won’t talk about <em>uniform blocks</em> nor <em>uniform buffers</em> – I’ll make a dedicated post for that purpose. The common OpenGL uniform flow is the following:</p>
<ol class="incremental" style="list-style-type: decimal">
<li>you ask OpenGL to retrieve the location of a GLSL uniform through the function <code>glGetUniformLocation</code>, or you can use an explicit location if you want to handle the semantics on your own ;</li>
<li>you use that location, the identifier of your shader program and send the actual values with the proper <code>glProgramUniform</code>.</li>
</ol>
<p>You typically don’t retrieve the location each time you need to send values to the GPU – you only retrieve them once, while initializing.</p>
<p>The first thing to make uniforms more elegant and safer is to provide a typeclass to provide a shared interface. Instead of using several functions for each type of uniform – <code>glProgramUniform1i</code> for <code>Int32</code>, <code>glProgramUniform1f</code> for <code>Float</code> and so on – we can just provide a function that will call the right OpenGL function for the type:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">class</span> <span class="dt">Uniform</span> a <span class="kw">where</span>
<span class="ot"> sendUniform ::</span> <span class="dt">GLuint</span> <span class="ot">-></span> <span class="dt">GLint</span> <span class="ot">-></span> a <span class="ot">-></span> <span class="dt">IO</span> ()
<span class="kw">instance</span> <span class="dt">Uniform</span> <span class="dt">Int32</span> <span class="kw">where</span>
sendUniform <span class="fu">=</span> glProgramUniform1i
<span class="kw">instance</span> <span class="dt">Uniform</span> <span class="dt">Float</span> <span class="kw">where</span>
sendUniform <span class="fu">=</span> glProgramUniform1f
<span class="co">-- and so on…</span></code></pre>
<p>That’s the first step, and I think everyone should do that. However, that approach has several drawbacks:</p>
<ul class="incremental">
<li>it still relies on side-effects; that is, we can call <code>sendUniform</code> pretty much everywhere ;</li>
<li>imagine we have a shader program that <strong>requires</strong> several uniforms to be passed each time we draw something; what happens if we forget to call <code>sendUniform</code>? If we haven’t sent the uniform yet, we might get undefined behavior. If we already have, we will <em>override</em> all future draws with that value, which is very wrong… ;</li>
<li>with that way of representing uniforms, we have a very imperative interface; we can have a more composable and pure approach than that, hence enabling us to gain in power and flexibility.</li>
</ul>
<h1 id="what-luminance-used-to-do">What luminance used to do</h1>
<p>In my <a href="https://github.com/phaazon/luminance">luminance</a> package, I used to represent uniforms as values.</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">newtype</span> <span class="dt">U</span> a <span class="fu">=</span> <span class="dt">U</span> {<span class="ot"> runU ::</span> a <span class="ot">-></span> <span class="dt">IO</span> () }</code></pre>
<p>We can then alter the <code>Uniform</code> typeclass to make it simpler:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">class</span> <span class="dt">Uniform</span> a <span class="kw">where</span>
<span class="ot"> toU ::</span> <span class="dt">GLuint</span> <span class="ot">-></span> <span class="dt">GLint</span> <span class="ot">-></span> <span class="dt">U</span> a
<span class="kw">instance</span> <span class="dt">Uniform</span> <span class="dt">Int32</span> <span class="kw">where</span>
toU prog l <span class="fu">=</span> <span class="dt">U</span> <span class="fu">$</span> glProgramUniform1i prog l
<span class="kw">instance</span> <span class="dt">Uniform</span> <span class="dt">Float</span> <span class="kw">where</span>
toU prog l <span class="fu">=</span> <span class="dt">U</span> <span class="fu">$</span> glProgramUniform1f prog l</code></pre>
<p>We also have a pure interface now. I used to provide another type, <code>Uniformed</code>, to be able to <em>send</em> uniforms without exposing <code>IO</code>, and an operator to accumulate uniforms settings, <code>(@=)</code>:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">newtype</span> <span class="dt">Uniformed</span> a <span class="fu">=</span> <span class="dt">Uniformed</span> {<span class="ot"> runUniformed ::</span> <span class="dt">IO</span> a } <span class="kw">deriving</span> (<span class="dt">Applicative</span>,<span class="dt">Functor</span>,<span class="dt">Monad</span>)
<span class="ot">(@=) ::</span> <span class="dt">U</span> a <span class="ot">-></span> a <span class="ot">-></span> <span class="dt">Uniformed</span> ()
<span class="dt">U</span> f <span class="fu">@=</span> a <span class="fu">=</span> <span class="dt">Uniformed</span> <span class="fu">$</span> f a</code></pre>
<p>Pretty simple.</p>
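<p>To see how those pieces fit together, here’s a minimal, self-contained sketch of the <code>U</code> / <code>Uniformed</code> pair in which the OpenGL call is replaced by an <code>IORef</code> write, so it runs without a GL context – the <code>timeU</code> and <code>resU</code> uniforms are made up for the example:</p>

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Data.IORef

-- same shapes as in the article; only the GL side effect is mocked
newtype U a = U { runU :: a -> IO () }
newtype Uniformed a = Uniformed { runUniformed :: IO a }
  deriving (Applicative, Functor, Monad)

(@=) :: U a -> a -> Uniformed ()
U f @= a = Uniformed (f a)

main :: IO ()
main = do
  ref <- newIORef ([] :: [String])
  -- stand-ins for real uniforms; a real program would build these with toU
  let timeU = U $ \t -> modifyIORef ref (++ ["time=" ++ show (t :: Float)])
      resU  = U $ \r -> modifyIORef ref (++ ["res=" ++ show (r :: (Int, Int))])
  -- accumulate uniform settings purely, run the effects once at the end
  runUniformed $ do
    timeU @= 0.5
    resU  @= (800, 600)
  readIORef ref >>= print
```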
<h1 id="the-new-uniform-interface">The new uniform interface</h1>
<p>The problem with that is that we still have the completion problem and the side-effects, because we just wrap them without adding anything special – <code>Uniformed</code> is isomorphic to <code>IO</code>. We have no way to create a type and ensure that <em>all</em> uniforms have been sent down to the GPU…</p>
<h2 id="contravariance-to-save-us">Contravariance to save us!</h2>
<p>If you’re an advanced <strong>Haskell</strong> programmer, you might have noticed something very interesting about our <code>U</code> type. It’s contravariant in its argument. What’s cool about that is that we can then create new uniform types – new <code>U</code>s – by contramapping over those types! That means we can enrich the scope of the hardcoded <code>Uniform</code> instances, because the only way to get a <code>U</code> is to use <code>Uniform.toU</code>. With contravariance, we can – in theory – extend those types to <strong>all types</strong>.</p>
<p>Sounds handy, eh? First things first: contravariant functors. A contravariant functor is a functor that flips the direction of the morphism:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">class</span> <span class="dt">Contravariant</span> f <span class="kw">where</span>
<span class="ot"> contramap ::</span> (a <span class="ot">-></span> b) <span class="ot">-></span> f b <span class="ot">-></span> f a
<span class="ot"> (>$) ::</span> b <span class="ot">-></span> f b <span class="ot">-></span> f a</code></pre>
<p><code>contramap</code> is the <em>contravariant</em> version of <code>fmap</code> and <code>(>$)</code> is the contravariant version of <code>(<$)</code>. If you’re not used to contravariance or if it’s the first time you’ve seen such a type signature, it might seem confusing or even <strong>magic</strong>. Well, that’s just mathematics at work! You’ll see just below that there’s no magic and no trick in the implementation.</p>
<p>Because <code>U</code> is contravariant in its argument, we can define a <code>Contravariant</code> instance:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">instance</span> <span class="dt">Contravariant</span> <span class="dt">U</span> <span class="kw">where</span>
contramap f u <span class="fu">=</span> <span class="dt">U</span> <span class="fu">$</span> runU u <span class="fu">.</span> f</code></pre>
<p>As you can see, nothing tricky here. We just apply the <code>(a -> b)</code> function on the input of the resulting <code>U a</code> so that we can pass it to <code>u</code>, and we just <code>runU</code> the whole thing.</p>
<p>A few friends of mine – not <strong>Haskeller</strong> though – told me things like <em>“That’s just theory bullshit, no one needs to know what a contravariant thingy stuff is!”</em>. Well, here’s an example:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">newtype</span> <span class="dt">Color</span> <span class="fu">=</span> <span class="dt">Color</span> {
<span class="ot"> colorName ::</span> <span class="dt">String</span>
,<span class="ot"> colorValue ::</span> (<span class="dt">Float</span>,<span class="dt">Float</span>,<span class="dt">Float</span>,<span class="dt">Float</span>)
}</code></pre>
<p>Even though we have an instance of <code>Uniform</code> for <code>(Float,Float,Float,Float)</code>, there will never be an instance of <code>Uniform</code> for <code>Color</code>, so we can’t have a <code>U Color</code>… Or can we?</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell">uColor <span class="fu">=</span> contramap colorValue float4U</code></pre>
<p>The type of <code>uColor</code> is… <code>U Color</code>! That works because contravariance enabled us to <em>adapt</em> the <code>Color</code> structure so that we end up on <code>(Float,Float,Float,Float)</code>. The contravariance property is then a very great ally in such situations!</p>
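<p>Here’s the same <code>Color</code> trick as a runnable sketch; <code>float4U</code> is a stand-in I define over an <code>IORef</code> instead of the real GL-backed uniform:</p>

```haskell
import Data.IORef
import Data.Functor.Contravariant (Contravariant(..))

-- U as in the article, with its Contravariant instance
newtype U a = U { runU :: a -> IO () }

instance Contravariant U where
  contramap f u = U (runU u . f)

data Color = Color {
    colorName  :: String
  , colorValue :: (Float, Float, Float, Float)
  }

main :: IO ()
main = do
  ref <- newIORef (0, 0, 0, 0)
  let float4U = U (writeIORef ref)  -- mocked uniform setter for 4 floats
      uColor  = contramap colorValue float4U  -- uColor :: U Color
  runU uColor (Color "red" (1, 0, 0, 1))
  readIORef ref >>= print
```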
<h2 id="more-contravariance">More contravariance</h2>
<p>We can even dig in deeper! Something cool would be to do the same thing, but for several fields. Imagine a mouse:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">Mouse</span> <span class="fu">=</span> <span class="dt">Mouse</span> {
<span class="ot"> mouseX ::</span> <span class="dt">Float</span>
,<span class="ot"> mouseY ::</span> <span class="dt">Float</span>
}</code></pre>
<p>We’d like to find a cool way to have <code>U Mouse</code>, so that we can send the mouse cursor to shaders. We’d like to contramap over <code>mouseX</code> and <code>mouseY</code>. A bit like with <code>Functor</code> + <code>Applicative</code>:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">getMouseX ::</span> <span class="dt">IO</span> <span class="dt">Float</span>
<span class="ot">getMouseY ::</span> <span class="dt">IO</span> <span class="dt">Float</span>
<span class="ot">getMouse ::</span> <span class="dt">IO</span> <span class="dt">Mouse</span>
getMouse <span class="fu">=</span> <span class="dt">Mouse</span> <span class="fu"><$></span> getMouseX <span class="fu"><*></span> getMouseY</code></pre>
<p>We could have the same thing for contravariance… And guess what. That exists, and that’s called <strong>divisible contravariant functors</strong>! A <code>Divisible</code> contravariant functor is the exact contravariant version of <code>Applicative</code>!</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">class</span> (<span class="dt">Contravariant</span> f) <span class="ot">=></span> <span class="dt">Divisible</span> f <span class="kw">where</span>
<span class="ot"> divide ::</span> (a <span class="ot">-></span> (b,c)) <span class="ot">-></span> f b <span class="ot">-></span> f c <span class="ot">-></span> f a
<span class="ot"> conquer ::</span> f a</code></pre>
<p><code>divide</code> is the contravariant version of <code>(<*>)</code> and <code>conquer</code> is the contravariant version of <code>pure</code>. You know that <code>pure</code>’s type is <code>a -> f a</code>, which is isomorphic to <code>(() -> a) -> f a</code>. Take the contravariant version of <code>(() -> a) -> f a</code>, you end up with <code>(a -> ()) -> f a</code>. <code>(a -> ())</code> is isomorphic to <code>()</code>, so we can simplify the whole thing to <code>f a</code>. Here you have <code>conquer</code>. <em>Thank you to Edward Kmett for helping me understand that!</em></p>
<p>Let’s see how we can implement <code>Divisible</code> for <code>U</code>!</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">instance</span> <span class="dt">Divisible</span> <span class="dt">U</span> <span class="kw">where</span>
divide f p q <span class="fu">=</span> <span class="dt">U</span> <span class="fu">$</span> \a <span class="ot">-></span> <span class="kw">do</span>
<span class="kw">let</span> (b,c) <span class="fu">=</span> f a
runU p b
runU q c
conquer <span class="fu">=</span> <span class="dt">U</span> <span class="fu">.</span> const <span class="fu">$</span> pure ()</code></pre>
<p>And now let’s use it to get a <code>U Mouse</code>!</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">let</span> uMouse <span class="fu">=</span> divide (\(<span class="dt">Mouse</span> mx my) <span class="ot">-></span> (mx,my)) mouseXU mouseYU</code></pre>
<p>And here we have <code>uMouse :: U Mouse</code>! As you can see, if you have several uniforms – one for each field of the type – you can <code>divide</code> your type and map all fields to uniforms by applying <code>divide</code> several times.</p>
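<p>To show how the nesting goes beyond two fields, here’s a self-contained sketch with a hand-rolled <code>divide</code> (the real one comes from the <code>Divisible</code> typeclass) and a hypothetical three-field <code>Camera</code> type that is not part of luminance; the GL calls are again mocked with an <code>IORef</code>:</p>

```haskell
import Data.IORef

newtype U a = U { runU :: a -> IO () }

-- standalone divide, same body as the Divisible instance in the article
divide :: (a -> (b, c)) -> U b -> U c -> U a
divide f p q = U $ \a -> let (b, c) = f a in runU p b >> runU q c

-- hypothetical three-field type, for illustration only
data Camera = Camera { camX, camY, camZ :: Float }

main :: IO ()
main = do
  ref <- newIORef ([] :: [Float])
  let floatU = U $ \x -> modifyIORef ref (++ [x])  -- mocked float uniform
      -- one divide per extra field: Camera -> (x, (y, z))
      uCamera = divide (\(Camera x y z) -> (x, (y, z)))
                       floatU
                       (divide id floatU floatU)
  runU uCamera (Camera 1 2 3)
  readIORef ref >>= print
```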
<p>The current implementation is almost the one shown here. There’s also a <code>Decidable</code> instance, but I won’t talk about that for now.</p>
<p>The cool thing about that is that I can lose the <code>Uniformed</code> monadic type and rely only on <code>U</code>. Thanks to the <code>Divisible</code> typeclass, we have completion, and we can’t override future uniforms then!</p>
<hr />
<p>I hope you’ve learnt something cool and useful through this. Keep in mind that category abstractions <strong>are powerful</strong> and are useful in some contexts.</p>
<p>Keep hacking around, keep being curious. A <strong>Haskeller</strong> never stops learning! And that’s what’s so cool about <strong>Haskell</strong>! Keep the vibe, and see you in another luminance post soon!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com68tag:blogger.com,1999:blog-8976038770606708499.post-51930746620677491382015-08-16T20:10:00.001+02:002015-08-16T20:10:48.905+02:00Never forget your git stashes again!<p>I’ve been experiencing issues with <code>git stash</code> for a while. If you don’t know that command yet, <code>git stash</code> is used to move all the changes living in your <em>working directory</em> and <em>staging area</em> into a special place: the <em>stash</em>.</p>
<p>The <em>stash</em> is a temporary area working like a stack. You can push changes onto it via <code>git stash</code> or <code>git stash save</code>; you can pop changes from top with <code>git stash pop</code>. You can also apply a very specific part of the stack with <code>git stash apply <stash id></code>. Finally you can get the list of all the stashes with <code>git stash list</code>.</p>
<p>We often use the <code>git stash</code> command to stash changes in order to make the working directory clear again so that we can apply a patch, pull some changes, change branch, and so on. For those purposes, the <em>stash</em> is pretty great.</p>
<p>However, I often forget about my stashes – I know I’m not the only one. Sometimes, I stash something and go to cook something or just go out, and when I’m back again, I might have forgotten about what I had stashed, especially if it was a very small change.</p>
<p>My current prompt for my shell, <a href="http://www.zsh.org/">zsh</a>, is in two parts. I set the <code>PS1</code> environment variable to set the regular prompt, and the <code>RPROMPT</code> environment variable to set a reversed prompt, starting from the right of the terminal. My reversed prompt just performs a <code>git</code> command to check whether we’re actually in a <code>git</code> project, and gets the current branch. Simple, but nice.</p>
<p>I came to the realization that I could use the exact same idea to know whether I have stashed changes, so that I never forget them! Here’s a screenshot to explain that:</p>
<div class="figure">
<img src="http://phaazon.net/pub/git_stash_shell.png" />
</div>
<p>As you can see, my prompt now shows me how many stashed changes there are around!</p>
<h1 id="the-code">The code</h1>
<p>I share the code I wrote with you. Feel free to use it, modify it and share it as well!</p>
<pre><code># …
function gitPrompt() {
# git current branch
currentBranch=`git rev-parse --abbrev-ref HEAD 2> /dev/null`
if (($? == 0))
then
echo -n "%F{green}$currentBranch%f"
fi
# git stash
stashNb=`git stash list 2> /dev/null | wc -l`
if [ "$stashNb" != "0" ]
then
echo -n " %F{blue}($stashNb)%f"
fi
echo ''
}
PS1="%F{red}%n%F{cyan}@%F{magenta}%M %F{cyan}%~ %F{yellow}%% %f"
RPROMPT='$(gitPrompt)'
# …</code></pre>
<p>Have fun!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com5tag:blogger.com,1999:blog-8976038770606708499.post-60239046772587847272015-08-11T15:56:00.000+02:002015-08-11T15:56:12.420+02:00Luminance – what was that alignment stuff already?<p>Yesterday, I released a new article about how I implement vertex arrays in luminance. In that article, I told you that the memory was packed with alignment set to <strong>1</strong>.</p>
<p>Well, I’ve changed my mind. Some people pointed out that the good thing to do for most GPUs is to align on 32-bit boundaries. That is, <strong>4</strong> bytes. The alignment should be <strong>4</strong> bytes, then, not <strong>1</strong>.</p>
<p>There might be an issue with that. If you store a structure with attributes whose sizes are not a multiple of <strong>4</strong> bytes, it’s likely you’ll need to add padding.</p>
<p>However, I just reviewed my code, and found this:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">instance</span> (<span class="dt">GPU</span> a,<span class="dt">KnownNat</span> n,<span class="dt">Storable</span> a) <span class="ot">=></span> <span class="dt">Vertex</span> (<span class="dt">V</span> n a) <span class="kw">where</span>
<span class="kw">instance</span> (<span class="dt">Vertex</span> a,<span class="dt">Vertex</span> b) <span class="ot">=></span> <span class="dt">Vertex</span> (a <span class="fu">:.</span> b) <span class="kw">where</span></code></pre>
<p>Those are the only instances of <code>Vertex</code>. That means you can only use <code>V</code> and <code>(:.)</code> to build up vertices. Look at the <code>V</code> instance. You’ll find a <code>GPU</code> typeclass constraint. Let’s look at its definition and instances:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">class</span> <span class="dt">GPU</span> a <span class="kw">where</span>
<span class="ot"> glType ::</span> <span class="dt">Proxy</span> a <span class="ot">-></span> <span class="dt">GLenum</span>
<span class="kw">instance</span> <span class="dt">GPU</span> <span class="dt">Float</span> <span class="kw">where</span>
glType _ <span class="fu">=</span> <span class="dt">GL_FLOAT</span>
<span class="kw">instance</span> <span class="dt">GPU</span> <span class="dt">Int32</span> <span class="kw">where</span>
glType _ <span class="fu">=</span> <span class="dt">GL_INT</span>
<span class="kw">instance</span> <span class="dt">GPU</span> <span class="dt">Word32</span> <span class="kw">where</span>
glType _ <span class="fu">=</span> <span class="dt">GL_UNSIGNED_INT</span></code></pre>
<p>Woah. How did I forget that?! Let me translate that information for you. It means we can only have 32-bit vertex components! So the memory inside vertex buffers will always be aligned on <strong>4</strong> bytes! No need to worry about padding, then!</p>
<p>The first implication is that you won’t be able to use <code>Word16</code>, for instance. You’ll need to stick to the three types that have a <code>GPU</code> instance.</p>
<p><strong>Note</strong>: that doesn’t prevent us from adding <code>Double</code> later on, because a <code>Double</code> is a 64-bit type, which is a multiple of <strong>4</strong> bytes!</p>
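<p>For illustration, here’s what such a <code>Double</code> instance could look like – a hedged sketch only, since luminance doesn’t ship it; the <code>GLenum</code> type and the <code>gl_DOUBLE</code> constant are mocked here so the snippet runs without an OpenGL binding:</p>

```haskell
import Data.Proxy (Proxy(..))
import Data.Word (Word32)

-- mocked GL bits; a real binding would provide these
type GLenum = Word32

gl_DOUBLE :: GLenum
gl_DOUBLE = 0x140A  -- the GL_DOUBLE enumerant from the OpenGL spec

class GPU a where
  glType :: Proxy a -> GLenum

-- the hypothetical instance the note alludes to
instance GPU Double where
  glType _ = gl_DOUBLE

main :: IO ()
main = print (glType (Proxy :: Proxy Double))
```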
<p>That’s all I have for today. I’m working on something very exciting linked to render batching. I’ll talk about that when it’s cooked. ;)</p>
<p>Keep the vibe; keep building awesome things, and as always, thank you for reading me!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com0tag:blogger.com,1999:blog-8976038770606708499.post-86905215944057760772015-08-10T12:24:00.002+02:002015-08-10T12:24:52.575+02:00Luminance – Vertex Arrays<p>I’ve been working on vertex arrays in my work-in-progress graphics framework, <a href="https://github.com/phaazon/luminance">luminance</a>, for several days. I’m a bit slow, because I’ve been through a very hard breakup and have been struggling to recover and focus. But here I am!</p>
<h1 id="so-whats-new">So, what’s new?</h1>
<p><strong>OpenGL</strong> allows programmers to send <em>vertices</em> to the GPU through what is called a <a href="https://www.opengl.org/wiki/Vertex_Specification">vertex array</a>. Vertex specification is performed through several functions, operating on several objects. You need, for instance, a <em>vertex buffer object</em>, an <em>index buffer object</em> and a <em>vertex array object</em>. The <em>vertex buffer</em> stores the vertices data.</p>
<div class="figure">
<img src="http://wiki.splashdamage.com/upload/2/2f/Teapot_mesh.jpg" alt="Teapot" /><p class="caption">Teapot</p>
</div>
<p>For instance, you could imagine a <em>teapot</em> as a set of vertices. Those vertices have several attributes. We could use, for instance, a <strong>position</strong>, a <strong>normal</strong> and a <strong>bone index</strong>. The vertex buffer would be responsible for storing those positions, normals and bone indices. There are two ways to store them:</p>
<ol class="incremental" style="list-style-type: decimal">
<li>interleaved arrays ;</li>
<li>deinterleaved arrays.</li>
</ol>
<p>I’ll explain those later on. The <em>index buffer</em> stores integral numbers – mainly set to <code>unsigned int</code> – that index the vertices, so that we can connect them and create lines, triangles or more complex shapes.</p>
<p>Finally, the <em>vertex array object</em> is a state object that stores links to the two buffers and makes a connection between pointers in the buffer and attribute indices. Once everything is set up, we might only use the <em>vertex array object</em>. The exception is when we need to change the geometry of an object. We need to access the vertex buffer and the index buffer and upload new data. However, <strong>for now</strong>, that feature is disabled so that the buffers are not exposed to the programmer. If people think that feature should be implemented, I’ll create specialized code for that very purpose.</p>
<h2 id="interleaved-and-deinterleaved-arrays">Interleaved and deinterleaved arrays</h2>
<p>Interleaved arrays might be the most simple to picture, because you use such arrays every day when programming. Let’s imagine you have the following type in <strong>Haskell</strong>:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> <span class="dt">Vertex</span> <span class="fu">=</span> <span class="dt">Vertex</span> {
<span class="ot"> vertPos ::</span> <span class="dt">X</span>
,<span class="ot"> vertNor ::</span> <span class="dt">Y</span>
,<span class="ot"> vertBoneID ::</span> <span class="dt">Z</span>
} <span class="kw">deriving</span> (<span class="dt">Eq</span>,<span class="dt">Show</span>)</code></pre>
<p>Now, the teapot would have several vertices. To keep it simple, let’s say the teapot has five vertices – yeah, ugly teapot. We can represent such vertices in an interleaved array by simply recording them in a list or an array:</p>
<div class="figure">
<img src="http://phaazon.net/pub/interleaved.png" alt="Interleaved" /><p class="caption">Interleaved</p>
</div>
<p>As you can see, the attributes are interleaved in memory, and the whole pattern is cycling. That’s the common way to represent an array of struct in a lot of languages, and it’s very natural for a machine to do things like that.</p>
<p>The deinterleaved version is:</p>
<div class="figure">
<img src="http://phaazon.net/pub/deinterleaved.png" alt="Deinterleaved" /><p class="caption">Deinterleaved</p>
</div>
<p>As you can see, with deinterleaved arrays, all attributes are extracted and grouped. If you want to access the third vertex, you need to read the third <code>X</code>, the third <code>Y</code> and the third <code>Z</code>.</p>
<p>Both methods have advantages and drawbacks. The cool thing about deinterleaved arrays is that we can copy huge regions of typed memory at once, which we cannot do with interleaved arrays. However, interleaved arrays store each structure contiguously, so writing and reading a single structure back might be faster.</p>
<p>An important point to keep in mind: because we plan to pass those arrays to <strong>OpenGL</strong>, there’s no <a href="https://en.wikipedia.org/wiki/Data_structure_alignment">alignment</a> restriction on the structure. That is, everything is <em>packed</em>, and we’ll have to pass extra information to <strong>OpenGL</strong> to tell it how to advance in memory to correctly build vertices back.</p>
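<p>The two layouts can be contrasted in a tiny sketch – plain Haskell lists here, not luminance code, with <code>Float</code> positions and normals and <code>Int</code> bone indices standing in for the attribute types:</p>

```haskell
data Vertex = Vertex {
    vertPos    :: Float
  , vertNor    :: Float
  , vertBoneID :: Int
  }

-- interleaved: one record after another, attributes cycling in memory
interleaved :: [Vertex]
interleaved = [Vertex 0.0 1.0 0, Vertex 0.5 0.8 1]

-- deinterleaved: one array per attribute; the same index addresses
-- the same logical vertex in every array
positions :: [Float]
positions = map vertPos interleaved

normals :: [Float]
normals = map vertNor interleaved

boneIDs :: [Int]
boneIDs = map vertBoneID interleaved

main :: IO ()
main = do
  print positions
  print normals
  print boneIDs
```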
<h2 id="generalized-tuple">Generalized tuple</h2>
<p>I think I haven’t told you yet. I have a cool type in <a href="https://github.com/phaazon/luminance">luminance</a>: the <code>(:.)</code> type. No, you don’t have to know how to pronounce that. I like to call it the <em>gtuple</em> type, because it’s a generalized tuple. You can encode <code>(a,b)</code>, <code>(a,b,c)</code> and all kinds of tuples with <code>(:.)</code>. You can even encode single-typed infinite tuples! – a very special kind of list, indeed.</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">data</span> a <span class="fu">:.</span> b <span class="fu">=</span> a <span class="fu">:.</span> b
<span class="kw">infixr</span> <span class="dv">6</span> <span class="fu">:.</span>
<span class="co">-- a :. b is isomorphic to (a,b)</span>
<span class="co">-- a :. b :. c is isomorphic to (a,b,c)</span>
<span class="kw">newtype</span> <span class="dt">Fix</span> f <span class="fu">=</span> <span class="dt">Fix</span> (f (<span class="dt">Fix</span> f)) <span class="co">-- from Control.Monad.Fix</span>
<span class="kw">type</span> <span class="dt">Inf</span> a <span class="fu">=</span> <span class="dt">Fix</span> ((<span class="fu">:.</span>) a) <span class="co">-- infinite tuple!</span></code></pre>
<p>Pretty simple, but way more powerful than the regular, monomorphic tuples. As you can see, <code>(:.)</code> is right-associative. That means that <code>a :. b :. c = a :. (b :. c)</code>.</p>
<p>That type will be heavily used in <a href="https://github.com/phaazon/luminance">luminance</a>, thus you should get your feet wet with it. There’s actually nothing much to know about it. It’s a <code>Functor</code>. I might add other features to it later on.</p>
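<p>To get a feel for it, here’s a small, self-contained sketch – the real definition lives in luminance, but building and destructuring values looks like this:</p>

```haskell
{-# LANGUAGE TypeOperators #-}

data a :. b = a :. b deriving (Eq, Show)
infixr 6 :.

-- Right-associativity means no parentheses are needed:
-- Int :. Float :. Bool is Int :. (Float :. Bool).
v :: Int :. Float :. Bool
v = 1 :. 2 :. True

-- Pattern matching peels components off the left.
firstOf :: (a :. b) -> a
firstOf (a :. _) = a
```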
<h3 id="the-storable-trick">The Storable trick</h3>
<p>The cool thing about <code>(:.)</code> is that we can provide a <code>Storable</code> instance for packed memory, as <strong>OpenGL</strong> requires it. Currently, the <code>Storable</code> instance is <a href="https://github.com/phaazon/luminance/blob/05ef2e4879aae92189535f0121765931b20de2fd/src/Graphics/Luminance/Tuple.hs#L23">implemented like this</a>:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">instance</span> (<span class="dt">Storable</span> a,<span class="dt">Storable</span> b) <span class="ot">=></span> <span class="dt">Storable</span> (a <span class="fu">:.</span> b) <span class="kw">where</span>
  sizeOf (a <span class="fu">:.</span> b) <span class="fu">=</span> sizeOf a <span class="fu">+</span> sizeOf b
  alignment _ <span class="fu">=</span> <span class="dv">1</span> <span class="co">-- packed data</span>
  peek p <span class="fu">=</span> <span class="kw">do</span>
    a <span class="ot"><-</span> peek <span class="fu">$</span> castPtr p
    b <span class="ot"><-</span> peek <span class="fu">.</span> castPtr <span class="fu">$</span> p <span class="ot">`plusPtr`</span> sizeOf (undefined<span class="ot"> ::</span> a)
    pure <span class="fu">$</span> a <span class="fu">:.</span> b
  poke p (a <span class="fu">:.</span> b) <span class="fu">=</span> <span class="kw">do</span>
    poke (castPtr p) a
    poke (castPtr <span class="fu">$</span> p <span class="ot">`plusPtr`</span> sizeOf (undefined<span class="ot"> ::</span> a)) b</code></pre>
<p>As you can see, the <code>alignment</code> is set to <code>1</code> to express the fact the memory is packed. The <code>peek</code> and <code>poke</code> functions use the size of the head of the tuple to advance the pointer so that we effectively write the whole tuple in packed memory.</p>
<p>Then, let’s rewrite our <code>Vertex</code> type in terms of <code>(:.)</code> to see how it’s going on:</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="kw">type</span> <span class="dt">Vertex</span> <span class="fu">=</span> <span class="dt">X</span> <span class="fu">:.</span> <span class="dt">Y</span> <span class="fu">:.</span> <span class="dt">Z</span></code></pre>
<p>If <code>X</code>, <code>Y</code> and <code>Z</code> are in <code>Storable</code>, we can directly <code>poke</code> one of our <code>Vertex</code> into a <a href="http://phaazon.blogspot.fr/2015/07/introducing-luminance-safer-opengl-api.html">luminance buffer</a>! That is, directly into the GPU buffer!</p>
<p>Keep in mind that the <code>Storable</code> instance implements packed-memory uploads and reads, and won’t work with special kinds of buffers, like <em>shader storage</em> ones, which require specific memory alignment. To cover them, I’ll create specific typeclasses instances. No worries.</p>
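<p>As a sanity check, here’s a self-contained variant of that instance with a round-trip through packed memory. Two deviations from the excerpt above: I use <code>sizeOf a</code> on the already-peeked head instead of <code>undefined :: a</code> (which would require <code>ScopedTypeVariables</code>), and <code>sizeOf</code> uses a lazy pattern so it still works when called on <code>undefined</code>:</p>

```haskell
{-# LANGUAGE TypeOperators #-}

import Foreign.Marshal.Alloc ( alloca )
import Foreign.Ptr ( castPtr, plusPtr )
import Foreign.Storable

data a :. b = a :. b deriving (Eq, Show)
infixr 6 :.

instance (Storable a, Storable b) => Storable (a :. b) where
  sizeOf ~(a :. b) = sizeOf a + sizeOf b -- lazy match: safe on undefined
  alignment _ = 1 -- packed data
  peek p = do
    a <- peek (castPtr p)
    b <- peek (castPtr (p `plusPtr` sizeOf a))
    pure (a :. b)
  poke p (a :. b) = do
    poke (castPtr p) a
    poke (castPtr (p `plusPtr` sizeOf a)) b

-- A packed vertex: 3 × 4 bytes = 12 bytes, no padding.
type Vertex = Float :. Float :. Float

roundTrip :: Vertex -> IO Vertex
roundTrip v = alloca $ \p -> poke p v >> peek p
```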
<h1 id="creating-a-vertex-array">Creating a vertex array</h1>
<p>Creating a vertex array is done through the function <code>createVertexArray</code>. I might change the name of that object – it’s ugly, right? Maybe <code>Shape</code>, or something cooler!</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="ot">createVertexArray ::</span> (<span class="dt">Foldable</span> f,<span class="dt">MonadIO</span> m,<span class="dt">MonadResource</span> m,<span class="dt">Storable</span> v,<span class="dt">Traversable</span> t,<span class="dt">Vertex</span> v)
                  <span class="ot">=></span> t v
                  <span class="ot">-></span> f <span class="dt">Word32</span>
                  <span class="ot">-></span> m <span class="dt">VertexArray</span></code></pre>
<p>As you can see, the type signature is highly polymorphic. <code>t</code> and <code>f</code> represent <em>foldable</em> structures storing the vertices and the indices. And that’s all. Nothing else to feed the function with! As you can see, there’s a typeclass constraint on <code>v</code>, the inner vertex type, <code>Vertex</code>. That constraint ensures the vertex type is representable on the <strong>OpenGL</strong> side and has a known vertex format.</p>
<p><strong>Disclaimer:</strong> the <code>Traversable</code> constraint might be relaxed to be <code>Foldable</code> very soon.</p>
<p>Once tested, I’ll move all that code from the <code>unstable</code> branch to the <code>master</code> branch so that you guys can test it. :)</p>
<h2 id="about-opengl">About OpenGL…</h2>
<p>I eventually came to the realization that I needed to inform you about the <strong>OpenGL</strong> prerequisites. Because I want the framework to be as modern and well-designed as possible, you’ll need… <strong>OpenGL 4.5</strong>. The latest version, indeed. You <strong>might</strong> also need an extension, <a href="https://www.opengl.org/wiki/Bindless_Texture">ARB_bindless_texture</a>. That would enable the framework to pass textures to shaders in a very stateless way, which is our objective!</p>
<p>I’ll let you know what I decide about that. I don’t want to use an extension that is not implemented almost everywhere.</p>
<h1 id="whats-next">What’s next?</h1>
<p>Well, tests! I need to be sure everything is correctly done on the GPU side, especially the vertex format specification. I’m pretty confident though.</p>
<p>Once the vertex arrays are tested, I’ll start defining a <em>render interface</em> as stateless as I can. As always, I’ll keep you informed!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com0tag:blogger.com,1999:blog-8976038770606708499.post-39808277429518093572015-08-01T13:29:00.000+02:002015-08-01T20:58:12.455+02:00Luminance – framebuffers and textures<p>I’m happily surprised that so many <strong>Haskell</strong> people follow <a href="https://github.com/phaazon/luminance">luminance</a>! First thing first, let’s tell you about how it grows.</p>
<p>Well, pretty quickly! There’s no way to perform actual renders yet, because I’m still working on how to implement some stuff (I’ll detail that below), but it’s going in the right direction!</p>
<h1 id="framebuffers">Framebuffers</h1>
<p>Something that is almost done is the <a href="https://www.opengl.org/wiki/Framebuffer_Object">framebuffer</a> part. The main idea of <em>framebuffers</em> – in <strong>OpenGL</strong> – is supporting <em>offscreen renders</em>, so that we can render to several framebuffers and combine them in several fancy ways. Framebuffers often have <em>textures</em> bound to them, used to pass the rendered information around, especially to <em>shaders</em>, or to get the pixels back CPU-side through texture reads.</p>
<p>The thing is… <strong>OpenGL</strong>’s <em>framebuffers</em> are tedious. You can get incomplete framebuffers if you don’t attach textures with the right format, or if you attach them to the wrong attachment point. The <em>framebuffer</em> layer of <strong>luminance</strong> is there to solve that.</p>
<p>In <strong>luminance</strong>, a <code>Framebuffer rw c d</code> is a framebuffer with two formats. A <em>color</em> format, <code>c</code>, and a <em>depth</em> format, <code>d</code>. If <code>c = ()</code>, then no color will be recorded. If <code>d = ()</code>, then no depth will be recorded. That enables the use of <em>color-only</em> or <em>depth-only</em> renders, which are often optimized by GPU. It also includes a <code>rw</code> type variable, which has the same role as for <code>Buffer</code>. That is, you can have <em>read-only</em>, <em>write-only</em> or <em>read-write</em> framebuffers.</p>
<p>And of course, all those features – having a <em>write-only</em> <em>depth-only</em> framebuffer for instance – are set through… <strong>types</strong>! And that’s what is so cool about how things are handled in <strong>luminance</strong>. You just tell it what you want, and it’ll create the required state and manage it for you GPU-side.</p>
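<p>The technique behind that is ordinary phantom types plus typeclass constraints. This is not luminance’s actual implementation – just a minimal sketch, with hypothetical names, of how a <code>rw</code> parameter can forbid reading from a write-only framebuffer at compile time:</p>

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Minimal sketch of the phantom-type technique; not luminance’s real API.
data R   -- read-only
data W   -- write-only
data RW  -- read-write

-- rw, c and d are phantom parameters; only an ID is stored here.
newtype Framebuffer rw c d = Framebuffer Int

class Readable rw
instance Readable R
instance Readable RW

-- Only callable on readable framebuffers; passing a Framebuffer W c d
-- is rejected by the compiler because there is no Readable W instance.
readColor :: Readable rw => Framebuffer rw c d -> Int
readColor (Framebuffer fbID) = fbID
```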
<h2 id="textures">Textures</h2>
<p>The format types are used to know which textures to create and how to attach them internally. The textures are hidden from the interface so that you can’t mess with them. I still need to find a way to provide some kind of access to the information they hold, in order to use them in shaders for instance. I’d love to provide some kind of <em>monoidal</em> properties between framebuffers – to mimic <a href="https://hackage.haskell.org/package/gloss">gloss</a> <code>Monoid</code> instance for its <a href="https://hackage.haskell.org/package/gloss-1.9.2.1/docs/Graphics-Gloss-Data-Picture.html#t:Picture">Picture</a> type, basically.</p>
<p>You can create textures, of course, by using the <code>createTexture w h mipmaps</code> function. <code>w</code> is the <em>width</em>, <code>h</code> the <em>height</em> of the texture. <code>mipmaps</code> is the number of <em>mipmaps</em> you want for the texture.</p>
<p>You can then upload <em>texels</em> to the texture through several functions. The basic form is <code>uploadWhole tex autolvl texels</code>. It takes a <em>texture</em> <code>tex</code> and the <code>texels</code> to upload to the whole texture region. It’s your responsibility to ensure that you pass the correct number of texels. The <code>texels</code> are represented with a polymorphic type. You’re not bound to any specific container: you can pass a list of texels, a <code>Vector</code> of texels, or whatever you want, as long as it’s <code>Foldable</code>.</p>
<p>It’s also possible to fill the whole texture with a single value. In <strong>OpenGL</strong> slang, such an operation is often called <em>clearing</em> – clearing a <em>buffer</em>, clearing a <em>texture</em>, clearing the <em>back buffer</em>, and so on. You can do that with <code>fillWhole</code>.</p>
<p>There are two other functions to work with subparts of textures, but they’re not interesting for the purpose of this blog entry.</p>
<h2 id="pixel-format">Pixel format</h2>
<p>The cool thing is that I’ve unified pixel formats. <em>Textures</em> and <em>framebuffers</em> share the same pixel format type (<code>Format t c</code>). Currently, they’re all phantom types, but I might unify them further and use <code>DataKinds</code> to promote them to the type-level. A format has two type variables, <code>t</code> and <code>c</code>.</p>
<p><code>t</code> is the underlying type. Currently, it can be either <code>Int32</code>, <code>Word32</code> or <code>Float</code>. I might add support for <code>Double</code> as well later on.</p>
<p><code>c</code> is the channel type. There’re basically five channel types:</p>
<ul class="incremental">
<li><code>CR r</code>, a red channel ;</li>
<li><code>CRG r g</code>, red and green channels ;</li>
<li><code>CRGB r g b</code>, red, green and blue channels ;</li>
<li><code>CRGBA r g b a</code>, red, green, blue and alpha channels ;</li>
<li><code>CDepth d</code>, a depth channel (special case of <code>CR</code>; for depths only).</li>
</ul>
<p>The type variables <code>r</code>, <code>g</code>, <code>b</code>, <code>a</code> and <code>d</code> represent <em>channel sizes</em>. There are currently three kinds of <em>channel sizes</em>:</p>
<ul class="incremental">
<li><code>C8</code>, for 8-bit ;</li>
<li><code>C16</code>, for 16-bit ;</li>
<li><code>C32</code>, for 32-bit.</li>
</ul>
<p>Then, <code>Format Float (CR C32)</code> is a red-only, 32-bit floating-point format – the <strong>OpenGL</strong> equivalent is <code>R32F</code>. <code>Format Word32 (CRGB C8 C8 C16)</code> is an <em>RGB</em> format with 8-bit unsigned integer red and green channels and a 16-bit unsigned integer blue channel.</p>
<p>Of course, if a pixel format doesn’t exist on the <strong>OpenGL</strong> side, you won’t be able to use it. Typeclasses are there to enforce that a pixel format can be represented on the <strong>OpenGL</strong> side.</p>
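<p>A stripped-down sketch of that enforcement – the <code>GLFormat</code> class and <code>glFormatName</code> are hypothetical names here; the real crate maps formats to actual OpenGL enums rather than strings:</p>

```haskell
{-# LANGUAGE EmptyDataDecls, FlexibleInstances #-}

-- Channel sizes and channel shapes as phantom types.
data C8
data C16
data C32
data CR r
data Format t c

-- A local stand-in for Data.Proxy, to pick a format at the type level.
data Proxy a = Proxy

-- Only formats with an instance can reach the OpenGL side.
class GLFormat f where
  glFormatName :: Proxy f -> String

-- Format Float (CR C32) is the OpenGL R32F format.
instance GLFormat (Format Float (CR C32)) where
  glFormatName _ = "R32F"

-- Asking for a format with no instance, e.g.
--   glFormatName (Proxy :: Proxy (Format Float (CR C8)))
-- is a compile-time error: no GLFormat instance.
```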
<h1 id="next-steps">Next steps</h1>
<p>Currently, I’m working hard on how to represent vertex formats. That’s not a trivial task, because we can send vertices to <strong>OpenGL</strong> as interleaved – or not – arrays. I’m trying to design something elegant and safe, and I’ll keep you informed when I finally get something. I’ll need to find an interface for the actual render command, and I should be able to release something we can actually use!</p>
<p>By the way, some people already tried it (Git HEAD), and that’s amazing! I’ve created the <code>unstable</code> branch so that I can push unstable things, and keep the master branch as clean as possible.</p>
<p>Keep the vibe, and have fun hacking around!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com0tag:blogger.com,1999:blog-8976038770606708499.post-32341371837183692262015-07-24T23:23:00.000+02:002015-07-25T12:03:47.669+02:00Introducing Luminance, a safer OpenGL API<p>A few weeks ago, I was writing <strong>Haskell</strong> lines for a project I had been working on for a very long time. That project was a 3D engine. There are several posts about it on my blog, feel free to check them out.</p>
<p>The thing is… Times change. As time passes, I become more mature in what I do in the <strong>Haskell</strong> community. I’m a demoscener, and I need to be productive. Writing a whole 3D engine for such a purpose is a good thing, but I was going round and round in circles, changing the whole architecture every now and then. I couldn’t make up my mind, and I couldn’t help it. So I decided to stop working on that, and move on.</p>
<p>If you are a <strong>Haskell</strong> developer, you might already know <a href="https://www.fpcomplete.com/user/edwardk?show=all">Edward Kmett</a>. Each talk with him is always interesting and I always end up with new ideas and new knowledge. Sometimes, we talk about graphics, and sometimes, he tells me that writing a 3D engine from scratch and release it to the community is not a very good move.</p>
<p>I’ve been thinking about that, and in the end, I agree with Edward. There are two reasons that make such a project hard and not that interesting for a community:</p>
<ol class="incremental" style="list-style-type: decimal">
<li>a good “3D engine” is a specialized one – for FPS games, for simulations, for sport games, for animation, etc. If we know what the player will do, we can optimize a lot of stuff and put less detail into unimportant parts of the visuals. For instance, some games don’t really care about skies, so they can use simple skyboxes with nice textures to bring a nice touch of atmosphere, without destroying performance. In a game like a flight simulator, skyboxes have to be avoided in favour of other techniques to provide a correct experience to players. Even though an engine could provide both techniques, apply that problem to almost everything – i.e. space partitioning for instance – and you end up with a nightmare to code ;</li>
<li>an engine can be a very bloated piece of software – because of point <em>1</em>. It’s very hard to keep an engine up to date regarding technologies, and make every one happy, especially if the engine targets a large audience of people – i.e. <a href="http://hackage.haskell.org">hackage</a>.</li>
</ol>
<p>Point <em>2</em> might be strange to you, but that’s often the case. Building a flexible 3D engine is a very hard and non-trivial task. Because of point <em>1</em>, you utterly need to restrict things in order to get the required level of performance or design. There are people out there – especially in the demoscene world – who can build up 3D engines quickly. But keep in mind those engines are limited to demoscene applications, and enhancing them to support something else is not a trivial task. In the end, you might end up with a lot of bloated code you’ll eventually zap later on to build something different for another purpose – eh, demoscene is about going dirty, right?! ;)</p>
<h1 id="basics">Basics</h1>
<p>So… Let’s go back to the basics. In order to include everyone, we need to provide something that everyone can download, install, learn and use. Something like <a href="https://www.opengl.org/sdk">OpenGL</a>. For <strong>Haskell</strong>, I highly recommend using <a href="https://hackage.haskell.org/package/gl">gl</a>. It’s built against the <code>gl.xml</code> file – released by Khronos. If you need sound, you can use the complementary library I wrote, using the same name convention, <a href="https://hackage.haskell.org/package/al">al</a>.</p>
<p>The problem with that is the fact that <strong>OpenGL</strong> is a low-level API. Especially for newcomers or people who need to get things done quickly. The part that bothers – wait, no, <em>annoys</em> – me the most is the fact that <strong>OpenGL</strong> is a very old library which was designed two decades ago. And we suffer from that. A lot.</p>
<p><strong>OpenGL</strong> is a <em>stateful</em> graphics library. That means it maintains a <em>state</em>, a <em>context</em>, in order to work properly. Maintaining a context or state is a legit need, don’t get it twisted. However, if the design of the API doesn’t fit such a way of dealing with the state, we come across a lot of problems. Is there <em>one</em> programmer who hasn’t experienced black screens yet? I don’t think so.</p>
<p>The <strong>OpenGL</strong> API exposes a lot of functions that perform <em>side-effects</em>. Because <strong>OpenGL</strong> is weakly typed – almost all objects you can create in <strong>OpenGL</strong> share the same <code>GL(u)int</code> type, which is very wrong – you might end up doing nasty things. Worse, it uses an internal binding system to <em>select</em> the objects you want to operate on. For instance, if you want to upload data to a texture object, you need to <em>bind</em> the texture before calling the texture upload function. If you don’t, well, that’s bad for you. There’s no way to verify code safety at compile-time.</p>
<p>You’re not convinced yet? <strong>OpenGL</strong> doesn’t tell you directly how to change things on the GPU side. For instance, do you think you have to <em>bind</em> your vertex buffer before performing a render, or is it sufficient to <em>bind</em> the <em>vertex array object</em> only? All those questions don’t have direct answers, and you’ll need to dig in several wikis and forums to get your answers – the answer to that question is <em>“Just bind the VAO, pal.”</em></p>
<h1 id="what-can-we-do-about-it">What can we do about it?</h1>
<p>Several attempts to enhance that safety have come up. The first thing we <strong>have</strong> to do is to wrap all <strong>OpenGL</strong> object types into proper types. For instance, we need several types for <code>Texture</code> and <code>Framebuffer</code>.</p>
<p>Then, we need a way to ensure that we cannot call a function if the context is not set up for it. There are a few ways to do that. For instance, <a href="https://hackage.haskell.org/package/indexed-0.1">indexed monads</a> can be a good start. However, I tried that, and I can tell you it’s way too complicated. You end up with very long types that make things barely readable. <a href="https://github.com/phaazon/igl/blob/master/src/Graphics/Rendering/IGL/Buffer.hs#L114">See this</a> and <a href="https://github.com/phaazon/igl/blob/master/src/Graphics/Rendering/IGL/GL.hs#L51">this</a> for excerpts.</p>
<h2 id="luminance">Luminance</h2>
<p>In my desperate quest to provide a safer <strong>OpenGL</strong> API, I decided to create a library from scratch called <a href="https://github.com/phaazon/luminance">luminance</a>. That library is not really a safe <strong>OpenGL</strong> wrapper, but it’s very close to being that.</p>
<p><code>luminance</code> provides the same objects as <strong>OpenGL</strong> does, but via a safer way to create, access and use them. It’s an effort to provide safe abstractions suited for graphics applications without sacrificing performance. It’s not a 3D engine. It’s a rendering framework. There’s no <em>light</em>, <em>asset managers</em> or that kind of features. It’s just a <em>tiny</em> and <em>simple</em> yet powerful API.</p>
<h3 id="example">Example</h3>
<p><code>luminance</code> is still a huge <em>work in progress</em>. However, I can already show an example. The following example opens a window but doesn’t render anything. Instead, it creates a buffer on the GPU and performs several simple operations on it.</p>
<pre class="sourceCode haskell"><code class="sourceCode haskell"><span class="co">-- Several imports.</span>
<span class="kw">import </span><span class="dt">Control.Monad.IO.Class</span> ( <span class="dt">MonadIO</span>(..) )
<span class="kw">import </span><span class="dt">Control.Monad.Trans.Resource</span> <span class="co">-- from the resourcet package</span>
<span class="kw">import </span><span class="dt">Data.Foldable</span> ( traverse_ )
<span class="kw">import </span><span class="dt">Graphics.Luminance.Buffer</span>
<span class="kw">import </span><span class="dt">Graphics.Luminance.RW</span>
<span class="kw">import </span><span class="dt">Graphics.UI.GLFW</span> <span class="co">-- from the GLFW-b package</span>
<span class="kw">import </span><span class="dt">Prelude</span> <span class="kw">hiding</span> ( init ) <span class="co">-- clash with GLFW-b’s init function</span>
windowW,<span class="ot">windowH ::</span> <span class="dt">Int</span>
windowW <span class="fu">=</span> <span class="dv">800</span>
windowH <span class="fu">=</span> <span class="dv">600</span>
<span class="ot">windowTitle ::</span> <span class="dt">String</span>
windowTitle <span class="fu">=</span> <span class="st">"Test"</span>
<span class="ot">main ::</span> <span class="dt">IO</span> ()
main <span class="fu">=</span> <span class="kw">do</span>
  init
  <span class="co">-- Initiate the OpenGL context with GLFW.</span>
  windowHint (<span class="dt">WindowHint'Resizable</span> <span class="dt">False</span>)
  windowHint (<span class="dt">WindowHint'ContextVersionMajor</span> <span class="dv">3</span>)
  windowHint (<span class="dt">WindowHint'ContextVersionMinor</span> <span class="dv">3</span>)
  windowHint (<span class="dt">WindowHint'OpenGLForwardCompat</span> <span class="dt">False</span>)
  windowHint (<span class="dt">WindowHint'OpenGLProfile</span> <span class="dt">OpenGLProfile'Core</span>)
  window <span class="ot"><-</span> createWindow windowW windowH windowTitle <span class="dt">Nothing</span> <span class="dt">Nothing</span>
  makeContextCurrent window
  <span class="co">-- Run our application, which needs a (MonadIO m,MonadResource m) => m</span>
  <span class="co">-- we traverse_ so that we just terminate if we’ve failed to create the</span>
  <span class="co">-- window.</span>
  traverse_ (runResourceT <span class="fu">.</span> app) window
  terminate
<span class="co">-- GPU regions. For this example, we’ll just create two regions. One of floats</span>
<span class="co">-- and the other of ints. We’re using read/write (RW) regions so that we can</span>
<span class="co">-- send values to the GPU and read them back.</span>
<span class="kw">data</span> <span class="dt">MyRegions</span> <span class="fu">=</span> <span class="dt">MyRegions</span> {
  <span class="ot"> floats ::</span> <span class="dt">Region</span> <span class="dt">RW</span> <span class="dt">Float</span>
  ,<span class="ot"> ints ::</span> <span class="dt">Region</span> <span class="dt">RW</span> <span class="dt">Int</span>
  }
<span class="co">-- Our logic.</span>
<span class="ot">app ::</span> (<span class="dt">MonadIO</span> m,<span class="dt">MonadResource</span> m) <span class="ot">=></span> <span class="dt">Window</span> <span class="ot">-></span> m ()
app window <span class="fu">=</span> <span class="kw">do</span>
  <span class="co">-- We create a new buffer on the GPU, getting back regions of typed data</span>
  <span class="co">-- inside of it. For that purpose, we provide a monadic type used to build</span>
  <span class="co">-- regions through the 'newRegion' function.</span>
  region <span class="ot"><-</span> createBuffer <span class="fu">$</span>
    <span class="dt">MyRegions</span>
      <span class="fu"><$></span> newRegion <span class="dv">10</span>
      <span class="fu"><*></span> newRegion <span class="dv">5</span>
  clear (floats region) pi <span class="co">-- clear the floats region with pi</span>
  clear (ints region) <span class="dv">10</span> <span class="co">-- clear the ints region with 10</span>
  readWhole (floats region) <span class="fu">>>=</span> liftIO <span class="fu">.</span> print <span class="co">-- print the floats as an array</span>
  readWhole (ints region) <span class="fu">>>=</span> liftIO <span class="fu">.</span> print <span class="co">-- print the ints as an array</span>
  floats region <span class="ot">`writeAt`</span> <span class="dv">7</span> <span class="fu">$</span> <span class="dv">42</span> <span class="co">-- write 42 at index=7 in the floats region</span>
  floats region <span class="fu">@?</span> <span class="dv">7</span> <span class="fu">>>=</span> traverse_ (liftIO <span class="fu">.</span> print) <span class="co">-- safe getter (Maybe)</span>
  floats region <span class="fu">@!</span> <span class="dv">7</span> <span class="fu">>>=</span> liftIO <span class="fu">.</span> print <span class="co">-- unsafe getter</span>
  readWhole (floats region) <span class="fu">>>=</span> liftIO <span class="fu">.</span> print <span class="co">-- print the floats as an array</span></code></pre>
<p>Those read/write regions could also have been made <em>read-only</em> or <em>write-only</em>. For such regions, some functions can’t be called, and trying to do so will make your compiler angry and throw errors at you.</p>
<p>Up to now, the buffers are created <em>persistently</em> and <em>coherently</em>. That might cause issues with <strong>OpenGL</strong> <em>synchronization</em>, but I’ll wait for benchmarks before changing that part. If benchmarking spots performance bottlenecks, I’ll introduce more buffers and regions to deal with special cases.</p>
<p><code>luminance</code> doesn’t force you to use a specific windowing library. You can then embed it into any kind of host libraries.</p>
<h1 id="whats-to-come">What’s to come?</h1>
<p><a href="https://github.com/phaazon/luminance">luminance</a> is very young. At the moment of writing this article, it’s only 26 commits old. I just wanted to present it so that people know it exists and that it will be released as soon as possible. The idea is to provide a library that, if you use it, won’t create black screens because of framebuffer incorrectness or buffer issues. It’ll ease debugging <strong>OpenGL</strong> applications and prevent you from making nasty mistakes.</p>
<p>I’ll keep posting about <code>luminance</code> as I get new features implemented.</p>
<p>As always, keep the vibe, and happy hacking!</p>
Anonymoushttp://www.blogger.com/profile/06180476773002153033noreply@blogger.com0