Tuesday, October 06, 2015

luminance-0.3 – Adding more texture kinds to the equation…

Unleashing the power of textures!

From luminance-0.1 to luminance-0.2 included, it was not possible to use any texture type other than two-dimensional textures. This blog entry tags the new release, luminance-0.3, which adds support for several kinds of textures.

A bit more dimensions

Texture1D, Texture2D and Texture3D are all part of the new release. The interface has changed – hence the major version increment for breaking changes – and I’ll explain how.

Basically, textures are now fully polymorphic and are constrained by a typeclass: Texture. That typeclass enables ad hoc polymorphism. It is then possible to add more texture types without having to change the interface, which is cool. Like everything else in luminance, you just have to ask the type system which kind of texture you want, and everything will be taken care of for you.

Basically, you have three functions to know:

  • createTexture, which is used to create a new texture ;
  • uploadSub, used to upload texels to a subpart of the texture ;
  • fillSub, used to fill – clear – a subpart of the texture with a given value.

All those functions work on (Texture t) => t, so they work with all kinds of textures.
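To give a feel for the interface, here is a minimal sketch. Only the three function names above are real; the argument lists, the monad, the exact types and the placeholder values are assumptions on my part, so check the documentation for the actual signatures.

    -- hedged sketch: the argument lists, the monad and the exact types are
    -- assumed, not the real API; only the three function names come from above
    example :: IO ()
    example = do
      -- the texture kind is picked purely through the type annotation
      -- (Texture2D may well take extra type parameters in the real API)
      tex <- createTexture (512, 512) 4 :: IO Texture2D
      -- upload texels to a subpart of the texture: offset, size, then texels
      uploadSub tex (0, 0) (256, 256) someTexels
      -- fill – clear – a subpart of the texture with a single value
      fillSub tex (0, 0) (512, 512) someClearValue
      where
        someTexels     = undefined -- placeholder for your Foldable of texels
        someClearValue = undefined -- placeholder for a single texel value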

Cubemaps

Cubemaps are also included. They work like other textures but add the concept of faces. Feel free to dig in the documentation for further details.

What’s next?

I need to find a way to wrap texture arrays, which are very nice and useful for layered rendering. After that, I’ll try to expose the change to framebuffers so that we can create framebuffers with cubemap attachments and that kind of cool feature.

In the meantime, have a good week!

Saturday, August 01, 2015

Luminance – framebuffers and textures

I’m happily surprised that so many Haskell people follow luminance! First things first, let me tell you how it’s growing.

Well, pretty quickly! There’s no way to make actual renders yet, because I’m still working on how to implement some stuff (I’ll detail that below), but it’s going in the right direction!

Framebuffers

Something that is almost done is the framebuffer part. The main idea of framebuffers – in OpenGL – is to support offscreen renders, so that we can render to several framebuffers and combine them in fancy ways. Framebuffers often have textures bound to them, which are used to pass the rendered information around – especially to shaders – or to read the pixels back CPU-side through texture reads.

The thing is… OpenGL’s framebuffers are tedious. You can end up with incomplete framebuffers if you don’t attach textures with the right format, or if you attach them to the wrong attachment point. The framebuffer layer of luminance is there to solve that.

In luminance, a Framebuffer rw c d is a framebuffer with two formats: a color format, c, and a depth format, d. If c = (), then no color will be recorded. If d = (), then no depth will be recorded. That enables color-only or depth-only renders, which are often optimized by the GPU. It also includes a rw type variable, which has the same role as for Buffer. That is, you can have read-only, write-only or read-write framebuffers.

And of course, all those features – having a write-only depth-only framebuffer for instance – are set through… types! And that’s what is so cool about how things are handled in luminance. You just tell it what you want, and it’ll create the required state and manage it for you GPU-side.
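As a purely hypothetical illustration (the W marker and the synonym names are assumptions of mine; only the Framebuffer rw c d shape and the () trick come from above, and the pixel formats are the ones described below):

    -- hedged sketch: W and the synonym names are assumed
    -- a write-only framebuffer with an RGBA color part and a depth part
    type Offscreen = Framebuffer W (Format Float (CRGBA C32 C32 C32 C32))
                                   (Format Float (CDepth C32))
    -- a write-only, depth-only framebuffer: () means no color is recorded
    type ShadowMap = Framebuffer W () (Format Float (CDepth C32))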

Textures

The format types are used to know which textures to create and how to attach them internally. The textures are hidden from the interface so that you can’t mess with them. I still need to find a way to provide some kind of access to the information they hold, in order to use them in shaders for instance. I’d love to provide some kind of monoidal property between framebuffers – to mimic the Monoid instance gloss provides for its Picture type, basically.
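For reference, this is the kind of composition gloss allows thanks to that instance (a small example using the real gloss API, unrelated to luminance):

    import Graphics.Gloss

    -- gloss pictures form a Monoid, so composing a scene is just mappend
    scene :: Picture
    scene = circle 10 `mappend` translate 50 0 (rectangleSolid 20 20)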

You can create textures, of course, by using the createTexture w h mipmaps function. w is the width, h the height of the texture. mipmaps is the number of mipmaps you want for the texture.

You can then upload texels to the texture through several functions. The basic form is uploadWhole tex autolvl texels. It takes a texture tex and the texels to upload to the whole texture region. It’s your responsibility to ensure that you pass the correct number of texels. The texels are represented with a polymorphic type, so you’re not bound to any particular container: you can pass a list of texels, a Vector of texels, or whatever you want, as long as it’s Foldable.

It’s also possible to fill the whole texture with a single value. In OpenGL slang, such an operation is often called clearing – clearing a buffer, clearing a texture, clearing the back buffer, and so on. You can do that with fillWhole.
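Putting the calls from this walkthrough together, here is a sketch. The exact argument shapes, the surrounding monad and the texel values (p0 to p3, autolvl, clearValue) are placeholders and assumptions of mine; only createTexture, uploadWhole and fillWhole come from the post.

    -- hedged sketch: exact argument shapes and types are assumed
    -- (assuming: import qualified Data.Vector as V)
    tex <- createTexture 2 2 1                             -- a tiny 2×2 texture, 1 mipmap level
    -- the texel container only has to be Foldable, so both of these work:
    uploadWhole tex autolvl [p0, p1, p2, p3]               -- a plain list of texels…
    uploadWhole tex autolvl (V.fromList [p0, p1, p2, p3])  -- …or a Vector
    -- and fillWhole clears the whole texture with a single value
    fillWhole tex clearValue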

There are two other functions for working with subparts of textures, but they’re not relevant for the purpose of this blog entry.

Pixel format

The cool thing is the fact I’ve unified pixel formats. Textures and framebuffers share the same pixel format type (Format t c). Currently, they’re all phantom types, but I might unify them further and use DataKinds to promote them to the type-level. A format has two type variables, t and c.

t is the underlying type. Currently, it can be either Int32, Word32 or Float. I might add support for Double as well later on.

c is the channel type. There’re basically five channel types:

  • CR r, a red channel ;
  • CRG r g, red and green channels ;
  • CRGB r g b, red, green and blue channels ;
  • CRGBA r g b a, red, green, blue and alpha channels ;
  • CDepth d, a depth channel (special case of CR; for depths only).

The type variables r, g, b, a and d represent channel sizes. There’re currently three kinds of channel sizes:

  • C8, for 8-bit ;
  • C16, for 16-bit ;
  • C32, for 32-bit.

For instance, Format Float (CR C32) is a single red channel stored as a 32-bit float – the OpenGL equivalent is R32F. Format Word32 (CRGB C8 C8 C16) is an RGB format with 8-bit unsigned integer red and green channels and a 16-bit unsigned integer blue channel.
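Written out as type synonyms, those two examples read as follows (the synonym names are just mine):

    -- the two formats from the prose, as type synonyms; the names are mine
    type R32F      = Format Float  (CR C32)          -- one 32-bit floating-point red channel
    type R8G8B16UI = Format Word32 (CRGB C8 C8 C16)  -- 8-bit red and green, 16-bit blue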

Of course, if a pixel format doesn’t exist on the OpenGL side, you won’t be able to use it. Typeclasses are there to enforce that a pixel format can be represented on the OpenGL side.

Next steps

Currently, I’m working hard on how to represent vertex formats. That’s not a trivial task, because we can send vertices to OpenGL as interleaved – or not – arrays. I’m trying to design something elegant and safe, and I’ll keep you informed when I finally get something. I’ll need to find an interface for the actual render command, and I should be able to release something we can actually use!
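To make the interleaving point concrete, here is a generic illustration of the two memory layouts a vertex format abstraction has to cover; this is not luminance code.

    -- generic illustration, not the luminance API
    data V = V { pos :: (Float, Float, Float), col :: (Float, Float, Float) }

    -- interleaved: x y z r g b | x y z r g b | …
    interleaved :: [V] -> [Float]
    interleaved = concatMap (\(V (x, y, z) (r, g, b)) -> [x, y, z, r, g, b])

    -- deinterleaved: all the positions first, then all the colors
    deinterleaved :: [V] -> ([Float], [Float])
    deinterleaved vs =
      ( concatMap (\(V (x, y, z) _) -> [x, y, z]) vs
      , concatMap (\(V _ (r, g, b)) -> [r, g, b]) vs )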

By the way, some people already tried it (Git HEAD), and that’s amazing! I’ve created the unstable branch so that I can push unstable things, and keep the master branch as clean as possible.

Keep the vibe, and have fun hacking around!