Poor Man’s Multitexturing – Procedural Palettes


When doing multitexturing, you most often end up sampling multiple textures of the same size at the same coordinates (s, t). On some hardware, texture fetches are a major bottleneck. Instead of sampling 4 textures, we store one image in each channel of a single host texture, sample it once and recompose the 4 images. This technique is only useful if you meet a certain set of conditions:
  • Same texture size, same sampling coordinates
  • Application is Fill Rate Limited
  • No efficient texture caching mechanism in the driver/GPU
  • 256 colours are enough for your needs
  • Being stuck with GL_NEAREST filtering is ok

Since we use one channel per image, we are restricted to 8-bit colour spaces, i.e. 256 possible colours.

There is no direct palette support in modern OpenGL implementations. To emulate colour indexing, a well-known technique is to load a palette into a texture slot and sample it at the right offset. That’s two texture samples per texel, but reading from the palette is virtually free thanks to texture caching. However, developing for mobile platforms comes with some limitations:

  • Some devices support power-of-two (PoT) textures only, with a minimum size of 64×64
  • Inconsistencies between drivers when computing sampling offsets
  • A limited number of texture slots (as low as 4 on some Apple devices)

We basically end up wasting texture space and potentially exposing ourselves to device-dependent colour glitches. Texture slot limitations prevent us from using multiple palettes.

We can replace the palette lookup process with procedural colour generation in the fragment shader, the input being the value read in a channel of the host texture.

Palette Generation

Let’s take a subset of the 2:2:2:2 BGRI colour space as an example (the ‘I’ stands for intensity).

typedef unsigned char byte;

static inline void unpack_2222BGRI(const byte pkVal, byte & r, byte & g, byte & b) {
	const byte i  = 1 + ((pkVal & 0xC0) >> 6); // intensity, bits 7-6
	const byte nr = (pkVal & 0x30) >> 4;       // red,       bits 5-4
	const byte ng = (pkVal & 0x0C) >> 2;       // green,     bits 3-2
	const byte nb = (pkVal & 0x03);            // blue,      bits 1-0
	r = nr * i * 16;
	g = ng * i * 16;
	b = nb * i * 16;
}

This produces the following palette:

Note that the maximum value for a channel is 3 * 4 * 16 = 192. There is no true white in this palette, so it is best suited for dark to mid-tone images. A lighter palette can be obtained by offsetting each channel (the result can now exceed 255 and must be clamped):

	r = (nr + 1) * (i + 1) * 16;
	g = (ng + 1) * (i + 1) * 16;
	b = (nb + 1) * (i + 1) * 16;
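Written out as a full function, the lighter variant looks as follows. The clamp is our addition, since (nr + 1) * (i + 1) * 16 can reach 320 and would otherwise overflow a byte:

```cpp
typedef unsigned char byte;

// Lighter variant of unpack_2222BGRI: each channel is offset by one step.
// The clamp is our addition: (n + 1) * (i + 1) * 16 can reach 320.
static inline void unpack_2222BGRI_light(const byte pkVal, byte & r, byte & g, byte & b) {
	const int i  = 1 + ((pkVal & 0xC0) >> 6);
	const int nr = (pkVal & 0x30) >> 4;
	const int ng = (pkVal & 0x0C) >> 2;
	const int nb = (pkVal & 0x03);
	const int cr = (nr + 1) * (i + 1) * 16;
	const int cg = (ng + 1) * (i + 1) * 16;
	const int cb = (nb + 1) * (i + 1) * 16;
	r = (byte)(cr > 255 ? 255 : cr);
	g = (byte)(cg > 255 ? 255 : cg);
	b = (byte)(cb > 255 ? 255 : cb);
}
```

With pkVal = 0x00 this yields (32, 32, 32) instead of black, and the brightest entries saturate at true white.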

Creating a Photoshop Palette File

We won’t waste time implementing colour quantization and dithering; instead, we let Photoshop convert our resources to indexed images. We only need to generate a .act file for our palette. The .act file format is extremely simple: the palette’s 256 colours are stored sequentially, each colour taking 24 bits (8 bits per channel).

static inline void writeActFile() {
	byte buff[256 * 3]; // 256 colours, 3 channels per colour, 1 byte per channel
	for (int idx = 0, clr = 0; idx <= 0xFF; idx += 1, clr += 3) {
		unpack_2222BGRI((byte)idx, buff[clr], buff[clr + 1], buff[clr + 2]);
	}
	FILE * const fh = fopen("pal.act", "wb");
	if (fh) {
		fwrite(buff, sizeof(buff), 1, fh);
		fclose(fh);
	}
}

Converting Resources

Open your image in Photoshop, then switch to indexed colour: Image -> Mode -> Indexed Color… In the “Palette” combo box, choose “Custom…”, then press the “Load…” button and select your pal.act file. Select the “Dithering” option that gives the best result for your image and press “OK”. Congratulations, your image has been quantized with our custom palette.

To export your packed image, click “File” -> “Save As…” and select “Photoshop Raw (*.RAW)” as the file format. In our example, our 128×128 indexed images are named “0.raw”, “1.raw”, “2.raw” and “3.raw”.

Building the Host Texture

Now we need to pack each indexed image into a host texture, one image per channel. We use the following C# program to create our host texture:

using System;
using System.Drawing; // NOTE: add a reference to the System.Drawing assembly
using System.Drawing.Imaging;
using System.IO;
using System.Runtime.InteropServices;

namespace HostTexture {
    class Program {
        private static void _loadChannelFromRawFile(string filename, byte[] data, int dataSz, int chan_id) {
            using (BinaryReader rd = new BinaryReader(File.Open(filename, FileMode.Open, FileAccess.Read))) {
                for (int pxl_idx = chan_id; pxl_idx < dataSz; pxl_idx += 4) {
                    data[pxl_idx] = rd.ReadByte();
                }
            }
        }
        private const int _imgSz = 128; // must match the dimensions of the indexed images
        static void Main(string[] args) {
            int dataSz = _imgSz * _imgSz * 4;
            byte[] data = new byte[dataSz];
            _loadChannelFromRawFile("0.raw", data, dataSz, 0);
            _loadChannelFromRawFile("1.raw", data, dataSz, 1);
            _loadChannelFromRawFile("2.raw", data, dataSz, 2);
            _loadChannelFromRawFile("3.raw", data, dataSz, 3);
            Bitmap bmp = new Bitmap(_imgSz , _imgSz, PixelFormat.Format32bppArgb);
            BitmapData dst_lk = bmp.LockBits(new Rectangle(0, 0, _imgSz, _imgSz), ImageLockMode.WriteOnly, bmp.PixelFormat);
            Marshal.Copy(data, 0, dst_lk.Scan0, dataSz);
            bmp.UnlockBits(dst_lk);
            bmp.Save(@"host.png", ImageFormat.Png);
        }
    }
}

Loading the Host texture in OpenGL

The major drawback of this technique is that we are stuck with NEAREST filtering. We can still use mipmapped textures, but note that mip levels must be generated by point-sampled downscaling: averaging palette indices, as a standard box filter would, produces meaningless colours.

glBindTexture(GL_TEXTURE_2D, hostTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

Apart from that, it’s business as usual: bind your texture to a slot and set the shader’s sampler index according to that slot.

Getting our colours back

Here is the GLSL equivalent of our previous unpack_2222BGRI function:

vec4 unpack_2222BGRI(in float val) {
	const vec4 vMod = vec4(256.0, 64.0, 16.0, 4.0);
	const vec4 vDiv = vec4(64.0, 16.0, 4.0, 1.0);

	// isolate the four 2-bit fields: tmp = (i_bits, nr, ng, nb)
	vec4 tmp = floor(mod(vec4(val), vMod) / vDiv);
	// n * 16 * i with i = 1 + i_bits, normalized, then swizzled to (r, g, b, i)
	return ((tmp * 16.0 * (1.0 + tmp.x)) / 255.0).yzwx;
}

Simply pass a channel of the host texture as the argument to get the colour back. Keep in mind that the sampled value is normalized to [0, 1], so scale it back to the 0–255 range (multiply by 255.0) before calling the function.
