
In the last post I built the UX component, but so far it doesn’t do anything other than being highly entertaining for cats.

In this one I’m gonna write a render pipeline, and by the end it still won’t make sense how to connect the *cough* Dots.

How old is this browser?

In a past life I worked on an extremely cool photo editor that had a ton of features and 100k MAUs. It was built with WebGL and works on most browsers and most phones. So naturally I wanted to build this one on WebGPU, which is available in less than 50% of the browsers that matter.

I use Firefox, so to build this thing I downloaded Firefox Nightly. Safari works too, but I had to enable the WebGPU feature flag.

Render..er

As I’m quite an overachiever, I decided that the API had to be right from the beginning, and that I couldn’t just throw everything inside a useEffect.

So I wrote a class that takes care of initialization, teardown, and rendering. WebGPU initialization is async, and JavaScript doesn’t have async constructors, so I did the right thing and routed the initialization through a static function. Do the right thing.

export class RenderDude {
  gpuConfiguration: GPUConfiguration;

  // making the constructor private ensures that external code can't instantiate RenderDude with `new`
  private constructor(gpuConfiguration: GPUConfiguration) {
    this.gpuConfiguration = gpuConfiguration;
  }

  // this is the only API that gets called from external code to initialize the GPU and return an instance of RenderDude
  static async init() {
    const gpuConfig = await RenderDude.initializeGPU();
    return new RenderDude(gpuConfig);
  }

  // this does the heavy lifting of getting the WebGPU context and initializing the pipeline
  private static async initializeGPU() {
    if (!navigator.gpu) {
      throw new Error("Your browser doesn't support GPUs");
    }

    const canvas = document.getElementById("canvas-root") as HTMLCanvasElement;
    // requestAdapter can resolve to null (e.g. on unsupported hardware), so bail out early
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) {
      throw new Error("Couldn't get a GPU adapter");
    }
    const device = await adapter.requestDevice();

    // getContext can also return null, e.g. if the canvas already has a different context type
    const context = canvas.getContext("webgpu");
    if (!context) {
      throw new Error("Couldn't get a WebGPU context from the canvas");
    }

    // ... pipeline init omitted for later
    return { device, context, pipeline };
  }
}
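
For the curious, here’s roughly what that omitted part boils down to. This is only a sketch, not the editor’s actual code; it assumes the `vertex` and `fragment` WGSL strings that show up in the next sections.

// rough sketch: configure the canvas context and build a render pipeline
// from the `vertex` and `fragment` shader strings defined further down
const format = navigator.gpu.getPreferredCanvasFormat();
context.configure({ device, format });

const pipeline = device.createRenderPipeline({
  layout: "auto",
  vertex: {
    module: device.createShaderModule({ code: vertex }),
    entryPoint: "main",
  },
  fragment: {
    module: device.createShaderModule({ code: fragment }),
    entryPoint: "main",
    targets: [{ format }],
  },
  primitive: { topology: "triangle-list" },
});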

Loading an image

Vertex Shader

It’s easier to render a triangle, but that’s unrelated to photo editors so I just skipped that part. The simplest vertex shader I could write for the editor is one that sets up the quad the image gets drawn onto.

const vertexShaderOutputStructName = "VertexShaderOutput";
const vertexShaderOutputStruct = {
  name: vertexShaderOutputStructName,
  struct: `
struct ${vertexShaderOutputStructName} {
  @builtin(position) position: vec4<f32>,
  @location(0) texcoord: vec2<f32>,
};`,
};

const vertex = `
${vertexShaderOutputStruct.struct}

@vertex
fn main(@builtin(vertex_index) vertexIndex: u32) -> ${vertexShaderOutputStruct.name} {

  // I'm using triangle lists, so I have to pass 2 sets of 3 vertices to render a rectangle
  // texture coordinates go 0.0 -> 1.0, across and down
  var pos = array<vec2<f32>, 6>(
    vec2f( 0.0,  0.0),  // center
    vec2f( 1.0,  0.0),  // right, center
    vec2f( 0.0,  1.0),  // center, top
    vec2f( 0.0,  1.0),  // center, top
    vec2f( 1.0,  0.0),  // right, center
    vec2f( 1.0,  1.0),  // right, top
  );

  var vsOutput: VertexShaderOutput;
  let xy = pos[vertexIndex];

  // the positions above cover a quarter of the screen, so scale them up to fill the entire canvas
  vsOutput.position = vec4<f32>(xy * 2.0 - 1.0, 0.0, 1.0);
  vsOutput.texcoord = xy;
  return vsOutput;
}
`;
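
Since the positions are hardcoded in the shader there’s no vertex buffer to set up, so the eventual draw only has to ask for those 6 vertices. Here’s a sketch of what that looks like, assuming the pipeline from the earlier sketch and the bind group that gets built after the fragment shader section:

// sketch of the draw: no vertex buffers, just the 6 hardcoded vertices
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
  colorAttachments: [
    {
      view: context.getCurrentTexture().createView(),
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      loadOp: "clear",
      storeOp: "store",
    },
  ],
});
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup); // sampler + texture, sketched further down
pass.draw(6);
pass.end();
device.queue.submit([encoder.finish()]);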

Fragment Shader

This one is even simpler: all I did here was return the color sampled from the texture at the interpolated texture coordinates.

const fragment = `
${vertexShaderOutputStruct.struct}

@group(0) @binding(1) var imageSampler: sampler;
@group(0) @binding(2) var imageTexture: texture_2d<f32>;

@fragment
fn main(fsInput: ${vertexShaderOutputStruct.name}) -> @location(0) vec4<f32> {
  let sampledColor = textureSample(imageTexture, imageSampler, fsInput.texcoord);

  return sampledColor;
}
`;
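
The fragment shader expects a sampler at binding 1 and a texture at binding 2, which means the image has to actually make it onto the GPU at some point. Roughly, that part looks like this; again a sketch with a placeholder URL, not the editor’s real loading code.

// sketch: upload the image to a GPU texture and wire it to bindings 1 and 2
const response = await fetch("/some-photo.jpg"); // placeholder URL
const bitmap = await createImageBitmap(await response.blob());

const texture = device.createTexture({
  size: [bitmap.width, bitmap.height],
  format: "rgba8unorm",
  usage:
    GPUTextureUsage.TEXTURE_BINDING |
    GPUTextureUsage.COPY_DST |
    GPUTextureUsage.RENDER_ATTACHMENT,
});
device.queue.copyExternalImageToTexture(
  { source: bitmap },
  { texture },
  [bitmap.width, bitmap.height]
);

const sampler = device.createSampler({ magFilter: "linear", minFilter: "linear" });

const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 1, resource: sampler },
    { binding: 2, resource: texture.createView() },
  ],
});

That bind group is what gets set on the render pass with `setBindGroup(0, bindGroup)` right before the `draw(6)` from the earlier sketch.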