I’m sure you’ve heard the term shaders before if you’re into gaming. Some games allow you to write custom shaders to change how the game looks, the most popular example being Minecraft. Here we’ll take a brief look at the different shader stages and what they are used for.

In the history of drawing pixels on a 2D screen, we’ve found two different methods for presenting 3D data:

  • Rasterization: This is the rendering method with the most hardware support at the moment. Virtually all realtime game engines and visualizations use some form of it. In this method we typically project triangles onto a 2D plane (your screen) and then decide for every pixel which triangle should be visible there.
  • Raytracing: With raytracing or pathtracing we reverse this process: instead of ending up at the screen, we start at the screen and trace rays back into the scene. For every pixel we determine which object the ray hits and derive the pixel’s color from that.

With the shaders below and the articles that follow we’ll dive into rasterization. If instead you’re more interested in learning raytracing, you can visit the raytracing course page!

Note: this page uses WebGPU, which should run out of the box on Chromium-based browsers. If the examples below are not running, check the implementation status.

Shader stages overview

Below we’ll take a look at the required shader stages, the ones you’ll find in almost every pipeline.

Vertex shader

The vertex shader is the first programmable stage of the graphics pipeline. It takes input data such as the position, color, and texture coordinates of each vertex and performs transformations like scaling, rotating, and translating them to fit the scene.

Pixel shader

Also known as the fragment shader. The pixel shader is responsible for calculating the color, lighting, and texture of each pixel in the final image. It processes fragments, which are potential pixels, and decides their final appearance based on inputs like textures, lighting models, and other material properties.

Your first pixel shader

Let’s tackle the shaders one by one. We’re going to skip any setup for now; I’ve done all of that behind the scenes. Let’s focus on the pixel shader here: we’re going to apply a color to the output for every pixel inside a triangle.
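
The example embedded on this page is interactive, so it isn’t reproduced in plain text here, but a minimal sketch of such a fragment shader (the entry point name fragmentMain and the color are my own choices) looks roughly like this:

  @fragment
  fn fragmentMain() -> @location(0) vec4f {
    // Return the same color (red, green, blue, alpha) for every pixel in the triangle.
    return vec4f(1.0, 0.4, 0.2, 1.0);
  }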

Try changing the vec4f and notice the output changes for every pixel inside the triangle.

There’s a lot to unpack about this shader already. Of course there is some specific syntax to notice: @fragment indicates that the next function definition will be interpreted as a fragment shader. The -> @location(0) vec4f tells you that the function outputs a float4 vector. The @location attribute determines which outputs you want from the shader and at what location in memory.

Extending with a vertex shader

Let’s add the next part: a vertex shader.
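
Again the live example isn’t reproduced here, but a minimal sketch of a matching vertex shader (the entry point name vertexMain is my own choice) could look like this:

  @vertex
  fn vertexMain(@location(0) pos: vec2f) -> @builtin(position) vec4f {
    // Pass the 2D input position straight through as a clip-space position.
    return vec4f(pos, 0, 1);
  }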

The pixel shader remains unchanged from the last example, but now we are able to change the vertex positions. Try changing the vertex shader output to vec4f(pos + vec2f(1, 0), 0, 1); and see if you understand what happens when you run it. Can you figure out what the vertex positions of this test triangle are?

Similar to the pixel shader, we have some specific syntax: @vertex indicates the function is a vertex shader. This time around the function is getting some input! @location(0) pos: vec2f tells us we’re expecting to get a float2 vector at location 0, which we are naming pos. And as output we’re returning a position that’s marked with a @builtin attribute.

In this case the builtin attribute is position, a special attribute that tells the graphics pipeline where to place the vertex on the screen. The vertex shader outputs positions in so-called clip space, a coordinate system where visible coordinates fall in the range [-1, 1]. Any position output from the vertex shader that lies outside this clip space is not processed (it’s clipped).

For these examples I simply specified a triangle with coordinates that are already within clip space, so we don’t have to do any processing in the vertex shader at all; we just output the input directly. Usually the vertex shader would perform a projection to transform 3D positions into 2D positions on your screen.
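
As a rough sketch of what that usually looks like (the uniform matrix, its binding, and the 3D input are hypothetical additions, not part of this page’s example), a projecting vertex shader could be written as:

  // Hypothetical combined model-view-projection matrix uploaded from the CPU.
  @group(0) @binding(0) var<uniform> mvp: mat4x4f;

  @vertex
  fn vertexMain(@location(0) pos: vec3f) -> @builtin(position) vec4f {
    // Transform the 3D position into clip space.
    return mvp * vec4f(pos, 1);
  }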

Extending with the JavaScript setup

Finally we’ll look at the complete code example that sets up the WebGPU context, creates a vertex buffer, compiles the shaders, creates a render pipeline, and draws the triangle on the screen. This involves more code than just the vertex and pixel shaders. Take a look at the js tab below and see if you can follow along with the comments.

Adapter, device, and context

There is a lot to break down in this code. The first couple of lines are required for any WebGPU setup: we request an adapter and a device. The adapter represents the actual GPU we want to render on; the device is an abstraction that lets us call functions on the GPU in isolation from other applications. After we have the basics, we initialize the context to use the canvas on this webpage as output.
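
A minimal sketch of that setup, assuming a canvas element with a hypothetical id of "gpu-canvas", could look like this:

  // Request the adapter (the physical GPU) and a device (our logical handle to it).
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Configure the canvas on this page as the output target.
  const canvas = document.getElementById("gpu-canvas"); // hypothetical canvas id
  const context = canvas.getContext("webgpu");
  const canvasFormat = navigator.gpu.getPreferredCanvasFormat();
  context.configure({ device, format: canvasFormat });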

Vertex buffer

Now we dive straight into graphics code you would see in almost any other API as well. First we create a vertex buffer from a Float32Array. These are the actual locations of the vertices in 2D coordinates; we later describe them as an array of 2D vectors in the vertexBufferLayout. We’ll go more in depth on buffers in a future course, but for now note the GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST specifiers. These state that we want to use this buffer in the vertex shader, and that we only copy data to the buffer on the GPU, never read it back on the CPU.
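
A sketch of that part (the exact triangle coordinates in the real example may differ) could look like this:

  // Three 2D vertices in clip space, flattened into a single Float32Array.
  const vertices = new Float32Array([
     0.0,  0.5,
    -0.5, -0.5,
     0.5, -0.5,
  ]);

  const vertexBuffer = device.createBuffer({
    size: vertices.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(vertexBuffer, 0, vertices);

  // Describe the layout: one vec2f (8 bytes) per vertex, fed to @location(0).
  const vertexBufferLayout = {
    arrayStride: 8,
    attributes: [{ format: "float32x2", offset: 0, shaderLocation: 0 }],
  };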

Shader modules and pipeline

The next part is the shader modules. We compile the vertex and pixel shaders from the code defined in the other tabs into modules that we can use in the render pipeline. The pipeline is where everything comes together: we create the vertex and fragment stages and tell them to use the modules above. As input for the vertex stage we add the vertexBufferLayout we created earlier. The fragment stage is configured with the canvas format we’re using, so the pipeline knows what output format to write the color in.
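
A sketch of that part, where vertexShaderCode and fragmentShaderCode stand in for the WGSL source from the other tabs and the entry point names are my own:

  // Compile the WGSL source into shader modules.
  const vertexModule = device.createShaderModule({ code: vertexShaderCode });
  const fragmentModule = device.createShaderModule({ code: fragmentShaderCode });

  // The render pipeline ties the stages, vertex layout, and output format together.
  const pipeline = device.createRenderPipeline({
    layout: "auto",
    vertex: {
      module: vertexModule,
      entryPoint: "vertexMain",
      buffers: [vertexBufferLayout],
    },
    fragment: {
      module: fragmentModule,
      entryPoint: "fragmentMain",
      targets: [{ format: canvasFormat }],
    },
  });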

Render pass

For the actual rendering we create a command encoder, which is used to record commands that will be executed on the GPU. With it we begin a render pass, in which we set the pipeline and vertex buffer so everything is connected. We then issue a draw command; it expects the number of vertices we want to draw, so we can simply use our vertices array length divided by 2.

Finally we submit the command encoder to the device queue. This queues it up for execution on the GPU which will run through all the steps we laid out in the render pass.
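
Put together, the recording and submission could look like this (the clear color and the load/store settings are my own choices):

  const encoder = device.createCommandEncoder();

  // Begin a render pass that clears the canvas and then draws into it.
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      loadOp: "clear",
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: "store",
    }],
  });
  pass.setPipeline(pipeline);
  pass.setVertexBuffer(0, vertexBuffer);
  pass.draw(vertices.length / 2); // one vertex per pair of floats
  pass.end();

  // Submit the recorded commands to the GPU for execution.
  device.queue.submit([encoder.finish()]);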

Conclusion

That’s all for now! We’ve covered the basics of shaders and how to set up a simple WebGPU context to render a colored triangle.