Here I document my progress with ray tracing concepts as I gradually implement features in my own path tracer. I’ll start from the very basics and build up to more advanced features.

Quick look


Intersections and normals

Sphere intersection Sphere normals
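The intersection test boils down to solving a quadratic in the ray parameter t, and the surface normal falls out of the hit point. A minimal sketch of the idea (the `Vec3`/`hit_sphere` names are mine, not necessarily what my renderer uses):

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Returns the smallest positive t where origin + t*dir hits the sphere,
// or -1.0 on a miss. The outward normal at the hit point p is then
// (p - center) / radius.
double hit_sphere(const Vec3& center, double radius,
                  const Vec3& origin, const Vec3& dir) {
    Vec3 oc = origin - center;
    double a = dir.dot(dir);
    double half_b = oc.dot(dir);
    double c = oc.dot(oc) - radius * radius;
    double discriminant = half_b * half_b - a * c;
    if (discriminant < 0) return -1.0;
    return (-half_b - std::sqrt(discriminant)) / a;
}
```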

Multiple object intersections

Sphere and floor
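Supporting several objects (the floor here is just another primitive) only requires scanning the scene and keeping the nearest positive intersection. A sketch with a simplified ray-from-origin test; the names are illustrative, not my renderer's actual API:

```cpp
#include <cmath>
#include <vector>
#include <limits>

struct Sphere { double cx, cy, cz, r; };

// Ray-sphere test with the ray origin at the world origin for brevity:
// returns the nearest positive t, or -1.0 on a miss.
double hit_t(const Sphere& s, double dx, double dy, double dz) {
    double ocx = -s.cx, ocy = -s.cy, ocz = -s.cz;
    double a = dx * dx + dy * dy + dz * dz;
    double half_b = ocx * dx + ocy * dy + ocz * dz;
    double c = ocx * ocx + ocy * ocy + ocz * ocz - s.r * s.r;
    double disc = half_b * half_b - a * c;
    if (disc < 0) return -1.0;
    return (-half_b - std::sqrt(disc)) / a;
}

// Returns the index of the closest object the ray hits, or -1 if it
// misses everything; t_out receives the winning distance.
int closest_hit(const std::vector<Sphere>& scene,
                double dx, double dy, double dz, double& t_out) {
    int best = -1;
    double t_best = std::numeric_limits<double>::infinity();
    for (int i = 0; i < (int)scene.size(); ++i) {
        double t = hit_t(scene[i], dx, dy, dz);
        if (t > 0 && t < t_best) { t_best = t; best = i; }
    }
    t_out = t_best;
    return best;
}
```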

Antialiasing by generating pixels with multiple samples

1 Sample Per Pixel 100 Samples Per Pixel
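Antialiasing comes from averaging many rays jittered randomly within each pixel's footprint. A sketch of the per-pixel loop, where `trace_sample` stands in for the real shading code:

```cpp
#include <random>

struct Color { double r, g, b; };

// Stand-in for the real path tracer: returns the radiance of one sample
// through pixel (x, y), jittered by (jx, jy) in [0, 1).
Color trace_sample(int x, int y, double jx, double jy);

// Average `spp` jittered samples for one pixel.
Color render_pixel(int x, int y, int spp, std::mt19937& rng) {
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    Color sum{0, 0, 0};
    for (int s = 0; s < spp; ++s) {
        Color c = trace_sample(x, y, jitter(rng), jitter(rng));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / spp, sum.g / spp, sum.b / spp};
}
```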

Diffuse surfaces and Gamma correction

Diffuse Diffuse gamma corrected
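Monitors expect gamma-encoded values, so the linear radiance must be raised to 1/γ before quantizing, otherwise the diffuse render comes out too dark. A sketch of the conversion (γ ≈ 2.2 is a common choice; γ = 2, i.e. a square root, is a cheap approximation):

```cpp
#include <algorithm>
#include <cmath>

// Convert a linear color channel in [0, 1] to gamma space and
// quantize it to an 8-bit value.
int to_byte_gamma(double linear, double gamma) {
    double clamped = std::max(0.0, std::min(1.0, linear));
    double encoded = std::pow(clamped, 1.0 / gamma);
    return (int)(255.999 * encoded);
}
```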


Metal and dielectric materials

Specular metal Coarse metal
Hollow glass
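The glass look comes from dielectric refraction governed by Snell's law, blended with a view-dependent reflection via Schlick's approximation of the Fresnel equations. A sketch of the two key helpers:

```cpp
#include <cmath>

// Schlick's approximation of Fresnel reflectance for a dielectric,
// given cos(theta) at the surface and the refraction ratio n1/n2.
double schlick(double cosine, double refraction_ratio) {
    double r0 = (1 - refraction_ratio) / (1 + refraction_ratio);
    r0 = r0 * r0;
    return r0 + (1 - r0) * std::pow(1 - cosine, 5);
}

// Snell's law: refraction is only possible if ratio * sin(theta) <= 1;
// otherwise the ray reflects totally (total internal reflection).
bool can_refract(double cos_theta, double ratio) {
    double sin_theta = std::sqrt(1.0 - cos_theta * cos_theta);
    return ratio * sin_theta <= 1.0;
}
```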

Camera control and lens effects

Camera control Depth of field
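Depth of field is simulated by jittering each ray's origin over a thin-lens aperture while keeping it aimed at the focal plane; points off that plane then blur. A sketch of sampling the lens offset by rejection sampling a unit disk (names are illustrative):

```cpp
#include <random>

// Sample a point uniformly inside the unit disk by rejection sampling,
// then scale by the aperture radius to get the lens offset.
void sample_lens(std::mt19937& rng, double aperture_radius,
                 double& ox, double& oy) {
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    double x, y;
    do {
        x = u(rng);
        y = u(rng);
    } while (x * x + y * y >= 1.0);
    ox = aperture_radius * x;
    oy = aperture_radius * y;
}
```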

Raytracing is embarrassingly parallel. OpenMP, as explained by Bisqwit’s Guide to OpenMP, allows for very easy multithreading at runtime. Simply adding the following compiler #pragma before the main rendering loop parallelises it across threads:

#pragma omp parallel for ordered schedule(dynamic)
for (int y = 0; y < image.m_surface.height; y++) {
    for (int x = 0; x < image.m_surface.width; x++) {
        Color pixel_color;
        // trace rays per pixel and accumulate into pixel_color
        image.m_surface.pixels[y * image.m_surface.width + x] = pixel_color;
    }
}

The approximate speedup in rendering is equal to the number of cores your computer supports (multiplied by any hyperthreading on Intel CPUs). My M1 MacBook comes with a 10-core CPU, so this gives roughly a 10x performance increase. However, rendering complex scenes is still not real-time even when parallelising across CPU cores. I would have to port the code to the GPU. I will tackle this problem a bit later…

Making a GUI

So far, all the images have been rendered directly into an image file (ppm/png/jpg). Most professional renderers have GUIs so that the user can experiment with various rendering parameters without having to compile the program again. So I set out to use OpenGL, GLFW, GLEW, and ImGUI to make a simple cross-platform GUI application for the renderer.

Displaying a triangle
Little did I know that displaying a simple triangle would take around 200 lines of code.

Displaying a rendered image
Displaying a rendered image, such as a jpg file, on screen required some research. References online suggest hacks like drawing the image as a textured rectangle that matches the size of your display; however, this would require defining external vertex and fragment shaders as in the triangle example above. Instead, I learned that OpenGL 3.0 introduced read and draw framebuffers, which let the user render images to destinations other than the default framebuffer displayed on the screen. Directly attaching a 2D texture to the default framebuffer is not possible because it is owned by a resource external to OpenGL (namely the OS); however, rendering to a separate draw buffer and then copying the texture to the default buffer with an optimized blitting function is possible.

The main idea is to create a pixel buffer that stores color information from the main rendering loop (taking care to account for changes in the window width/height), attach the pixel buffer data as a texture to a draw framebuffer, and finally blit the texture onto the main display.

Updating with samples
Whilst the rendering process occurs behind the scenes, the GUI should visualize the progress as new samples per pixel arrive.

Adding buttons to control settings and rendering parameters is easy with ImGUI.