Rust + OpenGL: Learning to build a simple OpenGL rendering pipeline in Rust.

Flex GameDev DK30 Quarantine 2020

Description

Currently I have written an abstraction in Rust of the OpenGL functions required to build a very simple rendering pipeline, using the amazing “Rust and OpenGL from scratch” series over at Ironic Blog (the programming one, not the lifestyle website) as a starting point. To learn more about OpenGL, Rust and various lighting techniques I will be building on this to make a simple forward rendering pipeline with per-vertex lighting and support for textured meshes.

Recent Updates

Flex 5 years ago

Update after (about) 1 week: Structuring and refactoring code

Adding the functionality of the week 1 goal.

Adding the functionality itself wasn’t all that hard. Setting up the projection matrix for the camera took a bit of brushing up on my linear algebra and OpenGL knowledge, but in the end I got the new camera and transform modules set up in about one day.

The real issues started when trying to integrate this functionality into the rest of the code. Before that, however, I did some refactoring.

Refactoring of materials.

In the initial update I said the following:

I added a way to specify a material which in essence is no more than a wrapper around the OpenGL glUniform calls

However, materials as wrappers around glUniform API calls appears to be too broad a definition. For example, skeletal animation requires uniform data, but that intuitively should not be part of a material. In a similar vein, default per-mesh data such as the MVP matrix does not fall under materials either. (The MVP matrix transforms a vertex in a mesh first to its location in the world, then to its location relative to the camera, and finally to what’s known as ‘clip space’, where it is determined whether a vertex is in view of the camera or not.) Thus the abstraction of the glUniform calls was decoupled from the implementation of materials, and the definition of what a material is was narrowed down.
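As an aside, constructing such an MVP matrix typically looks something like the following (a minimal sketch using the nalgebra-glm crate; this project may use a different math library):

use nalgebra_glm as glm;

/// Build an MVP matrix for a mesh at `position`, seen by a camera at `eye`.
fn build_mvp(position: &glm::Vec3, eye: &glm::Vec3, aspect: f32) -> glm::Mat4 {
    // Model: move the mesh to its location in the world.
    let model = glm::translation(position);
    // View: transform world space so the camera sits at the origin.
    let view = glm::look_at(eye, &glm::vec3(0.0, 0.0, 0.0), &glm::vec3(0.0, 1.0, 0.0));
    // Projection: map view space into clip space.
    let projection = glm::perspective(aspect, 45.0_f32.to_radians(), 0.1, 100.0);
    projection * view * model
}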

glUniform calls are now implemented directly on the data types for which these calls are defined by OpenGL (vectors of floats and unsigned integers, and matrices of the same types). For example:

#[allow(non_camel_case_types)]
#[repr(C, packed)]
pub struct f32_f32_f32 {
    pub d0: f32,
    pub d1: f32,
    pub d2: f32,
}

impl f32_f32_f32 {
    /// Uploads this value as a vec3 uniform at the given location.
    /// Unsafe because the caller must ensure the right program is bound.
    pub unsafe fn gl_uniform(&self, gl: &Gl, location: usize) {
        gl.Uniform3f(
            location as gl::types::GLint,
            self.d0 as gl::types::GLfloat,
            self.d1 as gl::types::GLfloat,
            self.d2 as gl::types::GLfloat,
        );
    }
}

After looking at how Unity and Unreal Engine document their materials, it seems that a material is the data that describes the visual properties of some surface. Thus it seems to be limited to the fragment shader stage.

The end result for the way the code is structured is that we now need to define additional structs to fully describe what a mesh looks like on screen; just a specification of the vertex format and the material doesn’t cut it anymore. Creating additional structs isn’t too hard to set up now that we have moved the glUniform calls onto the data types. We just move the place where the function gets called to the program and define a trait that structs which want to upload uniform data to the GPU have to implement:

pub struct Program {
    /* fields omitted */
}

pub trait Uploadable {
    fn gl_uniform(&self, gl: &gl::Gl);
}

impl Program {
    // ...
    pub fn upload_data<D: Uploadable>(&self, data: D) {
        // Make sure the current program is the one in use.
        self.set_used();
        data.gl_uniform(&self.gl)
    }
}

And any struct can easily implement this now. For example the struct that contains the MVP matrix (and in the future possibly more things):

pub struct IMeshDefaults {
    mvp: matrix_data::mat4,
}

impl Uploadable for IMeshDefaults {
    fn gl_uniform(&self, gl: &gl::Gl) {
        // We let the struct decide the location of the uniform for now.
        // The gl_uniform calls on the data types are unsafe fns.
        unsafe { self.mvp.gl_uniform(gl, 0) }
    }
}
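Uploading the defaults for a mesh before a draw call might then look something like this (hypothetical usage; mvp_matrix stands in for a computed MVP matrix):

let defaults = IMeshDefaults { mvp: mvp_matrix };
// set_used() is called internally, so the data ends up in the right program.
program.upload_data(defaults);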

And we can define our materials in this way as well. One nice thing about this is that a material now contains nothing more than the data it will upload to the GPU. This ties in nicely with the next thing I added.

Entity Component Systems.

So in the week 1 goal I stated the following:

Adding location, scale and rotation attributes might be more complex than it seems. I don’t want to add them to the mesh directly, in order to prevent endless hierarchies of structs and traits built on top of each other. On the other hand, building some kind of Entity Component System is out of the scope of this project. I’ll have to do some research to find a middle ground.

So naturally I did the exact opposite and started work on integrating an existing ECS into my code. I underestimated how quickly it would become a pain to integrate the new functionality into the renderer. It very quickly becomes hard to reason about how you pass information around when you spread the functionality of the renderer across entities, components and systems.

If you’re curious about what an ECS is this blogpost offers a nice explanation: https://medium.com/ingeniouslysimple/entities-components-and-systems-89c31464240d

The ECS I have chosen (specs) has a focus on parallelism, which brings in a whole bunch of new problems. Firstly, components have to implement the Send and Sync traits so that they can be shared safely across threads. Most of the time this is fine, since components are usually just structs consisting of primitive data. However, we have defined a bunch of our OpenGL abstractions as containing a reference to the GL context. For example, a shader is defined as follows:

pub struct Shader {
    gl: gl::Gl,
    id: gl::types::GLuint,
}

Why? Well, as explained in the blog post series I used as a starting point, this allows us to implement the Drop trait.

impl Drop for Shader {
    fn drop(&mut self) {
        unsafe {
            // Free the shader object on the GPU when the Rust value is dropped.
            self.gl.DeleteShader(self.id);
        }
    }
}

Now when a shader gets dropped by Rust it will also be deleted on the GPU! This way we will never lose the reference to some resource on the GPU, and we’ll never accidentally free a resource and use it later. The same was done for buffers and programs. However, when transitioning to an ECS and defining these GPU resources as components, we start running into problems. The gl::Gl type is defined as follows:

// Rc already implements Clone, so the alias is cheaply cloneable.
pub type Gl = Rc<bindings::Gl>;

Because we can have multiple shaders, programs and buffers which all need a reference to this GL struct, it’s wrapped in an Rc so that it can be cheaply cloned. Rc explicitly does not implement Send or Sync, so the trouble starts when we try to define a component which contains the GL struct, for example a mesh, which owns some buffers on the GPU that hold the vertex data:

impl Component for IndexedMesh {
    type Storage = VecStorage<IndexedMesh>;
}

The compiler throws an error:

error[E0277]: `std::rc::Rc<gl::bindings::Gl>` cannot be shared between threads safely
  --> roest_runtime/src/core_components/indexed_mesh.rs:57:6
   |
57 | impl Component for IndexedMesh {
   |      ^^^^^^^^^ `std::rc::Rc<gl::bindings::Gl>` cannot be shared between threads safely
   |

So most of my time over the last few days has been spent trying to figure out a good way to solve this, and I can’t say I have found the solution yet. There is a way to store thread-local data in a system, so in theory we could create two different versions of a mesh: one thread-local to the render system, and one proxy component which contains the index of the component on the render thread. However, we would then have to make sure that if the proxy component is dropped, the render-thread version of the component is manually dropped as well, completely defeating the purpose of storing the GL struct in the first place.
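For reference, specs exposes this through DispatcherBuilder::with_thread_local, which registers a system that always runs on the dispatching thread (RenderSystem here is a hypothetical name):

let mut dispatcher = specs::DispatcherBuilder::new()
    // Runs on the main thread, so it may own non-Send data such as the Gl handle.
    .with_thread_local(RenderSystem::new(gl.clone()))
    .build();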

Another option would be to change the definition of our gl struct to something like the following:

pub type Gl = Arc<Mutex<bindings::Gl>>;

This isn’t great for performance, but, more importantly to me, it also means we have to call the lock function everywhere just to get at the value protected by the mutex.
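To illustrate, even the Drop implementation from before would turn into something like this (a sketch of the Arc<Mutex<...>> variant):

impl Drop for Shader {
    fn drop(&mut self) {
        // Every access now has to go through the mutex first.
        let gl = self.gl.lock().unwrap();
        unsafe {
            gl.DeleteShader(self.id);
        }
    }
}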

And that is the current state of things. I might switch to a different ECS that allows me to implement components more flexibly, or I might dive into the various traits and structs in specs and see if there is a way to define a new component trait. I sadly haven’t had time to update the tool I use to import meshes to support vertex normals, but that might be something for the coming days.

Flex 5 years ago

Initial update: The current state of things.

My starting-off point is a modified and extended version of where the “Rust and OpenGL from scratch” series of blog posts by Nercury ended. If you are interested in either this project or Rust and/or OpenGL in general, I would highly recommend following this series and coding along as you go.

The relevant part for this update is that we can define the way a vertex looks on the GPU by making a struct as follows:

#[derive(VertexAttribPointers)]
#[repr(C, packed)]
pub struct Vertex {
    #[location = 0]
    pub pos: gl_data::f32_f32_f32,
    #[location = 1]
    pub clr: gl_data::f32_f32_f32,
}

Through the magic of the VertexAttribPointers derive macro this will automatically implement the API calls needed to define the data layout of this vertex on the GPU, as long as we use the data types defined in the gl_data module. Given that Nercury defined all the different OpenGL types in there, we can define practically any vertex we want.
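Roughly speaking, the code the macro generates looks something like this (a sketch based on the series; the exact expansion differs):

impl Vertex {
    pub fn vertex_attrib_pointers(gl: &gl::Gl) {
        let stride = std::mem::size_of::<Self>();
        // One call per field, using the attribute location and the field offset.
        let offset = 0;
        unsafe { gl_data::f32_f32_f32::vertex_attrib_pointer(gl, stride, 0, offset); }
        let offset = offset + std::mem::size_of::<gl_data::f32_f32_f32>();
        unsafe { gl_data::f32_f32_f32::vertex_attrib_pointer(gl, stride, 1, offset); }
    }
}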

The one defined above would be accompanied by a shader with inputs such as the following:

#version 450  
  
layout (location = 0) in vec3 position;  
layout (location = 1) in vec4 color;  
  
out VS_OUTPUT {  
    vec3 color;  
} OUT;  
  
void main() {  
    gl_Position = vec4(position, 1.0);  
    OUT.color = color.xyz;  
}

I extended the data types in the gl_data module to implement the Serialize and Deserialize traits from the serde crate so that we can serialize and deserialize any vertex defined with these types.

This allowed me to write a tool that takes in a Wavefront OBJ file, constructs a vector of these vertex structs (this is what we call a ‘mesh’), and writes it to disk. In the runtime code I can then load this mesh from disk and upload it to the GPU without having to do any additional processing on the data.
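The save/load round trip might look something like this (a minimal sketch assuming the bincode crate; the tool may well use a different format):

use std::fs;

fn save_mesh(path: &str, mesh: &Vec<Vertex>) -> std::io::Result<()> {
    // serde makes the whole vertex vector serializable in one call.
    let bytes = bincode::serialize(mesh).expect("failed to serialize mesh");
    fs::write(path, bytes)
}

fn load_mesh(path: &str) -> std::io::Result<Vec<Vertex>> {
    let bytes = fs::read(path)?;
    Ok(bincode::deserialize(&bytes).expect("failed to deserialize mesh"))
}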

Finally I added a way to specify a material, which in essence is no more than a wrapper around the OpenGL glUniform calls, which allow us to set uniform variables in GLSL. I can define a material in Rust as follows:

#[derive(gl_getters, gl_setters)]
pub struct Material {
    gl: gl::Gl,
    program: Program,
    #[location = 0]
    MVP: gl_data::mat4,
}

impl MaterialTrait for Material {
    fn set_used(&self) {
        self.program.set_used()
    }
}

Notice the gl_getters and gl_setters derive macros here. They add gl_set_*() and gl_get_*() functions for every field of the struct tagged with a location attribute, which set and get the uniform variable at the corresponding location in the shader:

#version 450  
  
layout (location = 0) in vec3 position;  
layout (location = 1) in vec4 color;  
  
layout (location = 0) uniform mat4 mvp;  
out VS_OUTPUT {  
    vec3 color;  
} OUT;  
  
void main() {  
    gl_Position = mvp * vec4(position, 1.0);
    OUT.color = color.xyz;  
}
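For illustration, the setter that gl_setters generates for the MVP field might look roughly like this (a sketch; the actual macro output may differ):

impl Material {
    pub fn gl_set_MVP(&mut self, value: gl_data::mat4) {
        self.MVP = value;
        // Upload to uniform location 0, as given by the #[location = 0] attribute.
        unsafe { self.MVP.gl_uniform(&self.gl, 0) }
    }
}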

Once again we can use any type defined in gl_data for this, which covers all the OpenGL types except arrays.

This is where we are now. Currently the simple shaders I have written just color every vertex belonging to a mesh the same color, so the only thing we see is the silhouette of a mesh. But it is an arbitrary mesh that is constructed from an OBJ file, saved to disk and then loaded from disk again:

[Image: a teapot]

Estimated Timeframe

Apr 2nd - May 2nd

Week 1 Goal

While it is possible to specify and upload data to the GPU for use in the shaders relatively easily with the current codebase, preparing this data on the CPU is still time-consuming and error-prone for the programmer. The first week I will focus on making sure I can easily do the following:

  • Create the model matrix by adding location, scale and rotation attributes.
  • Create the view and projection matrices by adding some sort of camera.
  • Create meshes with vertex normals by updating the tool I wrote for importing meshes in the Wavefront OBJ format.

Adding location, scale and rotation attributes might be more complex than it seems. I don’t want to add them to the mesh directly, in order to prevent endless hierarchies of structs and traits built on top of each other. On the other hand, building some kind of Entity Component System is out of the scope of this project. I’ll have to do some research to find a middle ground.

Adding a camera is another big unknown in terms of integrating it with the rest of the code, for similar reasons: passing references to the camera around to each mesh seems like it will get very messy very fast. I’ll have to do some research and see how other game engines handle this.

Allowing vertex normals to be loaded is by far the most straightforward, as I won’t need to add anything to the runtime code except update the vertex data layout. I just need to add support in the importer program; it helps that I don’t care a whole lot about the code quality of this tool.

Week 2 Goal

As this project is very much a learning process, the milestones will get more vague as the weeks go on. That being said, the goal this week is to finish the shader side of things so I can light colored vertices. For this I will need to at least do the following:

  • Research per vertex lighting models
  • Research representations of lights in a scene, both on GPU and CPU
  • Possibly add code to facilitate uploading lights to the GPU

Week 3 Goal

This week will focus on the CPU side of textures. I still need to do a lot of research into how this is handled, so the milestones for this week are TBA.

Week 4 Goal

This week I will focus on the shader side of texturing and integrating it with the per vertex lighting model. The actual milestones are still TBA for similar reasons as week 3.

Tags

  • OpenGL
  • Rust
  • rendering
  • graphics
  • lighting