
Disney BRDF

Implementation of the Disney principled BRDF

After reading the papers from the 2012 siggraph shading course I really wanted to try the BRDF described by Disney. It's possible to use the awesome open source tool BRDF Explorer, but I really wanted to try it in my own renderer.

You can download it here: PBRViewer.

In his talk, Brent Burley describes the BRDF adopted by Disney and used for every material in Wreck-It Ralph, except for hair. He also explains how they came up with this BRDF, the tools they used, etc. The course notes are full of information; it's really something anyone interested in physically based shading should read.

I already wrote a bit about the Disney BRDF in a previous post, but I'll just recall the set of rules they chose to follow:

  1. Use intuitive rather than physical parameters.
  2. Use as few parameters as possible.
  3. Parameters should be zero to one, remapped over their plausible range.
  4. Parameters should be allowed to be pushed beyond their plausible range where it makes sense.
  5. All combinations of parameters should be as robust and plausible as possible.

The BRDF is defined by a base color, and 10 scalar parameters:

  • Subsurface
  • Metallic
  • Specular
  • Specular tint
  • Roughness
  • Anisotropic
  • Sheen
  • Sheen tint
  • Clearcoat
  • Clearcoat gloss
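To give an idea of what this means on the code side, here is a minimal sketch of how these inputs could be grouped in a shader. The struct and its field names are purely illustrative, not the actual PBRViewer code; each scalar is expected in the 0 to 1 range, as stated by rule 3 above.

// Hypothetical grouping of the Disney BRDF inputs (illustrative names only).
struct DisneyMaterial
{
    float3 baseColor;
    float  subsurface;
    float  metallic;
    float  specular;
    float  specularTint;
    float  roughness;
    float  anisotropic;
    float  sheen;
    float  sheenTint;
    float  clearcoat;
    float  clearcoatGloss;
};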

They are described in the slides, and in the viewer it's easy to see the impact of each parameter on the shading.

That's a lot of parameters, more than we can afford in our games, and even if they are quite easy to understand it's still a bit overwhelming at first. But on the other hand it gives a lot of control.

Disney principled BRDF
The anisotropic parameter

The anisotropic parameter is really cool, and it’s something that could be great in a game (but it’s tricky to implement as it requires the tangent and binormal in the GBuffer). In my implementation it looks a bit strange because it should change the specular reflection. I need to work on that.
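For reference, the Disney course notes derive the anisotropic roughness values from the roughness and anisotropic parameters roughly as follows. This is a sketch based on those notes, not my exact shader code:

// Anisotropic roughness along the tangent (ax) and binormal (ay) directions,
// following the remapping described in the Disney course notes.
float aspect = sqrt(1.0f - 0.9f * anisotropic);
float ax = max(0.001f, (roughness * roughness) / aspect);
float ay = max(0.001f, (roughness * roughness) * aspect);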

The sheen parameter is very subtle in BRDF Explorer, and I'm not sure it's working at all in my implementation. I'll need to check that, and I will upload a new version if I find a bug.

I added this BRDF to my renderer as a new workflow. Unlike the metallic and specular workflows, the "Disney" workflow uses a completely different code path, so the normal distribution, Fresnel and visibility terms can't be changed.

Disney principled BRDF
Subsurface parameter from 0 to 1

Textures are not supported yet for the Disney parameters; only base color, normal, roughness and metallic textures are supported for now. I'll add the other parameters later.

The updated version of PBRViewer can be found here.

As always, if you have any feedback, feel free to contact me!

Tweaking the Cook Torrance BRDF

I'm still learning things about physically based shading using my PBRViewer, and this time I wanted to be able to experiment with the variations of the Cook Torrance BRDF.

The Cook Torrance BRDF looks like this:

f(l, v) = D(h) · F(v, h) · G(l, v, h) / (4 · (n·l) · (n·v))

This equation is composed of three distinct terms:

  • F: The Fresnel term, represents how the reflectivity changes at grazing angles.
  • G: The geometry term, represents the probability that a microfacet is visible from both the light and view directions.
  • D: The normal distribution term, defines the distribution of the orientation of the microfacets.

For more information you can read the very interesting "Physics and Math of Shading" by Naty Hoffman. For each term there is more than one possibility, and you can choose the terms of your BRDF according to your needs and your budget. Even if GGX is becoming the new standard, I wanted to experiment with the other possibilities.
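To make this concrete, here is a minimal sketch of one common combination (GGX distribution, Schlick Fresnel and a Schlick-GGX visibility term). It only illustrates how the three terms plug together; it is not the exact code used in the viewer, and the names and the roughness remapping are my own choices.

static const float PI = 3.14159265f;

// GGX / Trowbridge-Reitz normal distribution.
float D_GGX(float NdotH, float alpha)
{
    float a2 = alpha * alpha;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * d * d);
}

// Schlick approximation of the Fresnel term.
float3 F_Schlick(float3 f0, float VdotH)
{
    return f0 + (1.0f - f0) * pow(1.0f - VdotH, 5.0f);
}

// Schlick-GGX geometry term for one direction; combined below for view and light.
float G_SchlickGGX(float NdotX, float k)
{
    return NdotX / (NdotX * (1.0f - k) + k);
}

float3 CookTorranceSpecular(float3 N, float3 V, float3 L, float3 f0, float roughness)
{
    float3 H = normalize(V + L);
    float NdotL = saturate(dot(N, L));
    float NdotV = saturate(dot(N, V));
    float NdotH = saturate(dot(N, H));
    float VdotH = saturate(dot(V, H));

    float alpha = roughness * roughness;
    float k     = alpha * 0.5f; // one possible remapping for the geometry term

    float  D = D_GGX(NdotH, alpha);
    float3 F = F_Schlick(f0, VdotH);
    float  G = G_SchlickGGX(NdotV, k) * G_SchlickGGX(NdotL, k);

    return D * F * G / max(4.0f * NdotL * NdotV, 1e-4f);
}

Swapping one of these functions for another variant is enough to visibly change the highlight, which is exactly what the viewer lets you do interactively.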

Brian Karis, while he was doing research on physically based shading for Unreal Engine 4, listed a lot of variations for the different terms. This wonderful blog post can be found here. I used these references to implement each term in my viewer, so I can directly see the impact of each function on the lighting, the shader being recompiled automatically when a term is changed.
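As an example of such a variation, the GGX distribution in the sketch above could be swapped for a Beckmann distribution, one of the alternatives listed in that reference. Again just an illustration, reusing the PI constant and NdotH convention from the previous snippet:

// Beckmann normal distribution.
float D_Beckmann(float NdotH, float alpha)
{
    float a2  = alpha * alpha;
    float nh2 = NdotH * NdotH;
    return exp((nh2 - 1.0f) / (a2 * nh2)) / (PI * a2 * nh2 * nh2);
}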

I also added some other modifications, like being able to change the background color, light position, intensity, ambient light and reflection intensity, etc.

If you want to try it, you can download it here.

As always, if you see an error or if you have any feedback, please contact me; as I'm doing this to learn, I would be happy to hear from you.

I also made my first steps with Substance Designer, trying to make a marble texture.

Marble

 

Physically Based Shading, Metallic and Specular workflows

Physically based shading is more and more adopted, and even if the core mechanism is pretty much always the same, the workflow may differ from one engine to another.

For example let’s compare two common ones, often called Metallic and Specular.

The metallic workflow uses a color input, the base color, and two scalar parameters, roughness and metallic. In a specular workflow there are two color inputs, an albedo and a specular, and a scalar, the roughness.
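As a rough illustration (the struct names are mine and don't come from any particular engine), the two sets of inputs could be described like this:

// Illustrative inputs for the metallic workflow: 5 channels in total.
struct MetallicInputs
{
    float3 baseColor;   // rgb
    float  roughness;
    float  metallic;
};

// Illustrative inputs for the specular workflow: 7 channels in total.
struct SpecularInputs
{
    float3 diffuse;     // rgb
    float3 specular;    // rgb
    float  roughness;
};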

In my PBRViewer I first implemented a metallic workflow, and I have now added a specular workflow. Here is a brief overview of the differences between those two.

First of all, it's important to understand the kinds of materials we want to represent in games. They can be divided into two groups, dielectrics (plastics, wood, concrete, etc.) and metals. Their properties are very well summarized in the wonderful chart made by Sébastien Lagarde for Dontnod. Here are some interesting facts:

  • Dielectric materials have a monochromatic specular, in a range going from 0.017 to 0.067.
  • Metals have a black diffuse, except when they are not pure, in which case they can have a slight diffuse.
  • Metals have a colored specular.

 

Now let's get back to our workflows. The specular one is pretty straightforward: each map is used directly, and artists create their own specular and diffuse maps. You need to make sure that your artists have a chart and know the properties of each kind of material to get a coherent result. It's a lot of control, but it's easy to break.

Specular workflow
Specular workflow. As you can see on the sliders, the diffuse is set to 0, and the color of the material is given by the specular tint.

On the data side, it's 7 channels (diffuse rgb + specular rgb + roughness) to store in your GBuffer (for deferred rendering). It's not awful, but it's pretty high, especially if you look closer. For dielectrics you only have a greyscale specular, which still takes three channels, and for metals the diffuse is mostly black. That's a lot of space wasted. The metallic workflow allows you to avoid that.

In their 2012 siggraph talk, Disney introduced their "principled" BRDF, which is based on the following rules:

  1. Use intuitive rather than physical parameters.
  2. Use as few parameters as possible.
  3. Parameters should be zero to one, remapped over their plausible range.
  4. Parameters should be allowed to be pushed beyond their plausible range where it makes sense.
  5. All combinations of parameters should be as robust and plausible as possible.

The metallic workflow follows those rules, by introducing a metallic parameter and by removing the specular texture. The metallic parameter is really intuitive: 0 represents a dielectric material, 1 a metallic one. The values between 0 and 1 should not be used, except in some special cases, like a transition between two materials.

Workflow Metallic
The metallic slider is set to one, so the material is a metal

This parameter is in fact a blend between the dielectric and metallic models. For the dielectric model the diffuse is the base color, and the specular is a constant value we define. For metallic materials the diffuse is set to black, and the baseColor is used as the specular.

// Lerp with metallic value to find the good diffuse and specular.
float3 realAlbedo = albedoColor - albedoColor * metallic;

// 0.03 default specular value for dielectric.
float3 realSpecularColor = lerp(0.03f, albedoColor, metallic);

As you can see, in the end it's transformed into the same inputs, but it's much simpler to use and less error prone. And it's only using 5 channels.

Using only these inputs you can't change the specular value of your dielectric materials, but you can add another one, in the range 0.017 – 0.063, remapped to 0 – 1, to control this value.
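A minimal sketch of what that extra input could look like, assuming the range quoted above and a hypothetical specular parameter in the 0 to 1 range:

// Hypothetical 'specular' input in [0, 1], remapped to the dielectric range mentioned above.
float dielectricF0 = lerp(0.017f, 0.063f, specular);

// It then replaces the hard-coded 0.03 from the previous snippet.
float3 realSpecularColor = lerp(dielectricF0, albedoColor, metallic);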

Some effects can’t be obtained in a metallic workflow, but as they don’t really have a physical reality you may not want to use them anyway.

 

A material with a colored specular and a colored diffuse.

 

This is just an overview of two ways of feeding a physically based renderer, and I think that each engine/studio/project has its own specific workflow. As often, it's all about knowing what you want, what your artists want, and the possibilities offered by your engine (deferred/forward). The Disney paper is a very good place to find what kind of inputs can be implemented, but as the Disney BRDF is the next feature I'll add to my viewer, I'll talk a bit more about it in another article.

 

Physically Based Rendering

Physically based rendering viewer

Physically based rendering is becoming the new standard for materials. It was already used a lot in AAA productions, and it’s now in Unreal Engine, Cry Engine and Unity.

As a graphics programmer I've read a lot of papers and seen lots of presentations on that topic, but I never had the chance to try it. That's why I made a small piece of software, to be able to experiment from both a code and a data point of view.

You can try it here: PBRViewer.

The viewer is easy to use: you move the camera with the keyboard (using ZQSDAE or WASDQE) and the mouse. SHIFT allows you to move faster.

I can use data from textures, or use the sliders to set my own values. It's very useful, because it allowed me to see the real impact of each parameter.

Textures must be placed in the folder Models/Materials/0, named "Albedo.tga", "Normal.tga", etc., and will be updated in the viewer automatically. The current textures are the result of my tests with Substance Designer, i.e. adding nodes at random and exporting. Results will be better with real textures.

I didn't test it on a non-programmer PC, so it may require some redistributables, such as the Visual Studio or DirectX redistributables.

If you have any issue or find a bug, please contact me using the comments, twitter (@oks2024) or mail: alexandre.pestana (at) supinfo.com. Also, as I said, I made this in order to discover and learn physically based shading, so if you see something strange or wrong I'd be happy to hear from you.

I used information I gathered on the internet, mainly:

  • Sébastien Lagarde’s blog:

http://seblagarde.wordpress.com/

Sébastien Lagarde shared a lot of information on how they implemented physically based rendering in Remember Me. It's a must read since it covers the subject from implementation to asset creation.

  • Brian Karis’s blog :

http://graphicrants.blogspot.ca/2013/08/specular-brdf-reference.html

While implementing PBR in UE4 he tried many options for the specular BRDF and shared them. It's very useful, and I plan to implement them all, and to be able to switch from one to another to view the impact.

  • Stephen Hill’s blog:

http://blog.selfshadow.com/publications/s2012-shading-course/

http://blog.selfshadow.com/publications/s2013-shading-course/

The pages on the siggraph PBR courses are full of information; if you want to know more about PBR, go read them all!

I also used information from books such as "Real-Time Rendering" or, of course, "Physically Based Rendering".

The cube map comes from Emil Persson's (aka Humus) wonderful texture library: http://www.humus.name/index.php?page=Textures&&start=0

And now here some screenshots of different presets:

Physically Based Rendering
Glossy dielectric
Physically Based Rendering
A semi glossy copper like metal
Physically Based Rendering
Yes, my textures are ugly.

Physically Based Rendering

Physically Based Rendering

Physically Based Rendering 7
Black rough plastic

 

Voxel visualization using DrawIndexedInstancedIndirect

This weekend I worked with the DrawIndexedInstancedIndirect function, and since I didn't find much information about it I wanted to share my results.

The next step for my voxel cone tracing project was to generate mip maps for my voxel grid. I implemented a first draft, but I needed a better way of displaying my voxel grid, to make sure that all the voxels were correct.

I was using the depth map to compute the world position. Then I transformed it into voxel grid coordinates to find the color of the matching voxel.

DrawIndexedInstancedIndirect

The problem is that, as shown in the screenshot, it doesn’t allow seeing the real voxelized geometry, and it’s hard to have a clear idea of the imprecision induced by the voxels.

That's why I started to work on a way to draw all the voxels, using the DrawIndexedInstancedIndirect function. Instanced drawing allows you to draw a single object several times (here I just draw a simple cube) and to apply instance-specific parameters to each instance.

The "indirect" functions are the same as the "non-indirect" ones, except that the arguments are contained in a buffer. It means that the CPU doesn't have to be aware of the arguments of the function: the buffer can be filled by a compute shader and then used directly to issue the draw call.

I have a buffer containing all my voxels, and the first thing I want to know is how many of them are not empty (that will be the number of instances to draw), and their positions within my voxel grid.

The first step is to create the buffer that will be used to feed the DrawIndexedInstancedIndirect function:


D3D11_BUFFER_DESC bufferDesc;

ZeroMemory(&bufferDesc, sizeof(bufferDesc));
// Five arguments for DrawIndexedInstancedIndirect: IndexCountPerInstance, InstanceCount,
// StartIndexLocation, BaseVertexLocation, StartInstanceLocation.
bufferDesc.ByteWidth = sizeof(UINT) * 5;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;
bufferDesc.StructureByteStride = sizeof(float);

hr = m_pd3dDevice->CreateBuffer(&bufferDesc, NULL, pBuffer);

The important flag here is D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS, to specify that the buffer will be used as a parameter for a draw indirect call.

Next, we create the associated unordered access view, to be able to write into the buffer from a compute shader.


D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory(&uavDesc, sizeof(uavDesc));
uavDesc.Format = DXGI_FORMAT_R32_UINT;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.Flags = 0;
uavDesc.Buffer.NumElements = 5;

hr = m_pd3dDevice->CreateUnorderedAccessView(*pBuffer, &uavDesc, pBufferUAV);

 

As I said earlier, I need to know the positions of the voxels in the voxel grid, so I can find their positions in the world and then their colors. For that I use an append buffer, another useful type of buffer that behaves pretty much like a stack. When you "Append" some data, it is put at the end of the buffer, and a hidden element counter is incremented.

Here is how I created this buffer and the associated SRV and UAV:

void Engine::CreateAppendBuffer(ID3D11Buffer** pBuffer, ID3D11UnorderedAccessView** pBufferUAV, ID3D11ShaderResourceView** pBufferSRV, const UINT pElementCount, const UINT pElementSize)
{
    HRESULT hr;

    D3D11_BUFFER_DESC bufferDesc;
    ZeroMemory(&bufferDesc, sizeof(bufferDesc));
    unsigned int stride = pElementSize;
    bufferDesc.ByteWidth = stride * pElementCount;
    bufferDesc.Usage = D3D11_USAGE_DEFAULT;
    bufferDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bufferDesc.StructureByteStride = stride;

    hr = m_pd3dDevice->CreateBuffer(&bufferDesc, NULL, pBuffer);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer.", L"Ok", MB_OK);
        return;
    }

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
    ZeroMemory(&uavDesc, sizeof(uavDesc));
    uavDesc.Format = DXGI_FORMAT_UNKNOWN;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.FirstElement = 0;
    uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
    uavDesc.Buffer.NumElements = pElementCount;

    hr = m_pd3dDevice->CreateUnorderedAccessView(*pBuffer, &uavDesc, pBufferUAV);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer unordered access view.", L"Ok", MB_OK);
        return;
    }

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    ZeroMemory(&srvDesc, sizeof(srvDesc));
    srvDesc.Format = DXGI_FORMAT_UNKNOWN;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.FirstElement = 0;
    srvDesc.Buffer.NumElements = pElementCount;

    hr = m_pd3dDevice->CreateShaderResourceView(*pBuffer, &srvDesc, pBufferSRV);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer shader resource view.", L"Ok", MB_OK);
        return;
    }
}

Now, the compute shader. It's in fact pretty simple. First, a single thread initializes my argument buffer to 0, except for the first argument, which represents the number of indices in the index buffer that will be bound.

Then, each time I find a non-empty voxel, I increase the number of instances to draw using an InterlockedAdd, and I append its position to the perInstancePosition buffer.


#include "VoxelizerShaderCommon.hlsl" // assumed to provide GetGridIndex, Voxel and VOXEL_CLEAN_THREADS, as in the vertex shader below

AppendStructuredBuffer<uint3> perInstancePosition : register(u0);

RWStructuredBuffer<Voxel> voxelGrid : register(u1);

// Indirect draw arguments, the five uints consumed by DrawIndexedInstancedIndirect
// (register binding assumed for this listing).
RWBuffer<uint> testBuffer : register(u2);

[numthreads(VOXEL_CLEAN_THREADS, VOXEL_CLEAN_THREADS, VOXEL_CLEAN_THREADS)]
void main( uint3 DTid : SV_DispatchThreadID )
{
    if ( DTid.x + DTid.y + DTid.z == 0)
    {
        testBuffer[0] = 36; // IndexCountPerInstance: 36 indices for the cube
        testBuffer[1] = 0;  // InstanceCount: incremented below for each non-empty voxel
        testBuffer[2] = 0;  // StartIndexLocation
        testBuffer[3] = 0;  // BaseVertexLocation
        testBuffer[4] = 0;  // StartInstanceLocation
    }
    GroupMemoryBarrier();

    uint3 voxelPos = DTid.xyz;
    int gridIndex = GetGridIndex(voxelPos);

    if (voxelGrid[gridIndex].m_Occlusion == 1)
    {
        uint drawIndex;
        InterlockedAdd(testBuffer[1], 1, drawIndex);

        // Store the voxel's grid position for this instance.
        perInstancePosition.Append(voxelPos);
    }
}

At the end of the execution of this compute shader, both buffers are filled with the information needed to draw all the voxels.

I use a really simple cube to represent the geometry of a voxel:


// Create the voxel vertices.
VertexPosition tempVertices[] =
{
    { XMFLOAT3( -0.5f,  0.5f, -0.5f )},
    { XMFLOAT3(  0.5f,  0.5f, -0.5f )},
    { XMFLOAT3(  0.5f,  0.5f,  0.5f )},
    { XMFLOAT3( -0.5f,  0.5f,  0.5f )},

    { XMFLOAT3( -0.5f, -0.5f, -0.5f )},
    { XMFLOAT3(  0.5f, -0.5f, -0.5f )},
    { XMFLOAT3(  0.5f, -0.5f,  0.5f )},
    { XMFLOAT3( -0.5f, -0.5f,  0.5f )},
};

// Create index buffer
WORD indicesTemp[] =
{
    3,1,0,
    2,1,3,

    6,4,5,
    7,4,6,

    3,4,7,
    0,4,3,

    1,6,5,
    2,6,1,

    0,5,4,
    1,5,0,

    2,7,6,
    3,7,2
};

I can now bind the index and vertex buffers, the perInstancePosition and voxelGrid buffers, and start to write the shaders. The goal is simple: each item in perInstancePosition is a uint3 representing the position of a non-empty voxel in the voxel grid. I just need to move the vertices to the right world position, scale my unit cube to match the size of a voxel, and find the right color to pass to the pixel shader.

Here is my vertex shader:


#include "VoxelizerShaderCommon.hlsl"

StructuredBuffer<uint3> voxelParameters : register(t0);
StructuredBuffer<Voxel> voxelGrid : register(t1);

cbuffer ConstantBuffer: register(b0)
{
    matrix g_ViewMatrix;
    matrix g_ProjMatrix;
    float4 g_SnappedGridPosition;
    float g_CellSize;
}

struct VoxelInput
{
    float3 Position : POSITION0;
    uint InstanceId : SV_InstanceID;
};

struct VertexOutput
{
    float4 Position: SV_POSITION;
    float3 Color: COLOR0;
};

VertexOutput main( VoxelInput input)
{
    VertexOutput output;

    uint3 voxelGridPos = voxelParameters[input.InstanceId];

    int halfCells = NBCELLS/2;

    float3 voxelPosFloat = voxelGridPos;

    float3 offset = voxelGridPos - float3(halfCells, halfCells, halfCells);
    offset *= g_CellSize;
    offset += g_SnappedGridPosition.xyz;

    float4 voxelWorldPos = float4(input.Position*g_CellSize + offset, 1.0f);

    float4 viewPosition = mul(voxelWorldPos, g_ViewMatrix);
    output.Position = mul(viewPosition, g_ProjMatrix);

    uint index = GetGridIndex(voxelGridPos);
    output.Color = voxelGrid[index].Color;

    return output;
}

An interesting thing here is the instance id (SV_InstanceID), automatically generated by the instanced draw call, which identifies each instance, allowing me to create a voxel for each position in the buffer.

The pixel shader is really straightforward:


struct VertexOutput
{
    float4 Position: SV_POSITION;
    float3 Color: COLOR0;
};

float4 main(VertexOutput input) : SV_TARGET
{
    return float4(input.Color, 1.0f);
}

And finally I call the DrawIndexedInstancedIndirect function:


engine->GetImmediateContext()->DrawIndexedInstancedIndirect(argBuffer, 0);

 

This is just an example, but the indirect draw functions allow you to do a lot of things using only the GPU, without the need to synchronize with the CPU. It's a powerful tool, and I really want to try more things with it.

And to conclude, some screenshots for voxel grids of 32x32x32 and 256x256x256:

DrawIndexedInstancedIndirect

DrawIndexedInstancedIndirect

Voxelization using the GPU hardware rasterizer

Last week I started to add a new feature to my tiled deferred renderer: voxelization using the GPU hardware rasterizer. A lot of recent techniques use a voxel grid (global illumination, volumetric effects, etc.) and I really wanted to experiment with some of them, especially global illumination.

One paper in particular had caught my attention, voxel cone tracing.

The first step of this technique is voxelization using the hardware rasterizer. There are plenty of resources on this topic:

 

I'm sure there are more resources on this topic, but these are the main sources of information I used for the first step of my implementation.

This technique is based on a simple observation: we want to transform vector information into a (3D) grid. That's the job of the rasterizer, except that it does it for a 2D grid. We just need to make some modifications so that it can also work with a 3D grid.

You can find all of the details in the previous links, but here is a quick overview of the different steps:

  • In the CPU code, compute the voxel grid position, extents, etc., and three view-projection matrices, one for each axis. This will allow us to project the triangles into voxel grid space.
  • Draw the whole scene into a small render target. The depth test is deactivated in order to voxelize every triangle, and color writes are deactivated as well. Results are output to a structured buffer bound to the pixel shader, and nothing is written to the texture. Some minor modifications can be done to store static geometry in the grid and update only dynamic objects, to reduce the runtime cost.
  • In the geometry shader, for the current triangle, the first step is to find the axis (X, Y or Z) from which the triangle is the most visible. This ensures that most of it is voxelized (a minimal sketch of this selection is shown after the list).
  • The triangle is projected according to the chosen axis, using one of the matrices computed in the first step.
  • We want a conservative voxelization, but the rasterizer only considers the parts of a triangle that cover the center of a pixel. To make sure that every pixel partially covered by a triangle is taken into account, we "bloat" the triangle by moving its vertices.
  • Then each pixel arrives in the pixel shader with all the information needed to write into the voxel grid.
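To illustrate the axis selection step, here is a minimal geometry shader sketch. The constant buffer layout and struct names are assumptions made for this example, and the triangle bloating and voxel grid output described above are omitted, so this is an illustration only, not my actual shader:

// Minimal sketch of the dominant axis selection (illustration only).
cbuffer VoxelizeCB : register(b0)
{
    matrix g_ViewProjAxis[3]; // one view-projection matrix per axis (X, Y, Z), computed on the CPU
};

struct GSInput  { float4 WorldPos : POSITION; };
struct GSOutput { float4 Position : SV_POSITION; };

[maxvertexcount(3)]
void main(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    // The absolute value of the face normal tells us along which axis
    // the triangle has the largest projected area.
    float3 n = abs(cross(input[1].WorldPos.xyz - input[0].WorldPos.xyz,
                         input[2].WorldPos.xyz - input[0].WorldPos.xyz));

    uint axis = (n.x > n.y && n.x > n.z) ? 0 : (n.y > n.z ? 1 : 2);

    [unroll]
    for (uint i = 0; i < 3; ++i)
    {
        GSOutput output;
        output.Position = mul(input[i].WorldPos, g_ViewProjAxis[axis]);
        stream.Append(output);
    }
    stream.RestartStrip();
}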

For now I chose to use a standard voxel grid instead of a sparse octree. I'd like to also use my voxel grid for some volumetric effects, so there may not be that much empty space, which reduces the interest of a sparse octree. But maybe I'll try to implement it later.

Here are some screenshots at different grid resolutions, running on my laptop's 630m:

32x32x32:

Voxelization using the GPU hardware rasterizer 32

128x128x128:

Voxelization using the GPU hardware rasterizer 128

256x256x256:

Voxelization using the GPU hardware rasterizer 256

And here some screenshots from a NVidia GTX770 :

32x32x32:

Voxelization using the GPU hardware rasterizer

128*128*128:

Voxelization using the GPU hardware rasterizer

256*256*256:

Voxelization using the GPU hardware rasterizer 256_GTX770

I even tried a 512*512*512 grid, by removing all the voxel information I don't use yet, keeping only the color value:

Voxelization using the GPU hardware rasterizer 512_GTX770

It's a huge number of voxels, and I'm not sure such precision is needed, but 11.70 ms is pretty interesting, considering that for now the whole scene is voxelized every frame, without LOD or any optimisation. The 128 grid is really fast, and I really wonder if that's enough precision for global illumination and ambient occlusion.

The next steps are "mipmapping" of the voxel grid, and then I'll try ambient occlusion using voxel cone tracing.

 

And after that I will implement a progressive screen space voxelizer (GPU Pro 4, chapter 6) to compare the results and performance.