
Sponza PBR

Base color, Roughness and Metallic textures for Sponza

It's been a long time since my last blog post, mainly because I'm currently "stuck" on non-graphics work. I started working on an editor to interact more easily with objects in my engine, a socket class to handle communication between the editor and the engine, and mechanisms to handle auto-generated code. Maybe I will write a blog post on this once it's up and running; for now it's mainly tests, research and prototypes.

Anyway, a few weeks ago I decided to start using PBR in my engine. I used the same equations as in my PBR viewer, so it wasn't really complicated to implement. In fact the part that took most of my time was creating the Base Color, Normal, Roughness and Metallic textures for each material in Sponza, so I thought it might be worth sharing if it can save someone some time. I used the provided textures as a base and created the missing ones with Substance Designer. That's why "PBR" should really be in quotes: the textures are far from calibrated or scanned, they were mainly made to look ok and to let me test a quick PBR environment. And I'm not an artist, so it may be better to consider this "programmer art".

 SponzaPBR

Sponza in my engine with the new textures

All the textures are 1024×1024. For those who would like to use this as a base to create more accurate textures, I also uploaded the Substance file with all the graphs I used to create them. And if you want to convert the textures to a specular workflow, that would be really easy to do as well.

Substance PBR Sponza

If you want to use the mtl file, I stored the metallic textures in the “map_Ka” channel, and the roughness in “map_Ns”.

I don't place any restrictions on the use of these new textures, but the original textures were made by Frank Meinl and modified by Morgan McGuire, and you will find the proper copyright file included.

Download the "PBR" Textures (98 MB).

Download the Substance file + original textures (103 MB).

Sponza PBR

DrawIndirect performance

A few days ago I stumbled upon a strange behavior of the drawIndirect function, and I'm curious to know whether it only happens on my PC or if it's a more general issue.

Currently in my engine I have a lot of objects drawn with DrawInstanced(). It's always the same number of objects, but most of the time they are not all visible on screen, so I wanted to try culling them on the GPU.

The idea is simple: a compute shader does the culling and builds two buffers, one with the objects that are on screen, and the other with the parameters for a drawIndirect call.

Usually I prefer to progress step by step, so I started with a compute shader that only writes the drawIndirect buffer with the constant number of objects, and feeds that to the DrawInstancedIndirect() function.
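To give a concrete idea of this first test, here is a minimal sketch (the cbuffer contents, names and register slots are illustrative, not my exact shader). The four uints follow the argument layout DrawInstancedIndirect expects, and on the CPU side the buffer is created with the D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS flag:

// Sketch: fill the indirect arguments with a constant object count.
// Layout: { VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation }.
cbuffer DrawParams : register(b0)
{
    uint g_VertexCountPerInstance; // assumed constants, set by the application
    uint g_InstanceCount;
};

RWByteAddressBuffer g_IndirectArgs : register(u0);

[numthreads(1, 1, 1)]
void main()
{
    g_IndirectArgs.Store4(0, uint4(g_VertexCountPerInstance, g_InstanceCount, 0, 0));
}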

So it wasn't supposed to change anything: I just swapped the DrawInstanced() for a DrawInstancedIndirect() with the same parameters. But I noticed that the DrawInstancedIndirect() was almost two times slower than the DrawInstanced() (1.1 ms for the indirect draw versus 0.6 ms).

I spent some time trying to see what was wrong in my code, and after a while I tried it on my laptop with an NVIDIA 630m: even though it was much slower overall, the timings were the same for both functions.

So I decided to try this in a smaller project.

I just took the tutorial from the DirectX SDK showing how to draw a triangle and changed the draw call. You can download it on GitHub.

You can comment line 426 or 427 to switch between the two draw commands, and you can change the number of vertices to draw by editing line 44.

Here are the results on my AMD R9 290, using the 14.9 drivers:

For 900 000 vertices:

  • DrawInstanced: 0.31ms
  • DrawInstancedIndirect: 0.42ms

For 9 000 000 vertices:

  • DrawInstanced: 2.45ms
  • DrawInstancedIndirect: 4.56ms

On the NVIDIA GT 630m:

For 900 000 vertices:

  • DrawInstanced: 2.72ms
  • DrawInstancedIndirect: 2.72ms

For 9 000 000 vertices:

  • DrawInstanced: 26.87ms
  • DrawInstancedIndirect: 26.87ms

I was also able to test it on a GTX780, and there is no difference between the two functions.

I gathered timings for several vertex counts and used all my Word skills to summarize the results in a graph:


Timings in ms for the DrawInstanced and DrawInstancedIndirect functions for various numbers of vertices on an R9 290.

If somebody has a clue as to why the drawIndirect function is slower on a (my?) R9 290, I would be happy to hear it. Maybe there is something wrong in my code, but that still wouldn't explain why it only happens on the AMD card.

I wasn't able to find another AMD card to test with, so maybe it's just something wrong on my PC. But it could also be a driver issue, and I'm curious to see if it happens on other AMD cards as well.

So if you have any suggestions or information, I'd be glad to hear them!

Integrating RenderDoc

A few days ago a new version of RenderDoc was released, and while reading the changelist I discovered that Temaran has made a really cool plugin which integrates RenderDoc directly into Unreal Engine 4.

This is extremely useful. More than once I launched a debug session from Visual Studio, found a weird bug, and had to launch the application again from RenderDoc and try to reproduce it.

So I looked at the source code uploaded on GitHub by Temaran, removed the UE4-related code and kept only a single class, to be able to load RenderDoc and trigger a capture directly from my engine.

You can find the code here: https://github.com/oks2024/RenderDoc-Manager

In the end it's just two headers and one cpp file. You just have to provide some paths, like where you want to store the captures and your RenderDoc folder. In my case I use the portable version and stored it in Perforce. Keep in mind that your build target must match the RenderDoc version: you can't mix x86 and x64.

You can either bind a key to RenderDoc that will trigger a capture or use the StartFrameCapture()/EndFrameCapture() functions. I use the latter because it allows me to skip the update part of my engine, capturing only the rendering functions.

It's working great in my small engine; I've been using it for a couple of days and haven't noticed any issues. I know that it can slow down resource creation, so for a bigger engine it may not be something you want to keep always enabled.

As you can see it's a very basic code skeleton, and most of the code is from Temaran's plugin, but I found it really useful and thought it was worth sharing.

I think I will add functions (at least to set capture options without having to recompile) as I need them, and I will try to keep the GitHub repository up to date. And if you have any suggestions, it's on GitHub, so feel free to leave a comment or add your modifications :).

Apart from debugging, it may also be useful for creating automated tests. With the appropriate script you can load a level, move the camera through several positions, and take captures. After that it should be possible to get the images (and maybe even timings) from the captures and compare them, to make sure your last submit did not break the rendering.

Baldur Karlsson is doing amazing work on RenderDoc, with regular updates and new features. It was already one of my favorite tools, and that's not going to change!

Improved shadows using dithering and temporal supersampling

To be able to implement volumetric lights I had to start with shadows for the sun light. As it wasn’t my primary focus I went for a straightforward shadow map implementation on a 2k texture. It’s easy and quick to code, but yeah, the results are ugly.

BasicShadowMaps

Yes, you can count pixels in the shadows.

 

I didn't plan to implement a more advanced shadow map technique anytime soon, but I also didn't want to stay with those ultra pixelated shadows, so I tried to apply my two current favorite techniques: dithering and temporal supersampling :).

Let's see what they can do!

First step: offset the shadow map bias by a value taken from a Bayer matrix:


 

// ditherPattern is a 4x4 Bayer matrix (the values are listed in the volumetric lights post below).
float shadow = 1.0f;

float ditherValue = ditherPattern[screenSpacePosition.x % 4][screenSpacePosition.y % 4];

// Scale the bias by the per-pixel dither value before the depth comparison.
if (shadowMapValue < worldByShadowCamera.z - bias * ditherValue)
    shadow = 0.0f;

And then a bilateral blur. In my engine I already have a bilateral blur pass for the SSAO, with one free channel in the texture. So I just added the shadow value in the alpha channel of my SSAO texture and got the blur for free. It’s also convenient for the lighting stage as the tiled deferred shader can read both AO and sun light shadow in one tap.
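As a rough illustration of that single tap (the texture and sampler names, and the choice of red for AO and alpha for the shadow, are just my assumptions here):

// Illustrative only: the blurred SSAO texture carries the sun shadow in its alpha
// channel, so the tiled deferred shader fetches both values with a single sample.
float4 aoAndShadow = g_SSAOTexture.Sample(g_LinearClampSampler, texCoord);
float ambientOcclusion = aoAndShadow.r;
float sunShadow = aoAndShadow.a;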

Here is what it looks like now:

ShadowDither

It looks way better, and it really was just a few lines of code! Plus, by merging it with the ambient occlusion, it's almost free!

But in some places the dither pattern is really noticeable.

DitherPattern

DitherPattern2

And in movement there is a lot of flickering.

It’s now time for some temporal supersampling !

Again, in my case it was really easy as I had already implemented this for the ambient occlusion. I just had to store a "previous frame shadows" texture and feed it to my AO shader, which already takes care of rejecting bad previous pixels based on depth.

The flickering is almost totally gone! But as you can notice there is a lot of ghosting, especially when the sun light is moving. This is because the rejection of wrong pixels is based only on their depth value; that's good enough for AO, but not for shadows. When the sun moves the depth doesn't change, and yet the previous value shouldn't be used.

So I added a pixel rejection based on the difference between the current and previous values: if the value changed too much between two frames, something must be wrong. The hard part is finding the right function and the right values to remove the obviously wrong data while still keeping enough information to be effective. These are just my first experiments with this; I need to read some papers/presentations and do more tests to improve the result, and maybe write more about it. Here are my results so far.
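Here is a small sketch of what that rejection can look like (the blend factor and the rejection scale are placeholder values, not a tuned solution):

// Sketch: blend the current shadow with the history, but fade the history out
// when the value changed too much between the two frames (e.g. the sun moved).
static const float g_RejectionScale = 4.0f; // assumed tuning value

float ResolveTemporalShadow(float currentShadow, float previousShadow, float depthWeight)
{
    // 1 when the history matches the current value, falling to 0 as it diverges.
    float valueWeight = saturate(1.0f - abs(currentShadow - previousShadow) * g_RejectionScale);

    // depthWeight is the depth-based rejection already used for the AO.
    float historyWeight = 0.9f * depthWeight * valueWeight;

    return lerp(currentShadow, previousShadow, historyWeight);
}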

It’s better ! A little of the flickering is back, but it’s still a great improvement compared to what I had at first.

But the issue with the dithering pattern being noticeable in some places isn't resolved. To remove it I used two tricks. First, I change the dither offset used by a given pixel every frame. This would normally induce a lot of flickering, but the temporal supersampling smooths it out and it becomes unnoticeable.
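In practice it's just a per-frame offset of the lookup into the 4x4 matrix, something like this (g_FrameIndex is an assumed per-frame constant):

// Shift the dither lookup every frame so each pixel cycles through the Bayer
// values; the temporal supersampling then averages the flicker away.
uint2 ditherCoord = (screenSpacePosition.xy + uint2(g_FrameIndex, 2 * g_FrameIndex)) % 4;
float ditherValue = ditherPattern[ditherCoord.x][ditherCoord.y];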

Then I randomly change the sun direction by a small amount. I'm not yet perfectly happy with this, as it's currently quite noticeable, even if it removes some artifacts and adds a soft shadow effect. Maybe I should use a predefined pattern instead of a completely random offset, in order to get better temporal stability.

Temporal and jitter

 

That's it for now! There is still a lot of room for improvement, but just by using some quick tricks and existing resources in my engine I was able to improve the look of my shadows, at a very limited cost and in a few hours. I really like the possibilities offered by temporal supersampling, I will definitely use it more!

To conclude, here are two screenshots of the shadows only, before and after.

ShadowsOnlyBefore

ShadowsOnlyAfter

VolumetricLights4

Volumetric lights

While waiting for a new computer that will make my experiments with voxels more comfortable (even a 64x64x64 grid is slow on my laptop) I decided to try some less expensive effects, starting with the volumetric lights described in GPU Pro 5 by Nathan Vos from Guerrilla Games.

First of all I highly recommend reading this chapter (and the whole book!): there are a lot of useful tricks, and I only scratch the surface. There is also the excellent GDC talk "Taking Killzone Shadow Fall Image Quality into the Next Generation" by Michal Valient. You can download the slides on the Guerrilla Games publication page. It's not just about volumetric lights, the whole talk is really interesting (especially the part on the nice trick to achieve 60fps for multiplayer using temporal reprojection, which I found particularly awesome. But be careful, nowadays you can be sued for being awesome…).

A similar approach has been used in the upcoming game "Lords of the Fallen" and is described by Benjamin Glatzel in his Digital Dragons talk. You can see the slides here and the presentation here; it's a really good source of information.

I started with a directional light, the sun light in my case, that will affect the whole scene.
First of all I needed some kind of shadow map. I could have created one by ray marching in my voxel grid as I’ve done with point lights. As there is only one light it’s easy to store the distance from the occluder. But again my laptop GPU is really slow and it would have been annoying, so I started with a really basic shadow map implementation.

The idea is to "ray march" from the world position of the current pixel to the camera position, and at each step check whether the current position is visible from the light or not, using the information from the shadow map. At each step the light scattered toward the camera is accumulated, using the Henyey-Greenstein phase function.

It's really simple, but as often with ray marching it can quickly become expensive, as it requires a certain number of steps to capture details; that's why the ray marching is done in a downscaled texture.

The pseudo code for the raymarching looks like this:


// Mie scattering approximated with the Henyey-Greenstein phase function.
float ComputeScattering(float lightDotView)
{
    float result = 1.0f - G_SCATTERING * G_SCATTERING;
    result /= (4.0f * PI * pow(1.0f + G_SCATTERING * G_SCATTERING - (2.0f * G_SCATTERING) * lightDotView, 1.5f));
    return result;
}

// Ray march from the pixel's world position back to the camera.
float3 worldPos = getWorldPosition(input.TexCoord);
float3 startPosition = g_CameraPosition;

float3 rayVector = worldPos.xyz - startPosition;

float rayLength = length(rayVector);
float3 rayDirection = rayVector / rayLength;

float stepLength = rayLength / NB_STEPS;

float3 step = rayDirection * stepLength;

float3 currentPosition = startPosition;

float3 accumFog = 0.0f.xxx;

for (int i = 0; i < NB_STEPS; i++)
{
    // Project the current position into shadow map space
    // (shadowmapTexCoord is derived from worldInShadowCameraSpace.xy, omitted here).
    float4 worldInShadowCameraSpace = mul(float4(currentPosition, 1.0f), g_ShadowViewProjectionMatrix);
    worldInShadowCameraSpace /= worldInShadowCameraSpace.w;

    float shadowMapValue = shadowMap.Load(uint3(shadowmapTexCoord, 0)).r;

    // Accumulate the in-scattered light only when the current position is lit.
    if (shadowMapValue > worldInShadowCameraSpace.z)
    {
        accumFog += ComputeScattering(dot(rayDirection, sunDirection)).xxx * g_SunColor;
    }
    currentPosition += step;
}
accumFog /= NB_STEPS;

Here is the result of the raw ray marching with 100 steps:

Volumetric lights, 100 steps

It looks pretty good, but even at half resolution it's still 7.6 ms on my GeForce 630m.

Let's see what it looks like with a lot fewer ray marching steps, 10:

Volumetric lights 10 steps

Yeah, it’s pretty ugly.

But as with the ray marched shadows from my previous post, a Bayer matrix adds some noise and helps capture more details with fewer samples.


static const float ditherPattern[4][4] = {{ 0.0f, 0.5f, 0.125f, 0.625f},
                                          { 0.75f, 0.25f, 0.875f, 0.375f},
                                          { 0.1875f, 0.6875f, 0.0625f, 0.5625f},
                                          { 0.9375f, 0.4375f, 0.8125f, 0.3125f}};

// Offset the start position.
startPosition += step * ditherValue;

Volumetric lights dither pattern

And the next step is a bilateral blur in order to have a smooth result.

Volumetric light Blur

I always find the effect of the noise + blur amazing for ray marching !

The main issue with downsampling is that when you apply the result to the full resolution scene, all the edges are blurry and/or pixelated, as you can see in the screenshot. This is fixed in the last step, a bilateral upsampling.
To write the value of a full resolution pixel, I take the values of the four nearest downscaled pixels and weight them according to their depth, as in a bilateral blur.
On the X axis, for an even pixel I sample the current and the left pixel, and the right one for an odd pixel. The same applies to the Y axis.


 

float4 main(const PS_INPUT input) : SV_TARGET
{
    // screenCoordinates: full resolution pixel position, downscaledCoordinates: half resolution.
    float upSampledDepth = depth.Load(int3(screenCoordinates, 0)).x;

    float3 color = 0.0f.xxx;
    float totalWeight = 0.0f;

    // Select the four closest downscaled pixels.
    int xOffset = screenCoordinates.x % 2 == 0 ? -1 : 1;
    int yOffset = screenCoordinates.y % 2 == 0 ? -1 : 1;

    int2 offsets[] = {int2(0, 0),
                      int2(0, yOffset),
                      int2(xOffset, 0),
                      int2(xOffset, yOffset)};

    for (int i = 0; i < 4; i++)
    {
        float3 downscaledColor = volumetricLightTexture.Load(int3(downscaledCoordinates + offsets[i], 0)).rgb;

        // Mip 1 holds the half-resolution depth.
        float downscaledDepth = depth.Load(int3(downscaledCoordinates + offsets[i], 1)).x;

        // Weight the sample by its depth similarity with the full resolution pixel.
        float currentWeight = 1.0f;
        currentWeight *= max(0.0f, 1.0f - 0.05f * abs(downscaledDepth - upSampledDepth));

        color += downscaledColor * currentWeight;
        totalWeight += currentWeight;
    }

    float3 volumetricLight;
    const float epsilon = 0.0001f;
    volumetricLight.xyz = color / (totalWeight + epsilon);

    return float4(volumetricLight.xyz, 1.0f);
}

 

And here is the final result :

 

Volumetric light final result

 

Some screenshots from other points of view, with 15 steps:

VolumetricLights3 VolumetricLights2 VolumetricLights VolumetricLights4

 

I'm pretty happy with the results so far. It's fast (around 2 ms on my GPU), and unlike the old school godray/sunbeam algorithms the volumetric effect can be seen even if the light source is not on screen, since it relies on shadow maps.

For now, in my implementation the fog density is uniform. The GPU Pro chapter gives a lot of information on how to control the density, and it really improves the result. My idea is to use the empty voxels of my grid to store density values. Then I should be able to update those values with things like fog emitters, wind, collisions, etc. I tried with a 64x64x64 grid but the precision is not good enough, so I'll try with a larger grid size.

For an alternative technique you can have a look at "Volumetric Fog: Unified compute shader based solution to atmospheric scattering", used in Assassin's Creed 4 and described by Bart Wronski at SIGGRAPH (slides are available here). It's more complicated, but it may be faster (especially for multiple lights), and using an intermediate 3D texture is really interesting as it allows more advanced calculations.

As I said at the beginning it's just a start, and yet I find it already improves the sensation of depth in the scene. It reinforces the impact of lights, as they no longer only affect the geometry. And with a low number of steps it's fast enough, so it's really something I want to explore further!

Shadows using a voxel grid

Dynamic shadow casting point lights for tiled deferred rendering

A while ago, I started experimenting with voxels. More precisely, my idea was to test what could be possible if we had our scene fully voxelized. Dynamic shadows are one of those tests.

For my tests I implemented a tiled deferred rendering engine, and one of the difficulties with tiled deferred is shadows. All the lights are rendered in a single shader, meaning that the shadow maps from every light source must be bound to this compute shader.

The last few years have seen a lot of techniques increasing the number of simultaneous dynamic light sources (deferred, clustered, tiled deferred, forward+), but always ignoring shadows. Voxels can help add dynamic shadows to several light sources by replacing the shadow maps, but I wondered whether the precision would be acceptable.

 

I described in a previous blog post the technique I use to dynamically voxelize a scene. I think there might be some ways to optimize this process, but that will be for another blog post!

All the following screenshots and timings are from a GTX 780, and the resolution is 1280×720. There are 32 point lights in the scene.

First of all, here is what the voxelized scene looks like with a 256x256x256 grid:

Voxelized scene

And the scene without shadows:

SceneWithoutShadows

The main idea is really simple. In the tiled deferred shader, when computing the lighting, the voxel structure lets me check whether the current pixel is hidden from the light by something.

My main interest at first was to see how it could look, so I started with a straightforward ray march from the current pixel position to the point light. At each step I transform the world position into a voxel grid position and check whether the voxel is full or empty.
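A stripped-down sketch of that loop (the grid texture, the world-to-grid mapping and the step count are placeholders for whatever the voxel structure provides, not my exact code):

#define NB_SHADOW_STEPS 16 // assumed step count

Texture3D<uint> g_VoxelGrid;  // non-zero = occupied voxel
float3 g_VoxelGridOrigin;     // assumed constants describing the grid placement
float  g_VoxelGridCellSize;

// Assumed mapping from world space to voxel grid coordinates.
uint3 WorldToVoxel(float3 worldPosition)
{
    return (uint3)((worldPosition - g_VoxelGridOrigin) / g_VoxelGridCellSize);
}

float VoxelShadow(float3 worldPosition, float3 lightPosition)
{
    float3 stepVector = (lightPosition - worldPosition) / NB_SHADOW_STEPS;

    // Start one step away from the surface to limit self-shadowing.
    float3 currentPosition = worldPosition + stepVector;

    for (int i = 1; i < NB_SHADOW_STEPS; i++)
    {
        if (g_VoxelGrid.Load(int4(WorldToVoxel(currentPosition), 0)) != 0)
            return 0.0f; // occluded: the ray hit a full voxel

        currentPosition += stepVector;
    }
    return 1.0f; // fully lit
}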

Here is the first result:

Shadow with voxels first result

The result depends on the number of steps, and of course the more steps you use, the slower it is. In this screenshot there are 25 steps per ray.

Some lights are leaking, and the shadows are very harsh; the soft attenuation is lost because of the voxel size.

Let’s try with more steps, 75 :

Shadows70Steps

No more light leaking, but it's more costly, and the shadows are still very sharp, either fully present or absent; it would be great to have more subtle values.

During the last GDC, Michal Valient from Guerrilla Games gave a talk (the slides are available here: http://www.guerrilla-games.com/publications.html ) where he showed how to use a dither offset followed by a Gaussian blur to improve raymarching visual quality and reduce the number of steps needed. This is detailed in the chapter “Volumetric Light Effects in Killzone:Shadow Fall” in GPU Pro 5.

So here is the result when I added the dithered offset, with 25 steps:

WithShadowsWithoutBlur

With fewer steps, the results are far better. With the pseudo-random offset the ray marching captures more details. But without applying any blur, the noise induced by the dither pattern is very noticeable.

The blur was a little bit more complicated. It would be too expensive to raymarch the shadows for every sample needed by the blur, so this value must be stored somehow.

A few weeks ago I implemented SSAO with temporal supersampling, as described by Bart Wronski here, so I thought I could do something similar: use the results of the previous frame to improve the current one.

The previous frame's information must be stored so that, for a given pixel, we can know which point lights are occluded and apply the proper lighting. Since the result of the raymarching is either 0 or 1 (the light is occluded or not), it takes only one bit to store the shadow information of a single light. A uint32 texture can therefore store shadow information for 32 lights per pixel. There could be more than 32 lights with another data structure, but it's a good start!

Once the raymarching is done, the result is stored in the texture like that:


uint currentLightFlag = 1 << CurrentLightIndex;

if (result)
    g_PointLightShadows[CurrentCoord] |= currentLightFlag;

 

And to read the data for a given light:

 


int previousShadow = (g_PreviousPointLightShadows[sampleCoords + motionOffset] & currentLightFlag) == currentLightFlag;

And the result:

Shadows using a voxel grid

I think it's far better: the noise in the ray marching helps capture more details and smooths out some of the voxel imprecision, while the blur gives a nice soft shadow look.

 

I didn't talk much about performance, because in this first test I was mainly interested in image quality, and the implementation is really straightforward. For example the current blur implementation is far from optimal: I sample directly from the "point light shadow map" for each light. I could preload the needed values once per tile and store them in shared memory for quicker access.

Still, the timings are not that bad. You can see that by looking at the "lighting" section, which is where the lighting and shadow casting happen. The shadows plus offset and blur add 4.7 ms. As it's done in the tiled lighting pass, performance is linked to the number of lights seen on screen, and to the number of lights in a single tile.

That's not bad considering it's 32 dynamic shadow casting point lights, but of course it requires a voxel structure.

 

Here is another comparison from a different point of view.

WithoutShadow2 WithShadow2

In this example the artifacts due to the coarse voxel structure are noticeable.

I was able to reduce them by using an offset at the beginning of the ray marching, to remove some self-shadowing pixels (I just noticed that it's not exactly the same point of view):

RaymarchingWithOffset

 

It’s a little better, but still something I’ll need to investigate.

I also tried to change the number of steps for the raymarching, to see which values would be the best compromise between performance and quality.

5 steps:

 

5steps

10 steps: 10steps

15 steps: 15steps

50 steps: 50steps

 

5 steps are not enough, the shadows are missing where none of the steps hit a voxel. This is because with a grid size of 256x256x256 the voxels are quite small, and only the surface of the mesh is voxelized, and not the inside of the mesh.

With 10 steps there are still some issues. I think a better blur could maybe hide those imperfections, while smoothing the squared edges.

When there are too many steps there are no holes in the shadows, but it gives a very sharp result, making the voxels more noticeable. This value needs to be tweaked according to the scene, the grid size and the light radius.

Here is a last test, using smaller grid sizes:

128x128x128:

128VoxelGrid

10 steps raymarching: 128VoxelsGrid-10Steps

64x64x64: 64VoxelGrid

5 steps: 64VoxelGrid-5steps

The smaller the grid is, the fewer steps it needs to give a correct result, because of the bigger voxels. But obviously the shadows are far less precise.

As I said this is a quick test, and there is room for improvement, but I find the results quite encouraging. It can't replace a shadow map for important lights or complicated shadows, but it's good enough for secondary lights.

Now the next step is to try other techniques and compare the results. With the proper mipmaps, the voxel structure can be used as an octree, and I should be able to cast rays efficiently. It may be quicker and more precise than ray marching. I also want to try shadows with voxel cone tracing.

I would also like to try doing the raymarching/raycasting/voxel cone tracing later in the frame. In the current implementation the number of ray marches for a pixel depends on the number of lights hitting that pixel: some pixels will need 5 raycasts while others won't need a single one. It would be better if, instead of doing the ray marching immediately, it created some sort of "GPU raymarching job", and those jobs were done later, equally distributed among the threads.

So there will be more blog posts on this subject! I will also try to make a video, because it looks better in motion.

Disney BRDF

Implementation of the Disney principled BRDF

After reading the papers from the 2012 SIGGRAPH shading course I really wanted to try the BRDF described by Disney. It's possible to use the awesome open source tool BRDF Explorer, but I really wanted to try it in my own renderer.

You can download it here: PBRViewer.

In his talk, Brent Burley describes the BRDF adopted by Disney and used for every material in Wreck-It Ralph, except for hair. He also explains how they came up with this BRDF, the tools they used, etc. The course notes are full of information; it's really something anyone interested in physically based shading should read.

I already wrote a bit about the Disney BRDF in a previous post, but here is a reminder of the set of rules they chose to follow:

  1. Use intuitive rather than physical parameters.
  2. Use as few parameters as possible.
  3. Parameters should be zero to one, remapped over their plausible range.
  4. Parameters should be allowed to be pushed beyond their plausible range where it makes sense.
  5. All combinations of parameters should be as robust and plausible as possible.

The BRDF is defined by a base color, and 10 scalar parameters:

  • Subsurface
  • Metallic
  • Specular
  • Specular tint
  • Roughness
  • Anisotropic
  • Sheen
  • Sheen tint
  • Clearcoat
  • Clearcoat gloss

They are described in the slides, and in the viewer it's easy to see the impact of each parameter on the shading.

That's a lot more parameters than we can usually afford in games, and even if they are quite easy to understand it's still a bit overwhelming at first. But on the other hand it gives a lot of control.

Disney principled BRDF

The anisotropic parameter

The anisotropic parameter is really cool, and it’s something that could be great in a game (but it’s tricky to implement as it requires the tangent and binormal in the GBuffer). In my implementation it looks a bit strange because it should change the specular reflection. I need to work on that.

The sheen parameter is very subtle in BRDF Explorer, and I'm not sure it's working at all in my implementation. I'll need to check that, and I will upload a new version if I find a bug.

I added this BRDF to my renderer as a new workflow. Unlike the metallic and specular workflows, the "Disney" workflow uses a completely different code path, so the normal distribution, Fresnel and visibility terms can't be changed.
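As an example of what lives in that separate path, here is roughly the Disney diffuse term from the course notes (my paraphrase of the published formula; the exact code in the viewer may differ):

// Burley/Disney diffuse: a roughness-driven Fresnel-like retro-reflection term.
float DisneyDiffuse(float NdotL, float NdotV, float LdotH, float roughness)
{
    float fd90 = 0.5f + 2.0f * LdotH * LdotH * roughness;
    float lightScatter = 1.0f + (fd90 - 1.0f) * pow(1.0f - NdotL, 5.0f);
    float viewScatter = 1.0f + (fd90 - 1.0f) * pow(1.0f - NdotV, 5.0f);

    // The base color / PI and the NdotL factor are applied outside this function.
    return lightScatter * viewScatter;
}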

Disney principled BRDF

Subsurface parameter from 0 to 1

Textures are not supported yet for the Disney parameters; only base color, normal, roughness and metallic textures are supported for now. I'll add the other parameters later.

The updated version of PBRViewer can be found here.

As always, if you have any feedback, feel free to contact me !

marble2

Tweaking the Cook Torrance BRDF

I'm still learning about physically based shading using my PBRViewer, and this time I wanted to be able to experiment with the variations of the Cook-Torrance BRDF.

The Cook Torrance BRDF looks like this:

f(l, v) = D(h) F(l, h) G(l, v, h) / (4 (n·l) (n·v))

 

This equation is composed of three distinct terms:

  • F: The Fresnel term, represents how the reflectivity changes at grazing angles.
  • G: The geometry term, represents the probability that a microfacet is visible from both the light and view directions.
  • D: The normal distribution term, defines the distribution of the orientation of the microfacets.

For more information you can read the very interesting "Physics and Math of Shading" by Naty Hoffman. For each term there is more than one possibility, and you can choose the terms of your BRDF according to your needs and your budget. Even if GGX is becoming the new standard, I wanted to experiment with the other possibilities.

Brian Karis, while he was doing research on physically based shading for Unreal Engine 4, listed a lot of variations for the different terms. This wonderful blog post can be found here. I used this reference to implement each term in my viewer, so I can directly see the impact of each function on the lighting, the shader being recompiled automatically when a term is changed.
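As an example, here is one possible combination from that list (GGX distribution, Schlick Fresnel, Schlick-GGX geometry term); it's a sketch of one of the selectable variants, not necessarily the viewer's exact shader:

static const float PI = 3.14159265f;

// Cook-Torrance specular with D = GGX, F = Schlick, G = Schlick-GGX (Smith form).
float3 CookTorranceSpecular(float3 N, float3 V, float3 L, float roughness, float3 F0)
{
    float3 H = normalize(V + L);
    float NdotL = saturate(dot(N, L));
    float NdotV = saturate(dot(N, V));
    float NdotH = saturate(dot(N, H));
    float VdotH = saturate(dot(V, H));

    // D: GGX / Trowbridge-Reitz normal distribution.
    float a = roughness * roughness;
    float a2 = a * a;
    float dDenom = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    float D = a2 / (PI * dDenom * dDenom);

    // F: Schlick's Fresnel approximation.
    float3 F = F0 + (1.0f - F0) * pow(1.0f - VdotH, 5.0f);

    // G: Schlick-GGX visibility, applied to both the light and view directions.
    float k = a * 0.5f;
    float G = (NdotL / (NdotL * (1.0f - k) + k)) * (NdotV / (NdotV * (1.0f - k) + k));

    return D * F * G / max(4.0f * NdotL * NdotV, 0.0001f);
}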

I also added some other modifications, like being able to change the background color, light position and intensity, ambient light and reflection intensity, etc.

If you want to try it, you can download it here.

As always, if you see an error or if you have any feedback, please contact me; as I'm doing this to learn, I would be happy to hear from you.

I also took my first steps with Substance Designer, trying to make a marble texture.

Marble

 

diffspec

Physically Based Shading, Metallic and Specular workflows

Physically based shading is being adopted more and more, and even if the core mechanism is pretty much always the same, the workflow may differ from one engine to another.

For example let’s compare two common ones, often called Metallic and Specular.

The metallic workflow uses a color input, the base color, and two scalar parameters, roughness and metallic. A specular workflow has two color inputs, an albedo and a specular, and a scalar, the roughness.

In my PBRViewer I first implemented a metallic workflow, and I have now added a specular workflow. Here is a brief overview of the differences between the two.

First of all, it's important to understand the kinds of materials we want to represent in games. They can be divided into two groups, dielectrics (plastics, wood, concrete, etc.) and metals. Their properties are very well summarized in the wonderful chart made by Sébastien Lagarde for Dontnod. Here are some interesting facts:

  • Dielectric materials have a monochromatic specular, in a range going from 0.017 to 0.067
  • Metals have a black diffuse, except when they are not pure, in which case they can have a slight diffuse
  • Metals have a colored specular

 

Now let's get back to our workflows. The specular one is pretty straightforward: each map is used directly, and artists create their own specular and diffuse maps. You need to make sure that your artists have a chart and know the properties of each kind of material to get a coherent result. It gives a lot of control, but it's easy to break.

Specular workflow

Specular workflow. As you can see on the sliders, the diffuse is set to 0, and the color of the material is given by the specular tint.

On the data side, that's 7 channels (diffuse rgb + specular rgb + roughness) to store in your GBuffer (for deferred rendering). It's not awful, but it's pretty high, especially if you look closer. For dielectrics you only have a grayscale specular, which still takes three channels, and for metals the diffuse is mostly black. That's a lot of space wasted. The metallic workflow allows you to avoid that.

Disney introduced in their siggraph talk in 2012 their “principled” BRDF which is based on the following rules:

  1. Use intuitive rather than physical parameters.
  2. Use as few parameters as possible.
  3. Parameters should be zero to one, remapped over their plausible range.
  4. Parameters should be allowed to be pushed beyond their plausible range where it makes sense.
  5. All combinations of parameters should be as robust and plausible as possible.

The metallic workflow follows those rules by introducing a metallic parameter and removing the specular texture. The metallic parameter is really intuitive: 0 represents a dielectric material, 1 a metal. The values between 0 and 1 should not be used, except in some special cases, like a transition between two materials.

Workflow Metallic

The metallic slider is set to one, so the material is a metal

This parameter is in fact a blend between the dielectric and metallic models. For the dielectric model the diffuse is the base color, and the specular is a constant value we defined. For the metallic materials the diffuse is set to black, and the baseColor is used as specular.

// Lerp with metallic value to find the good diffuse and specular.
float3 realAlbedo = albedoColor - albedoColor * metallic;

// 0.03 default specular value for dielectric.
float3 realSpecularColor = lerp(0.03f, albedoColor, metallic);

As you can see, in the end it's transformed into the same inputs, but it's much simpler to use and less error prone. And it only uses 5 channels.

Using only these inputs you can't change the specular value of your dielectric materials, but you can add another input, in the range 0.017 – 0.067, remapped to 0 – 1, to control this value.
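A possible way to wire that extra input into the lerp above (the 0 – 1 "specular" parameter name is illustrative):

// Remap the 0-1 specular input to the dielectric reflectance range before
// blending with the base color for metals.
float dielectricSpecular = lerp(0.017f, 0.067f, specular);
float3 realSpecularColor = lerp(dielectricSpecular.xxx, albedoColor, metallic);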

Some effects can’t be obtained in a metallic workflow, but as they don’t really have a physical reality you may not want to use them anyway.

 


A material with a colored specular and a colored diffuse.

 

This is just an overview of two ways of feeding a physically based renderer, and I think that each engine/studio/project has its own specific workflow. As often, it's all about knowing what you want, what your artists want, and the possibilities offered by your engine (deferred/forward). The Disney paper is a very good place to find what kinds of inputs can be implemented, but as the Disney BRDF is the next feature I'll add to my viewer, I'll talk a bit more about it in another article.

 

Physically Based Rendering

Physically based rendering viewer

Physically based rendering is becoming the new standard for materials. It was already used a lot in AAA productions, and it’s now in Unreal Engine, Cry Engine and Unity.

As a graphics programmer I've read a lot of papers and seen a lot of presentations on the topic, but I never had the chance to try it. That's why I made a small tool, to be able to experiment from both a code and a data point of view.

You can try it here: PBRViewer .

The viewer is easy to use: you move the camera with the keyboard (ZQSDAE or WASDQE) and the mouse. SHIFT allows you to move faster.

I can use data from textures, or use the sliders to set my own values. It's very useful, because it allowed me to see the real impact of each parameter.

Textures must be placed in the folder Models/Materials/0, with the names "Albedo.tga", "Normal.tga", etc., and will be picked up by the viewer automatically. The current textures are the results of my tests with Substance Designer, i.e. adding nodes at random and exporting. Results will be better with real textures.

I didn't test it on a non-programmer PC, so it may require some redistributables, such as the Visual Studio or DirectX redistributables.

If you have any issues or find a bug, please contact me using the comments, Twitter (@oks2024) or mail: alexandre.pestana (at) supinfo.com. Also, as I said, I made this in order to discover and learn physically based shading, so if you see something strange or wrong I'd be happy to hear from you.

I used information I gathered on the internet, mainly:

  • Sébastien Lagarde’s blog:

http://seblagarde.wordpress.com/

Sébastien Lagarde shared a lot of information on how they implemented physically based rendering in Remember Me. It's a must read since it covers the subject from implementation to asset creation.

  • Brian Karis’s blog :

http://graphicrants.blogspot.ca/2013/08/specular-brdf-reference.html

While implementing PBR in UE4 he tried many options for the specular BRDF and shared them. It's very useful, and I plan to implement them all, to be able to switch from one to another and view the impact.

  • Stephen Hill’s blog:

http://blog.selfshadow.com/publications/s2012-shading-course/

http://blog.selfshadow.com/publications/s2013-shading-course/

The pages on the SIGGRAPH PBR courses are full of information; if you want to know more about PBR, go read them all!

I also used information from books such as "Real-Time Rendering" or, of course, "Physically Based Rendering".

The cube map comes from Emil Persson's (aka Humus) wonderful texture library: http://www.humus.name/index.php?page=Textures&&start=0

And now here are some screenshots of different presets:

Physically Based Rendering

Glossy dielectric

Physically Based Rendering

A semi glossy copper like metal

Physically Based Rendering

Yes, my textures are ugly.

Physically Based Rendering

Physically Based Rendering

Physically Based Rendering 7

Black rough plastic