
Voxel visualization using DrawIndexedInstancedIndirect

This weekend I worked with the DrawIndexedInstancedIndirect function, and since I didn't find much information about it, I wanted to share my results.

The next step for my voxel cone tracing project was to generate mipmaps for my voxel grid. I implemented a first draft, but I needed a better way of displaying my voxel grid, to make sure that all of them were correct.

I was using the depth map to compute the world position. Then I transformed it into voxel grid coordinates to find the color of the matching voxel.

DrawIndexedInstancedIndirect

The problem is that, as shown in the screenshot, this doesn't show the real voxelized geometry, and it's hard to get a clear idea of the imprecision introduced by the voxels.

That's why I started to work on a way to draw all the voxels, using the DrawIndexedInstancedIndirect function. Instanced drawing allows a single object (here, a simple cube) to be drawn several times, with instance-specific parameters applied to each copy.

The "indirect" functions are the same as their "non-indirect" counterparts, except that the arguments are stored in a buffer. This means the CPU doesn't have to know the arguments at all: the buffer can be filled by a compute shader and then used directly to issue another call.

I have a buffer containing all my voxels, and the first thing I want to know is how many of them are non-empty (that will be the number of instances to draw), and what their positions are within my voxel grid.

The first step is to create the buffer that will be used to feed the DrawIndexedInstancedIndirect function:

D3D11_BUFFER_DESC bufferDesc;
ZeroMemory(&bufferDesc, sizeof(bufferDesc));
bufferDesc.ByteWidth = sizeof(UINT) * 5; // the five arguments of DrawIndexedInstancedIndirect
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;
bufferDesc.StructureByteStride = sizeof(UINT);

hr = m_pd3dDevice->CreateBuffer(&bufferDesc, NULL, pBuffer);

The important flag here is D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS, which specifies that the buffer will be used as the argument buffer of an indirect draw call.
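The five UINTs mirror the parameters of a regular DrawIndexedInstanced call, in the same order; that layout is what the runtime reads from the buffer. As a sketch (the struct is only for illustration, D3D11 just sees five raw values):

// Layout of the 5 values read by DrawIndexedInstancedIndirect
// (same order as the DrawIndexedInstanced parameters).
struct DrawIndexedInstancedArgs // illustrative name only
{
    UINT IndexCountPerInstance; // 36 for the cube used below
    UINT InstanceCount;         // filled by the compute shader
    UINT StartIndexLocation;
    INT  BaseVertexLocation;
    UINT StartInstanceLocation;
};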

Next, I create the associated unordered access view, to be able to write into the buffer from a compute shader.

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory(&uavDesc, sizeof(uavDesc));
uavDesc.Format = DXGI_FORMAT_R32_UINT;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.Flags = 0;
uavDesc.Buffer.NumElements = 5;

hr = m_pd3dDevice->CreateUnorderedAccessView(*pBuffer, &uavDesc, pBufferUAV);


As I said earlier, I need to know the position of each voxel in the voxel grid, so that I can compute its world position and then look up its color. For that I use an append buffer, another useful type of buffer that behaves pretty much like a stack: when you Append() an element, it is written at the end of the buffer, and a hidden element counter is incremented.

Here is how I created this buffer and the associated SRV and UAV:

void Engine::CreateAppendBuffer(ID3D11Buffer** pBuffer, ID3D11UnorderedAccessView** pBufferUAV, ID3D11ShaderResourceView** pBufferSRV, const UINT pElementCount, const UINT pElementSize)
{
    HRESULT hr;

    D3D11_BUFFER_DESC bufferDesc;
    ZeroMemory(&bufferDesc, sizeof(bufferDesc));
    unsigned int stride = pElementSize;
    bufferDesc.ByteWidth = stride * pElementCount;
    bufferDesc.Usage = D3D11_USAGE_DEFAULT;
    bufferDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bufferDesc.StructureByteStride = stride;

    hr = m_pd3dDevice->CreateBuffer(&bufferDesc, NULL, pBuffer);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer.", L"Ok", MB_OK);
        return;
    }

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
    ZeroMemory(&uavDesc, sizeof(uavDesc));
    uavDesc.Format = DXGI_FORMAT_UNKNOWN;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.FirstElement = 0;
    uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND; // exposes the hidden counter used by Append()
    uavDesc.Buffer.NumElements = pElementCount;

    hr = m_pd3dDevice->CreateUnorderedAccessView(*pBuffer, &uavDesc, pBufferUAV);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer unordered access view.", L"Ok", MB_OK);
        return;
    }

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    ZeroMemory(&srvDesc, sizeof(srvDesc));
    srvDesc.Format = DXGI_FORMAT_UNKNOWN;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.FirstElement = 0;
    srvDesc.Buffer.NumElements = pElementCount;

    hr = m_pd3dDevice->CreateShaderResourceView(*pBuffer, &srvDesc, pBufferSRV);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer shader resource view.", L"Ok", MB_OK);
        return;
    }
}
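One detail worth noting: the hidden counter of an append buffer is not reset automatically. It is set through the pUAVInitialCounts parameter when the UAV is bound; passing (UINT)-1 keeps the current value. A minimal sketch, using the UAV created above:

// Bind the append buffer UAV to slot u0, resetting its hidden counter to 0.
// Passing (UINT)-1 instead would leave the counter untouched.
UINT initialCount = 0;
engine->GetImmediateContext()->CSSetUnorderedAccessViews(0, 1, pBufferUAV, &initialCount);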

Now, the compute shader. It's actually pretty simple. First, a single thread initializes my argument buffer to 0, except for the first argument, which holds the number of indices in the index buffer that will be bound.

Then, each time I find a non-empty voxel, I increment the number of instances to draw using an InterlockedAdd, and I append its position to the perInstancePosition buffer.


AppendStructuredBuffer<uint3> perInstancePosition : register(u0);

RWStructuredBuffer<Voxel> voxelGrid : register(u1);

// The indirect argument buffer created above, seen as five uints.
RWBuffer<uint> testBuffer : register(u2);

[numthreads(VOXEL_CLEAN_THREADS, VOXEL_CLEAN_THREADS, VOXEL_CLEAN_THREADS)]
void main( uint3 DTid : SV_DispatchThreadID )
{
    if (DTid.x + DTid.y + DTid.z == 0)
    {
        testBuffer[0] = 36; // IndexCountPerInstance: the 36 indices of the cube
        testBuffer[1] = 0;  // InstanceCount, incremented below
        testBuffer[2] = 0;  // StartIndexLocation
        testBuffer[3] = 0;  // BaseVertexLocation
        testBuffer[4] = 0;  // StartInstanceLocation
    }
    // Note: this only synchronizes within the current thread group; with several
    // groups it is safer to clear the arguments in a separate pass or from the CPU.
    GroupMemoryBarrier();

    uint3 voxelPos = DTid.xyz;
    int gridIndex = GetGridIndex(voxelPos);

    if (voxelGrid[gridIndex].m_Occlusion == 1)
    {
        uint drawIndex;
        InterlockedAdd(testBuffer[1], 1, drawIndex);

        perInstancePosition.Append(voxelPos);
    }
}

At the end of the execution of this compute shader, both buffers are filled with the information needed to draw all the voxels.
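For reference, here is roughly how this shader is dispatched, one thread per voxel. The pointer names are hypothetical, and NBCELLS is assumed to be a multiple of VOXEL_CLEAN_THREADS:

ID3D11DeviceContext* context = engine->GetImmediateContext();

// u0: append buffer (counter reset to 0), u1: voxel grid, u2: indirect args.
ID3D11UnorderedAccessView* uavs[3] = { pPerInstancePositionUAV, pVoxelGridUAV, pArgBufferUAV };
UINT initialCounts[3] = { 0, (UINT)-1, (UINT)-1 }; // -1 keeps the existing counter

context->CSSetShader(pFillArgsShader, NULL, 0);
context->CSSetUnorderedAccessViews(0, 3, uavs, initialCounts);

const UINT groups = NBCELLS / VOXEL_CLEAN_THREADS;
context->Dispatch(groups, groups, groups);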

I use a really simple cube to represent the geometry of a voxel:


// Create the voxel vertices.
VertexPosition tempVertices[] =
{
    { XMFLOAT3( -0.5f,  0.5f, -0.5f )},
    { XMFLOAT3(  0.5f,  0.5f, -0.5f )},
    { XMFLOAT3(  0.5f,  0.5f,  0.5f )},
    { XMFLOAT3( -0.5f,  0.5f,  0.5f )},

    { XMFLOAT3( -0.5f, -0.5f, -0.5f )},
    { XMFLOAT3(  0.5f, -0.5f, -0.5f )},
    { XMFLOAT3(  0.5f, -0.5f,  0.5f )},
    { XMFLOAT3( -0.5f, -0.5f,  0.5f )},
};

// Create index buffer
WORD indicesTemp[] =
{
    3,1,0,
    2,1,3,

    6,4,5,
    7,4,6,

    3,4,7,
    0,4,3,

    1,6,5,
    2,6,1,

    0,5,4,
    1,5,0,

    2,7,6,
    3,7,2
};

I can now bind the index and vertex buffers, the perInstancePosition and voxelGrid buffers, and start to write the shaders. The goal is simple: each item in perInstancePosition is a uint3 representing the position of a non-empty voxel in the voxel grid. I just need to move the vertices to the right world position, scale my unit cube to match the size of a voxel, and find the right color to pass to the pixel shader.

Here is my vertex shader:


#include "VoxelizerShaderCommon.hlsl"

StructuredBuffer<uint3> voxelParameters : register(t0);
StructuredBuffer<Voxel> voxelGrid : register(t1);

cbuffer ConstantBuffer: register(b0)
{
    matrix g_ViewMatrix;
    matrix g_ProjMatrix;
    float4 g_SnappedGridPosition;
    float g_CellSize;
}

struct VoxelInput
{
    float3 Position : POSITION0;
    uint InstanceId : SV_InstanceID;
};

struct VertexOutput
{
    float4 Position: SV_POSITION;
    float3 Color: COLOR0;
};

VertexOutput main( VoxelInput input)
{
    VertexOutput output;

    uint3 voxelGridPos = voxelParameters[input.InstanceId];

    int halfCells = NBCELLS/2;

    float3 offset = voxelGridPos - float3(halfCells, halfCells, halfCells);
    offset *= g_CellSize;
    offset += g_SnappedGridPosition.xyz;

    float4 voxelWorldPos = float4(input.Position*g_CellSize + offset, 1.0f);

    float4 viewPosition = mul(voxelWorldPos, g_ViewMatrix);
    output.Position = mul(viewPosition, g_ProjMatrix);

    uint index = GetGridIndex(voxelGridPos);
    output.Color = voxelGrid[index].Color;

    return output;
}

An interesting thing here is the instance ID, automatically generated by the instanced draw call, which identifies each instance and lets me fetch the right grid position in the buffer for each cube.

The pixel shader is really straightforward:


struct VertexOutput
{
    float4 Position: SV_POSITION;
    float3 Color: COLOR0;
};

float4 main(VertexOutput input) : SV_TARGET
{
    return float4(input.Color, 1.0f);
}

And finally I call the DrawIndexedInstancedIndirect function:


engine->GetImmediateContext()->DrawIndexedInstancedIndirect(argBuffer, 0);
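In context, the binding around that call looks roughly like this in my renderer. The pointer names are hypothetical; the formats match the buffers created above (WORD indices, VertexPosition vertices):

UINT stride = sizeof(VertexPosition);
UINT offset = 0;
ID3D11DeviceContext* context = engine->GetImmediateContext();

context->IASetVertexBuffers(0, 1, &pCubeVertexBuffer, &stride, &offset);
context->IASetIndexBuffer(pCubeIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// The two structured buffers read by the vertex shader (t0 and t1).
ID3D11ShaderResourceView* srvs[2] = { pPerInstancePositionSRV, pVoxelGridSRV };
context->VSSetShaderResources(0, 2, srvs);

context->VSSetShader(pVoxelVertexShader, NULL, 0);
context->PSSetShader(pVoxelPixelShader, NULL, 0);

// All the draw arguments come from the buffer filled by the compute shader.
context->DrawIndexedInstancedIndirect(argBuffer, 0);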


This is just one example, but the indirect draw functions make it possible to do a lot of things entirely on the GPU, without having to synchronize with the CPU. It's a powerful tool, and I really want to experiment more with it.

And to conclude, some screenshots for voxel grids of 32x32x32 and 256x256x256:

DrawIndexedInstancedIndirect

DrawIndexedInstancedIndirect

Voxelization using the GPU hardware rasterizer

Last week I started to add a new feature to my tiled deferred renderer: voxelization using the GPU hardware rasterizer. A lot of recent techniques use a voxel grid (global illumination, volumetric effects, etc.) and I really wanted to experiment with some of them, especially global illumination.

One paper in particular had caught my attention: voxel cone tracing.

The first step of this technique is voxelization using the hardware rasterizer. There are plenty of resources on this topic:


I'm sure there is more on this topic, but those were the main sources of information I used for the first step of my implementation.

This technique is based on a simple observation. We want to transform vector data into a (3D) grid. That's exactly the rasterizer's job, except that it works on a 2D grid. We just need a few modifications so that it can also fill a 3D grid.

You can find all of the details in the previous links, but here is a quick overview of the different steps:

  • In the CPU code, compute the voxel grid position, extents, etc., and three view projection matrices, one for each axis (see the sketch after this list). These matrices are used to project the triangles into voxel grid space.
  • Draw the whole scene into a small render target. The depth test is disabled so that every triangle gets voxelized, and color writes are disabled as well: the results are output into a structured buffer bound to the pixel shader, and nothing is written to the texture. Some minor modifications make it possible to store static geometry in the grid once and update only dynamic objects, reducing the runtime cost.
  • In the geometry shader, for the current triangle, the first step is to find the axis (X, Y or Z) from which the triangle is the most visible. This ensures that most of it is voxelized.
  • The triangle is projected along the chosen axis, using one of the matrices computed in the first step.
  • We want a conservative voxelization, but the rasterizer only considers the parts of a triangle that cover the center of a pixel. To make sure that every pixel partially covered by a triangle is taken into account, we "bloat" the triangle by moving its vertices outward.
  • Each pixel then arrives in the pixel shader with all the information needed to write into the voxel grid.
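Here is a minimal sketch of the first step, building the three per-axis view projection matrices with DirectXMath. It assumes a cubic grid; gridCenter and gridHalfSize are hypothetical names for the grid's world-space center and half-extent:

#include <DirectXMath.h>
using namespace DirectX;

// Build one orthographic view-projection matrix per axis, each one seeing
// the whole cubic voxel grid from one of its faces.
void BuildVoxelViewProj(const XMFLOAT3& gridCenter, float gridHalfSize, XMMATRIX outViewProj[3])
{
    XMVECTOR center = XMLoadFloat3(&gridCenter);
    float size = 2.0f * gridHalfSize;

    // The same orthographic projection works for all three axes.
    XMMATRIX proj = XMMatrixOrthographicLH(size, size, 0.0f, size);

    // X axis: camera on the +X face of the grid, looking toward its center.
    XMMATRIX viewX = XMMatrixLookAtLH(center + XMVectorSet(gridHalfSize, 0, 0, 0), center, XMVectorSet(0, 1, 0, 0));
    // Y axis: looking down -Y, so the up vector must be another axis.
    XMMATRIX viewY = XMMatrixLookAtLH(center + XMVectorSet(0, gridHalfSize, 0, 0), center, XMVectorSet(0, 0, 1, 0));
    // Z axis.
    XMMATRIX viewZ = XMMatrixLookAtLH(center + XMVectorSet(0, 0, gridHalfSize, 0), center, XMVectorSet(0, 1, 0, 0));

    outViewProj[0] = viewX * proj;
    outViewProj[1] = viewY * proj;
    outViewProj[2] = viewZ * proj;
}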

For now I chose to use a standard voxel grid instead of a sparse octree. I'd like to use my voxel grid for some volumetric effects as well, so there may not be that much empty space, which reduces the appeal of a sparse octree. But maybe I'll try to implement one later.

Here are some screenshots at different grid resolutions, running on my laptop's GeForce 630M:

32x32x32:

Voxelization using the GPU hardware rasterizer 32

128x128x128:

Voxelization using the GPU hardware rasterizer 128

256x256x256:

Voxelization using the GPU hardware rasterizer 256

And here are some screenshots from an NVIDIA GTX 770:

32x32x32:

Voxelization using the GPU hardware rasterizer

128x128x128:

Voxelization using the GPU hardware rasterizer

256x256x256:

Voxelization using the GPU hardware rasterizer 256_GTX770

I even tried a 512x512x512 grid, by removing all the voxel information I don't use yet and keeping only the color value:

Voxelization using the GPU hardware rasterizer 512_GTX770

That's a huge number of voxels, and I'm not sure such precision is needed, but 11.70 ms is pretty interesting, considering that for now the whole scene is voxelized every frame, without LOD or any optimization. The 128 grid is really fast, and I wonder whether that's enough precision for global illumination and ambient occlusion.

The next steps are "mipmapping" of the voxel grid, and then I'll try ambient occlusion using voxel cone tracing.


And after that I will implement a progressive screen space voxelizer (GPU Pro 4, chapter 6) to compare the results and performance.


Visual Studio has triggered a breakpoint

Yesterday I wasted a lot of time tracking down a bug which turned out to be pretty instructive.

I am working on a tiled deferred renderer, and after adding a bunch of features, and before starting new ones, I spent some time cleaning up the development mess.

The DirectX debug device was complaining about some objects I had forgotten to release, so I made sure that my destructors were all doing their job. But then, each time I closed the program, this message showed up:

 Visual Studio has triggered a Breakpoint

Well, it's not really self-explanatory. I had set no breakpoint; Visual Studio triggered it by itself. Clicking Break led to a SAFE_RELEASE() in the destructor of one of my singletons. When I tried Continue, the program terminated without any other errors or messages.

I first tried commenting out the supposedly faulty line. No more errors, but some DirectX objects were still alive. I thought that maybe the device had to be released last. I tried that, but the message came back, and breaking now stopped at SAFE_RELEASE(m_pd3dDevice). I eventually understood that it would always break on the last D3D object released, and that if I left an object alive the message would not pop up (but then an object is never destroyed, so that's not a solution).

The bug was obviously memory related, so I tried some analysis tools. Each one pointed me in a different direction, far from the real cause. So I switched to a less subtle debugging method: comment everything out!

Since it was a memory problem, I kept only the creation and destruction of my objects, removing the calls to the update and render functions. And surprisingly it worked: no error, everything was destroyed. I then re-enabled the update functions, still fine, then the render functions, and the breakpoint was back.

I continued to isolate the problem inside the rendering functions, until I had only the following code left:


// Set render targets.
ID3D11RenderTargetView* RTViews[2] = { ppAlbedo, ppNormal };
engine->GetImmediateContext()->OMSetRenderTargets(2, RTViews, 0);

// Clear render targets.
quadRenderer->SetPixelShader(m_pClearPixelShader);
quadRenderer->Render();


I started to think that I had messed up my quad rendering or render targets, but it turned out that the culprit was the apparently innocent SetPixelShader function. It's a simple (yet horrible) setter:

void QuadRenderer::SetPixelShader(ID3D11PixelShader* ppPixelShader)
{
    m_pQuadPixelShader = ppPixelShader;
}


This is wrong (I'll come back to that later), but not harmful per se. The true horror lies in the QuadRenderer destructor:


QuadRenderer::~QuadRenderer(void)
{
    SAFE_RELEASE(m_pQuadVertexLayout);
    SAFE_RELEASE(m_pQuadVertexShader);
    SAFE_RELEASE(m_pQuadVertexBuffer);
    SAFE_RELEASE(m_pQuadIndexBuffer);
    SAFE_RELEASE(m_pQuadPixelShader);
}


The pixel shader is released, but which pixel shader? It wasn't created by the QuadRenderer, so its owner has probably already released it. SAFE_RELEASE checks that the pointer is not null, but in this case the pointer still points to something, something that has already been released, leading to the land of undefined behavior, or worse.
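For reference, SAFE_RELEASE is the usual DXUT-style macro. Note that it nulls the one pointer it is given, but knows nothing about other pointers to the same object:

// Typical DXUT-style definition: releases the COM object and nulls *this* pointer.
// Any other pointer to the same object is left dangling.
#define SAFE_RELEASE(p) { if (p) { (p)->Release(); (p) = NULL; } }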

I learned several lessons thanks to that bug.

–        Destructors are not the funniest part of the code, but they need to be written carefully. You can't just look at the member variables and delete them all; it can be trickier than that.

–        Despite its name, the SAFE_RELEASE macro is not that safe (obvious, but easy to forget).

–        Poor design can lead to annoying bugs. When I implemented the QuadRenderer class, I was thinking: "OK, I'll need a vertex and a pixel shader, but I want to be able to set whichever pixel shader I want." This is wrong. I don't want my QuadRenderer to have a pixel shader, I want it to render with a particular pixel shader. There is no need to store it (see the sketch below). This is something to be careful about, so that your destructors can stay trivial.
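A minimal sketch of that design fix: pass the pixel shader to Render() instead of storing it, so the QuadRenderer never owns it. The signature and the accessor are my own, not the original code:

// Sketch: the pixel shader is borrowed for this call only, never stored,
// so ~QuadRenderer() no longer has anything to release for it.
void QuadRenderer::Render(ID3D11PixelShader* pPixelShader)
{
    ID3D11DeviceContext* context = Engine::Instance()->GetImmediateContext(); // hypothetical accessor
    context->PSSetShader(pPixelShader, NULL, 0);
    // ... bind the quad vertex/index buffers and draw as before ...
}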

Well, now everything is clean, and I've learned much more than I would have thought. I can start adding new features!

“Physical” midi user interface for lazy programmers

It's been a while since my last post, but I've been busy: new job, new continent, etc.

This summer I was working at Ubisoft Paris, and for one of my tasks I needed to create a sample program to implement an effect. It takes some time to rewrite the tools you usually have in an engine, like a shader builder, DX entity creation, a camera class, etc., but what I really found time consuming and annoying was everything UI related. There were a lot of settings, and since the goal was to explore all of the possibilities, almost all of them had to be exposed to the user. It's such a pain: creating your sliders/buttons/whatever, setting the position, width, height, initializing, drawing and updating, and so on. I used DXUT's UI components; I'm sure there are better tools out there, but I have to admit I don't like UI programming, so I wanted to find a better way for my future projects.

So one day I grabbed my MIDI keyboard, wondering how hard it would be to get its inputs. It turns out it's super easy: using the RtMidi library, I was able to read my inputs in no time. The following week I bought a small MIDI controller, the Korg NanoKontrol2.
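To give an idea of how simple RtMidi makes this, here is a minimal polling sketch (my real manager does the equivalent inside its Update() function):

#include "RtMidi.h"
#include <iostream>
#include <vector>

int main()
{
    RtMidiIn midiIn;
    if (midiIn.getPortCount() == 0)
    {
        std::cout << "No MIDI input port available.\n";
        return 0;
    }
    midiIn.openPort(0);
    midiIn.ignoreTypes(true, true, true); // skip sysex, timing and active-sensing messages

    std::vector<unsigned char> message;
    for (;;)
    {
        midiIn.getMessage(&message); // non-blocking: leaves the vector empty if nothing is pending
        if (message.size() == 3)
        {
            // Control change: message[1] is the controller id, message[2] the value (0-127).
            std::cout << "controller " << (int)message[1]
                      << " value " << (int)message[2] << "\n";
        }
    }
}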

Midi user interface


Look at that, it’s a physical “G”UI!

This controller has everything I could want: sliders, knobs, and buttons with light feedback. So I started to write a small MIDI input manager for my current project, trying to make it easy to use. There is just a simple Update() function that receives/sends all the MIDI messages, and all I have to do is call MidiInputManager::Instance()->GetMidiValue(NKI_F1) to get the current value of the first fader. And that's it!

MIDI values are in the range 0-127, so I had to remap them, which is annoying and error prone. So I added an initialization function, MidiInputManager::Instance()->SetMinAndMaxValues(NKI_F1, 50, 500), and the results of GetMidiValue are then already in the correct range.
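Under the hood this is just a linear remap of the raw 0-127 value. A minimal sketch (the member names are hypothetical):

// Remap a raw MIDI value (0-127) to the [min, max] range registered for this control.
float MidiInputManager::GetMidiValue(MidiControlId id) const
{
    const ControlRange& range = m_Ranges[id]; // hypothetical per-control min/max storage
    float t = m_RawValues[id] / 127.0f;       // normalize to [0, 1]
    return range.Min + t * (range.Max - range.Min);
}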


Of course there are some drawbacks. The faders and knobs are not motorized, meaning that if you have saved a default value you can't see it on the fader, and you will lose it as soon as you move the fader.

You only have 128 different values, so you really need to set the correct range to get good precision.

You only have 8 knobs and 8 faders. It's not really a problem, since you can define multiple configurations using the buttons. For example, each fader has 3 buttons: (S)olo, (M)ute and (R)ecord. I've linked them as a group, meaning that only one of the three can be active at a time. The associated fader can then control a red channel while S is on, blue while M is on, and so on. I'm also thinking of using multiple global configurations if I really need more buttons: using the "track" buttons, I would be able to press next, switch to "configuration 2", and have a different mapping.

I've also noticed some lag from time to time; I'll have to take a look at that.


I've uploaded a first version on GitHub, and I'll update it as I add features. For now it only supports the NanoKontrol2, but it's easy to port to other controllers.

I’ve made a quick video of my tiled deferred renderer to show how it can be used:


I'm sure I'm not the only one doing this, but I hope it will help or inspire someone!

Udacity Introduction to Parallel Programming CS344 VS 2012 Solution

I'm currently following the great Udacity lessons about parallel programming and CUDA.

I made a Visual Studio 2012 solution to run some quick tests for the assignments, and I thought it could be useful to someone, so I put it on GitHub. I'll try to update it for each lesson.

You can find it there.

Before being able to launch it, you may need to follow a few steps.

First, you need to download OpenCV and unzip it to C:/Program Files/opencv, or change the directory accordingly in the "VC++ Directories" section of the project properties. Of course, you also need the CUDA SDK.

You may also need to change the Build Customizations in the Visual Studio solution. Right click on the project and select Build Customizations. If you don't see a CUDA configuration file, click on "Find existing" and add the CUDA target files located at "C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\BuildCustomizations".

Right click on the .cu file, and make sure that its Item Type is set to "CUDA C/C++".

Hope it helps, and let me know if you run into any trouble.

Particles – Collision and Flow Maps

A new post about particles, introducing two features: collisions and the use of flow maps.

First, the collisions, with a short video:


These particles are what we call stateful, which means they keep their previous state and can use it to react to their environment.
This lets me apply some rudimentary physics to them (gravity and attraction, for example), and react to collisions with the edges of the screen. But it also makes it possible to react to a potentially dynamic environment. I use a collision texture: for each particle, I check whether the texture contains any information at the particle's new position. If it does, there is a collision and the particle reacts accordingly.
Here I write text into the collision texture, but it could be anything, and it can even be dynamic.

The next video shows the use of a flow map to steer all the particles simultaneously:


I first used Flow Map Painter to create the flow map, which is really just a set of vectors.


Flow Map Painter


Once this map is created, I use it in the physics pass to influence the direction of the particles. This makes it easy to coordinate the movement of a million particles.