Tag Archives: Graphics

Integrating RenderDoc

A few days ago a new version of RenderDoc was released, and while reading the changelist I discovered that Temaran has made a really cool plugin which integrates RenderDoc directly into Unreal Engine 4.

This is extremely useful. More than once I launched a debugging session from Visual Studio, found a weird bug, and then had to launch the application again from RenderDoc and try to reproduce it.

So I looked at the source code Temaran uploaded on GitHub, removed the UE4-related code, and kept only a single class able to load RenderDoc and trigger a capture directly from my engine.

You can find the code here: https://github.com/oks2024/RenderDoc-Manager

In the end it's just two header files and one cpp file. You just have to provide some paths, like where you want to store the captures and where your RenderDoc folder is. In my case I use the portable version and store it in Perforce. Keep in mind that your build target must match the RenderDoc binaries: you can't mix x86 and x64.

You can either bind a key in RenderDoc that will trigger a capture, or use the StartFrameCapture()/EndFrameCapture() functions. I use the latter because it allows me to skip the update part of my engine and capture only the rendering functions.
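To give an idea, here is roughly how I hook it into my frame loop. The constructor arguments follow the description above (window handle, path to the RenderDoc DLL, capture folder), but take the names and signature below as illustrative; the real ones are in the repository.

// Illustrative sketch; see the RenderDoc-Manager repository for the real API.
#include "RenderDocManager.h"

void Engine::InitRenderDoc()
{
    // Point the manager at the RenderDoc DLL (portable version stored in Perforce)
    // and at the folder where the captures should be written.
    m_renderDocManager = new RenderDocManager(m_hWnd,
                                              L"../External/RenderDoc/renderdoc.dll",
                                              L"../Captures/");
}

void Engine::Frame()
{
    Update(); // engine/game update, not captured

    if (m_captureNextFrame)
        m_renderDocManager->StartFrameCapture();

    Render(); // only the rendering work ends up in the capture

    if (m_captureNextFrame)
    {
        m_renderDocManager->EndFrameCapture();
        m_captureNextFrame = false;
    }
}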

It works great in my small engine; I've been using it for a couple of days and haven't noticed any issue. I know it can slow down resource creation, so in a bigger engine it may not be something you want to keep enabled all the time.

As you can see it's a very basic code skeleton, and most of the code comes from Temaran's plugin, but I found it really useful and thought it was worth sharing.

I think I will add functions as I need them (at least one to set capture options without having to recompile), and I will try to keep the GitHub repository up to date. And if you have any suggestions, it's on GitHub, so feel free to leave a comment or add your modifications :).

Apart from debugging, it may also be useful for creating automated tests. With the appropriate script you can load a level, move the camera through several positions, and take captures. After that it should be possible to get the images (and maybe even timings) from the captures and compare them, to make sure your last submit did not break the rendering.

Baldur Karlsson is doing amazing work on RenderDoc, with regular updates and new features. It was already one of my favorite tools, and that's not going to change!

Physically Based Shading, Metallic and Specular workflows

Physically based shading is being adopted more and more, and even if the core mechanism is pretty much always the same, the workflow may differ from one engine to another.

For example, let's compare two common ones, often called Metallic and Specular.

The metallic workflow uses one color input, the base color, and two scalar parameters, roughness and metallic. The specular workflow uses two color inputs, an albedo and a specular color, and one scalar, the roughness.

In my PBRViewer, I first implemented the metallic workflow, and I have now added a specular workflow. Here is a brief overview of the differences between the two.

First of all, it's important to understand the kinds of materials we want to represent in games. They can be divided into two groups: dielectrics (plastic, wood, concrete, etc.) and metals. Their properties are very well summarized in the wonderful chart made by Sébastien Lagarde for Dontnod. Here are some interesting facts:

  • Dielectric materials have a monochromatic specular, in a range going from 0.017 to 0.067
  • Metals have a black diffuse, except when they are not pure, in which case they can have a little diffuse
  • Metals have a colored specular


Now let's get back to our workflows. The specular one is pretty straightforward: each map is used directly, and artists author their own specular and diffuse maps. You need to make sure your artists have a chart and know the properties of each kind of material in order to get coherent results. It gives a lot of control, but it's easy to break.

Specular workflow. As you can see on the sliders, the diffuse is set to 0, and the color of the material is given by the specular tint.

On the data side, it's 7 channels (diffuse RGB + specular RGB + roughness) to store in your GBuffer (for deferred rendering). It's not awful, but it's pretty high, especially if you look closer: for dielectrics you only have a greyscale specular, which still takes three channels, and for metals the diffuse is mostly black. That's a lot of space wasted. The metallic workflow allows you to avoid that.

In their SIGGRAPH 2012 talk, Disney introduced their "principled" BRDF, which is based on the following rules:

  1. Use intuitive rather than physical parameters.
  2. Use as few parameters as possible.
  3. Parameters should be zero to one, remapped over their plausible range.
  4. Parameters should be allowed to be pushed beyond their plausible range where it makes sense.
  5. All combinations of parameters should be as robust and plausible as possible.

The metallic workflow follows those rules by introducing a metallic parameter and removing the specular texture. The metallic parameter is really intuitive: 0 represents a dielectric material, 1 a metal. Values between 0 and 1 should not be used, except in some special cases, like a transition between two materials.

Metallic workflow. The metallic slider is set to one, so the material is a metal.

This parameter is in fact a blend between the dielectric and metallic models. For the dielectric model the diffuse is the base color, and the specular is a constant value we define. For metallic materials the diffuse is set to black, and the base color is used as the specular.

// Lerp with the metallic value to find the final diffuse and specular.
float3 realAlbedo = albedoColor - albedoColor * metallic;

// 0.03: default specular value for dielectrics.
float3 realSpecularColor = lerp(0.03f, albedoColor, metallic);

As you can see, in the end it's transformed into the same inputs, but it's much simpler to use and less error prone. And it only uses 5 channels.

Using only these inputs you can't change the specular value of your dielectric materials, but you can add another parameter, in the range 0.017 – 0.063, remapped to 0 – 1, to control this value.
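That extra input is just a remap, something like this (plain C++, the function name is mine):

// Hypothetical helper: remaps a [0, 1] "specular" slider onto the plausible
// dielectric F0 range mentioned above (0.017 - 0.063).
float DielectricSpecularFromSlider(float slider01)
{
    const float minF0 = 0.017f;
    const float maxF0 = 0.063f;
    return minF0 + slider01 * (maxF0 - minF0);
}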

Some effects can't be obtained with a metallic workflow, but since they don't really have a physical basis you may not want to use them anyway.


A material with a colored specular and a colored diffuse.

 

This is just an overview of two ways of feeding a physically based renderer, and I think that each engine/studio/project has its own specific workflow. As often, it's all about knowing what you want, what your artists want, and the possibilities offered by your engine (deferred/forward). The Disney paper is a very good place to find out what kinds of inputs can be implemented, but since the Disney BRDF is the next feature I'll add to my viewer, I'll talk a bit more about it in another article.


Physically Based Rendering

Physically based rendering viewer

Physically based rendering is becoming the new standard for materials. It was already used a lot in AAA productions, and it's now in Unreal Engine, CryEngine and Unity.

As a graphics programmer I've read a lot of papers and seen lots of presentations on the topic, but I had never had the chance to try it. That's why I made a small tool, to be able to experiment from both a code and a data point of view.

You can try it here: PBRViewer.

The viewer is easy to use: you move the camera with the keyboard (ZQSDAE or WASDQE) and the mouse. SHIFT allows you to move faster.

I can use data from textures, or use the sliders to set my own values. It's very useful, because it allows me to see the real impact of each parameter.

Textures must be placed in the folder Models/Materials/0, named "Albedo.tga", "Normal.tga", etc., and they will be reloaded in the viewer automatically. The current textures are the result of my tests with Substance Designer, i.e. adding nodes at random and exporting. Results will be better with real textures.

I didn't test it on a non-programmer PC, so it may require some redistributables, such as the Visual Studio or DirectX redistributables.

If you have any issue or find a bug please contact me, via the comments, Twitter (@oks2024) or mail: alexandre.pestana (at) supinfo.com. Also, as I said, I made this in order to discover and learn physically based shading, so if you see something strange or wrong I'd be happy to hear from you.

I used information I gathered on the internet, mainly:

  • Sébastien Lagarde’s blog:

http://seblagarde.wordpress.com/

Sébastien Lagarde shared a lot of information on how they implemented physically based rendering in Remember Me. It's a must-read since it covers the subject from implementation to asset creation.

  • Brian Karis's blog:

http://graphicrants.blogspot.ca/2013/08/specular-brdf-reference.html

While implementing PBR in UE4 he tried many options for the specular BRDF and shared them. It's very useful, and I plan to implement them all and be able to switch from one to another to see the impact.

  • Stephen Hill’s blog:

http://blog.selfshadow.com/publications/s2012-shading-course/

http://blog.selfshadow.com/publications/s2013-shading-course/

The pages for the SIGGRAPH PBR courses are full of information; if you want to know more about PBR, go read them all!

I also used information from books such as "Real-Time Rendering" and, of course, "Physically Based Rendering".

The cube map comes from Emil Persson's (aka Humus) wonderful texture library: http://www.humus.name/index.php?page=Textures&&start=0

And now, here are some screenshots of different presets:

  • Glossy dielectric
  • A semi-glossy, copper-like metal
  • Yes, my textures are ugly.
  • Black rough plastic

 

Voxel visualization using DrawIndexedInstancedIndirect

This weekend I worked with the DrawIndexedInstancedIndirect function, and since I didn't find much information about it I wanted to share my results.

The next step for my voxel cone tracing project was to generate mipmaps for my voxel grid. I implemented a first draft, but I needed a better way of displaying my voxel grid, to make sure all of the voxels were correct.

I was using the depth map to compute the world position. Then I transformed it into voxel grid coordinates to find the color of the matching voxel.

DrawIndexedInstancedIndirect

The problem is that, as shown in the screenshot, it doesn't let you see the real voxelized geometry, and it's hard to get a clear idea of the imprecision introduced by the voxels.

That's why I started to work on a way to draw all the voxels, using the DrawIndexedInstancedIndirect function. Instanced drawing lets you draw a single object several times (here, a simple cube) and apply instance-specific parameters to each copy.

The "indirect" functions are the same as the "non-indirect" ones, except that the arguments are read from a GPU buffer. This means the CPU doesn't have to know the arguments: the buffer can be filled by a compute shader and used directly to issue the draw call.

I have a buffer containing all my voxels, and the first things I want to know are how many of them are not empty (that will be the number of instances to draw) and their positions within the voxel grid.

The first step is to create the buffer that will be used to feed the DrawIndexedInstancedIndirect function:


D3D11_BUFFER_DESC bufferDesc;

ZeroMemory(&bufferDesc, sizeof(bufferDesc));
bufferDesc.ByteWidth = sizeof(UINT) * 5;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;
bufferDesc.StructureByteStride = sizeof(float);

hr = m_pd3dDevice->CreateBuffer(&bufferDesc, NULL, pBuffer);

The important flag here is D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS, to specify that the buffer will be used as a parameter for a draw indirect call.
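For reference, the five UINTs follow the argument order of DrawIndexedInstanced. A small struct makes the mapping explicit (the struct itself is just for illustration, D3D11 only cares about the raw layout):

// The five values match the arguments of DrawIndexedInstanced, in order.
struct DrawIndexedInstancedArgs
{
    UINT IndexCountPerInstance; // 36 for the cube used below
    UINT InstanceCount;         // number of non-empty voxels, written by the compute shader
    UINT StartIndexLocation;    // 0
    INT  BaseVertexLocation;    // 0
    UINT StartInstanceLocation; // 0
};
static_assert(sizeof(DrawIndexedInstancedArgs) == sizeof(UINT) * 5, "must match the ByteWidth above");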

Next, the associated unordered access view to be able to write into it from a compute shader.


D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory(&uavDesc, sizeof(uavDesc));
uavDesc.Format = DXGI_FORMAT_R32_UINT;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.Flags = 0;
uavDesc.Buffer.NumElements = 5;

hr = m_pd3dDevice->CreateUnorderedAccessView(*pBuffer, &uavDesc, pBufferUAV);

 

As I said earlier, I need to know the position of each voxel in the voxel grid, so I can find its position in the world and look up its color. For that I use an append buffer, another useful type of buffer that behaves pretty much like a stack: when you "append" an element, it is put at the end of the buffer and a hidden element counter is incremented.

Here is how I created this buffer and the associated SRV and UAV:

void Engine::CreateAppendBuffer(ID3D11Buffer** pBuffer, ID3D11UnorderedAccessView** pBufferUAV, ID3D11ShaderResourceView** pBufferSRV, const UINT pElementCount, const UINT pElementSize)
{
    HRESULT hr;

    D3D11_BUFFER_DESC bufferDesc;
    ZeroMemory(&bufferDesc, sizeof(bufferDesc));
    unsigned int stride = pElementSize;
    bufferDesc.ByteWidth = stride * pElementCount;
    bufferDesc.Usage = D3D11_USAGE_DEFAULT;
    bufferDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    bufferDesc.CPUAccessFlags = 0;
    bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bufferDesc.StructureByteStride = stride;

    hr = m_pd3dDevice->CreateBuffer(&bufferDesc, NULL, pBuffer);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer.", L"Ok", MB_OK);
        return;
    }

    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
    ZeroMemory(&uavDesc, sizeof(uavDesc));
    uavDesc.Format = DXGI_FORMAT_UNKNOWN;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.FirstElement = 0;
    uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
    uavDesc.Buffer.NumElements = pElementCount;

    hr = m_pd3dDevice->CreateUnorderedAccessView(*pBuffer, &uavDesc, pBufferUAV);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer unordered access view.", L"Ok", MB_OK);
        return;
    }

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    ZeroMemory(&srvDesc, sizeof(srvDesc));
    srvDesc.Format = DXGI_FORMAT_UNKNOWN;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.FirstElement = 0;
    srvDesc.Buffer.NumElements = pElementCount;

    hr = m_pd3dDevice->CreateShaderResourceView(*pBuffer, &srvDesc, pBufferSRV);

    if(FAILED(hr))
    {
        MessageBox(NULL, L"Error creating the append buffer shader resource view.", L"Ok", MB_OK);
        return;
    }
}

Now, the compute shader. It's in fact pretty simple. First, one thread initializes my argument buffer to 0, except for the first argument, which holds the number of indices in the index buffer that will be bound (36 for a cube).

Then, each time a non-empty voxel is found, I increase the number of instances to draw using an InterlockedAdd, and I append its position to the perInstancePosition buffer.


// Defines Voxel, GetGridIndex() and VOXEL_CLEAN_THREADS (assumed to come from
// the shared include, as in the vertex shader below).
#include "VoxelizerShaderCommon.hlsl"

AppendStructuredBuffer<uint3> perInstancePosition : register(u0);
RWStructuredBuffer<Voxel> voxelGrid : register(u1);
// Indirect draw arguments (the 5 uints created above).
RWBuffer<uint> testBuffer : register(u2);

[numthreads(VOXEL_CLEAN_THREADS, VOXEL_CLEAN_THREADS, VOXEL_CLEAN_THREADS)]
void main( uint3 DTid : SV_DispatchThreadID )
{
    // One thread initializes the indirect arguments:
    // 36 indices per instance (the cube), everything else to 0.
    if (DTid.x + DTid.y + DTid.z == 0)
    {
        testBuffer[0] = 36;
        testBuffer[1] = 0;
        testBuffer[2] = 0;
        testBuffer[3] = 0;
        testBuffer[4] = 0;
    }
    // Note: this only synchronizes within one thread group; to be fully safe the
    // arguments could also be cleared from the CPU or in a separate dispatch.
    GroupMemoryBarrier();

    uint3 voxelPos = DTid.xyz;
    int gridIndex = GetGridIndex(voxelPos);

    if (voxelGrid[gridIndex].m_Occlusion == 1)
    {
        // Count one more instance to draw...
        uint drawIndex;
        InterlockedAdd(testBuffer[1], 1, drawIndex);

        // ...and store its position in the voxel grid.
        perInstancePosition.Append(voxelPos);
    }
}

At the end of this compute shader's execution, both buffers are filled with the information needed to draw all the voxels.
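As a side note, since the append buffer already maintains a hidden element counter, another option would be to skip the InterlockedAdd and copy that counter into the InstanceCount slot of the argument buffer with CopyStructureCount (variable names below are illustrative):

// Alternative to the InterlockedAdd above: copy the append buffer's hidden
// counter into the second UINT (InstanceCount) of the argument buffer.
engine->GetImmediateContext()->CopyStructureCount(argBuffer,
                                                  sizeof(UINT) * 1,        // byte offset of InstanceCount
                                                  perInstancePositionUAV); // UAV created with D3D11_BUFFER_UAV_FLAG_APPEND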

I use a really simple cube to represent the geometry of a voxel:


// Create the voxel vertices.
VertexPosition tempVertices[] =
{
    { XMFLOAT3( -0.5f,  0.5f, -0.5f )},
    { XMFLOAT3(  0.5f,  0.5f, -0.5f )},
    { XMFLOAT3(  0.5f,  0.5f,  0.5f )},
    { XMFLOAT3( -0.5f,  0.5f,  0.5f )},

    { XMFLOAT3( -0.5f, -0.5f, -0.5f )},
    { XMFLOAT3(  0.5f, -0.5f, -0.5f )},
    { XMFLOAT3(  0.5f, -0.5f,  0.5f )},
    { XMFLOAT3( -0.5f, -0.5f,  0.5f )},
};

// Create index buffer
WORD indicesTemp[] =
{
    3,1,0,
    2,1,3,

    6,4,5,
    7,4,6,

    3,4,7,
    0,4,3,

    1,6,5,
    2,6,1,

    0,5,4,
    1,5,0,

    2,7,6,
    3,7,2
};

I can now bind this index and vertex buffer, the perInstancePosition and voxelGrid buffers, and start to write the shaders. The goal is simple: each item in perInstancePosition is a uint3 representing the position of a non-empty voxel in the voxel grid. I just need to move the vertices to the right world position, scale my unit cube to the size of a voxel, and find the right color to pass to the pixel shader.

Here is my vertex shader:


#include "VoxelizerShaderCommon.hlsl"

StructuredBuffer<uint3> voxelParameters : register(t0);
StructuredBuffer<Voxel> voxelGrid : register(t1);

cbuffer ConstantBuffer: register(b0)
{
    matrix g_ViewMatrix;
    matrix g_ProjMatrix;
    float4 g_SnappedGridPosition;
    float g_CellSize;
}

struct VoxelInput
{
    float3 Position : POSITION0;
    uint InstanceId : SV_InstanceID;
};

struct VertexOutput
{
    float4 Position: SV_POSITION;
    float3 Color: COLOR0;
};

VertexOutput main( VoxelInput input)
{
    VertexOutput output;

    uint3 voxelGridPos = voxelParameters[input.InstanceId];

    int halfCells = NBCELLS/2;

    float3 voxelPosFloat = voxelGridPos;

    float3 offset = voxelGridPos - float3(halfCells, halfCells, halfCells);
    offset *= g_CellSize;
    offset += g_SnappedGridPosition.xyz;

    float4 voxelWorldPos = float4(input.Position*g_CellSize + offset, 1.0f);

    float4 viewPosition = mul(voxelWorldPos, g_ViewMatrix);
    output.Position = mul(viewPosition, g_ProjMatrix);

    uint index = GetGridIndex(voxelGridPos);
    output.Color = voxelGrid[index].Color;

    return output;
}

An interesting thing here is the InstanceId (SV_InstanceID), automatically generated by the instanced draw call, which identifies each instance and allows me to fetch the right voxel position from the buffer for each instance.

The pixel shader is really straightforward:


struct VertexOutput
{
    float4 Position: SV_POSITION;
    float3 Color: COLOR0;
};

float4 main(VertexOutput input) : SV_TARGET
{
    return float4(input.Color, 1.0f);
}

And finally I call the DrawIndexedInstancedIndirect function:


engine->GetImmediateContext()->DrawIndexedInstancedIndirect(argBuffer, 0);
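For completeness, here is roughly the state bound right before that call (variable names are illustrative):

// Bindings needed for the indirect draw (names are mine).
UINT stride = sizeof(VertexPosition);
UINT offset = 0;
ID3D11DeviceContext* context = engine->GetImmediateContext();

context->IASetVertexBuffers(0, 1, &cubeVertexBuffer, &stride, &offset);
context->IASetIndexBuffer(cubeIndexBuffer, DXGI_FORMAT_R16_UINT, 0); // WORD indices
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// t0: per-instance voxel positions, t1: voxel grid (matches the vertex shader registers).
ID3D11ShaderResourceView* srvs[2] = { perInstancePositionSRV, voxelGridSRV };
context->VSSetShaderResources(0, 2, srvs);

context->DrawIndexedInstancedIndirect(argBuffer, 0);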

 

This is just an example, but the indirect draw functions allow you to do a lot of things using only the GPU, without the need to synchronize with the CPU. It's a powerful tool, and I really want to try more things with it.

And to conclude, some screenshots of voxel grids of 32x32x32 and 256x256x256:

DrawIndexedInstancedIndirect

DrawIndexedInstancedIndirect

“Physical” midi user interface for lazy programmers

It's been a while since my last post, but I've been busy: new job, new continent, etc.

This summer I was working at Ubisoft Paris, and for one of my tasks I needed to create a sample program to implement an effect. It takes some time to rewrite the tools you usually have in an engine, like a shader builder, DX resource creation helpers, a camera class, etc., but what I really found time-consuming and annoying was everything UI related. There were a lot of settings, and since the goal was to explore all of the possibilities, almost all of them had to be exposed to the user. It's such a pain: creating your sliders/buttons/whatever, setting the position, width, height, initializing, drawing and updating, etc. I used DXUT's UI components; I'm sure there are better tools out there, but I have to admit I don't like UI programming, so I wanted to find a better way for my future projects.

So one day I grabbed my MIDI keyboard, wondering how hard it would be to read its inputs. It turns out it's super easy: using the RtMidi library, I was able to get my inputs in no time. The following week I bought a small MIDI controller, the Korg NanoKontrol2.

Midi user interface

 

 

Look at that, it’s a physical “G”UI!

This controller has everything I could want: sliders, knobs and buttons with light feedback. So I started to write a small MIDI input manager for my current project, trying to make it easy to use. There is just a simple Update() function that receives/sends all the MIDI messages, and all I have to do is call MidiInputManager::Instance()->GetMidiValue(NKI_F1) to get the current value of the first fader. And that's it!

MIDI values are in the range 0 – 127, so I had to transform them, which is annoying and error prone. So I added an initialization function, MidiInputManager::Instance()->SetMinAndMaxValues(NKI_F1, 50, 500), and the results of the GetMidiValue function are then already in the correct range.
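In practice the usage looks like this (based on the calls mentioned above; the exact signatures are in the GitHub repository):

// Illustrative usage of the manager (names follow the article, details may differ).
void Renderer::Init()
{
    // Map the first fader to a light radius in [50, 500].
    MidiInputManager::Instance()->SetMinAndMaxValues(NKI_F1, 50, 500);
}

void Renderer::Update()
{
    // Pump the MIDI messages once per frame.
    MidiInputManager::Instance()->Update();

    // Read the first fader, already remapped to [50, 500].
    m_lightRadius = MidiInputManager::Instance()->GetMidiValue(NKI_F1);
}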

 

Of course there are some drawbacks. The faders and knobs are not motorized, meaning that if you have saved a default value, you can’t see it on the fader, and you will lose it as soon as you move the fader.

You only have 128 different values, so you really need to set the correct range to get good precision.

You only have 8 knobs and 8 faders. It's not really a problem since you can define multiple configurations using the buttons. For example, each fader has 3 buttons, (S)olo, (M)ute and (R)ecord. I've linked them as a group, meaning that only one of the three can be active at a time. So the associated fader can control a red channel while S is on, blue when M is on, and so on. I'm also thinking of using multiple global configurations if I really need more buttons: using the "track" buttons, I would be able to press next, switch to "configuration 2", and have different mappings.

I also notice some lag sometimes; I'll have to take a look at that.

 

I've uploaded a first version on GitHub, and I'll update it as I add features. For now it only supports the NanoKontrol2, but it's easy to port to other controllers.

I’ve made a quick video of my tiled deferred renderer to show how it can be used:

 

I’m sure I’m not the only one to do that, but I hope it will help or inspire someone!

GPU Particles

English version is coming soon!

A first video to show and explain the basic workings of my particle engine.

All the update, physics and collision computations run on the GPU, which gives good performance with a large number of particles (here 1,000,000 particles, locked at 30 fps for recording purposes).

All the dynamic particle data (X and Y position in the RG channels, X and Y velocity in the BA channels) is stored in a texture (here 1024×1024). Each particle is represented by a set of three vertices. Instead of a position, each vertex stores a texture coordinate, which is used to look up the particle's data in the data texture.

The update happens in two passes. First there is a physics update pass: by drawing a fullscreen quad, for each pixel of the data texture we read the previous frame's data and derive the current frame's values, taking into account gravity, collisions, external forces, etc. Then comes the display pass: the vertices representing each particle are sent to the GPU, and in the vertex shader, thanks to vertex texture fetching and the UVs, we retrieve the real position, which lets us draw a triangle at the right place.
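To give an idea of the structure, here is a rough D3D11-style sketch of those two passes (all names are illustrative, the ping-pong between two data textures is my assumption, and the real implementation on GitHub may differ):

// Rough sketch of the two passes (illustrative only).
void UpdateAndDrawParticles(ID3D11DeviceContext* ctx,
                            ID3D11RenderTargetView* dataRTV,        // particle data texture written this frame
                            ID3D11ShaderResourceView* prevDataSRV,  // particle data from the previous frame
                            ID3D11ShaderResourceView* currDataSRV,  // SRV of the texture written above
                            ID3D11RenderTargetView* backBufferRTV,
                            ID3D11DepthStencilView* depthDSV,
                            UINT particleCount)
{
    // 1) Physics pass: draw a fullscreen quad into the data texture, reading the
    //    previous frame's data (position in RG, velocity in BA).
    ctx->OMSetRenderTargets(1, &dataRTV, nullptr);
    ctx->PSSetShaderResources(0, 1, &prevDataSRV);
    ctx->Draw(6, 0); // fullscreen quad; update shaders and geometry assumed already bound

    // Unbind the render target before sampling the updated texture.
    ID3D11RenderTargetView* nullRTV = nullptr;
    ctx->OMSetRenderTargets(1, &nullRTV, nullptr);

    // 2) Display pass: three vertices per particle; the vertex shader fetches the
    //    real position from the data texture (vertex texture fetch) using the UVs.
    ctx->OMSetRenderTargets(1, &backBufferRTV, depthDSV);
    ctx->VSSetShaderResources(0, 1, &currDataSRV);
    ctx->Draw(particleCount * 3, 0);
}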

In the video you can see the influence of an attraction force controlled by the mouse, and of gravity. Collisions only happen with the edges of the screen. The particle color can either be fixed or influenced by velocity. You can also see a post-process that draws a color based on particle density, giving a "fluid" look.

In the next video I will show collisions with dynamic objects, as well as the use of flowmaps to influence the movement of all the particles.

The source code is available on GitHub.

The project setup can be downloaded here.