Tag: directx

Entries for tag "directx", ordered from most recent. Entry count: 91.


# States and Barriers of Aliasing Render Targets

Tue, 22 Dec 2020

In my recent post “Initializing DX12 Textures After Allocation and Aliasing” I said that when you use Direct3D 12 and you have a render-target or depth-stencil texture “B” aliasing with some other resources “A” in memory, you need to initialize it properly every time you want to use it: 1. issue a barrier of type D3D12_RESOURCE_BARRIER_TYPE_ALIASING, 2. fully initialize its content using Clear, Discard, or Copy, so that garbage data left from the previous usage of the memory is overwritten.

That is actually not the full story. There is a third thing that you need to take care of: a transition barrier for this resource. This is because the functions DiscardResource, ClearRenderTargetView, and ClearDepthStencilView require the texture to be in the correct state – D3D12_RESOURCE_STATE_RENDER_TARGET or D3D12_RESOURCE_STATE_DEPTH_WRITE. A question arises here: How does resource state tracking interact with memory aliasing?

If you look at error and warning messages from the D3D Debug Layer or PIX, you notice that states are tracked per resource, no matter if resources alias in memory. This gives us a clue about how to handle it correctly to tame the Debug Layer and make things formally correct, but it is still not that obvious where to put the barrier. Let's assume the last usage of our texture B is D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE – the texture is used for sampling. Where should we put the barrier to transition it to D3D12_RESOURCE_STATE_RENDER_TARGET, required to do the initializing Discard or Clear after aliasing? I can see 3 solutions here:

1. None at all – skip this barrier and just do the Discard/Clear as if the texture was already in the correct RT/DS state. It obviously generates a Debug Layer error, but it seems to work fine. My thinking about why this may be fine is: if the Discard/Clear requires the texture to be in the correct RT/DS state and it completely overwrites its content, then it shouldn't care what state the texture starts in. Its data are garbage anyway. It will most probably leave the texture properly initialized for the RT/DS state, no matter what.

2. Between the aliasing barrier and the Discard/Clear (see the sketch after this list). This would please the Debug Layer, but I have some doubts whether it is correct or necessary. It may not be correct because if the barrier does some transformations of the data, it would transform the garbage data left after aliasing, which could be undefined behavior leading to persistent corruption or even a GPU crash. It may not be necessary because we are going to re-initialize the whole content of the texture in a moment anyway with the Discard/Clear.

3. After you finish working with the texture in the previous frame. The idea is to leave each RT/DS texture that can alias with some other resource in a state that makes it ready for the initializing Discard/Clear the next time we want to grab it from the common memory. This will work and will be formally correct, but it also sounds like the well-known and sub-optimal idea of reverting textures to some “base state” after each use. An additional problem appears when the last usage of the texture occurs on the compute queue, because there you cannot transition it to the RENDER_TARGET or DEPTH_WRITE state.
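
To make solution [2] more concrete, here is a minimal sketch of the full sequence in C++, assuming cmdList is an ID3D12GraphicsCommandList*, texB is the aliasing render-target texture (ID3D12Resource*), and its last known state was PIXEL_SHADER_RESOURCE. The names are mine, not from the original post:

// 1. Aliasing barrier: texture B becomes the active resource in the shared memory.
D3D12_RESOURCE_BARRIER aliasingBarrier = {};
aliasingBarrier.Type = D3D12_RESOURCE_BARRIER_TYPE_ALIASING;
aliasingBarrier.Aliasing.pResourceBefore = nullptr; // or resource A, if known
aliasingBarrier.Aliasing.pResourceAfter = texB;
cmdList->ResourceBarrier(1, &aliasingBarrier);

// 2. Transition barrier: move B from its last known state to RENDER_TARGET.
D3D12_RESOURCE_BARRIER transitionBarrier = {};
transitionBarrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
transitionBarrier.Transition.pResource = texB;
transitionBarrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
transitionBarrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
transitionBarrier.Transition.StateAfter = D3D12_RESOURCE_STATE_RENDER_TARGET;
cmdList->ResourceBarrier(1, &transitionBarrier);

// 3. Fully initialize the content, so the garbage left by resource A is overwritten.
cmdList->DiscardResource(texB, nullptr);

Solution [1] is the same sequence with step 2 removed; solution [3] moves step 2 to the end of the previous frame.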

Conclusion: While this is a tricky question that has no single simple answer, I would recommend using [3] if you want to be 100% formally correct or [1] if you want maximum efficiency.

Update 2023-11-16: The new Enhanced Barriers in D3D12 solve this whole problem by offering a discard operation as part of a state transition barrier, similarly to Vulkan.

Comments | #rendering #directx Share

# CasCmdLine - Few Technical Details

Sun, 15 Nov 2020

As part of my job, I've written a small console program called CasCmdLine. Its purpose is to test AMD's FidelityFX Contrast Adaptive Sharpening (CAS) shader on an image from disk, e.g. a screenshot from your game. You can find the binary and source code on github.com/GPUOpen-Effects/FidelityFX-CAS/, in the CasCmdLine subdirectory. See also the blog post and tutorial about it to learn about its features and the syntax of the supported command-line parameters.

Update 2021-07-25: The links above are broken, as the tool became a standalone repository, which now supports CAS as well as the new FidelityFX Super Resolution (FSR) shader: FidelityFX-CLI.

Here I would like to point out three aspects of its implementation that allowed me to keep it small and simple. They might interest you if you are a C++/Windows/graphics programmer.

1. To execute a compute shader like CAS, I needed to use a graphics API - Direct3D 11, 12, or Vulkan, as all of them are supported by the effect. I chose D3D11 as the easiest one. What’s interesting is that the API is used without creating a window or a swap chain. There are no render frames, no calls to Present, no depth-stencil texture, no message loop. D3D11CreateDevice is used to initialize DirectX rather than D3D11CreateDeviceAndSwapChain. The program just initializes all the necessary machinery, does its job, and exits. It is perfectly possible to write a program this way, which may be a good idea for any application that needs to do some GPU-accelerated computations rather than interactive graphics like games do. I suspect this mode of operation would work even on server systems that have no monitor attached, as long as there is a GPU and a graphics driver installed. See the file “CasCmdLine.cpp” to find out how this is implemented.
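
For illustration, a minimal sketch of such window-less D3D11 initialization could look like this (this is my sketch under the assumptions above, not code copied from CasCmdLine):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
D3D_FEATURE_LEVEL featureLevel = {};
HRESULT hr = D3D11CreateDevice(
    nullptr,                  // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                  // no software rasterizer module
    0,                        // no creation flags
    nullptr, 0,               // default feature levels
    D3D11_SDK_VERSION,
    &device, &featureLevel, &context);
// From here on: create the compute shader and resources, call Dispatch(), read the result back, exit.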

2. There is always a question in every graphics app about how to load shaders. Surely, compiling them from HLSL/GLSL source code is the worst option, as it requires the user to have a shader compiler installed or you to ship the compiler with your program. It also takes more time than loading shaders precompiled to the intermediate binary format. But even in this format they need to be loaded from somewhere, whether from individual files or from some custom compressed archive, as games tend to do. In CasCmdLine I did it differently: I embedded the precompiled shaders directly in the program binary. To do that, I used the /Fh command-line parameter of the "fxc.exe" shader compiler, like this:

fxc.exe /T cs_5_0 /E mainCS /O3 /Fh CompiledShader.h ShaderSource.hlsl

Instead of a binary file, the compiler called with this parameter generates a text file in a format compatible with C/C++ that contains the data of the compiled shader in the form of an array, like this:

#if 0
Shader metadata and assembly is put here, as commented out code...
#endif

const BYTE g_mainCS[] =
{
     68,  88,  66,  67,   8, 233, 
     11,  94, 141, 165,  83, 251, 
     50, 166, 219, 219,  84, 109, 
    128,  23,   1,   0,   0,   0, 
    (...)
};

Such a file can be #include-d in C++ code and used to create a D3D shader directly from this data. See the files "Shaders/CompiledShader_*.h" to see what they really look like.
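
A small sketch of how the embedded bytecode could be used, assuming `device` is an already created ID3D11Device* (the header name here is hypothetical):

#include <d3d11.h>
#include "CompiledShader.h" // the generated header containing g_mainCS

ID3D11ComputeShader* computeShader = nullptr;
HRESULT hr = device->CreateComputeShader(
    g_mainCS, sizeof(g_mainCS), // bytecode array from the header and its size in bytes
    nullptr,                    // no class linkage
    &computeShader);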

3. The program needs to load and save image files in JPEG, PNG, and preferably other formats. Of course, these formats are very complex, support various pixel formats, involve compression algorithms, etc., so handling them manually would require an enormous amount of work. There are libraries for this, like the official libpng and libjpeg for the PNG and JPEG formats, respectively, or the multi-format, multi-platform library DevIL.

If the program is intended only for Windows, it turns out that no third-party libraries are needed. The native Windows API contains a part called Windows Imaging Component (WIC) that can load and save image files in many formats, including BMP, PNG, JPEG, TIFF, GIF, ICO, WMP, and DDS. It can also do some image operations, like rescaling. It is a COM API that involves interfaces like IWICImagingFactory, IWICBitmapDecoder, IWICBitmapFrameDecode, and many more. This is what I used in the program described here. I might write a tutorial about WIC someday... For now, I would just say that once you figure out its API, it is quite powerful. It might be useful for any Windows graphics app that needs to load textures. It is also what Microsoft's DirectXTex library uses under the hood.
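
To give a taste of the API, here is a minimal sketch of decoding the first frame of an image file with the WIC interfaces mentioned above (my sketch, error handling omitted; the file name is just an example):

#include <windows.h>
#include <wincodec.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumes CoInitializeEx() has already been called on this thread.
ComPtr<IWICImagingFactory> factory;
CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                 IID_PPV_ARGS(&factory));

ComPtr<IWICBitmapDecoder> decoder;
factory->CreateDecoderFromFilename(L"Input.png", nullptr, GENERIC_READ,
                                   WICDecodeMetadataCacheOnDemand, &decoder);

ComPtr<IWICBitmapFrameDecode> frame;
decoder->GetFrame(0, &frame);

UINT width = 0, height = 0;
frame->GetSize(&width, &height);
// frame->CopyPixels(...) can then fill a CPU buffer with pixel data to upload as a texture.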

Comments | #rendering #directx Share

# A Better Way to Scalarize a Shader

Tue, 20 Oct 2020

This will be an advanced article. It assumes you not only know how to write shaders but also how they work on a low level (like vector versus scalar registers) and how to optimize them using scalarization. It all starts from the need to index into an array of texture or buffer descriptors, where the index is dynamic – it may vary from pixel to pixel. This is useful e.g. when doing bindless-style rendering or when blending various layers of textures, e.g. on a terrain. To make it work properly in an HLSL shader, you need to surround the indexing operation with the pseudo-function NonUniformResourceIndex. See also my old blog post “Direct3D 12 - Watch out for non-uniform resource index!”.

Texture2D g_Textures[] : register(t1);
...
return g_Textures[NonUniformResourceIndex(textureIndex)].Load(pos);

In many cases, this is enough. The driver will do its magic to make things work properly. But if the logic that depends on textureIndex is more complex than a single Load or SampleGrad, e.g. you sample multiple textures or do some calculations (let's call it MyDynamicTextureIndexing), then it might be beneficial to scalarize the shader manually using a loop and wave functions from HLSL Shader Model 6.0.

I learned how to do scalarization from the 2-part article “Intro to GPU Scalarization” by Francesco Cifariello Ciardi and the presentation “Improved Culling for Tiled and Clustered Rendering” by Michał Drobot, linked from it. Both sources propose an implementation like the following HLSL snippet:

// WORKING, TRADITIONAL
float4 color = float4(0.0, 0.0, 0.0, 0.0);
uint currThreadIndex = WaveGetLaneIndex();
uint2 currThreadMask = uint2(
   currThreadIndex < 32 ? 1u << currThreadIndex : 0,
   currThreadIndex < 32 ? 0 : 1u << (currThreadIndex - 32));
uint2 activeThreadsMask = WaveActiveBallot(true).xy;
while(any(currThreadMask & activeThreadsMask) != 0)
{
   uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
   uint2 scalarTextureIndexThreadMask = WaveActiveBallot(scalarTextureIndex == textureIndex).xy;
   activeThreadsMask &= ~scalarTextureIndexThreadMask;
   [branch]
   if(scalarTextureIndex == textureIndex)
   {
       color = MyDynamicTextureIndexing(textureIndex);
   }
}
return color;

It involves a bit mask of active threads. From the moment I first saw this code, I started wondering: why is it needed? A mask of the threads that still want to continue spinning the loop is already maintained implicitly by the shader compiler. Couldn't we just break; out of the loop when done with the textureIndex of the current thread?! So I wrote this short piece of code:

// BAD, CRASHES
float4 color = float4(0.0, 0.0, 0.0, 0.0);
while(true)
{
   uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
   [branch]
   if(scalarTextureIndex == textureIndex)
   {
       color = MyDynamicTextureIndexing(textureIndex);
       break;
   }
}
return color;

…and it crashed my GPU. At first I thought it might be a bug in the shader compiler, but then I recalled footnote [2] in part 2 of the scalarization tutorial, which mentions an issue with helper lanes. Let me elaborate on this. When a shader is executed in SIMT fashion, individual threads (lanes) may be active or inactive. Active lanes are the ones that do their job. Inactive lanes may be inactive from the very beginning, because we are at the edge of a triangle and there are not enough pixels to make use of all the lanes, or they may be disabled temporarily because e.g. we are executing an if section that some threads didn't want to enter. But in pixel shaders there is a third kind of lane – helper lanes. These are used instead of inactive lanes to make sure full 2x2 quads always execute the code, which is needed to calculate the ddx/ddy derivatives, something also done implicitly when sampling a texture to calculate the correct mip level. A helper lane executes the code (like an active lane), but doesn't export its result to the render target (like an inactive lane).

As it turns out, helper lanes also don't contribute to wave functions – they behave like inactive lanes there. Can you already see the problem? In the loop shown above, it may happen that a helper lane has a textureIndex different from that of any active lane within the wave. It will then never get its turn to process it in a scalar fashion, so it will fall into an infinite loop, causing a GPU crash (TDR)!

Then I thought: what if I disable helper lanes just once, before the whole loop? So I came up with the following shader. It seems to work fine. I also think it is better than the first solution, as it operates on the thread bit mask only once at the beginning, so it uses fewer variables to be stored in GPU registers and does fewer calculations in every loop iteration. Now I'm wondering: is there something wrong with my idea that I can't see? Or did I just invent a better way to scalarize shaders?

// WORKING, NEW
float4 color = float4(0.0, 0.0, 0.0, 0.0);
uint currThreadIndex = WaveGetLaneIndex();
uint2 currThreadMask = uint2(
   currThreadIndex < 32 ? 1u << currThreadIndex : 0,
   currThreadIndex < 32 ? 0 : 1u << (currThreadIndex - 32));
uint2 activeThreadsMask = WaveActiveBallot(true).xy;
[branch]
if(any((currThreadMask & activeThreadsMask) != 0))
{
   while(true)
   {
       uint scalarTextureIndex = WaveReadLaneFirst(textureIndex);
       [branch]
       if(scalarTextureIndex == textureIndex)
       {
           color = MyDynamicTextureIndexing(textureIndex);
           break;
       }
   }
}
return color;

UPDATE 2020-10-28: There are some valuable comments under my tweet about this topic that I recommend checking out.

UPDATE 2021-12-03: As my colleague pointed out today, the code I showed above as "BAD" is perfectly fine for compute shaders. Only in pixel shaders do we have problems with helper lanes. Thank you, Steffen!

Comments | #directx #optimization #gpu Share

# Which Values Are Scalar in a Shader?

Wed, 14 Oct 2020

GPUs are highly parallel processors. Within one draw call or one compute dispatch there might be thousands or millions of invocations of your shader. Some variables in such a shader have a constant value for all invocations in the draw call / dispatch. We can call them constant or uniform. A literal constant like 23.0 is surely such a value, and so is a variable read from a constant (uniform) buffer, let’s call it cbScaleFactor, or any calculation on such data, like (cbScaleFactor.x + cbScaleFactor.y) * 2.0 - 1.0.

Other values may vary from thread to thread. These will surely include vertex attributes, as well as system-value semantics like SV_Position in a pixel shader (denoting the position of the current pixel on the screen) or SV_GroupThreadID in a compute shader (the identifier of the current thread within a thread group), and any calculations based on them. For example, sampling a texture using non-constant UV coordinates will result in a non-constant color value.

But there is another level of grouping of threads. GPU cores (Compute Units, Execution Units, CUDA cores, however we call them) execute a number of threads at once in a SIMD fashion. It would be more correct to say SIMT. For an explanation of the difference see my old post: “How Do Graphics Cards Execute Vector Instructions?” It’s usually something like 8, 16, 32, or 64 threads executing on one core, together called a wave in HLSL and a subgroup in GLSL.

Normally you don’t need to care about this fact. However, recent versions of HLSL and GLSL added intrinsic functions that allow exchanging data between lanes (threads/invocations within a wave/subgroup) - see “HLSL Shader Model 6.0” or “Vulkan Subgroup Tutorial”. Using them may allow you to optimize shader performance.

This additional level of grouping yields the possibility for a variable to be or not to be uniform (to have the same value) across a single wave, even if it’s not constant across the entire draw call or dispatch. We can also call it scalar, as it tends to go to scalar registers (SGPRs) rather than vector registers (VGPRs) on AMD architectures, which is overall good for performance. The simple cases I mentioned above still apply. What’s constant across the entire draw call is also scalar within a wave. What varies from thread to thread is not scalar. Some wave functions like WaveReadLaneFirst, WaveActiveMax, and WaveActiveAllTrue return the same value for all threads, so their result is always scalar.

Knowing which values are scalar and which ones may not be is necessary in some cases. For example, indexing a buffer or texture array requires the special keyword NonUniformResourceIndex if the index is not uniform across the wave. I warned about it in my blog post “Direct3D 12 - Watch out for non-uniform resource index!”. Back then I was working on the shader compiler at Intel, helping to finish the DX12 implementation before the release of Windows 10. Now, 5 years later, it is still a tricky thing to get right.

Another such case is the function WaveReadLaneAt, which “returns the value of the expression for the given lane index within the specified wave”. The index of the lane to fetch used to be required to be scalar, but developers discovered that it actually works fine with a dynamically varying value, like Ken Hu in his blog post “HLSL pitfalls”. Now Microsoft has formally admitted that it works and allowed LaneIndex to be any value by making this GitHub commit to their documentation.

Since it is so important to know where an argument needs to be scalar and which values are scalar, you should also know about some less obvious, tricky cases.

SV_GroupID in a compute shader – the identifier of the group within a compute dispatch. This one surely is uniform across the wave. I didn’t search the specifications for this topic, but it seems obvious: since groupshared memory is private to a thread group and a synchronization barrier can be issued across a thread group, threads from different groups cannot be assigned to a single wave. Otherwise everything would break.

SV_InstanceID in a vertex shader – the index of an instance within an instanced draw call. It looks similar, but the answer is actually the opposite. I’ve seen discussions about it many times. It is not guaranteed anywhere that threads in one wave will calculate vertices of the same instance. While inconvenient for those who would like to optimize their vertex shader using wave functions, it actually gives a graphics driver an opportunity to increase utilization by packing vertices from multiple instances into one wave.

SV_GroupThreadID.xyz in a compute shader – the identifier of the thread within a thread group in a particular dimension. The article “Porting Detroit: Become Human from PlayStation® 4 to PC – Part 2” on GPUOpen.com suggests that by using [numthreads(64,2,1)] you can be sure that waves will be dispatched as 32x1x1 or 64x1x1, so that SV_GroupThreadID.y will be scalar across a wave. It may be true for AMD architecture and other GPUs currently on the market, so relying on this may be a good optimization opportunity on consoles with known, fixed hardware, but it is not formally correct to assume this on any PC. Neither the D3D nor the Vulkan specification says that threads from a compute thread group are assigned to waves in row-major order. The order is undefined, so a driver in a new version could theoretically decide to spawn waves of 16x2x1. It is also not guaranteed that some mysterious new GPU couldn’t appear in the future that is 128 lanes wide. The WaveGetLaneCount documentation says “the result will be between 4 and 128”. Such a GPU would execute an entire 64x2x1 group as a single wave. In both cases, SV_GroupThreadID.y wouldn’t be scalar.

Long story short: Unless you can prove otherwise, always assume that what is not uniform (constant) across the entire draw call or dispatch is also not uniform (scalar) across the wave.

Comments | #gpu #directx #vulkan #optimization Share

# System Value Semantics in Compute Shaders - Cheat Sheet

Tue, 29 Sep 2020

After compute shaders appeared, programmers no longer need to pretend they do graphics and render pixels when they want to do some general-purpose computations on a GPU (GPGPU). They can just dispatch a shader that reads and writes memory in a custom way. Such a shader is a short (or not so short) program invoked thousands or millions of times to process a piece of data. To work correctly, it needs to identify the current thread. Threads (invocations) of a compute shader are not just indexed linearly as 0, 1, 2, ... It's more complex than that. Their indexing can use up to 3 dimensions, which simplifies operating on data like images or matrices. They also come in groups, with the number of threads in one group declared statically as part of the shader code and the number of groups to execute passed dynamically in CPU code when dispatching the shader, as the sketch below shows.
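
A minimal sketch of that division of work, assuming a compute shader declared with [numthreads(8, 8, 1)], an image of imageWidth x imageHeight pixels to process, and cmdList being an ID3D12GraphicsCommandList* (the names and group size here are hypothetical):

const UINT groupSizeX = 8, groupSizeY = 8;   // must match [numthreads] in the shader code
const UINT groupCountX = (imageWidth  + groupSizeX - 1) / groupSizeX; // round up
const UINT groupCountY = (imageHeight + groupSizeY - 1) / groupSizeY;
cmdList->Dispatch(groupCountX, groupCountY, 1);
// Inside the shader, SV_DispatchThreadID == SV_GroupID * numthreads + SV_GroupThreadID.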

This raises the question of how to identify the current thread. HLSL offers a number of system-value semantics for this purpose, and GLSL defines equivalent built-in variables. For a long time I couldn't remember their names, as the ones in HLSL are quite misleading. If GroupID is the ID of the entire group and GroupThreadID is the ID of the thread within a group, GroupIndex should be a flattened index of the entire group, right? Wrong! It's actually the index of a single thread within a group. GLSL is more consistent in this regard, clearly stating "WorkGroup" versus "Invocation" and "Local" versus "Global". So, although Microsoft provides a great explanation of these SVs with a picture on pages like SV_DispatchThreadID, I thought it would be nice to gather all this in the form of a table, a small cheat sheet:

HLSL Semantics        GLSL Variable             Type (Dimension)   Unit            Reference
SV_GroupID            gl_WorkGroupID            uint3 (3D)         Entire group    Global in dispatch
SV_GroupThreadID      gl_LocalInvocationID      uint3 (3D)         Single thread   Local in group
SV_DispatchThreadID   gl_GlobalInvocationID     uint3 (3D)         Single thread   Global in dispatch
SV_GroupIndex         gl_LocalInvocationIndex   uint (flattened)   Single thread   Local in group

Update 2023-08-30: There is another article about this topic that I recommend: "Dispatch IDs and you".

Comments | #vulkan #opengl #directx #gpu Share

# Why Not Use Heterogeneous Multi-GPU?

Wed, 22 Jul 2020

There was an interesting discussion recently on one Slack channel about using an integrated GPU (iGPU) together with a discrete GPU (dGPU). Many sound ideas were said there, so I think it's worth writing them down. But because I have probably never blogged about multi-GPU before, a few words of introduction first:

The idea of using multiple GPUs in one program is not new, but not very widespread either. In old graphics APIs like Direct3D 11 it wasn't easy to implement. Doing it right in a complex game often involved engaging driver engineers from the GPU manufacturer (like AMD or NVIDIA) or using custom vendor extensions (like AMD GPU Services - see for example the Explicit Crossfire API).

The new generation of graphics APIs – Direct3D 12 and Vulkan – is lower level and gives more direct access to the hardware. This includes the possibility to implement multi-GPU support on your own. There are two modes of operation. If the GPUs are identical (e.g. two graphics cards of the same model plugged into the motherboard), you can use them as one device object. In D3D12 you then index them as Node 0, Node 1, ... and specify a NodeMask bit mask when allocating GPU memory, submitting commands, and doing all sorts of GPU things. Similarly, in Vulkan you have the VK_KHR_device_group extension available, which allows you to create one logical device object that uses multiple physical devices.
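
A minimal sketch of how such node masks appear in D3D12 code, assuming a device object exposing 2 identical nodes (the field values are just illustrative):

D3D12_HEAP_PROPERTIES heapProps = {};
heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;
heapProps.CreationNodeMask = 1u << 1;               // physically allocate on node 1 (the second GPU)
heapProps.VisibleNodeMask  = (1u << 0) | (1u << 1); // make the memory visible to both nodes
// Command queues, command lists, descriptor heaps, etc. take a similar NodeMask in their desc structs.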

But this post is about heterogeneous/asymmetric multi-GPU, where there are two different GPUs installed in the system, e.g. one integrated with the CPU and one discrete. A common example is a laptop with "switchable graphics", which may have an Intel CPU with its integrated “HD” graphics plus a NVIDIA GPU. There may even be two different GPUs from the same manufacturer! My new laptop (ASUS TUF Gaming FX505DY) has an AMD Radeon Vega 8 + Radeon RX 560X. Another example is a desktop PC with CPU-integrated graphics and a discrete graphics card installed. Such a combination can still be used by a single app, but to do that, you must create and use two separate device objects. But just because you could doesn't mean you should…

The first question is: are there games that support this technique? Probably few… There is just one example I have heard of: Ashes of the Singularity by Oxide Games, and that was many years ago, when DX12 was still fresh. Other than that, there are mostly tech demos, e.g. "WITCH CHAPTER 0 [cry]" by Square Enix, as described on the DirectX Developer Blog (also 5 years old).

An iGPU typically has lower computational power than a dGPU. Still, it could accelerate some of the computations needed each frame. One idea is to hand over the already rendered 3D scene to the iGPU so it can finish it with screen-space postprocessing effects and present it, which sounds even better if the display is connected to the iGPU. Another option is to accelerate some computations, like occlusion culling, particles, or water simulation. There are some excellent learning materials about this technique. The best one I can think of is: Multi-Adapter with Integrated and Discrete GPUs by Allen Hux (Intel), GDC 2020.

However, there are many drawbacks of this technique, which were discussed in the Slack chat I mentioned:

Conclusion: Supporting heterogeneous multi-GPU in a game engine sounds like an interesting technical challenge, but better think twice before doing it in production code.

BTW, if you want to use just one GPU and worry about selecting the right one, see my old post: Switchable graphics versus D3D11 adapters.

Comments | #rendering #directx #vulkan #microsoft Share

# Secrets of Direct3D 12: Resource Alignment

Sun, 19 Apr 2020

In the new graphics APIs - Direct3D 12 and Vulkan - creation of resources (textures and buffers) is a multi-step process. You need to allocate some memory and place your resource in it. In D3D12 there is a convenient function ID3D12Device::CreateCommittedResource that lets you do it in one go, allocating the resource together with its own, implicit memory heap, but it's recommended to allocate bigger memory blocks and place multiple resources in them using ID3D12Device::CreatePlacedResource.

When placing a resource in memory, you need to know and respect its required size and alignment. Size is basically the number of bytes that the resource needs. Alignment is a power-of-two number that the offset of the beginning of the resource must be a multiple of (offset % alignment == 0). I'm thinking about writing a separate article for beginners explaining the concept of memory alignment, but that's a separate topic...
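
A tiny illustration of that rule: a helper that rounds an offset up to the next multiple of a power-of-two alignment (my own sketch, not part of any API):

inline UINT64 AlignUp(UINT64 offset, UINT64 alignment) // alignment must be a power of two
{
    return (offset + alignment - 1) & ~(alignment - 1);
}
// Example: AlignUp(70000, 65536) == 131072, the next 64 KB boundary at or after offset 70000.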

Direct3D 12 resource alignment

Back to graphics: in Vulkan you first need to create your resource (e.g. vkCreateBuffer) and then pass it to a function (e.g. vkGetBufferMemoryRequirements) that returns the required size and alignment of this resource (VkMemoryRequirements::size, alignment). In DirectX 12 it looks similar at first glance, or even simpler, as it's enough to have a D3D12_RESOURCE_DESC structure describing the resource you will create to call ID3D12Device::GetResourceAllocationInfo and get D3D12_RESOURCE_ALLOCATION_INFO - a similar structure with SizeInBytes and Alignment. I've described it briefly in my article Differences in memory management between Direct3D 12 and Vulkan.

But if you dig deeper, there is more to it. While using the mentioned function is enough to make your program work correctly, applying some additional knowledge may let you save some memory, so read on if you want to make your GPU memory allocator better. The first interesting piece of information is that alignments in D3D12, unlike in Vulkan, are really fixed constants, independent of the particular GPU or graphics driver the user may have installed.

So, we have these constants and we also have a function to query the actual alignment. To make things even more complicated, the D3D12_RESOURCE_DESC structure contains an Alignment member, so you have one alignment on the input and another one on the output! Fortunately, the GetResourceAllocationInfo function allows you to set D3D12_RESOURCE_DESC::Alignment to 0, which causes the default alignment for the resource to be returned.
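
A minimal sketch of that default query, assuming `device` is an ID3D12Device* and `resDesc` is a filled D3D12_RESOURCE_DESC for the texture you plan to create:

resDesc.Alignment = 0; // 0 on the input means: return the default alignment
D3D12_RESOURCE_ALLOCATION_INFO allocInfo = device->GetResourceAllocationInfo(0, 1, &resDesc);
// allocInfo.SizeInBytes and allocInfo.Alignment are what CreatePlacedResource needs to respect.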

Now, let me introduce the concept of "small textures". It turns out that some textures can be aligned to 4 KB and some MSAA textures can be aligned to 64 KB. This is called the "small" alignment (as opposed to the "default" alignment) and there are also constants for it:

                Default   Small
Buffer          64 KB     –
Texture         64 KB     4 KB
MSAA texture    4 MB      64 KB

Using this smaller alignment allows you to save some GPU memory that would otherwise be wasted as padding between resources. Unfortunately, it's unavailable for buffers and available only for small textures, with a very convoluted definition of "small". The rules are hidden in the description of the Alignment member of the D3D12_RESOURCE_DESC structure:

Could GetResourceAllocationInfo calculate all this automatically and just return the optimal alignment for a resource, like the Vulkan function does? Possibly, but that is not what happens. You have to ask for it explicitly. When you pass D3D12_RESOURCE_DESC::Alignment = 0 on the input, you always get the default (larger) alignment on the output. Only when you set D3D12_RESOURCE_DESC::Alignment to the small alignment value does this function return that same value, provided the small alignment has been "granted".

There are two ways to use this in practice. The first one is to calculate on your own whether a texture is eligible for the small alignment and pass that value to the function only when you know the texture fulfills the conditions. The second is to always try the small alignment first, as sketched below. When it's not granted, GetResourceAllocationInfo returns values other than expected (in my test: Alignment = 64 KB and SizeInBytes = 0xFFFFFFFFFFFFFFFF). Then you should call it again with the default alignment. That's the method Microsoft shows in their "Small Resources Sample". It looks good, but a problem with it is that calling this function with an alignment that is not accepted generates D3D12 Debug Layer error #721 CREATERESOURCE_INVALIDALIGNMENT. Or at least it used to, because on one of my machines the error no longer occurs. Maybe Microsoft fixed it in some recent update of Windows or Visual Studio / the Windows SDK?
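
A sketch of this second method, assuming a non-MSAA texture and the same `device` and `resDesc` as before (constants come from d3d12.h):

resDesc.Alignment = D3D12_SMALL_RESOURCE_PLACEMENT_ALIGNMENT; // 4 KB: try the small alignment first
D3D12_RESOURCE_ALLOCATION_INFO allocInfo = device->GetResourceAllocationInfo(0, 1, &resDesc);
if (allocInfo.Alignment != D3D12_SMALL_RESOURCE_PLACEMENT_ALIGNMENT)
{
    // Small alignment was not granted: query again with the default alignment.
    resDesc.Alignment = 0;
    allocInfo = device->GetResourceAllocationInfo(0, 1, &resDesc);
}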

Here comes the last quirk of this whole D3D12 resource alignment topic: alignment is applied to the offset used in CreatePlacedResource, which we understand as relative to the beginning of an ID3D12Heap, but the heap itself has an alignment too! The D3D12_HEAP_DESC structure has an Alignment member. There is no equivalent of this in Vulkan. The documentation of the D3D12_HEAP_DESC structure says it must be 64 KB or 4 MB. Whenever you predict you might create MSAA textures in a heap, you must choose 4 MB. Otherwise, you can use 64 KB.
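
A small sketch of that heap-level choice, assuming the heap may ever hold MSAA render targets (size and flags are just illustrative):

D3D12_HEAP_DESC heapDesc = {};
heapDesc.SizeInBytes = 256ull * 1024 * 1024;
heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
heapDesc.Alignment = D3D12_DEFAULT_MSAA_RESOURCE_PLACEMENT_ALIGNMENT; // 4 MB; 64 KB would suffice otherwise
heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;
Microsoft::WRL::ComPtr<ID3D12Heap> heap;
device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));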

Thank you, Microsoft, for making this so complicated! ;) This article wouldn't be complete without an advertisement for the open source library D3D12 Memory Allocator (and the similar Vulkan Memory Allocator), which handles all this complexity automatically. It also implements both ways of using the small alignment, selectable with a preprocessor macro.

Comments | #directx #rendering #microsoft Share

# Links to GDC 2020 Talks and More

Sat, 28 Mar 2020

March is an important time of the year for game developers, as that's when the Game Developers Conference (GDC) takes place - the most important conference of the industry. This year's edition has been cancelled because of the coronavirus pandemic, just like all other events, or rather postponed to a later date. But many companies prepared their talks anyway. Surely, they had to submit their talks a long time ago, plus all the preparation and internal technical and legal reviews... The time spent on this shouldn't be wasted. That's why many of them shared their talks online as videos and/or slides. Below, I try to gather links to these materials with a full list of titles, with a special focus on programming talks.

GDC

They organized a multi-day "Virtual Talks" event presented on Twitch, with replays now available to watch and slides accessible on their website.

GDC Videos @ Twitch
GDC 2020 Virtual Talks (agenda)
GDC Vault - GDC 2020 (slides)

Monday, March 16

The 'Kine' Postmortem
Storytelling with Verbs: Integrating Gameplay and Narrative
Intrinsically Motivated Teams: The Manager's Toolbox
From 'Assassin's Creed' to 'The Dark Eye': The Importance of Themes
Representing LGBT+ Characters in Games: Two Case Studies
The Sound of Anthem
Is Your Game Cross-Platform Ready?
Forgiveness Mechanics: Reading Minds for Responsive Gameplay
Experimental AI Lightning Talk: Hyper Realistic Artificial Voices for Games

Tuesday, March 17

What to Write So People Buy: Selling Your Game Without Feeling Sleazy
Failure Workshop: FutureGrind: How To Make A 6-Month Game In Only 4.5 Years
Stress-Free Game Development: Powering Up Your Studio With DevOps
Baked in Accessibility: How Features Were Approached in 'Borderlands 3'
Matchmaking for Engagement: Lessons from 'Halo 5'
Forget CPI: Dynamic Mobile Marketing
Integrating Sound Healing Methodologies Into Your Workflow
From 0-1000: A Test Driven Approach to Tools Development
Overcoming Creative Block on 'Super Crush KO'
When Film, Games, and Theatre Collide

Wednesday, March 18

Bringing Replays to 'World of Tanks: Mercenaries'
Developing and Running Neural Audio in Constrained Environments
Mental Health State of the Industry: Past, Present & Future
Empathizing with Steam: How People Shop for Your Game
Scaling to 10 Concurrent Users: Online Infrastructure as an Indie
Crafting A Tiny Open World: 'A Short Hike' Postmortem
Indie Soapbox: UI design is fun!
Don't Ship a Product, Ship Value: Start Your Minimum Viable Product With a Solution
Day of the Devs: GDC Edition Direct
Independent Games Festival & Game Developers Choice Awards

Thursday, March 19

Machine Learning for Optimal Matchmaking
Skill Progression, Visual Attention, and Efficiently Getting Good at Esports
Making Your Game Influencer Ready: A Marketing Wishlist for Developers
How to Run Your Own Career Fair on a Tiny Budget
Making a Healthy Social Impact in Commercial Games
'Forza' Monthly: Live Streaming a Franchise
Aesthetic Driven Development: Choosing Your Art Before Making a Game
Reading the Rules of 'Baba Is You'

Friday, March 20

Beyond Games as a Service with Live Ops
Kill the Hero, Save the (Narrative) World
'Void Bastards' Art Style Origin Story
Writing Tools Faster: Design Decisions to Accelerate Tool Development
Face-to-Parameter Translation via Neural Network Renderer
The Forest Paths Method for Accessible Narrative Design
'Gears 5' Real-Time Character Dynamics
Stop & Think: Teaching Players About Media Manipulation in 'Headliner'

Microsoft

They organized a "DirectX Developer Day" where they announced DirectX 12 Ultimate - a fancy name for the updated Direct3D 12 feature level 12_2 with new major features including DXR (ray tracing), Variable Rate Shading, and Mesh Shaders.

DirectX Developer Blog
Microsoft DirectX 12 and Graphics Education @ YouTube
DirectX Developer Day 2020 #DXDevDay @ Mixer (talks as one long stream)

DXR 1.1 Inline Raytracing
Advanced Mesh Shaders
Reinventing the Geometry Pipeline: Mesh Shaders in DirectX 12
DirectX 12 Sampler Feedback
PIX on Windows
HLSL Compiler

NVIDIA

That's actually the GPU Technology Conference (GTC) - a separate event. Their biggest announcement this month was probably DLSS 2.0.

NVIDIA GTC Sessions

RTX-Accelerated Hair Brought to Life with NVIDIA Iray
Material Interoperability Using MaterialX, Standard Surface, and MDL
The Future of GPU Raytracing
Visuals as a Service (VaaS): How Amazon and Others Create and Use Photoreal On-Demand Product Visuals with RTX Real-Time Raytracing and the Cloud
Next-Gen Rendering Technology at Pixar
New Features in OptiX 7
Production-Quality, Final-Frame Rendering on the GPU
Latest Advancements for Production Rendering with V-Ray GPU and Real-Time Raytracing with Project Lavina
Accelerated Light-Transport Simulation using Neural Networks
Bringing the Arnold Renderer to the GPU
Supercharging Adobe Dimension with RTX-Enabled GPU Raytracing
Sharing Physically Based Materials Between Renderers with MDL
Real-Time Ray-Traced Ambient Occlusion of Complex Scenes using Spatial Hashing

I also found some other videos on Google:

DLSS - Image Reconstruction for Real-time Rendering with Deep Learning
NVIDIA Vulkan Features Update – including Vulkan 1.2 and Ray Tracing
3D Deep Learning in Function Space
Unleash Computer Vision at the Edge with Jetson Nano and Always AI
Optimized Image Classification on the Cheap
Cisco and Patriot One Technologies Bring Machine Learning Projects from Imagination to Realization (Presented by Cisco)
AI @ The Network Edge
Animation, Segmentation, and Statistical Modeling of Biological Cells Using Microscopy Imaging and GPU Compute
Improving CNN Performance with Spatial Context
Weakly Supervised Training to Achieve 99% Accuracy for Retail Asset Protection
Combating Problems Like Asteroid Detection, Climate Change, Security, and Disaster Recovery with GPU-Accelerated AI
Condensa: A Programming System for DNN Model Compression
AI/ML with vGPU on Openstack or RHV Using Kubernetes
CTR Inference Optimization on GPU
NVIDIA Tools to Train, Build, and Deploy Intelligent Vision Applications at the Edge
Leveraging NVIDIA’s Technology for the Ultimate Industrial Autonomous Transport Robot
How to Create Generalizable AI?
Isaac Sim 2020 Deep Dive

But somehow I can't find the full list with links anywhere on their website. More talks are accessible after free registration on the event website.

Intel

GDC 2020. A Repository for all Intel Technical Content prepared for GDC
Intel Software @ YouTube

Multi-Adapter with Integrated and Discrete GPUs
Optimizing World of Tanks*: from Laptops to High-End PCs
Intel® oneAPI Rendering Toolkit and its Application to Games
Intel® ISPC in Unreal Engine 4: A Peek Behind the Curtain
Variable Rate Shading with Depth of Field
For the Alliance! World of Warcraft and Intel discuss an Optimized Azeroth
Intel® Open Image Denoise in Blender - GDC 2020
Variable Rate Shading Tier 1 with Microsoft DirectX* 12 from Theory to Practice
Does Your Game's Performance Spark Joy? Profiling with Intel® Graphics Performance Analyzers
Boost CPU performance with Intel® VTune Profiler
DeepMotion | Optimize CPU Performance with Intel VTune Profiler

Google

Google for Games Developer Summit 2020 @ YouTube (a collection of playlists)

Mobile Track

Google for Games Developer Summit Keynote
What's new in Android game development tools
What's new in Android graphics optimization tools
Android memory tools and best practices
Deliver higher quality games on more devices
Google Play Asset Delivery for games: Product deep dive and case studies
Protect your game's integrity on Google Play
Accelerate your business growth with leading ad strategies
Firebase games SDK news
Cloud Firestore for Game Developers

Clouds Track

Google for Games Developer Summit Keynote
Scaling globally with Game Servers and Agones (Google Games Dev Summit)
How to make multiplayer matchmaking easier and scalable with Open Match (Google Games Dev Summit)
Unity Game Simulation: Find the perfect balance with Unity and GCP (Google Games Dev Summit)
How Dragon Quest Walk handled millions of players using Cloud Spanner (Google Games Dev Summit)
Building gaming analytics online services with Google Cloud and Improbable (Google Games Dev Summit)

Stadia Track

Google for Games Developer Summit Keynote
Bringing Destiny to Stadia: A postmortem (Google Games Dev Summit)
Stadia Games & Entertainment presents: Creating for content creators (Google Games Dev Summit)
Empowering game developers with Stadia R&D (Google Games Dev Summit)
Stadia Games & Entertainment presents: Keys to a great game pitch (Google Games Dev Summit)
Supercharging discoverability with Stadia (Google Games Dev Summit)

Ubisoft

Ubisoft’s GDC 2020 Talks Online Now

Online Game Technology Summit: Start-And-Discard: A Unified Workflow for Development and Live
Finding Space for Sound: Environmental Acoustics
Game Server Performance
NPC Voice Design
Machine Learning Summit: Ragdoll Motion Matching
Machine Learning, Physics Simulation, Kolmogorov Complexity, and Squishy Bunnies

Khronos: I can't find any information about individual talks from them. There is only a note about GDC 2020 Live Streams pointing to GDC Twitch channel.

AMD: No information.

Sony: No information.

Consoles: Last but not least, March 2020 was also the time when the details of the upcoming new generation of consoles were announced - Xbox Series X and PlayStation 5. You can easily find information about them by searching the Internet, so I won't recommend any particular links.

If you know about any more GDC 2020 or other important talks related to programming that have been released recently, please contact me or leave a comment below and I will add them!

Maybe there is a positive side to this pandemic? In a normal year, with GDC taking place, developers had to pay a $1000+ entrance fee for the event. They had to book a flight to California and a hotel in San Francisco, which was prohibitively expensive for many. They had to apply for an ESTA or a visa to the US, which not everyone could get. And the talks eventually landed behind a paywall, earning the organizers even more money. Now we can educate ourselves for free from the safety and convenience of our offices and homes.

Comments | #gdc #intel #nvidia #google #events #directx #rendering #microsoft Share

