Tag: directx

Entries for tag "directx", ordered from most recent. Entry count: 93.


# Shapes and forms of DX12 root signatures

Tue, 14 May 2024

This article is for you if you are a programmer using Direct3D 12. We will talk about a specific part of the API: root signatures. I will provide a comprehensive description of the various formats in which they can be specified and stored, and the ways to convert between them. The difficulty of this article is intermediate. You are expected to know at least some basics of D3D12. I think that advanced developers can also learn something new, as some of the topics shown here are not what we typically use in day-to-day development with D3D12.

Tools of the trade

I will use C++ as the programming language. Wherever possible, I will also try to use standalone command-line tools instead of writing custom code. To repeat the experiments demonstrated in this article, you will need these two:

  1. DXC, the DirectX Shader Compiler ("dxc.exe"), shipped with the Windows SDK.
  2. Radeon GPU Analyzer (RGA) from AMD.

You don't need to know the command-line syntax of these tools to understand the article. I will describe everything step-by-step.

Warning about DXC: If you also have the Vulkan SDK installed, it is very likely that your PATH environment variable points to "dxc.exe" from that SDK instead of the one from the Windows SDK, which can cause problems. To check this, type the command: where dxc. If you find the Vulkan SDK listed first, make sure you call "dxc.exe" from the Windows SDK, e.g. by explicitly specifying the full path to the executable file.

Warning about RGA: If you want to repeat the command-line experiments presented here, make sure to use Radeon GPU Analyzer in its latest version, at least 2.9.1. In older versions, the commands I present wouldn't work.

Shader compilation

A side note about shader compilation: Native CPU code, like the one we create when compiling our C++ programs, is saved in .exe files. It contains instructions in a common format called x86, which is sent directly to the CPU for execution. It works regardless of whether you have an AMD or Intel processor in your computer, because they comply with the same standard. With programs written for the GPU (which we call shaders), things are different. Every GPU vendor (AMD, Nvidia, Intel) has its own instruction set, necessitating a two-step process for shader compilation:

  1. As graphics programmers, we write shaders in high-level languages like HLSL or GLSL. We then compile them using a shader compiler like "dxc.exe" to a binary format. It is actually an intermediate format common to all GPU vendors, defined by Microsoft for Direct3D (called DXIL) or by Khronos for Vulkan (called SPIR-V). We are encouraged to compile our shaders offline and only ship these compiled binaries to end users.
  2. When our application uses a graphics API (like Direct3D 12 or Vulkan) and creates a pipeline state object (PSO), it specifies these shaders as inputs. This intermediate code then goes to the graphics driver, which performs the second stage of the compilation: it translates the code into instructions valid for the specific GPU (its Instruction Set Architecture, or ISA). We typically don't see this assembly code and we never write it directly, although inspecting it can be useful for optimizations. Nvidia's ISA is secret, but AMD and Intel publish documents describing theirs. The RGA tool mentioned earlier can show the AMD ISA.

What is a root signature?

In Direct3D 12, a root signature is a data structure that describes the resource bindings used by a pipeline across all shader stages. Let's see an example. Let's work with the file "Shader1.hlsl": very simple HLSL code that contains two entry points: the function VsMain for the vertex shader and the function PsMain for the pixel shader:

struct VsInput
{
 float3 pos : POSITION;
 float2 tex_coord : TEXCOORD;
};
struct VsOutput
{
 float4 pos : SV_Position;
 float2 tex_coord : TEXCOORD;
};

struct VsConstants
{
 float4x4 model_view_proj;
};
ConstantBuffer<VsConstants> vs_constant_buffer : register(b4);

VsOutput VsMain(VsInput i)
{
 VsOutput o;
 o.pos = mul(float4(i.pos, 1.0), vs_constant_buffer.model_view_proj);
 o.tex_coord = i.tex_coord;
 return o;
}

Texture2D<float4> color_texture : register(t0);
SamplerState color_sampler : register(s0);

float4 PsMain(VsOutput i) : SV_Target
{
 return color_texture.Sample(color_sampler, i.tex_coord);
}

I assume you already know that a shader is a program executed on a GPU that processes a single vertex or pixel with clearly defined inputs and outputs. To perform its work, it can also reach out to video memory to access additional resources, like buffers and textures. In the code shown above, the vertex shader VsMain uses a constant buffer bound to register b4, while the pixel shader PsMain samples a texture bound to register t0 using a sampler bound to register s0.

A root signature is a data structure that describes exactly that: what resources should be bound to the pipeline at individual shader stages. In this specific example, it is a constant buffer at register b4, a texture at t0, and a sampler at s0. It can also be shown in the form of a table:

| Root param index | Register | Shader stage |
|------------------|----------|--------------|
| 0                | b4       | VS           |
| 1                | t0       | PS           |
| 2                | s0       | PS           |
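
As an illustration of what such a root signature can look like in code, here is a minimal C++ sketch (my own example with my own function name and flag choices, not code from the article) that describes the three root parameters from the table and creates the root signature object:

#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

// Error handling omitted for brevity.
ComPtr<ID3D12RootSignature> CreateExampleRootSignature(ID3D12Device* device)
{
    D3D12_ROOT_PARAMETER1 params[3] = {};

    // Root param 0: constant buffer at register b4, visible to the vertex shader only.
    params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV;
    params[0].Descriptor.ShaderRegister = 4; // b4
    params[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_VERTEX;

    // Root param 1: descriptor table with one SRV range at register t0, pixel shader only.
    D3D12_DESCRIPTOR_RANGE1 srvRange = {};
    srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    srvRange.NumDescriptors = 1;
    srvRange.BaseShaderRegister = 0; // t0
    params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[1].DescriptorTable.NumDescriptorRanges = 1;
    params[1].DescriptorTable.pDescriptorRanges = &srvRange;
    params[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    // Root param 2: descriptor table with one sampler range at register s0, pixel shader only.
    D3D12_DESCRIPTOR_RANGE1 samplerRange = {};
    samplerRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER;
    samplerRange.NumDescriptors = 1;
    samplerRange.BaseShaderRegister = 0; // s0
    params[2].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[2].DescriptorTable.NumDescriptorRanges = 1;
    params[2].DescriptorTable.pDescriptorRanges = &samplerRange;
    params[2].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = {};
    desc.Version = D3D_ROOT_SIGNATURE_VERSION_1_1;
    desc.Desc_1_1.NumParameters = 3;
    desc.Desc_1_1.pParameters = params;
    desc.Desc_1_1.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeVersionedRootSignature(&desc, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSignature;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
        IID_PPV_ARGS(&rootSignature));
    return rootSignature;
}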

I am simplifying things here, because this article is not about teaching you the basics of root signatures. For more information about them, you can check:

To prepare for our experiments, let's compile the shaders shown above using commands:

dxc -T vs_6_0 -E VsMain -Fo Shader1.vs.bin Shader1.hlsl
dxc -T ps_6_0 -E PsMain -Fo Shader1.ps.bin Shader1.hlsl

Note that a single HLSL source file can contain multiple functions (here VsMain and PsMain). When we compile it, we need to specify one of them as the entry point. For example, the first command compiles the "Shader1.hlsl" file using the VsMain function as the entry point (the -E parameter), treated as a vertex shader in Shader Model 6.0 (the -T parameter). Similarly, the second command compiles the PsMain function as a pixel shader. The compiled shaders are saved in two separate files: "Shader1.vs.bin" and "Shader1.ps.bin".


Comments | #directx #rendering Share

# How to programmatically check graphics driver version

Sat, 16 Dec 2023

This article is for you if you are a graphics programmer who develops for Windows using Direct3D 11, 12, or Vulkan, and you want to programmatically fetch the version of the graphics driver currently installed on your user's system. If you are in a hurry, you can jump straight to the recommended solution in the section "DXGI way" below. However, because this topic is non-trivial, I invite you to read the entire article, where I explain it comprehensively.
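
As a quick preview, here is a minimal C++ sketch of a DXGI-based query (my own simplified code with my own function name, error handling mostly omitted; I believe this is the API the "DXGI way" section refers to, but treat it as a sketch): IDXGIAdapter::CheckInterfaceSupport reports the user-mode driver version of an adapter as a LARGE_INTEGER whose four WORDs form the familiar version number.

#include <windows.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

void PrintDriverVersions()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        LARGE_INTEGER umd = {};
        // Despite the parameter name, this call reports the driver version for the adapter.
        if (SUCCEEDED(adapter->CheckInterfaceSupport(__uuidof(IDXGIDevice), &umd)))
        {
            printf("%ls: driver version %u.%u.%u.%u\n", desc.Description,
                HIWORD(umd.HighPart), LOWORD(umd.HighPart),
                HIWORD(umd.LowPart), LOWORD(umd.LowPart));
        }
    }
}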


Comments | #rendering #directx #vulkan #windows #winapi Share

# Secrets of Direct3D 12: Do RTV and DSV descriptors make any sense?

Sun, 12 Nov 2023

This article is intended for programmers who use Direct3D 12. We will explore the topic of descriptors, especially Render Target View (RTV) and Depth Stencil View (DSV) descriptors. To understand the article, you should already know what they are and how to use them. For learning the basics, I recommend my earlier article “Direct3D 12: Long Way to Access Data”, where I described the resource binding model of D3D12. The current article is somewhat of a follow-up to that one. I also recommend checking the official “D3D12 Resource Binding Functional Spec”.

Descriptors in general

What is a “descriptor”? My personal definition would be that, generally in computing, a descriptor is a small data structure that points to some larger data and describes its parameters. While a “pointer”, “identifier”, or “key” is typically just a single number that points to or identifies the main object, a “descriptor” is typically a structure that also carries some parameters describing the object.

Descriptors in D3D12

Descriptors in D3D12 are also called “views”; the two terms mean the same thing. Functions like ID3D12Device::CreateShaderResourceView or CreateRenderTargetView set up a descriptor. Note this is different from Vulkan, where a “view” and a “descriptor” are different entities. The concept of a “view” is also present in relational databases. Just like in databases, a “view” points to the target data, but also specifies a way to look at it. In D3D12 it means, for example, that an SRV descriptor pointing to a texture can reinterpret its pixel format (e.g. with or without _SRGB) or limit access to only a selected range of mip levels or array slices.
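
As a small illustrative sketch (my own example; the function name and numbers are mine, not from the article), this is how an SRV could reinterpret a DXGI_FORMAT_R8G8B8A8_UNORM texture as _SRGB and expose only a limited range of mip levels:

#include <d3d12.h>

void CreateLimitedSrv(ID3D12Device* device, ID3D12Resource* texture,
                      D3D12_CPU_DESCRIPTOR_HANDLE destDescriptor)
{
    D3D12_SHADER_RESOURCE_VIEW_DESC desc = {};
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;      // reinterpret the pixel format
    desc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
    desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    desc.Texture2D.MostDetailedMip = 2;                 // skip the 2 largest mip levels
    desc.Texture2D.MipLevels = 3;                       // expose mips 2, 3, 4 only
    device->CreateShaderResourceView(texture, &desc, destDescriptor);
}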

Let’s talk about Constant Buffer View (CBV), Shader Resource View (SRV), and Unordered Access View (UAV) descriptors first. If created inside GPU-accessible descriptor heaps (class ID3D12DescriptorHeap, flag D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE), they can be bound to the graphics pipeline, as I described in detail in my previously mentioned article. Being part of GPU memory has some implications:


Comments | #directx #rendering Share

# Doing dynamic resolution scaling? Watch out for texture memory size!

Sun, 22 Oct 2023

This article is intended for graphics programmers, mostly those who use Direct3D 12 or Vulkan and implement dynamic resolution scaling. Before we go to the main topic, some introduction first…

Nowadays, more and more games offer some kind of resolution scaling. It means rendering the 3D scene in a resolution lower than the display resolution and then upscaling it using some advanced shader, often combined with temporal antialiasing and sharpening. It may be one of the solutions provided by GPU vendors (FSR from AMD, XeSS from Intel, DLSS from NVIDIA) or a custom solution (like TSR in Unreal Engine). It is an attractive option for gamers, offering a good FPS increase with only minor image quality degradation. It is becoming more important as monitor resolutions increase to 4K or even more, high-end graphics cards remain expensive, and advanced rendering techniques like ray tracing encourage favoring “better pixels” over “more pixels”. See also my old article: “Scaling is everywhere, pixel-perfect is the past”.

Dynamic resolution scaling is an extension of this idea that allows rendering each frame in a different resolution, lower or higher, as a trade-off between quality and performance, to maintain the desired framerate even in more complex scenes with many objects, characters, and particle effects visible on the screen. If you are interested in this technique, I strongly recommend checking a recent article by Martin Fuller from Microsoft: “Dynamic Resolution Scaling (DRS) Implementation Best Practice”, which provides many practical implementation tips.

One of the topics we need to handle when implementing dynamic resolution scaling is the creation and usage of textures that need a different resolution every frame, especially render-target, depth-stencil, and UAV textures used temporarily between render passes. One solution could be to create these textures in the maximum resolution and use only part of them when necessary, using a limited viewport. However, Martin gives multiple reasons why this option may cause problems. A simpler and safer solution is to create a separate texture for each possible resolution, with a certain step. In modern graphics APIs (Direct3D 12 and Vulkan) they can be placed in the same memory, which we call memory aliasing.

Here comes the main question I want to answer in this article: What size of memory heap should we use when allocating memory for these textures? Can we just take the maximum dimensions of a texture (e.g. 4K resolution: 3840 x 2160), call device->GetResourceAllocationInfo(), inspect the returned D3D12_RESOURCE_ALLOCATION_INFO::SizeInBytes, and use it as D3D12_HEAP_DESC::SizeInBytes? A texture with fewer pixels should always require less memory, right?

WRONG! Direct3D 12 doesn’t define such a requirement, and graphics drivers from some GPU vendors really do return a smaller required size for a texture with larger dimensions, for some specific dimensions and pixel formats. For example, on AMD Radeon RX 7900 XTX, a render target with format DXGI_FORMAT_R16G16B16A16_FLOAT returns:

Why does this happen? It is because textures are not necessarily stored in GPU memory the way we imagine them: pixel after pixel, in row-major order. They often use some optimization techniques like pixel swizzling or compression. By “compression”, I don’t mean texture formats like BC or ASTC, which we must use explicitly. I also don’t mean compression like in the ZIP file format or the zlib/deflate algorithm, which decreases data size. Quite the opposite: this kind of compression increases texture size by adding extra metadata, which allows the GPU to speed things up by saving memory bandwidth in certain cases. This is done mostly on render-target and depth-stencil textures. For more information about it, see my old article: “Texture Compression: What Can It Mean?”. I’m talking about meaning number 4 of the word “compression” from that article: compression formats that are internal, specific to certain graphics cards, and opaque to us, the programmers who just use the graphics API. The problem is that a specific compression format for a texture is selected by the driver based on various heuristics (like render target / depth-stencil / UAV / other flags, pixel format, and… dimensions). This is why a texture with larger dimensions may unexpectedly require less memory.

To research this problem in detail, I wrote a small testing program and performed tests on graphics cards from various vendors. It was a modification of my small Windows console app D3d12info that goes through the list of all DXGI_FORMAT enum values and calls CheckFeatureSupport to check which ones are supported as a render target or depth-stencil. For those that are, I called GetResourceAllocationInfo to get memory requirements for a texture with this pixel format and increasing dimensions, where height goes from 32 to 2160 with a step of 8, and width is calculated using a formula for a 16:9 aspect ratio: width = height * 16 / 9.
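
A hedged reconstruction of that measurement loop could look like the following C++ sketch (my own code, not the actual test program; the resource flags and the printed message are assumptions):

#include <d3d12.h>
#include <cstdio>
#include <cstdint>

void TestFormat(ID3D12Device* device, DXGI_FORMAT format)
{
    uint64_t prevSize = 0;
    for (UINT height = 32; height <= 2160; height += 8)
    {
        const UINT width = height * 16 / 9;

        D3D12_RESOURCE_DESC desc = {};
        desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        desc.Width = width;
        desc.Height = height;
        desc.DepthOrArraySize = 1;
        desc.MipLevels = 1;
        desc.Format = format;
        desc.SampleDesc.Count = 1;
        desc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
        desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

        const D3D12_RESOURCE_ALLOCATION_INFO info =
            device->GetResourceAllocationInfo(0, 1, &desc);

        // Report a non-monotonic step: a bigger texture that needs less memory.
        if (info.SizeInBytes < prevSize)
            printf("Format %u: size dropped at %u x %u (%llu < %llu bytes)\n",
                (unsigned)format, width, height,
                (unsigned long long)info.SizeInBytes,
                (unsigned long long)prevSize);
        prevSize = info.SizeInBytes;
    }
}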

Here are the results. Please remember these are just 3 specific graphics cards. The results may be different on a different GPU and even with a different version of the graphics driver.

On NVIDIA GeForce RTX 3080 with driver 545.84, I found no cases where a texture with larger dimensions requires less memory, so NVIDIA (or at least this specific card) is not affected by the problem described in this article.

On AMD Radeon RX 7900 XTX with driver 23.9.3, I found the following data points where memory requirements are non-monotonic, one for each of the following formats:

On Intel Arc A770 with driver 31.0.101.4887, almost every format used as a render target (but none of the depth-stencil formats) has multiple steps where the size decreases, and it has them at larger dimensions than AMD. For example, the most “traditional” one, DXGI_FORMAT_R8G8B8A8_UNORM, returns:

What to do with this knowledge? The conclusion is that if we implement dynamic resolution scaling and we want to create textures with different dimensions aliasing in memory, the required size of this memory is not necessarily the size of the largest texture in terms of dimensions. To be safe, we should query for the memory requirements of all texture sizes we may want to use and calculate their maximum. In practice, it should be enough to query resolutions starting from e.g. 75% of the maximum. Because the tested GPUs always had only a single step down, an even more efficient, but not fully future-proof, solution could be to start from the full resolution, go down until we find a different memory size (no matter if higher or lower), and take the maximum of these two.
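
Here is a short C++ sketch of that recommendation (the helper names, format, heap flags, and resolution step are my own assumptions, not the article's code): size the aliased heap by the maximum allocation size over all candidate resolutions.

#include <d3d12.h>
#include <algorithm>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

static D3D12_RESOURCE_DESC MakeRtDesc(UINT width, UINT height)
{
    D3D12_RESOURCE_DESC d = {};
    d.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    d.Width = width;
    d.Height = height;
    d.DepthOrArraySize = 1;
    d.MipLevels = 1;
    d.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
    d.SampleDesc.Count = 1;
    d.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;
    return d;
}

ComPtr<ID3D12Heap> CreateDrsHeap(ID3D12Device* device, const std::vector<UINT>& heights)
{
    UINT64 heapSize = 0, alignment = 0;
    for (UINT height : heights) // e.g. every candidate height from 75% to 100% of the display resolution
    {
        const D3D12_RESOURCE_DESC desc = MakeRtDesc(height * 16 / 9, height);
        const D3D12_RESOURCE_ALLOCATION_INFO info = device->GetResourceAllocationInfo(0, 1, &desc);
        heapSize = std::max(heapSize, info.SizeInBytes);   // don't assume the largest texture needs the most memory
        alignment = std::max(alignment, info.Alignment);
    }

    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = heapSize;
    heapDesc.Alignment = alignment;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;

    ComPtr<ID3D12Heap> heap;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));
    return heap; // per-resolution textures then alias via CreatePlacedResource(heap.Get(), 0, ...)
}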

So far, I have focused only on DirectX 12. Is Vulkan also affected by this problem? In the past, it could be. Vulkan has a similar concept of querying for the memory requirements of a texture, using the function vkGetImageMemoryRequirements. It used to have an even bigger problem. To understand it, we must recall that in D3D12 we query for memory requirements (size and alignment) given the structure D3D12_RESOURCE_DESC, which describes the parameters of a texture to be created. In the initial Vulkan API, on the other hand, we need to first create the actual VkImage object and then query for its memory requirements. The question is: given two textures created with exactly the same parameters (width, height, pixel format, number of mip levels, flags, etc.), do they always return the same memory requirements?

In the past, it wasn’t required by the Vulkan specification, and I saw drivers for some GPUs that really returned different sizes for two identical textures! It could cause problems, e.g. when defragmenting video memory in the Vulkan Memory Allocator library. Was it a bug, or another internal optimization done by the driver, e.g. to avoid some memory bank conflicts? I don’t know. The good news is that since then, the Vulkan specification was clarified to require that functions like vkGetImageMemoryRequirements always return the same size and alignment for images created with the same parameters, and new drivers comply with that, so the problem is gone now. Vulkan 1.3 also got a new function, vkGetDeviceImageMemoryRequirements, that takes a VkImageCreateInfo with image creation parameters instead of an already created image object, just like D3D12 has done from the beginning.
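
For completeness, here is a short sketch of that newer query (my own wrapper name; assuming Vulkan 1.3 headers and a device where the core entry point is available):

#include <vulkan/vulkan.h>

VkMemoryRequirements QueryImageMemReq(VkDevice device, const VkImageCreateInfo* imageInfo)
{
    // Describe the image we would create, without actually creating a VkImage.
    VkDeviceImageMemoryRequirements query = {};
    query.sType = VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS;
    query.pCreateInfo = imageInfo;

    VkMemoryRequirements2 memReq = {};
    memReq.sType = VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2;
    vkGetDeviceImageMemoryRequirements(device, &query, &memReq);
    return memReq.memoryRequirements; // size, alignment, memoryTypeBits
}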

Going back to the main question of this article: when the VK_KHR_maintenance4 extension is enabled (it has been promoted to core Vulkan 1.3), the problem does not occur, as the Vulkan specification says: "For a VkImage, the size memory requirement is never greater than that of another VkImage created with a greater or equal value in each of extent.width, extent.height, and extent.depth; all other creation parameters being identical.", and the same holds for buffers.

Big thanks to my friends: Bartek Boczula for discussions about this topic and inspiration to write this article, as well as Szymon Nowacki for testing on the Intel card! Also thanks to Constantine Shablia from Collabora for pointing me to the answer on Vulkan.

Comments | #rendering #gpu #vulkan #directx Share

# ShaderCrashingAssert - a New Small Library

Sun, 20 Aug 2023

Last Thursday (August 17th), AMD released a new tool for post-mortem analysis of GPU crashes: Radeon GPU Detective. I participated in this project, but because this is my personal blog and it is the weekend now, I am wearing my hobby-developer hat and want to present a small library that I developed yesterday:

ShaderCrashingAssert provides an assert-like macro for HLSL shaders that triggers a GPU memory page fault. Together with RGD, it can help with shader debugging.

Comments | #rendering #directx #productions #libraries #gpu #tools Share

# D3d12info - Printing D3D12 GPU Information to Console

Wed, 27 Jul 2022

My next little hobby project is D3d12info. It is a Windows console program that prints all the information it can get about the current GPU installed in the system, as seen through the Direct3D 12 API. It also fetches additional information through AMD GPU Services (on AMD cards), NVAPI (on NVIDIA cards), Vulkan, and WinAPI, mostly to identify the current version of the graphics driver and the Windows system. I will try to keep it updated to the latest Agility SDK, so it can query for support for the latest hardware features of the graphics card.

I share it under the open-source MIT license. You can see the full source code in the GitHub repository and download a compiled binary from the Releases tab.

In terms of the information extracted from DX12, the tool can be compared to the DirectX Caps Viewer, which you can find in your Windows SDK installation under the path "c:\Program Files (x86)\Windows Kits\10\bin\*\x64\dxcapsviewer.exe". However, instead of a GUI, it provides a command-line interface, which makes it similar to the "vulkaninfo" tool. Information is printed in a human-readable text format by default, but JSON format can be selected by providing the -j parameter, making it suitable for automated processing. Additional command-line parameters are supported, including a choice of the GPU if there are many installed in the system. Launch it with the parameter -h to see the command-line syntax.

In the future, I would like to extend it with a web back-end that would gather a database of various GPUs and driver versions, like the Vulkan Hardware Database does for Vulkan, and make it browsable online. As far as I know, there is no such database for D3D12 at the moment. The best we have right now are the tables about Direct3D feature levels on Wikipedia. But that will require a lot of learning on my part, as I am not a good web developer, so I will think about it after my vacation :)

Comments | #productions #tools #directx #gpu Share

# Vulkan Memory Allocator 3.0.0 and D3D12 Memory Allocator 2.0.0

Sat, 26 Mar 2022

Yesterday we released new major versions of our libraries: Vulkan Memory Allocator 3.0.0 and D3D12 Memory Allocator 2.0.0. If you are coding with Vulkan or Direct3D 12, I recommend taking a look at them. Because developing them is part of my job, I won't describe them in detail here, but will just refer to my article published on GPUOpen.com: "Announcing Vulkan Memory Allocator 3.0.0 and Direct3D 12 Memory Allocator 2.0.0". Direct links:

Vulkan Memory Allocator

D3D12 Memory Allocator

Comments | #rendering #directx #vulkan #gpu #libraries #productions Share

# Untangling Direct3D 12 Memory Heap Types and Pools

Sat, 26 Feb 2022

Those of you who follow my blog may say that I am boring, but I can't help it: somehow GPU memory allocation became my thing, rather than shaders and effects, like for most graphics programmers. Some time ago I wrote an article, "Vulkan Memory Types on PC and How to Use Them", explaining what memory heaps and types are available on various kinds of PC GPUs, as visible through the Vulkan API. This article is, in a way, its Direct3D 12 equivalent.

In how it expresses memory types as they exist in hardware, D3D12 differs greatly from Vulkan. Vulkan defines a 2-level hierarchy of memory "heaps" and "types". A heap represents a physical piece of memory of a certain size, while a type is a "view" of a specific heap with certain properties, like cached versus uncached. This gives great flexibility in how different GPUs can express their memory, which makes it hard for the developer to ensure they select the optimal one on any kind of GPU. Direct3D 12, on the other hand, offers a fixed set of memory types. When creating a buffer or a texture, it usually means selecting one of the 3 standard "heap types": D3D12_HEAP_TYPE_DEFAULT (GPU-local memory), D3D12_HEAP_TYPE_UPLOAD (CPU-writable memory for sending data to the GPU), and D3D12_HEAP_TYPE_READBACK (CPU-readable memory for downloading data from the GPU).

So far, so good... D3D12 seems to simplify things compared to Vulkan. You can stop here and still develop a decent graphics program. But if you make a game with an open world and want to stream your content at runtime, so you need to check what memory budget is available to your app, or if you want to take advantage of integrated graphics where memory is unified, you will find out that things are not that simple in this API. There are 4 different ways in which D3D12 names various memory types, and they are not so obvious when we compare systems with discrete versus integrated graphics. The goal of this article is to explain and untangle all this complexity.
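
For example, checking the memory budget mentioned above goes through DXGI rather than D3D12 itself. Here is a minimal sketch (my own code with my own function name, error handling omitted) using IDXGIAdapter3::QueryVideoMemoryInfo to read the budget and current usage of the local (video) and non-local (system) memory segment groups:

#include <windows.h>
#include <dxgi1_4.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

void PrintMemoryBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, nonLocal = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

    printf("Local:     budget %llu MB, current usage %llu MB\n",
        (unsigned long long)(local.Budget >> 20),
        (unsigned long long)(local.CurrentUsage >> 20));
    printf("Non-local: budget %llu MB, current usage %llu MB\n",
        (unsigned long long)(nonLocal.Budget >> 20),
        (unsigned long long)(nonLocal.CurrentUsage >> 20));
}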


Comments | #directx Share

