Sun 22 Oct 2023
This article is intended for graphics programmers, mostly those who use Direct3D 12 or Vulkan and implement dynamic resolution scaling. Before we go to the main topic, some introduction first…
Nowadays, more and more games offer some kind of resolution scaling. It means rendering the 3D scene in a resolution lower than the display resolution and then upscaling it using some advanced shader, often combined with temporal antialiasing and sharpening. It may be one of the solutions provided by GPU vendors (FSR from AMD, XeSS from Intel, DLSS from NVIDIA) or a custom solution (like TSR in Unreal Engine). It is an attractive option for gamers, offering a good FPS increase with only minor image quality degradation. It is becoming more important as monitor resolutions increase to 4K or even more, high-end graphics cards remain expensive, and advanced rendering techniques like ray tracing encourage favoring “better pixels” over “more pixels”. See also my old article: “Scaling is everywhere, pixel-perfect is the past”.
Dynamic resolution scaling is an extension of this idea that allows rendering each frame in a different resolution, lower or higher, as a trade-off between quality and performance, to maintain the desired framerate even in more complex scenes with many objects, characters, and particle effects visible on the screen. If you are interested in this technique, I strongly recommend checking a recent article from Martin Fuller from Microsoft: “Dynamic Resolution Scaling (DRS) Implementation Best Practice”, which provides many practical implementation tips.
One of the topics we need to handle when implementing dynamic resolution scaling is the creation and usage of textures that need a different resolution every frame, especially render-target, depth-stencil, and UAV textures used temporarily between render passes. One solution could be to create these textures in the maximum resolution and use only a part of them when needed by setting a limited viewport. However, Martin gives multiple reasons why this option may cause problems. A simpler and safer solution is to create a separate texture for each possible resolution, with a certain step. In modern graphics APIs (Direct3D 12 and Vulkan) they can be placed in the same memory, which we call memory aliasing.
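For illustration, here is a minimal sketch of how two render targets of different resolutions could alias in the same heap using placed resources in D3D12. This is not code from my engine: names like device and heapSize are assumed to exist already, error handling is omitted, and aliasing barriers are still required when switching between the textures.

    // Minimal sketch. Assumptions: 'device' is a valid ID3D12Device*, 'heapSize' is
    // already known to be large enough - how large it must be is the topic of this article.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = heapSize;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Alignment = 0; // default 64 KB alignment for non-MSAA textures
    heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;
    ID3D12Heap* heap = nullptr;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    auto MakeRTDesc = [](UINT width, UINT height) {
        D3D12_RESOURCE_DESC desc = {};
        desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        desc.Width = width;
        desc.Height = height;
        desc.DepthOrArraySize = 1;
        desc.MipLevels = 1;
        desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
        desc.SampleDesc.Count = 1;
        desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;
        return desc;
    };

    // Both textures are placed at offset 0 of the same heap, so they alias:
    // only one of them can hold valid contents at any given time.
    D3D12_RESOURCE_DESC descFull = MakeRTDesc(3840, 2160); // 100% resolution
    D3D12_RESOURCE_DESC desc75   = MakeRTDesc(2880, 1620); // 75% resolution
    ID3D12Resource *texFull = nullptr, *tex75 = nullptr;
    device->CreatePlacedResource(heap, 0, &descFull,
        D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr, IID_PPV_ARGS(&texFull));
    device->CreatePlacedResource(heap, 0, &desc75,
        D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr, IID_PPV_ARGS(&tex75));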
Here comes the main question I want to answer in this article: What size of the memory heap should we use when allocating memory for these textures? Can we just take the maximum dimensions of a texture (e.g. 4K resolution: 3840 x 2160), call device->GetResourceAllocationInfo(), inspect the returned D3D12_RESOURCE_ALLOCATION_INFO::SizeInBytes, and use it as D3D12_HEAP_DESC::SizeInBytes? A texture with fewer pixels should always require less memory, right?
WRONG! Direct3D 12 doesn’t define such a requirement, and graphics drivers from some GPU vendors really do return a smaller required size for a texture with larger dimensions, for some specific dimensions and pixel formats. For example, on AMD Radeon RX 7900 XTX, a render target with format DXGI_FORMAT_R16G16B16A16_FLOAT requires 458,752 bytes at 256 x 144, but only 393,216 bytes at the larger 270 x 152 (full results below).
Why does this happen? It is because textures are not necessarily stored in GPU memory the way we imagine them: pixel after pixel, in row-major order. They often use optimization techniques like pixel swizzling or compression. By “compression”, I don’t mean texture formats like BC or ASTC, which we must use explicitly. I also don’t mean compression like in the ZIP file format or the zlib/deflate algorithm that decreases data size. Quite the opposite: this kind of compression increases texture size by adding extra metadata, which allows speeding things up by saving memory bandwidth in certain cases. This is done mostly on render-target and depth-stencil textures. For more information about it, see my old article: “Texture Compression: What Can It Mean?”. I’m talking about meaning number 4 of the word “compression” from that article – compression formats that are internal, specific to certain graphics cards, and opaque to us – programmers who just use the graphics API. The problem is that the specific compression format for a texture is selected by the driver based on various heuristics (like render target / depth-stencil / UAV / other flags, pixel format, and… dimensions). This is why a texture with larger dimensions may unexpectedly require less memory.
To research this problem in detail, I wrote a small testing program and ran it on graphics cards from various vendors. It was a modification of my small Windows console app D3d12info that goes through the list of all DXGI_FORMAT enum values and calls CheckFeatureSupport to check which ones are supported as a render target or depth-stencil. For those that are, it calls GetResourceAllocationInfo to get memory requirements for a texture with this pixel format, with increasing dimensions, where height goes from 32 to 2160 with a step of 8 and width is calculated using a formula for 16:9 aspect ratio: width = height * 16 / 9.
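A rough sketch of that measurement loop might look like the code below. It is not the exact code of the modified D3d12info – only the render-target path is shown, device is assumed to be a valid ID3D12Device*, and the upper bound of the format loop is arbitrary.

    // For every DXGI_FORMAT usable as a render target, query the allocation size
    // of a 2D texture at increasing 16:9 resolutions and report non-monotonic steps.
    for (UINT f = 1; f < 200; ++f) // iterate over DXGI_FORMAT values
    {
        const DXGI_FORMAT format = (DXGI_FORMAT)f;
        D3D12_FEATURE_DATA_FORMAT_SUPPORT support = { format };
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_SUPPORT,
                &support, sizeof(support))) ||
            !(support.Support1 & D3D12_FORMAT_SUPPORT1_RENDER_TARGET))
            continue;

        UINT64 prevSize = 0;
        for (UINT height = 32; height <= 2160; height += 8)
        {
            const UINT width = height * 16 / 9;
            D3D12_RESOURCE_DESC desc = {};
            desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
            desc.Width = width;
            desc.Height = height;
            desc.DepthOrArraySize = 1;
            desc.MipLevels = 1;
            desc.Format = format;
            desc.SampleDesc.Count = 1;
            desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

            const D3D12_RESOURCE_ALLOCATION_INFO info =
                device->GetResourceAllocationInfo(0, 1, &desc);
            if (prevSize != 0 && info.SizeInBytes < prevSize)
                printf("Format %u: %u x %u needs less memory than the previous, smaller size!\n",
                    f, width, height);
            prevSize = info.SizeInBytes;
        }
    }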
Here are the results. Please remember these are just 3 specific graphics cards. The results may be different on a different GPU and even with a different version of the graphics driver.
On NVIDIA GeForce RTX 3080 with driver 545.84, I found no cases where a texture with larger dimensions requires less memory, so NVIDIA (or at least this specific card) is not affected by the problem described in this article.
On AMD Radeon RX 7900 XTX with driver 23.9.3, I found the following data points where memory requirements are non-monotonic – one for each of the following formats:

- DXGI_FORMAT_R16G16B16A16_FLOAT/UNORM/UINT/SNORM/SINT: 256x144 = 458,752 B, 270x152 = 393,216 B
- DXGI_FORMAT_R32G32_FLOAT/UINT/SINT: 256x144 = 458,752 B, 270x152 = 393,216 B
- DXGI_FORMAT_R8G8_UNORM/UINT/SNORM/SINT: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_R16_FLOAT/UNORM/UINT/SNORM/SINT: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_R8_UNORM/UINT/SNORM/SINT: 256x144 = 131,072 B, 270x152 = 65,536 B
- DXGI_FORMAT_A8_UNORM: 256x144 = 131,072 B, 270x152 = 65,536 B
- DXGI_FORMAT_B5G6R5_UNORM: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_B5G5R5A1_UNORM: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_B4G4R4A4_UNORM: 512x288 = 458,752 B, 526x296 = 393,216 B

On Intel Arc A770 with driver 31.0.101.4887, almost every format usable as a render target (but none of the depth-stencil formats) has multiple steps where the size decreases, and they occur at larger dimensions than on AMD. Even the most “traditional” one – DXGI_FORMAT_R8G8B8A8_UNORM – is affected.
What to do with this knowledge? The conclusion is that if we implement dynamic resolution scaling and we want to create textures with different dimensions aliasing in memory, the required size of that memory is not necessarily the size of the texture with the largest dimensions. To be safe, we should query the memory requirements of all texture sizes we may want to use and take their maximum. In practice, it should be enough to query resolutions starting from e.g. 75% of the maximum. Because the tested GPUs always showed only a single step down, an even more efficient, but not fully future-proof, solution could be to start from the full resolution, go down until we find a different memory size (no matter if higher or lower), and take the maximum of these two.
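In code, the “query everything and take the maximum” variant could look roughly like the sketch below. It is only an illustration: the function name and parameters are my own, not something the API prescribes.

    // Sketch: compute the heap size needed for a render target that may be created
    // at any height from minHeight up to fullHeight (with the given step), by taking
    // the maximum of the driver-reported allocation sizes. 'device' is assumed valid.
    UINT64 CalcAliasedHeapSize(ID3D12Device* device, DXGI_FORMAT format,
        UINT fullWidth, UINT fullHeight, UINT minHeight, UINT heightStep)
    {
        UINT64 maxSize = 0;
        for (UINT height = minHeight; ; height += heightStep)
        {
            if (height > fullHeight)
                height = fullHeight; // make sure the full resolution itself is included
            const UINT width = (UINT)((UINT64)fullWidth * height / fullHeight);

            D3D12_RESOURCE_DESC desc = {};
            desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
            desc.Width = width;
            desc.Height = height;
            desc.DepthOrArraySize = 1;
            desc.MipLevels = 1;
            desc.Format = format;
            desc.SampleDesc.Count = 1;
            desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

            const D3D12_RESOURCE_ALLOCATION_INFO info =
                device->GetResourceAllocationInfo(0, 1, &desc);
            if (info.SizeInBytes > maxSize)
                maxSize = info.SizeInBytes;

            if (height == fullHeight)
                break;
        }
        return maxSize;
    }

The returned value can then be used as D3D12_HEAP_DESC::SizeInBytes for the aliasing heap, e.g. with minHeight set to 75% of the full height, as suggested above.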
So far, I have focused only on DirectX 12. Is Vulkan also affected by this problem? In the past, it could be. Vulkan has a similar concept of querying the memory requirements of a texture using the function vkGetImageMemoryRequirements. It used to have an even bigger problem. To understand it, we must recall that in D3D12, we query for memory requirements (size and alignment) given a D3D12_RESOURCE_DESC structure that describes the parameters of a texture to be created. In (the initial) Vulkan API, on the other hand, we need to first create the actual VkImage object and then query for its memory requirements. The question is: Given two textures created with exactly the same parameters (width, height, pixel format, number of mip levels, flags, etc.), do they always return the same memory requirements?
In the past, this wasn’t required by the Vulkan specification, and I saw drivers for some GPUs that really did return different sizes for two identical textures! It could cause problems, e.g. when defragmenting video memory in the Vulkan Memory Allocator library. Was it a bug, or another internal optimization done by the driver, e.g. to avoid some memory bank conflicts? I don’t know. The good news is that since then, the Vulkan specification has been clarified to require that functions like vkGetImageMemoryRequirements always return the same size and alignment for images created with the same parameters, and new drivers comply with that, so this problem is gone now. Vulkan 1.3 also got a new function vkGetDeviceImageMemoryRequirements that takes a VkImageCreateInfo with image creation parameters instead of an already created image object, just like D3D12 has done from the beginning.
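For completeness, here is a small sketch of that newer query path, assuming a valid VkDevice named device and Vulkan 1.3 (with VK_KHR_maintenance4 only, the KHR-suffixed structure and function names are used instead):

    // Sketch: query memory requirements from image creation parameters alone,
    // without creating a VkImage first.
    VkImageCreateInfo imageInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
    imageInfo.imageType = VK_IMAGE_TYPE_2D;
    imageInfo.format = VK_FORMAT_R16G16B16A16_SFLOAT;
    imageInfo.extent = { 3840, 2160, 1 };
    imageInfo.mipLevels = 1;
    imageInfo.arrayLayers = 1;
    imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
    imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkDeviceImageMemoryRequirements queryInfo = {
        VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS };
    queryInfo.pCreateInfo = &imageInfo;

    VkMemoryRequirements2 memReq = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2 };
    vkGetDeviceImageMemoryRequirements(device, &queryInfo, &memReq);
    // memReq.memoryRequirements.size and .alignment are what we are interested in.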
Going back to the main question of this article: when the VK_KHR_maintenance4 extension is enabled (it has been promoted to core Vulkan 1.3), the problem does not occur, as the Vulkan specification says: "For a VkImage, the size memory requirement is never greater than that of another VkImage created with a greater or equal value in each of extent.width, extent.height, and extent.depth; all other creation parameters being identical." The same guarantee applies to buffers.
Big thanks to my friends: Bartek Boczula for discussions about this topic and inspiration to write this article, as well as Szymon Nowacki for testing on the Intel card! Also thanks to Constantine Shablia from Collabora for pointing me to the answer on Vulkan.