Entries for tag "gpu", ordered from most recent. Entry count: 29.
# State of GPU Hardware (End of Year 2025) - New Article
Mon, 29 Dec 2025
I published a guest article by Dmytro “Boolka” Bulatov, providing an overview of the current GPU market in the context of which features are supported on end users' machines:
» "State of GPU Hardware (End of Year 2025)" «
#gpu #hardware #directx
# Debugging AMD-Specific Issues with Driver Experiments Tool
Wed, 30 Jul 2025
If you’re programming graphics using modern APIs like DirectX 12 or Vulkan and you're working with an AMD GPU, you may already be familiar with the Radeon Developer Tool Suite. In this article, I’d like to highlight one of the tools it includes - Driver Experiments - and specifically focus on two experiments that can help you debug AMD-specific issues in your application, such as visual glitches.

Not an actual screenshot from a game, just an illustration.
#rendering #gpu #amd
# Fixing Godot 4.3 Hang on ASUS TUF Gaming Laptop
Wed, 26 Mar 2025
In January 2025, I participated in PolyJam - a Global Game Jam site in Warsaw, Poland. I shared my experiences in a blog post: Global Game Jam 2025 and First Impressions from Godot. This post focuses on a specific issue I encountered during the jam: Godot 4.3 frequently hanging on my ASUS TUF Gaming laptop. If you're in a hurry, you can SCROLL DOWN to skip straight to the solution that worked for me.
The laptop I used was an ASUS TUF Gaming FX505DY. Interestingly, it has two different AMD GPUs onboard - an integrated Radeon Vega 8 and a discrete Radeon RX 560X - a detail that becomes important later.
The game we developed wasn’t particularly complex or demanding - it was a 2D pixel art project. Yet, the Godot editor kept freezing frequently, even without running the game. The hangs occurred at random moments, often while simply navigating the editor UI. Each time, I had to force-close and restart the process. I was using Godot 4.3 Stable at the time.
I needed a quick solution. My first step was verifying that both Godot 4.3 and my AMD graphics drivers were up to date (they were). Then, I launched Godot via "Godot_v4.3-stable_win64_console.exe", which displays a console window with debug logs alongside the editor. That’s when I noticed an error message appearing every time the hang occurred:
ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
This suggested the issue might be GPU-related, specifically involving the Vulkan API. However, I wasn’t entirely sure - the same error message occasionally appeared even when the engine wasn’t hanging, so it wasn’t a definitive indicator.
To investigate further, I decided to enable the Vulkan validation layer, hoping it would reveal more detailed error messages about what the engine was doing wrong. Since I had the Vulkan SDK installed on my system, I launched the Vulkan Configurator app that comes with it ("Bin\vkconfig.exe"), set Vulkan Layers Management = Layers Controlled by the Vulkan Configurator, and enabled Validation.
Unfortunately, when I launched Godot again, no new error messages appeared in the console. (Looking back, I’m not even sure if that console window actually captured the process’s standard output.) For a brief moment, I thought enabling the Vulkan validation layer had fixed the hangs - but they soon returned. Maybe they were less frequent, or perhaps it was just wishful thinking.
Next, I considered forcing Godot to use the integrated GPU (Radeon Vega 8) instead of the more powerful discrete GPU (RX 560X). To test this, I adjusted Windows power settings to prioritize power saving over maximum performance. However, this didn’t work - Godot still reported using the Radeon RX 560X.
THE SOLUTION: What finally worked was forcing Godot to use the integrated GPU by launching it with a specific command-line parameter. Instead of running the editor normally, I used:
Godot_v4.3-stable_win64_console.exe --verbose --gpu-index 1
This made Godot use the second GPU (index 1) - the slower Radeon Vega 8 - instead of the default RX 560X. The result? No more hangs. While the integrated GPU is less powerful, it was more than enough for our 2D pixel art game.
I am not sure why it helped, considering that both GPUs on my laptop are from AMD and are supported by the same driver. I also didn't check whether Godot 4.4, which has been released since then, fixes this bug. I am just leaving this story here, in case someone stumbles upon the same problem in the future.
# Doing dynamic resolution scaling? Watch out for texture memory size!
Sun, 22 Oct 2023
This article is intended for graphics programmers, mostly those who use Direct3D 12 or Vulkan and implement dynamic resolution scaling. Before we go to the main topic, some introduction first…
Nowadays, more and more games offer some kind of resolution scaling. It means rendering the 3D scene in a resolution lower than the display resolution and then upscaling it using some advanced shader, often combined with temporal antialiasing and sharpening. It may be one of the solutions provided by GPU vendors (FSR from AMD, XeSS from Intel, DLSS from NVIDIA) or a custom solution (like TSR in Unreal Engine). It is an attractive option for gamers, offering a good FPS increase with only minor image quality degradation. It is becoming more important as monitor resolutions increase to 4K or even more, high-end graphics cards remain expensive, and advanced rendering techniques like ray tracing encourage favoring “better pixels” over “more pixels”. See also my old article: “Scaling is everywhere, pixel-perfect is the past”.
Dynamic resolution scaling is an extension of this idea that allows rendering each frame in a different resolution, lower or higher, as a trade-off between quality and performance, to maintain the desired framerate even in more complex scenes with many objects, characters, and particle effects visible on the screen. If you are interested in this technique, I strongly recommend checking out a recent article by Martin Fuller from Microsoft: “Dynamic Resolution Scaling (DRS) Implementation Best Practice”, which provides many practical implementation tips.
One of the topics we need to handle when implementing dynamic resolution scaling is the creation and usage of textures that need a different resolution every frame, especially render targets, depth-stencil textures, and UAVs used temporarily between render passes. One solution could be to create these textures at the maximum resolution and use only a part of them when necessary, by limiting the viewport. However, Martin gives multiple reasons why this option may cause problems. A simpler and safer solution is to create a separate texture for each possible resolution, with a certain step. In modern graphics APIs (Direct3D 12 and Vulkan), they can be placed in the same memory, which we call memory aliasing.
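As a rough illustration of such aliasing, here is a minimal C++ sketch. The heap size, format, and resolutions are example assumptions, not code from the article; error handling and the aliasing barriers needed when switching between the textures are omitted.

```cpp
#include <d3d12.h>

// Sketch: two render-target textures of different resolutions placed at the same
// offset of one heap, so they alias in memory. heapSize must be large enough for
// either texture (see the rest of the article). Error handling omitted.
void CreateAliasedTextures(ID3D12Device* device, UINT64 heapSize)
{
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = heapSize;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
    heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES;
    ID3D12Heap* heap = nullptr;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    D3D12_RESOURCE_DESC texDesc = {};
    texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    texDesc.Width = 3840;
    texDesc.Height = 2160;
    texDesc.DepthOrArraySize = 1;
    texDesc.MipLevels = 1;
    texDesc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
    texDesc.SampleDesc.Count = 1;
    texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
    texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

    // Full-resolution texture placed at offset 0.
    ID3D12Resource* texFullRes = nullptr;
    device->CreatePlacedResource(heap, 0, &texDesc,
        D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr, IID_PPV_ARGS(&texFullRes));

    // A lower-resolution texture aliasing the same memory, also placed at offset 0.
    texDesc.Width = 2880;
    texDesc.Height = 1620;
    ID3D12Resource* texLowerRes = nullptr;
    device->CreatePlacedResource(heap, 0, &texDesc,
        D3D12_RESOURCE_STATE_RENDER_TARGET, nullptr, IID_PPV_ARGS(&texLowerRes));

    // Only one of the aliasing textures may be used at a time; an aliasing barrier
    // must be issued when switching between them.
}
```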
Here comes the main question I want to answer in this article: What size of memory heap should we use when allocating memory for these textures? Can we just take the maximum dimensions of a texture (e.g. 4K resolution: 3840 x 2160), call device->GetResourceAllocationInfo(), inspect the returned D3D12_RESOURCE_ALLOCATION_INFO::SizeInBytes, and use it as D3D12_HEAP_DESC::SizeInBytes? A texture with fewer pixels should always require less memory, right?
WRONG! Direct3D 12 doesn’t define such a requirement, and graphics drivers from some GPU vendors really do return a smaller required size for a texture with larger dimensions, for some specific dimensions and pixel formats. For example, on AMD Radeon RX 7900 XTX, a render target with format DXGI_FORMAT_R16G16B16A16_FLOAT requires 458,752 bytes at 256x144, but only 393,216 bytes at 270x152.
Why does this happen? It is because textures are not necessarily stored in GPU memory in the way we imagine them: pixel after pixel, in row-major order. They often use optimization techniques like pixel swizzling or compression. By “compression”, I don’t mean texture formats like BC or ASTC, which we must use explicitly. I also don’t mean compression like in the ZIP file format or the zlib/deflate algorithm that decreases data size. Quite the opposite: this kind of compression increases texture size by adding extra metadata, which allows speeding things up by saving memory bandwidth in certain cases. This is done mostly on render target and depth-stencil textures. For more information about it, see my old article: “Texture Compression: What Can It Mean?”. I’m talking about meaning number 4 of the word “compression” from that article – compression formats that are internal, specific to certain graphics cards, and opaque to us – programmers who just use the graphics API. The problem is that the specific compression format for a texture is selected by the driver based on various heuristics (like render target / depth-stencil / UAV / other flags, pixel format, and… dimensions). This is why a texture with larger dimensions may unexpectedly require less memory.
To research this problem in detail, I wrote a small testing program and performed tests on graphics cards from various vendors. It was a modification of my small Windows console app D3d12info that goes through the list of all DXGI_FORMAT enum values and calls CheckFeatureSupport to check which ones are supported as a render target or depth-stencil. For those that are, I called GetResourceAllocationInfo to get the memory requirements for a texture with this pixel format, with increasing dimensions, where height goes from 32 to 2160 with a step of 8, and width is calculated using a formula for 16:9 aspect ratio: width = height * 16 / 9.
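The sweep can be sketched roughly like this. This is a simplified reconstruction, not the actual code of D3d12info; `device` is assumed to be a valid `ID3D12Device*`, and the format range only covers the classic formats.

```cpp
#include <d3d12.h>
#include <cstdio>

// Simplified sketch: for every DXGI_FORMAT usable as a render target, query the
// allocation size of a 2D texture at increasing 16:9 resolutions and report cases
// where a larger texture needs *less* memory than the previous step.
void TestFormatSizes(ID3D12Device* device)
{
    for (UINT fmt = 1; fmt <= (UINT)DXGI_FORMAT_B4G4R4A4_UNORM; ++fmt)
    {
        D3D12_FEATURE_DATA_FORMAT_SUPPORT support = { (DXGI_FORMAT)fmt };
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_SUPPORT,
                &support, sizeof(support))) ||
            !(support.Support1 & D3D12_FORMAT_SUPPORT1_RENDER_TARGET))
            continue;

        UINT64 prevSize = 0;
        for (UINT height = 32; height <= 2160; height += 8)
        {
            const UINT width = height * 16 / 9;

            D3D12_RESOURCE_DESC desc = {};
            desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
            desc.Width = width;
            desc.Height = height;
            desc.DepthOrArraySize = 1;
            desc.MipLevels = 1;
            desc.Format = (DXGI_FORMAT)fmt;
            desc.SampleDesc.Count = 1;
            desc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
            desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

            const D3D12_RESOURCE_ALLOCATION_INFO info =
                device->GetResourceAllocationInfo(0, 1, &desc);
            if (prevSize != 0 && info.SizeInBytes < prevSize)
                printf("Format %u: %ux%u = %llu B, smaller than previous step (%llu B)\n",
                    fmt, width, height, info.SizeInBytes, prevSize);
            prevSize = info.SizeInBytes;
        }
    }
}
```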
Here are the results. Please remember these are just 3 specific graphics cards. The results may be different on a different GPU and even with a different version of the graphics driver.
On NVIDIA GeForce RTX 3080 with driver 545.84, I found no cases where a texture with larger dimensions requires less memory, so NVIDIA (or at least this specific card) is not affected by the problem described in this article.
On AMD Radeon RX 7900 XTX with driver 23.9.3, I found the following data points where memory requirements are non-monotonic – one for each of the following formats:
- DXGI_FORMAT_R16G16B16A16_FLOAT/UNORM/UINT/SNORM/SINT: 256x144 = 458,752 B, 270x152 = 393,216 B
- DXGI_FORMAT_R32G32_FLOAT/UINT/SINT: 256x144 = 458,752 B, 270x152 = 393,216 B
- DXGI_FORMAT_R8G8_UNORM/UINT/SNORM/SINT: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_R16_FLOAT/UNORM/UINT/SNORM/SINT: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_R8_UNORM/UINT/SNORM/SINT: 256x144 = 131,072 B, 270x152 = 65,536 B
- DXGI_FORMAT_A8_UNORM: 256x144 = 131,072 B, 270x152 = 65,536 B
- DXGI_FORMAT_B5G6R5_UNORM: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_B5G5R5A1_UNORM: 512x288 = 458,752 B, 526x296 = 393,216 B
- DXGI_FORMAT_B4G4R4A4_UNORM: 512x288 = 458,752 B, 526x296 = 393,216 B

On Intel Arc A770, with driver 31.0.101.4887, almost every format used as a render target (but none of the depth-stencil formats) has multiple steps where the size decreases, and it has them at larger dimensions than AMD – for example, even the most “traditional” one, DXGI_FORMAT_R8G8B8A8_UNORM, shows this behavior.

What to do with this knowledge? The conclusion is that if we implement dynamic resolution scaling and we want to create textures with different dimensions aliasing in memory, the required size of this memory is not necessarily the size of the largest texture in terms of dimensions. To be safe, we should query for the memory requirements of all texture sizes we may want to use and calculate their maximum. In practice, it should be enough to query resolutions starting from e.g. 75% of the maximum. Because the tested GPUs always have only a single step down, an even more efficient, but not fully future-proof solution could be to start from the full resolution, go down until we find a different memory size (no matter if higher or lower), and take the maximum of these two.
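A minimal sketch of that conclusion could look like the following. The 75% starting point, the 1% step, and the pixel format are example assumptions, and a real implementation would repeat this for every format and resolution step it actually uses.

```cpp
#include <d3d12.h>
#include <algorithm>

// Sketch: compute a safe heap size for aliased render targets used with dynamic
// resolution scaling by querying every resolution step we may use and taking the
// maximum of the returned sizes. Assumes a valid ID3D12Device* device.
UINT64 CalcSafeHeapSize(ID3D12Device* device, UINT maxWidth, UINT maxHeight)
{
    UINT64 maxSize = 0;
    // Query from e.g. 75% of the maximum resolution up to 100%, in small steps.
    for (UINT percent = 75; percent <= 100; ++percent)
    {
        D3D12_RESOURCE_DESC desc = {};
        desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
        desc.Width = maxWidth * percent / 100;
        desc.Height = maxHeight * percent / 100;
        desc.DepthOrArraySize = 1;
        desc.MipLevels = 1;
        desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
        desc.SampleDesc.Count = 1;
        desc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
        desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

        const D3D12_RESOURCE_ALLOCATION_INFO info =
            device->GetResourceAllocationInfo(0, 1, &desc);
        maxSize = std::max(maxSize, info.SizeInBytes);
    }
    return maxSize; // Use as D3D12_HEAP_DESC::SizeInBytes.
}
```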
So far, I have focused only on DirectX 12. Is Vulkan also affected by this problem? In the past, it could be. Vulkan has a similar concept of querying for the memory requirements of a texture, using the function vkGetImageMemoryRequirements. It used to have an even bigger problem. To understand it, we must recall that in D3D12, we query for memory requirements (size and alignment) given a D3D12_RESOURCE_DESC structure which describes the parameters of a texture to be created. In (the initial) Vulkan API, on the other hand, we need to first create the actual VkImage object and then query for its memory requirements. The question is: given two textures created with exactly the same parameters (width, height, pixel format, number of mip levels, flags, etc.), do they always return the same memory requirements?
In the past, this wasn’t required by the Vulkan specification, and I saw drivers for some GPUs that really did return different sizes for two identical textures! It could cause problems, e.g. when defragmenting video memory in the Vulkan Memory Allocator library. Was it a bug, or another internal optimization done by the driver, e.g. to avoid some memory bank conflicts? I don’t know. The good news is that since then, the Vulkan specification has been clarified to require that functions like vkGetImageMemoryRequirements always return the same size and alignment for images created with the same parameters, and new drivers comply with that, so the problem is gone now. Vulkan 1.3 also got a new function, vkGetDeviceImageMemoryRequirements, that takes a VkImageCreateInfo with image creation parameters instead of an already created image object, just like D3D12 has done from the beginning.
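For reference, here is a minimal sketch of querying memory requirements from just the creation parameters with this newer function (Vulkan 1.3; the image format, usage, and dimensions are example assumptions):

```cpp
#include <vulkan/vulkan.h>

// Sketch: query memory requirements of an image from its creation parameters,
// without creating the VkImage first (vkGetDeviceImageMemoryRequirements,
// core in Vulkan 1.3 / VK_KHR_maintenance4). Assumes a valid VkDevice.
VkMemoryRequirements QueryImageMemReq(VkDevice device, uint32_t width, uint32_t height)
{
    VkImageCreateInfo imageInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
    imageInfo.imageType = VK_IMAGE_TYPE_2D;
    imageInfo.format = VK_FORMAT_R16G16B16A16_SFLOAT;
    imageInfo.extent = { width, height, 1 };
    imageInfo.mipLevels = 1;
    imageInfo.arrayLayers = 1;
    imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
    imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkDeviceImageMemoryRequirements queryInfo =
        { VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS };
    queryInfo.pCreateInfo = &imageInfo;

    VkMemoryRequirements2 memReq = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2 };
    vkGetDeviceImageMemoryRequirements(device, &queryInfo, &memReq);
    return memReq.memoryRequirements;
}
```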
Going back to the main question of this article: when the VK_KHR_maintenance4 extension is enabled (it has been promoted to core Vulkan 1.3), the problem does not occur, as the Vulkan specification says: "For a VkImage, the size memory requirement is never greater than that of another VkImage created with a greater or equal value in each of extent.width, extent.height, and extent.depth; all other creation parameters being identical.", and the same holds for buffers.
Big thanks to my friends: Bartek Boczula for discussions about this topic and inspiration to write this article, as well as Szymon Nowacki for testing on the Intel card! Also thanks to Constantine Shablia from Collabora for pointing me to the answer on Vulkan.
#rendering #gpu #vulkan #directx
# ShaderCrashingAssert - a New Small Library
Sun, 20 Aug 2023
Last Thursday (August 17th), AMD released a new tool for post-mortem analysis of GPU crashes: Radeon GPU Detective. I participated in this project, but because this is my personal blog and it is the weekend now, I am wearing my hobby developer hat and want to present a small library that I developed yesterday:
ShaderCrashingAssert provides an assert-like macro for HLSL shaders that triggers a GPU memory page fault. Together with RGD, it can help with shader debugging.
#rendering #directx #productions #libraries #gpu #tools
# D3d12info - Printing D3D12 GPU Information to Console
Wed, 27 Jul 2022
My next little hobby project is D3d12info. It is a Windows console program that prints all the information it can get about the current GPU installed in the system, as seen through the Direct3D 12 API. It also fetches additional information through AMD GPU Services (on AMD cards), NVAPI (on NVIDIA cards), Vulkan, and WinAPI, mostly to identify the current version of the graphics driver and the Windows system. I will try to keep it updated to the latest Agility SDK, so it can report support for the latest hardware features of graphics cards.
I share it under the open-source MIT license. You can see the full source code in the GitHub repository and download a compiled binary from the Releases tab.
The tool can be compared to the DirectX Caps Viewer that you can find in your Windows SDK installation under the path "c:\Program Files (x86)\Windows Kits\10\bin\*\x64\dxcapsviewer.exe" in terms of the information extracted from DX12. However, instead of a GUI, it provides a command-line interface, which makes it similar to the "vulkaninfo" tool. Information is printed in a human-readable text format by default, but JSON format can be selected by providing the -j parameter, making it suitable for automated processing. Additional command-line parameters are supported, including a choice of the GPU if there is more than one installed in the system. Launch it with the parameter -h to see the command-line syntax.
In the future, I would like to extend it with a web back-end that would gather a database of various GPUs and driver versions, like the Vulkan Hardware Database does for Vulkan, and make it browsable online. As far as I know, there is no such database for D3D12 at the moment. The best we have right now are the tables about Direct3D Feature Levels on Wikipedia. But that will require a lot of learning on my part, as I am not a good web developer, so I will think about it after my vacation :)
#productions #tools #directx #gpu
# A Metric for Memory Fragmentation
Wed, 06 Apr 2022
In this article, I would like to discuss the problem of memory fragmentation and propose a formula for calculating a metric that tells how badly the memory is fragmented.
The problem can be stated like this:
So it is a standard memory allocation situation. Now, I will explain what I mean by fragmentation. Fragmentation, for this article, is an unwanted situation where free memory is spread across many small regions in between allocations, as opposed to a single large one. We want to measure it and preferably avoid it, because:
- When allocating large memory blocks from the system (e.g. using VirtualAlloc) and sub-allocating them for the user's allocation requests, high fragmentation may require allocating another block to satisfy a request, making the program use more system memory than really needed.

A solution to this problem is to perform defragmentation - an operation that moves the allocations to arrange them next to each other. This may require user involvement, as pointers to the allocations will change. It may also be a time-consuming operation to calculate better places for the allocations and then to copy all their data. It is thus desirable to measure the fragmentation to decide when to perform the defragmentation operation.
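As an illustration only, here is a minimal sketch of one possible metric of this kind, based on the sizes of the free regions; it is not necessarily the exact formula proposed in the article.

```cpp
#include <cstdint>
#include <vector>

// Illustration only: one possible fragmentation metric based on free region sizes,
// not necessarily the exact formula proposed in the article.
// Returns 0 when all free memory forms a single region (no fragmentation)
// and approaches 1 when it is scattered across many small regions.
double CalcFragmentation(const std::vector<uint64_t>& freeRegionSizes)
{
    uint64_t totalFree = 0;
    double sumOfSquares = 0.0;
    for (uint64_t size : freeRegionSizes)
    {
        totalFree += size;
        sumOfSquares += (double)size * (double)size;
    }
    if (totalFree == 0)
        return 0.0; // No free memory - nothing to fragment.
    const double total = (double)totalFree;
    // If all free memory were one region, sumOfSquares would equal total*total.
    return 1.0 - sumOfSquares / (total * total);
}
```

A single large free region yields 0, while the same amount of free memory scattered across many small regions pushes the value towards 1, which matches the intuition described above.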
#gpu #algorithms #optimization
# Vulkan Memory Allocator 3.0.0 and D3D12 Memory Allocator 2.0.0
Sat, 26 Mar 2022
Yesterday we released new major versions of Vulkan Memory Allocator 3.0.0 and D3D12 Memory Allocator 2.0.0, so if you are coding with Vulkan or Direct3D 12, I recommend taking a look at these libraries. Because developing them is part of my job, I won't describe them in detail here, but will just refer you to my article published on GPUOpen.com: "Announcing Vulkan Memory Allocator 3.0.0 and Direct3D 12 Memory Allocator 2.0.0". Direct links:
- Vulkan Memory Allocator
- D3D12 Memory Allocator
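If you haven't used these libraries before, here is a minimal sketch of what allocating a buffer with Vulkan Memory Allocator looks like; the surrounding Vulkan objects are assumed to be created elsewhere, and error handling is omitted.

```cpp
#include <vk_mem_alloc.h>

// Minimal sketch of allocating a buffer with Vulkan Memory Allocator 3.x.
// Assumes valid instance, physicalDevice, and device created elsewhere.
void CreateBufferExample(VkInstance instance, VkPhysicalDevice physicalDevice, VkDevice device)
{
    VmaAllocatorCreateInfo allocatorInfo = {};
    allocatorInfo.vulkanApiVersion = VK_API_VERSION_1_2;
    allocatorInfo.instance = instance;
    allocatorInfo.physicalDevice = physicalDevice;
    allocatorInfo.device = device;
    VmaAllocator allocator;
    vmaCreateAllocator(&allocatorInfo, &allocator);

    VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufferInfo.size = 65536;
    bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

    VmaAllocationCreateInfo allocInfo = {};
    allocInfo.usage = VMA_MEMORY_USAGE_AUTO; // Let the library pick the memory type.

    VkBuffer buffer;
    VmaAllocation allocation;
    vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);

    // ... use the buffer ...

    vmaDestroyBuffer(allocator, buffer, allocation);
    vmaDestroyAllocator(allocator);
}
```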
#rendering #directx #vulkan #gpu #libraries #productions