All Blog Entries

All blog entries, ordered from most recent. Entry count: 1094.


# Vulkan with DXGI - experiment results

Mon, 19 Nov 2018

In my previous post, I described a way to get GPU memory usage in a Windows Vulkan app by using DXGI. This API, designed for Direct3D, seems to work with Vulkan as well. In this post, I would like to share detailed results of my experiment on two different platforms, with two graphics cards from different vendors. But before that, a disclaimer:

Read full entry > | Comments | #windows #graphics #directx #vulkan Share

# There is a way to query GPU memory usage in Vulkan - use DXGI

Thu, 15 Nov 2018

In my GDC 2018 talk “Memory management in Vulkan and DX12” (slides freely available, video behind the GDC Vault paywall) I said that in Direct3D 12 you can query for the exact amount of GPU memory used and available, while in Vulkan there is no way to do that, so I recommended just querying for memory capacity (VkMemoryHeap::size) and limiting your usage to around 80% of it. It turns out that I wasn’t quite right. If you code for Windows, there is a way to do this. I assumed that the mentioned function IDXGIAdapter3::QueryVideoMemoryInfo is part of the Direct3D 12 interface, while it is actually part of DirectX Graphics Infrastructure (DXGI). This is a more generic, higher-level Windows API that lets you enumerate adapters (graphics cards) installed in the system and query for their parameters and the outputs (monitors) connected to them. Direct3D refers to this API, but it’s not the same.

The key question is: can you use DXGI to query for GPU memory usage while doing graphics with Vulkan, not D3D11 or D3D12? Would it return reasonable data and not all zeros? The short answer is: YES! I made an experiment - I wrote a simple app that creates various Vulkan objects and queries DXGI for memory usage. The results look very promising. But before I move on to the details, here is a short primer on how to use this DXGI interface, for all non-DirectX developers:

1. Use C++ in Visual Studio. You may also use some other compiler for Windows or another programming language, but it will probably be harder to set up.

2. Install a relatively new Windows SDK.

3. #include <dxgi1_4.h> and <atlbase.h>

4. Link with “dxgi.lib”.

5. Create a factory object:

IDXGIFactory4* dxgiFactory = nullptr;
CreateDXGIFactory1(IID_PPV_ARGS(&dxgiFactory));

Don’t forget to release it at the end:

dxgiFactory->Release();

6. Write a loop to enumerate available adapters. Choose and remember a suitable one.

IDXGIAdapter3* dxgiAdapter = nullptr;
IDXGIAdapter1* tmpDxgiAdapter = nullptr;
UINT adapterIndex = 0;
while(dxgiFactory->EnumAdapters1(adapterIndex, &tmpDxgiAdapter) != DXGI_ERROR_NOT_FOUND)
{
    DXGI_ADAPTER_DESC1 desc;
    tmpDxgiAdapter->GetDesc1(&desc);
    if(!dxgiAdapter && desc.Flags == 0)
    {
        tmpDxgiAdapter->QueryInterface(IID_PPV_ARGS(&dxgiAdapter));
    }
    tmpDxgiAdapter->Release();
    ++adapterIndex;
}

At the end, don’t forget to release it:

dxgiAdapter->Release();

Please note that using newer versions of the DXGI interfaces, like IDXGIFactory4 and IDXGIAdapter3, requires a relatively new version (I’m not sure which one) of both the Windows SDK on the developer’s side (otherwise it won’t compile) and an updated Windows system on the user’s side (otherwise the function calls will fail with an appropriate returned HRESULT).

7. To query for the current GPU memory usage, use this code:

DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
dxgiAdapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

There are two possible options for the second parameter: DXGI_MEMORY_SEGMENT_GROUP_LOCAL means memory local to the GPU (video memory on a discrete card), while DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL means system memory.

Among the members of the returned structure, the most interesting is CurrentUsage. It seems to precisely reflect the use of GPU memory - it increases when I allocate a new VkDeviceMemory object, as well as when some memory is used implicitly by creating other Vulkan resources, like a swapchain, descriptor pools and descriptor sets, command pools and command buffers, query pools, etc.
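In practice, an application could poll this periodically and compare CurrentUsage against Budget before deciding to allocate more. A minimal sketch, assuming dxgiAdapter obtained as in step 6, the headers from step 3 plus <cstdio>, and with the 80% threshold being just an illustrative value:

// Sketch: warn when GPU memory usage approaches the OS-provided budget.
void CheckGpuMemoryBudget(IDXGIAdapter3* dxgiAdapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    HRESULT hr = dxgiAdapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    if(SUCCEEDED(hr) && info.Budget > 0)
    {
        const double usageFactor = (double)info.CurrentUsage / (double)info.Budget;
        if(usageFactor > 0.8) // Illustrative threshold, not a hard rule.
        {
            // Here the app could e.g. stop streaming in new textures or reduce quality.
            printf("Warning: GPU memory usage at %.0f%% of budget.\n", usageFactor * 100.0);
        }
    }
}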

Other DXGI features for video memory - the callback for budget change notification (IDXGIAdapter3::RegisterVideoMemoryBudgetChangeNotificationEvent) and reservation (IDXGIAdapter3::SetVideoMemoryReservation) - may also work with Vulkan, but I didn’t check them.
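For reference, registering for the budget change notification could look roughly like this - a sketch based on the documented DXGI interface, not something I have tested with Vulkan:

// Sketch: get notified by the OS whenever the video memory budget changes.
HANDLE budgetEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr); // requires <windows.h>
DWORD notificationCookie = 0;
dxgiAdapter->RegisterVideoMemoryBudgetChangeNotificationEvent(budgetEvent, &notificationCookie);

// When budgetEvent becomes signaled (e.g. WaitForSingleObject(budgetEvent, 0) == WAIT_OBJECT_0),
// re-query QueryVideoMemoryInfo and adjust memory usage accordingly.

// Cleanup:
dxgiAdapter->UnregisterVideoMemoryBudgetChangeNotification(notificationCookie);
CloseHandle(budgetEvent);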

As an example, on my system with GPU = AMD Radeon RX 580 8 GB and 16 GB of system RAM, on program startup and before any Vulkan or D3D initialization, DXGI reports the following data:

Local:
 Budget=7252479180 CurrentUsage=0
 AvailableForReservation=3839547801 CurrentReservation=0
Nonlocal:
 Budget=7699177267 CurrentUsage=0
 AvailableForReservation=4063454668 CurrentReservation=0

8. You may want to choose the correct DXGI adapter to match the physical device used in Vulkan. Even on a system with just one discrete GPU, two adapters are reported, one of them being a software renderer. I exclude it by checking desc.Flags == 0, which means this is a real, hardware-accelerated GPU, not DXGI_ADAPTER_FLAG_REMOTE or DXGI_ADAPTER_FLAG_SOFTWARE.

The good news is that even when there are more such adapters in the system, there is a way to match them between DXGI and Vulkan. Both APIs return something called a Locally Unique Identifier (LUID). In DXGI it’s in DXGI_ADAPTER_DESC1::AdapterLuid. In Vulkan it’s in VkPhysicalDeviceIDProperties::deviceLUID. They are of different types - two 32-bit numbers versus an array of bytes - but it seems that a simple, raw memory compare works correctly. So the way to find the DXGI adapter matching a Vulkan physical device is:

// After obtaining VkPhysicalDevice of your choice:
VkPhysicalDeviceIDProperties physDeviceIDProps = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES };
VkPhysicalDeviceProperties2 physDeviceProps = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2 };
physDeviceProps.pNext = &physDeviceIDProps;
vkGetPhysicalDeviceProperties2(physicalDevice, &physDeviceProps);

// While enumerating DXGI adapters, replace condition:
// if(!dxgiAdapter && desc.Flags == 0)
// With this:
if(memcmp(&desc.AdapterLuid, physDeviceIDProps.deviceLUID, VK_LUID_SIZE) == 0)

Please note that the function vkGetPhysicalDeviceProperties2 requires Vulkan 1.1, so set VkApplicationInfo::apiVersion = VK_API_VERSION_1_1. Otherwise the call results in an “Access Violation” error.
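For completeness, the relevant part of instance creation could look like this (a minimal sketch using the standard Vulkan structures):

// Request Vulkan 1.1 so that vkGetPhysicalDeviceProperties2 is available.
VkApplicationInfo appInfo = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
appInfo.apiVersion = VK_API_VERSION_1_1;

VkInstanceCreateInfo instanceCreateInfo = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
instanceCreateInfo.pApplicationInfo = &appInfo;

VkInstance instance = VK_NULL_HANDLE;
vkCreateInstance(&instanceCreateInfo, nullptr, &instance);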

In my next blog post, I present detailed results of my experiment with DXGI used in a Vulkan application, tested on two different GPUs.

Comments | #windows #graphics #vulkan #directx Share

# Two most obscure bugs in my life

Wed, 14 Nov 2018

Fixing bugs is a significant part of software development, and a very emotional one - from frustration when you have no idea what is happening, to euphoria when you find the cause and know how to fix it. There are many different kinds of bugs and reasons why they are introduced into the code. (Conducting a deep investigation of how each bug happened might be an interesting idea - I will write a separate blog post about it someday...) But the most frustrating ones are probably those that occur only under some specific conditions, like only on one of many supported platforms, or only in the "Release" and not the "Debug" configuration. Below you will find descriptions of the two most obscure bugs I've found and fixed in my life, for your education and amusement :) Both were in large, multiplatform, C++ codebases.

1. There was a bug reported causing incorrect program behavior, occurring on only one of the many platforms where the program was built and used. It was probably Mac, but I can't remember exactly. It was classified as a regression - it used to work before and stopped working, caught by automated tests, after some specific code change. The strange thing was that the culprit commit to the code repository introduced only new comments and nothing else. How can a change in a code comment introduce a new bug?!

It turned out that the author of this change wanted to draw a nice "ASCII art" in his comment, so he put something like this:

// HERE STARTS AN IMPORTANT SECTION
//==================================================================\\
Function1();
Function2();
(...)

Have you noticed anything suspicious?...

A backslash '\' at the end of a line in C/C++ means that the logical line continues on the next line. This feature is used mostly when writing complex preprocessor macros. But code starting with two slashes '//' begins a comment that spans until the end of the line. Now the question is: does the single-line comment span to the next line as well? In other words: is the first function call commented out, or not?

I don't know and I don't really care that much what "language lawyers" would read from the C++ specification and define as the proper behavior. The real situation was that compilers used on some platforms considered the single-line comment to span to the next line, thus commenting out the call to Function1(), while others didn't. That caused the bug to occur on some platforms only.

The solution was obviously to change the comment so it doesn't contain a backslash at the end of a line, even at the expense of the aesthetics and symmetry of this little piece of art :)

2. I was assigned a "stack overflow" bug occurring only in the "Debug" and not the "Release" configuration of a Visual Studio project. At first, I was pretty sure it would be easy to find. After all, "stack overflow" usually means infinite recursion, right? The stack is the piece of memory where local function variables are allocated, along with return addresses of nested function calls. They tend not to be too big, while the stack is 1 MB by default, so there must be an unreasonable call depth involved to hit that error.

It turned out not to be true. After a few debugging sessions and reading the code involved, I understood that there was no infinite recursion there. It was a traversal of a tree structure, but the depth of its hierarchy was not large enough to be a concern. It took me a while to realize that the stack was bloated, to the extent that it exceeded its capacity, by one function. It was a very long function - you know, the kind of function you can see in a corporate environment that defies any good practices, but no one feels responsible to refactor. It must have grown over time with more and more code added gradually, until it reached many hundreds of lines. It was just one big switch with lots of code in each case.

void Function(Option op)
{
  switch(op)
  {
  case OPTION_FIRST:
    // Lots of code, local variables, and stuff...
    break;
  case OPTION_SECOND:
    // Lots of code, local variables, and stuff...
    break;
  case OPTION_THIRD:
    // Lots of code, local variables, and stuff...
    break;
  ...
  }
}

What really caused the bug was the number and size of local variables used in this function. Each of the cases involved many variables, some big, like fixed-size arrays or objects of some classes, defined by value on the stack. It was enough to call this function recursively just a few times to exhaust the stack capacity.

Why did the "stack overflow" occur in the "Debug" configuration only and not in "Release"? Apparently the Visual Studio compiler can lazily allocate or alias local variables used in different cases of a switch statement, while the "Debug" configuration has all optimizations disabled, so it allocates all these variables with every function call.

The solution was just to refactor this long function with a switch - to place the code from each case in a separate, new function.
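The refactored shape could look something like this (a simplified sketch, with the helper names invented for illustration):

// Each case body moved to its own function, so its local variables occupy
// stack space only while that particular helper is executing.
static void HandleFirstOption()  { /* Lots of code, local variables, and stuff... */ }
static void HandleSecondOption() { /* Lots of code, local variables, and stuff... */ }
static void HandleThirdOption()  { /* Lots of code, local variables, and stuff... */ }

void Function(Option op)
{
  switch(op)
  {
  case OPTION_FIRST:  HandleFirstOption();  break;
  case OPTION_SECOND: HandleSecondOption(); break;
  case OPTION_THIRD:  HandleThirdOption();  break;
  // ...
  }
}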

What are the most obscure bugs you've met in your coding practice? Let me know in the comments below.

Comments | #debugger #visual studio #c++ Share

# Technical debt is a good metaphor

Mon, 12 Nov 2018

There is this term "technical debt" used in the world of software development as a metaphor for poor quality of source code (or the lack of documentation, tests, etc.), whether it's a bad overall architecture, quick and dirty hacks in some places, or any kind of software anti-pattern. It makes the code harder, slower, and less pleasant to maintain, fix, and expand. It's called "debt" because, just like financial debt, it incurs "interest" that needs to be paid, in the form of increased maintenance cost and more bugs to fix, until it's repaid (refactoring or rewriting the bad code, completing work that was missing and left as "TODO"). It is most frequently caused by lack of time due to deadlines imposed by business people - lack of time to fully understand the requirements and design a good solution, to read and understand the existing codebase, or to refactor or rewrite if the code is already bad.

Some people say that "technical debt" is a bad metaphor. There was one article, which I can't find now, that argued that bad code is unlike real-world debt because we don't pay any "interest" until we actually need to go back and modify the specific piece of code where we made a dirty hack. I don't agree with it. Sure, things in software development are harder to measure and estimate quantitatively than in the business world, but I think that "technical debt" is a good metaphor. Just like in real life, there are different kinds of debt. It depends on whom you borrow money from:

- Leaving bad code in a place where you and everyone else in the team may never need to look again is like borrowing money from your family or best friend. You know the debt is there, but you may postpone paying it off indefinitely and still be fine.

- Leaving bad code in a place where you periodically need to make fixes and other changes is like borrowing money from a bank. You have to periodically pay interest until you repay it all.

- Finally, making an ugly hack in a critical piece of code that you touch constantly is like being so desperate for money that you borrow from the mafia. You'd better repay it quickly or you will get in trouble.

Comments | #philosophy #software engineering Share

# Scaling is everywhere, pixel-perfect is the past

Thu, 4 Oct 2018

A long time ago, when computers were slow and screen resolutions were low, everything had to be pixel-perfect. For example, the Atari 2600 game console could display only 160x192 pixels. In those days, game characters and all the graphics had to be drawn pixel by pixel to include all the intended details, like Mario's moustache. This is known as pixel art.



[Image - source: The Evolution of Mario]

Years later, with higher screen resolutions, game sprites could be drawn using different methods, or even rendered from 3D models, but icons and other GUI elements were still prepared to be shown pixel-by-pixel. The same applied to web pages.

Nowadays, even GUI icons are scaled. They can be enlarged smoothly and displayed on various monitors, where a 4K monitor has 4x as many pixels as Full HD. Setting desktop DPI scaling to something other than 100% scales all the apps in Windows. Modern web pages created according to "responsive design" principles have to look good on all kinds of devices, from little smartphones to huge monitors. Scaling is everywhere.

When programmable cell phones first appeared, making apps and games for them was like going back in time. Just like on retro platforms and the first PCs, screens had very low resolutions and pixel art was the way to go when drawing game characters. Now mobile games have to work on all sorts of smartphones, many of them having resolutions like our PC monitors - Full HD or even higher.

What seems like the last relic of pixel-perfection is the rendering of 3D scenes. Since the introduction of 3D graphics, we have tended to rasterize and shade our triangles in the same resolution as the image to be displayed on screen, which is ideally equal to the native resolution of the monitor. Otherwise, every gamer who cares about image quality would call it bad-looking. Or would they?

Some things can be rendered in lower resolution. There are games that render the layer with alpha-blended, translucent objects (especially particle effects like fire, smoke, and clouds) to a 4x smaller texture and then upscale it while compositing with the main, opaque geometry. Such elements tend not to have many high-frequency (small) details anyway, so the quality degradation due to lower resolution is not very noticeable, while the smaller number of pixels that need to be shaded and blended saves a lot of rendering time.

But that's not the full story. Regardless of resolution, antialiasing is, and always will be, necessary to smooth jaggy edges. The ideal solution for it is known as Super-Sampling Anti-Aliasing (SSAA), which is nothing else but rendering the scene in higher resolution and then downscaling it, e.g. averaging 2x2 rendered pixels into a single output pixel. It can be done by a game, or introduced by the graphics driver. AMD has this feature in its driver under the name "Virtual Super Resolution".
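The downscaling step itself is just a box filter. A CPU-side sketch of the idea, shown for a single channel (real implementations do this on the GPU and take gamma into account):

// Sketch: average each 2x2 block of supersampled pixels into one output pixel.
// src has dimensions (2*dstWidth) x (2*dstHeight).
void Downsample2x2(const float* src, float* dst, size_t dstWidth, size_t dstHeight)
{
    const size_t srcWidth = dstWidth * 2;
    for(size_t y = 0; y < dstHeight; ++y)
        for(size_t x = 0; x < dstWidth; ++x)
        {
            const float sum =
                src[(2*y    ) * srcWidth + 2*x    ] +
                src[(2*y    ) * srcWidth + 2*x + 1] +
                src[(2*y + 1) * srcWidth + 2*x    ] +
                src[(2*y + 1) * srcWidth + 2*x + 1];
            dst[y * dstWidth + x] = sum * 0.25f;
        }
}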

SSAA is of course a slow method, because rendering 4x more pixels requires a lot of computation and memory bandwidth. Various methods exist that provide more efficient antialiasing. Multisample Anti-Aliasing (MSAA), which is supported by GPUs in hardware, lets you shade a pixel (calculate the RGB color in a pixel shader) only once, but store it in multiple per-pixel samples, depending on the shape of the edge being rendered. Numerous screen-space postprocessing algorithms exist that intelligently blur the already rendered image to smooth the edges, e.g. FXAA, MLAA.

This interchangeability between rendering in higher resolution and higher-quality antialiasing, as well as the possibility of filtering the rendered image, is probably best exploited by the engine behind Call of Duty. Jorge Jimenez (Graphics R&D Technical Director at Activision Blizzard) explained it in his talks: "Dynamic temporal antialiasing and upsampling in Call of Duty" (Digital Dragons 2017) and "Dynamic Temporal Antialiasing in Call of Duty: Infinite Warfare" (SIGGRAPH 2017). They dynamically scale the rendering resolution depending on the current game load to maintain a sufficient framerate. The scene is then scaled to screen resolution. Their technique "combines dynamic resolution with temporal upsampling". Such techniques are especially useful where high FPS and smooth gameplay are important, even at the expense of graphics quality - in fast-paced games, professional e-sports, and VR.

Screen resolutions are getting higher, but the performance of GPUs doesn't necessarily scale at the same rate. Single-pixel details are harder to notice. That's why it can make sense to render at resolutions even smaller than the output resolution and then interpolate the missing pixels. Of course, no interpolation algorithm is perfect, and using just a bilinear filter would look horrible. That's why techniques are being developed that try to minimize the quality loss in this process, e.g. temporal methods (that reuse the image from the previous frame), checkerboard rendering, or the new Deep Learning Super Sampling (DLSS) from NVIDIA.

It also makes sense to shade pixels at a lower rate in parts of the image where details are hard to notice, e.g. where the player is not looking (peripheral vision in VR, especially if eye tracking is available), where objects are moving fast (based on screen-space motion vectors), or where there are not many high-frequency details (based on analysis of the previous frame). Shading per pixel or per sample is just one option. NVIDIA cards support techniques like Multi-Res Shading or their latest invention - Variable Rate Shading (VRS), where a helper texture can locally control the shading rate, from once per 16 pixels all the way to 8 times per pixel.

Finally, the rate of shading (lighting calculation) can be completely decoupled from the rate of rendering of the final image (rasterization) and done in a different space, at a different framerate, or even completely asynchronously. This is known as Object-Space Shading or Texture-Space Shading. It has been used successfully by Oxide Games in their Ashes of the Singularity and may soon become more widespread.

I think we can say that scaling is everywhere and pixel-perfect is the past. It is not necessarily a bad thing. If the goal of advancements in 3D rendering in games is to look as photorealistic as movies, then we should realize that movies are never pixel-perfect - there is always scaling and filtering involved at various stages. At the very beginning, camera sensors have a pattern of separate R, G, B pixels that must be interpolated into full RGB triplets.

Then the frames are often encoded using chroma subsampling (like 4:2:2) and compressed with video codecs. Interpolation and filtering may be involved at many stages of processing, e.g. frame rate conversion, deinterlacing, noise reduction, or finally, the sharpening commonly applied by modern smart TVs (which I'm very allergic to, but there must be some reason behind it). Recorded videos are never pixel-perfect. Rendered 3D games don't have to be either.

Comments | #graphics Share

# Efficient way of using std::vector

Sat, 22 Sep 2018

Some people say that the C++ STL is slow. I would rather say it's the way we use it. Sure, there are many potential sources of slowness when using the STL. For example, std::list or std::map tends to allocate many small objects, and dynamic allocation is a time-consuming operation. Making many copies of your objects, like std::string, is also costly - that's why I created the str_view project. But std::vector is just a wrapper over a dynamically allocated array that also remembers its size and can reallocate when you add new elements. Without the STL, you would need to implement the same functionality yourself.

When it comes to traversing the elements of a vector (e.g. to sum the numerical values contained in it), there are many ways to do it. The STL is notorious for working very slowly in the Debug project configuration, but as it turns out, this heavily depends on which method you choose for using it.

Here is a small experiment that I've just made. In this code, I create a vector of 100,000,000 integers, then sum its elements using 5 different methods, measuring how much time each of them takes. The results (averaged over 5 iterations for each method) are as follows. Notice the logarithmic scale on the horizontal axis.

Here is the full source code of my testing program:

#include <cstdio>
#include <cstdint>
#include <vector>
#include <chrono>
#include <numeric>

typedef std::chrono::high_resolution_clock::time_point time_point;
typedef std::chrono::high_resolution_clock::duration duration;
inline time_point now() { return std::chrono::high_resolution_clock::now(); }
inline double durationToMilliseconds(duration d) { return std::chrono::duration<double, std::milli>(d).count(); }

int main()
{
    printf("Iteration,Method,Sum,Time (ms)\n");
    
    for(uint32_t iter = 0; iter < 5; ++iter)
    {
        std::vector<int> numbers(100000000ull);
        numbers[0] = 1; numbers[1] = 2; numbers.back() = 3;

        {
            time_point timeBeg = now();

            // Method 1: Use STL algorithm std::accumulate.
            int sum = std::accumulate(numbers.begin(), numbers.end(), 0);

            printf("%u,accumulate,%i,%g\n", iter, sum, durationToMilliseconds(now() - timeBeg));
        }

        {
            time_point timeBeg = now();

            // Method 2: Use the new C++11 range-based for loop.
            int sum = 0;
            for(auto value : numbers)
                sum += value;

            printf("%u,Range-based for loop,%i,%g\n", iter, sum, durationToMilliseconds(now() - timeBeg));
        }

        {
            time_point timeBeg = now();

            // Method 3: Use traditional loop, traverse vector using its iterator.
            int sum = 0;
            for(auto it = numbers.begin(); it != numbers.end(); ++it)
                sum += *it;

            printf("%u,Loop with iterator,%i,%g\n", iter, sum, durationToMilliseconds(now() - timeBeg));
        }

        {
            time_point timeBeg = now();

            // Method 4: Use traditional loop, traverse using index.
            int sum = 0;
            for(size_t i = 0; i < numbers.size(); ++i)
                sum += numbers[i];

            printf("%u,Loop with indexing,%i,%g\n", iter, sum, durationToMilliseconds(now() - timeBeg));
        }

        {
            time_point timeBeg = now();

            // Method 5: Get pointer to raw array and its size, then use a loop to traverse it.
            int sum = 0;
            int* dataPtr = numbers.data();
            size_t count = numbers.size();
            for(size_t i = 0; i < count; ++i)
                sum += dataPtr[i];

            printf("%u,Loop with pointer,%i,%g\n", iter, sum, durationToMilliseconds(now() - timeBeg));
        }
    }
}

As you can see, some methods are slower than others in the Debug configuration by more than 3 orders of magnitude! The difference is so big that if you write your program or game like this, it may not be possible to use its Debug version with any reasonably-sized input data. But if you look at the disassembly, it should be no surprise. For example, method 4 calls the vector methods size() and operator[] in every iteration of the loop. We know that in the Debug configuration functions are not inlined and optimized, so these are real function calls.

On the other hand, method 5, which operates on a raw pointer to the vector's underlying data, is not that much slower in the Debug configuration compared to Release, as its Debug disassembly also shows.

So my conclusion is: using std::vector to handle memory management and reallocation, and using a raw pointer to access its data, is the best way to go.

My testing environment was:

CPU: Intel Core i7-6700K 4.00 GHz
RAM: DDR4, Dual-Channel, current memory clock 1066 MHz
OS: Windows 10 Version 1803 (OS Build 17134.285)
Compiler: Microsoft Visual Studio Community 2017 Version 15.4.8
Configuration options: x64 Debug/Release
Windows SDK Version 10.0.16299.0

Comments | #stl #c++ #optimization Share

# Debugging D3D12 driver crash

Wed, 12 Sep 2018

The new generation of explicit graphics APIs (Vulkan and DirectX 12) is more efficient and involves less CPU overhead. Part of it is that they don't check most errors. In the old APIs (Direct3D 9, OpenGL), every function call was validated internally and returned a success or failure code, while a driver crash indicated a bug in the driver code. The new APIs, on the other hand, rely on the developer doing the right thing. Of course, some functions still return an error code (especially ones that allocate memory or create some resource), but those that record commands into a command list just return void. If you do something illegal, you can expect undefined behavior. You can use Validation Layers / Debug Layer to do some checks, but otherwise everything may work fine on some GPUs, you may get an incorrect result, or you may experience a driver crash or timeout (called "TDR"). The good thing is that (unlike in the old Windows XP days) a crash inside the graphics driver doesn't cause a "blue screen of death" or machine restart. The system just restarts the graphics hardware and driver, while your program receives the DXGI_ERROR_DEVICE_REMOVED code from one of the functions like IDXGISwapChain::Present. Unfortunately, you then don't know which specific draw call or other command caused the crash.

NVIDIA proposed a solution for that: they created the NVIDIA Aftermath library. It lets you (among other things) record commands that write custom "marker" data to a buffer that survives a driver crash, so you can later read it and see which command was successfully executed last. Unfortunately, this library works only with NVIDIA graphics cards.

Some time ago I showed a portable solution for Vulkan in my post: "Debugging Vulkan driver crash - equivalent of NVIDIA Aftermath". Now I'd like to present a solution for Direct3D 12. It turns out that this API also provides a standardized way to achieve this, in the form of the method ID3D12GraphicsCommandList2::WriteBufferImmediate. One caveat: this new version of the interface requires a relatively recent version of Windows 10 and the Windows SDK.
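As an illustration, writing a marker from the GPU timeline could look roughly like this - a sketch only, not necessarily how my library does it internally (the command list and the address of the crash-surviving buffer are assumed to come from the application):

// Sketch: write a 32-bit marker value into a buffer that survives device removal.
void WriteCrashMarker(ID3D12GraphicsCommandList2* cmdList2,
    D3D12_GPU_VIRTUAL_ADDRESS markerBufferGpuAddress, UINT32 markerValue)
{
    D3D12_WRITEBUFFERIMMEDIATE_PARAMETER param = {};
    param.Dest = markerBufferGpuAddress;
    param.Value = markerValue;
    // MARKER_IN: the write occurs once all preceding commands have started executing.
    // MARKER_OUT would wait until they have completed.
    const D3D12_WRITEBUFFERIMMEDIATE_MODE mode = D3D12_WRITEBUFFERIMMEDIATE_MODE_MARKER_IN;
    cmdList2->WriteBufferImmediate(1, &param, &mode);
}

// Record such markers before and after suspicious draw calls. After receiving
// DXGI_ERROR_DEVICE_REMOVED, read the buffer back on the CPU to see which marker
// value made it through last.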

I created a simple library that implements all the required logic under an easy interface, which I called D3d12AfterCrash. You can find all the details and instructions on how to use it in the file "D3d12AfterCrash.h".

I guess it would be better to allocate the buffer using the WinAPI function VirtualAlloc(NULL, bufferSize, MEM_COMMIT, PAGE_READWRITE), then call ID3D12Device3::OpenExistingHeapFromAddress and ID3D12Device::CreatePlacedResource, but my simple way of just using ID3D12Device::CreateCommittedResource seems to work - the buffer survives a driver crash and preserves its content. I checked it on an AMD as well as an NVIDIA card.

Comments | #directx #graphics #libraries #productions Share

# Macro with current function name - __func__ vs __FUNCTION__

Tue, 11 Sep 2018

Today, while programming in C++, I wanted to write an assert-like macro that would throw an exception when a given condition is not satisfied. I wanted to include as much information as possible in the message string. I know that the condition expression, which is an argument of my macro, can be turned into a string using the # preprocessor operator.

Next, I searched for a way to also obtain the name of the current function. At first, I found __func__, as described here (C++11) and here (C99). Unfortunately, the following code fails to compile:

#define CHECK(cond) if(!(cond)) { \
    throw std::runtime_error("ERROR: Condition " #cond " in function " __func__); }

void ProcessData()
{
    CHECK(itemCount > 0); // Compilation error!
    // (...)
}

This is because this identifier is actually an implicit local variable static const char __func__[] = "...", not a string literal, so it cannot take part in compile-time string concatenation.

Then I recalled that Visual Studio defines the __FUNCTION__ macro as a custom Microsoft extension. See the documentation here. This one works as I expected - it can be concatenated with other strings, because it expands to a string literal. The following macro definition fixes the problem:

#define CHECK(cond) if(!(cond)) \
    { throw std::runtime_error("ERROR: Condition " #cond " in function " __FUNCTION__); }

When itemCount is 0, an exception is thrown and ex.what() returns the following string:

ERROR: Condition itemCount > 0 in function ProcessData
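For the record, __func__ can still be used for this purpose if the message is assembled at run time instead of relying on compile-time literal concatenation - a possible variant, at the cost of a std::string allocation on the error path:

#include <stdexcept>
#include <string>

// Portable variant: __func__ is a variable, so concatenate it at run time.
#define CHECK(cond) if(!(cond)) { \
    throw std::runtime_error(std::string("ERROR: Condition " #cond " in function ") + __func__); }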

Well... For any experienced C++ developer, it should be no surprise that the C++ standard committee comes up with solutions that are far from being useful in practice :)

Comments | #c++ Share

