# Two Shader Compilers of Direct3D 12

Dec 2019

If we write a game or other graphics application using Direct3D 12, we also need to write some shaders. We author these in a high-level language called HLSL and compile them before passing them to the DirectX API when creating pipeline state objects (ID3D12Device::CreateGraphicsPipelineState). There are currently two shader compilers available, both from Microsoft, each outputting a different binary format:

  1. the old compiler, “FXC”
  2. the new compiler, “DXC”

Which one to choose? The new compiler, called the DirectX Shader Compiler, is more modern, based on LLVM/Clang, and open source. We must use it if we want to use Shader Model 6 or above. On the other hand, shaders compiled with it require a relatively recent version of Windows and graphics drivers, so they won’t work on systems that haven’t been updated for years.

Shaders can be compiled offline using a command-line program (a standalone executable compiler) and then bundled with your program in compiled binary form. That’s probably the best way to go for the release version, but for development and debugging purposes it’s easier if we can change shader source just as we change the source of the CPU code - easily rebuild and run, or even reload a changed shader while the app is running. For this, it’s convenient to integrate the shader compiler as part of your program, which is possible through a compiler API.

This gives us 4 different ways of compiling shaders. This article is a quick tutorial for each of them.

1. Old Compiler - Offline

The standalone executable of the old compiler is called “fxc.exe”. You can find it bundled with the Windows SDK, which is installed together with Visual Studio. For example, on my system I found it at this path: “c:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64\fxc.exe”.

To compile a shader from HLSL source to the old binary format, issue a command like this:

fxc.exe /T ps_5_0 /E main PS.hlsl /Fo PS.bin

  • /T is the target profile.
  • ps_5_0 means a pixel shader with Shader Model 5.0.
  • /E is the entry point - the name of the main shader function, “main” in my case.
  • PS.hlsl is the text file with the shader source.
  • /Fo is the binary output file to be written.

This tool supports many more command-line parameters. You can display help about them by passing the /? parameter. Using appropriate parameters you can change the optimization level and other compilation settings, provide additional #include directories and #define macros, preview intermediate data (the preprocessed source, compiled assembly), or even disassemble an existing binary file.

2. Old Compiler - API

To use the old compiler as a library in your C++ program:

  • #include <d3dcompiler.h>
  • link with "d3dcompiler.lib"
  • call the function D3DCompileFromFile


CComPtr<ID3DBlob> code, errorMsgs;
HRESULT hr = D3DCompileFromFile(
    L"PS.hlsl", // pFileName
    nullptr, // pDefines
    nullptr, // pInclude
    "main", // pEntrypoint
    "PS_5_0", // pTarget
    0, // Flags1
    0, // Flags2
    &code, // ppCode
    &errorMsgs); // ppErrorMsgs
if(FAILED(hr))
{
    if(errorMsgs)
    {
        wprintf(L"Compilation failed with errors:\n%hs\n",
            (const char*)errorMsgs->GetBufferPointer());
    }
    // Handle compilation error...
}

// (...)
psoDesc.PS.BytecodeLength = code->GetBufferSize();
psoDesc.PS.pShaderBytecode = code->GetBufferPointer();
CComPtr<ID3D12PipelineState> pso;
hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));

The first parameter is the path to the file that contains the HLSL source. If you want to load the source some other way, there is also a function that takes a buffer in memory: D3DCompile. The second parameter (optional) can specify preprocessor macros to be #define-d during compilation. The third parameter (optional) can point to your own implementation of the ID3DInclude interface that provides additional files requested via #include. The entry point and target profile are strings, just like in the command-line compiler. Other options that have command-line counterparts (e.g. /Zi, /Od) can be specified as bit flags (e.g. D3DCOMPILE_DEBUG, D3DCOMPILE_SKIP_OPTIMIZATION).

The two objects returned from this function are just buffers of binary data. ID3DBlob is a simple interface that you can query for its size and a pointer to its data. In case of a successful compilation, the ppCode output parameter returns a buffer with the compiled shader binary. You should pass its data to ID3D12PipelineState creation. After successful creation, the blob can be Release-d. The second buffer, ppErrorMsgs, contains a null-terminated string with error messages generated during compilation. It can be useful even if the compilation succeeded, as it then contains warnings.

Update: "d3dcompiler_47.dll" file is needed. Typically some version of it is available on the machine, but generally you still want to redistribute the exact version you're using from the Win10 SDK. Otherwise you could end up compiling with an older or newer version on an end-user's machine.

3. New Compiler - Offline

Using the new compiler in its standalone form is very similar to the old one. The executable is called “dxc.exe” and it’s also bundled with the Windows SDK, in the same directory. The documentation of the command-line syntax mentions parameters starting with "-", but the old "/" also seems to work. To compile the same shader using Shader Model 6.0, issue the following command, which looks almost the same as for "fxc.exe":

dxc.exe -T ps_6_0 -E main PS.hlsl -Fo PS.bin

Despite using a new binary format (called “DXIL”, based on LLVM IR), you can load it and pass it to D3D12 PSO creation the same way as before. There is a tricky issue though. You need to ship the file “dxil.dll” with your program. Otherwise, PSO creation will fail! You can find this file in the Windows SDK, at a path like “c:\Program Files (x86)\Windows Kits\10\Redist\D3D\x64\dxil.dll”. Just copy it to the directory with the target EXE of your project, or the one you use as the working directory.

4. New Compiler - API

The new compiler can also be used programmatically as a library, but its usage is a bit more involved. Just as with any C++ library, start with:

  • #include <dxcapi.h>
  • link "dxcompiler.lib"
  • create and use object of type IDxcCompiler

This time though, you need to bundle an additional DLL with your program (next to “dxil.dll” mentioned above): “dxcompiler.dll”, found in the same “Redist\D3D\x64” directory. More code is needed to perform the compilation. First create IDxcLibrary and IDxcCompiler objects. They can stay alive for the whole lifetime of your application, or for as long as you need to compile more shaders. Then, for each shader, load it from a file (or any source of your choice) into a blob, call the Compile method, and inspect the result: either an error plus a blob with error messages, or a success plus a blob with the compiled shader binary.

CComPtr<IDxcLibrary> library;
HRESULT hr = DxcCreateInstance(CLSID_DxcLibrary, IID_PPV_ARGS(&library));
//if(FAILED(hr)) Handle error...

CComPtr<IDxcCompiler> compiler;
hr = DxcCreateInstance(CLSID_DxcCompiler, IID_PPV_ARGS(&compiler));
//if(FAILED(hr)) Handle error...

uint32_t codePage = CP_UTF8;
CComPtr<IDxcBlobEncoding> sourceBlob;
hr = library->CreateBlobFromFile(L"PS.hlsl", &codePage, &sourceBlob);
//if(FAILED(hr)) Handle file loading error...

CComPtr<IDxcOperationResult> result;
hr = compiler->Compile(
    sourceBlob, // pSource
    L"PS.hlsl", // pSourceName
    L"main", // pEntryPoint
    L"PS_6_0", // pTargetProfile
    NULL, 0, // pArguments, argCount
    NULL, 0, // pDefines, defineCount
    NULL, // pIncludeHandler
    &result); // ppResult
if(SUCCEEDED(hr))
    result->GetStatus(&hr);
if(FAILED(hr))
{
    if(result)
    {
        CComPtr<IDxcBlobEncoding> errorsBlob;
        hr = result->GetErrorBuffer(&errorsBlob);
        if(SUCCEEDED(hr) && errorsBlob)
            wprintf(L"Compilation failed with errors:\n%hs\n",
                (const char*)errorsBlob->GetBufferPointer());
    }
    // Handle compilation error...
}
CComPtr<IDxcBlob> code;
result->GetResult(&code);

// (...)
psoDesc.PS.BytecodeLength = code->GetBufferSize();
psoDesc.PS.pShaderBytecode = code->GetBufferPointer();
CComPtr<ID3D12PipelineState> pso;
hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));

The compilation function also takes strings with the entry point and target profile, but this time in Unicode format. The way to pass additional flags has also changed. Instead of bit flags, the parameters pArguments and argCount take an array of strings that specify the same parameters you would pass to the command-line compiler, e.g. L"-Zi" to attach debug information or L"-Od" to disable optimizations.

Update 2020-01-05: Thanks @MyNameIsMJP for your feedback!

Comments | #directx #rendering Share

# Xiaomi Smart Band - a Very Good Purchase

Dec 2019

Despite being a programmer, I'm quite conservative with technological novelties. For example, I have never owned a tablet and never felt a need to have one. I was also aware there are smart watches on the market, but the idea of spending 300 EUR on another device that I would then need to charge every day seemed like too much for me. Nor was I interested in those smart bands that claim to monitor your pulse, sleep, and count your steps.

What made me revisit this type of device was a real need I felt many times recently. Sometimes I was attending a conference, sitting in a talk with my smartphone switched to silent mode. I kept pulling it out to check the time or to see if anyone had tried to call me or sent me a message. Those are situations where it's important not to miss the talk I want to see and to catch up with my friends, while having my phone ringing would be undesirable. Another time I was at a party in a club, or at a concert, where the music was so loud I couldn't hear or feel my phone ringing, while that's also a time when I repeatedly check the clock so as not to miss the set of my favourite DJ while trying to hang out with my friends. Then I thought: maybe a smart watch could provide those two simple things - show the current time and display notifications from text messages, Messenger etc., signaling them with vibration?

After some research I found out that smart bands do exactly that, if I disregard all the sport-related features. I chose the Xiaomi Mi Smart Band 4, but devices from other manufacturers would probably provide a similar experience. It surprised me that it costs only 140 PLN (33 EUR). After charging the battery, installing the special "Mi Fit" app on my Android phone, and pairing the two devices using Bluetooth, I could configure my smart band and change its wallpaper etc. (BTW the default ones are terrible, but their format has been reverse-engineered, so there are many user-created watchfaces available to download on the Internet.)

This device has only 512 KB of RAM and a 120x240 pixel screen, but even with parameters that sound like some Atari computer from 30 years ago, it provides many useful features. First and foremost, it shows the current time and date - for 5 seconds after being activated using the touch screen. It also vibrates when there is an incoming call or a new message. You can configure which of the apps installed on the smartphone also display their notifications on the band, and these can be basically the same as the notifications on the phone, so any messaging app will work - whether Messenger, WhatsApp, Signal, Tinder, or standard text messages. The sender and text can be seen on the band, but responding requires pulling out the phone. Additional features I like are the current weather and the forecast for the following days for the current location, a timer, and alarms, which can wake you up using vibration - useful for not waking other people sleeping in the same room.

The biggest surprise for me was the battery life. The 20 days declared by the manufacturer seemed unrealistic, but after I charged it for the first time on November 20th, it worked until... yesterday, which gives 29 days. One disadvantage I can see is that now I must have Bluetooth enabled on my phone all the time, which drains its battery faster, but I charge the phone every day anyway.

This article is not sponsored. I wrote it on my own initiative, just to share my experiences with this type of device. If you consider it useful to have the current time and incoming messages available on your wrist without the need to pull out your phone, a smart band like the Xiaomi Mi Smart Band 4 is a good choice.

Comments | #hardware #shopping Share

# Vulkan Memory Allocator - budget management

Nov 2019

Querying for the memory budget and staying within it is a much-needed feature of the Vulkan Memory Allocator library. I implemented a prototype of it on a separate branch, "MemoryBudget".

It also contains documentation of all the new symbols and a general chapter, "Staying within budget", that describes this topic. The documentation is pregenerated, so it can be accessed by just downloading the repository as a ZIP, unpacking it, and opening the file "docs\html\index.html" > chapter “Staying within budget”.

If you are interested, please take a look. Any feedback is welcome - you can leave a comment below or send me an e-mail. Now is the best time to adjust this feature to users' needs, before it gets into the official release of the library.

Long story short:

  • A function is added to query for current memory usage and available budget per Vulkan memory heap.
  • If you enable the extension VK_EXT_memory_budget and tell VMA about it, the extension is used for that query. If not, the current usage and budget are estimated based on the total size of currently allocated memory blocks and 80% of the heap sizes, respectively.
  • If you are close to exceeding the budget, or it is already exceeded, the library doesn’t allocate another default 256 MB memory block. Instead, it tries to allocate a smaller block, or even a dedicated allocation just for your resource, to stay within the budget.
  • It still tries to make the allocation and leaves to Vulkan the decision whether the allocation succeeds or fails, unless you use the new VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag, which makes the allocation return failure if it would exceed the budget.

Update 2019-12-20: This has been merged to master branch and shipped with the latest major release: Vulkan Memory Allocator 2.3.0.

Comments | #vulkan #libraries #productions Share

# Further improvements on my website

Oct 2019

Have you noticed any changes on my website? Probably not - and that’s the whole point. I’ve made a few improvements on the technical side, but it still works as usual. Here is a brief story of the development of my home page...

I was never a passionate web developer, but I learned a bit of the languages and technologies needed to make a web page. When I started this one in 2004, the word “blog” was already in use, but there was no “cloud”, no Node.js or Ruby on Rails. I could either buy a hosting account with PHP scripting and a MySQL database on the back end, or a Linux shell account with full SSH access, which would be much more expensive. Naturally, I chose the first option. Besides that, there was HTML 4.01 and CSS 1 on the client side.

Over time, I introduced gradual improvements to my home page, including:

  • Started blogging in English instead of Polish (since June 2009).
  • Installed Google Custom Search for searching within this page (see text box in the top-right corner).
  • Added Atom feed.
  • Used mod_rewrite to support nice-looking URLs like “/news_1657_title_of_my_post” instead of the original ones like “/news.php5?action=view&id=1657”.
  • Registered in Google Search Console, added sitemap.
  • Switched from an old-fashioned layout based on HTML <table>s to a more modern one based on <div>s and CSS formatting. Started using HTML5. (See Changes on My Website.)
  • Installed the external service Disqus for comments, instead of my own script.
  • Changed the CSS style sheet according to the idea of “responsive design” to make the site friendly to mobile devices like smartphones.

For some time I thought maybe I should rewrite this whole website from scratch. But then there would be a difficult question to answer: what technology to use? I don’t know web technologies well, but I know there are many of them. I could just install WordPress or some other blogging system and somehow move all the existing content there. I could rewrite all the scripts using more modern PHP 7, or a more trendy language like Ruby or server-side JavaScript. I could even make it all static HTML content, which would be enough for the things I have here. Then I could use some offline tool to generate those pages, or write my own. I could also use Amazon S3 to host them. The possibilities are endless...

Then I recalled the rule that “if it ain’t broke, don’t fix it” and thought that the hosting service I now use is quite good, with a low price for a WWW + PHP + MySQL + FTP + e-mail account. Eventually I decided just to improve the existing solution. Here is what I’ve changed recently:

  • Added support for HTTPS, with the help of my hosting company, who generated an SSL certificate for my domains. It’s not that important for a static website intended just for reading, but it is commonly argued that everyone should use it, and modern web browsers warn about “connection not secure” when using HTTP, so it was worth doing. The official address of my home page now uses HTTPS.
  • Converted static pages and database content to UTF-8 character encoding. Until this week the page was still using the ISO-8859-2 (latin2) codepage. Again, this doesn’t make much difference on a page using only English and Polish characters, but it is commonly argued that everyone should use UTF-8, so I wanted to be up-to-date with the latest trends :)

If you have any suggestions about my website, whether its looks or technical details, please leave a comment.

Comments | #homepage #webdev Share

# Book review: C++17 in Detail

Oct 2019

Courtesy of its author, Bartłomiej Filipek, I was given an opportunity to read the new book "C++17 in Detail". Here is my review:

When I am about to read a book, or decide whether to buy one, I first look at two things. These are not the looks of the cover or the description on the back. Instead, I check the table of contents and the number of pages. That gives me a good overview of the topics covered and an estimate of the chances that they are sufficiently covered. "C++17 in Detail", with its 361 pages, looks good for a book describing what's new in the C++17 standard, considering the additions to the standard are not as extensive as they were in C++11. The author is undoubtedly an expert in this field, as can be seen from the entries on his Bartek's coding blog.

The author claims to describe all the significant additions to the language. However, this is not a dull, difficult-to-read documentation of the new language elements. Instead, the book describes each of them by giving some background and rationale and showing real-life examples. That makes them easy to understand and lets you appreciate their usefulness. Each addition to the standard is also accompanied by a reference to the official documents of the C++ standard committee, and a table showing which versions of the most popular C++ compilers (GCC, Clang, Microsoft Visual C++) support it. Spoiler: they already support almost all of them :)

The book doesn't teach everything from scratch. That would be impossible in this number of pages, considering how big and complex C++ is. It assumes the reader already knows the language quite well, including some features from C++11 like unique_ptr or r-value references + move semantics. It does, however, explain in more detail a few topics needed for the book, like the concept of the "reduce" and "scan" parallel algorithms, which C++17 adds to the standard library.

The contents of the book are grouped into 3 parts. Part 1 describes additions to the C++ language itself, including the init statement for if and switch (e.g. if(int i = Calculate(); i > 0) ...), additions to templates like if constexpr, and attributes like [[nodiscard]] and [[maybe_unused]]. Part 2 describes what has been added to the standard library, including std::optional, variant, any, string_view, and filesystem. Finally, part 3 shows more extensive code examples that combine multiple new C++ features to refactor existing code into cleaner and more efficient form. The author also mentions which parts of the language have been deprecated or removed in the new standard (like auto_ptr).

To summarize, I recommend this book to any C++ developer. It's a good one, and it lets you stay up-to-date with the language standard. You will learn all the new features of the language and its standard library from it in a more pleasant way than by reading documents from the C++ committee. Even if you can't use these new features in your current project, because your compiler hasn't been upgraded for many years or the coding standard imposed by your team lead doesn't let you, I think it's worth learning them. Who knows whether you won't be asked about them on your next job interview?

You can buy the printed version of the book, and the electronic version on Leanpub. Bartek, the author of the book, also agreed to give all readers of my blog a nice discount - 30%. It's valid till the end of October, and to use it, just visit this link.

Comments | #C++ #books Share

# Weirdest rules from coding standards

Sep 2019

Earlier this month I asked on Twitter: "What is the weirdest and most stupid rule you had to follow because of the coding standard?" I got some interesting responses. Thinking about it more, I concluded that coding standards are complex. Having one in your project is a good thing, because it imposes a consistent style, which is a value by itself. But specific rules are of various types. Some carry universally recognized good practices, like "use std::unique_ptr, don't use std::auto_ptr". Some serve good code performance, like "pass large structures as const& parameters, not by value". Others are purely a matter of the subjective preference of their authors, e.g. to use CamelCaseIdentifiers rather than snake_case_identifiers, or spaces instead of tabs for indentation. Even the division between those categories is not clear though. For example, there is research showing that Developers Who Use Spaces Make More Money Than Those Who Use Tabs.

But some rules are simply ridiculous and hard to explain in a rational way. Here are two examples from my experience:

Number 2: Lengthy ASCII-art comment required before every function. In that project we couldn't write an inline function even for the simplest getters, like:

class Element
{
public:
    int identifier;
    int GetIdentifier() { return identifier; } // Illegal!
};

We were only allowed to declare member functions in the header file, while the definition had to be preceded by a specific comment that repeats the name of the function (which is nonsense and a bad practice by itself, as it introduces duplication and may go out of sync with the actual code), its description (even if the name is self-descriptive), descriptions of all its parameters (even if their names are self-descriptive), the return value, etc. Example:



/*****************************************************************************
Function: GetIdentifier

Description:
    Returns identifier.

Return value:
    Identifier of the current element.
*****************************************************************************/
int Element::GetIdentifier()
{
    return identifier;
}

I like comments. I believe they are useful to explain and augment the information carried by function and variable names, especially when they document the valid usage of a library interface. For example, a comment may say that a pointer can be null and what that means, that a uint may have the special value UINT32_MAX and what happens then, or that a float is expressed in seconds. But a comment like the one shown above doesn't add any useful information. It's just more symbols to type and developer's time wasted; it makes code bloated and less readable. It's not even in any standard format that could be used to automatically generate documentation, like Doxygen. It's just a custom, arbitrary rule.

What was the reason behind this rule? A colleague once told me that many years ago the architect of this whole program hoped that they would develop a tool to parse all this code and those comments and generate documentation. Decades have passed, and it didn't happen, but developers still had to write those comments.

The effect was that everyone avoided adding new functions as much as possible or splitting their code into small functions. They were just adding more and more code to the existing ones, which could grow to hundreds of lines. That also caused one of the most obscure bugs I've met in my life (bug number 2).

Number 1: Don't use the logical negation operator '!', like if(!isEnabled). Always compare with false instead, like if(isEnabled == false).

I understand a requirement to always compare pointers and numbers to some value like nullptr instead of treating them as booleans, although I don't like it. But banning one of the fundamental operators, also when used with bool variables, is hard to justify for me.

Why would anyone come up with something like this? Is it because a single '!' symbol is easy to omit when writing or reading and not so explicit as == false? Then, if the author of this rule suffers from bad sight or dyslexia and a single dash here and there doesn't make a difference to him, maybe he should also define functions Add(), Subtract(), and tell developers to use them instead of operators '+' and '-', because they too are so easy to confuse? Or maybe not... He should rather go do something other than programming :)

Comments | #software engineering #c++ Share

# D3D12 Memory Allocator 1.0.0

Sep 2019

Since 2017 I have been developing Vulkan Memory Allocator - a free, MIT-licensed C++ library that helps with GPU memory management for those who develop games or other graphics applications using Vulkan. Today we released a similar library for DirectX 12: D3D12 Memory Allocator, which I had been preparing for some time. Because that's a project I do as part of my work at AMD rather than a personal project, I won't describe it in more detail here, but just point to the official resources:

If you are interested in technical details and problems I had to consider during development or you want to write your own allocator for either Vulkan or Direct3D 12, you may also check my recent article: Differences in memory management between Direct3D 12 and Vulkan.

Comments | #libraries #directx #productions Share

# Most frequent questions on programming job interviews

Aug 2019

Different questions appear on job interviews. Good preparation for such interviews is an art of its own, just like writing a good CV, and there are whole books and trainings about it. A potential employer usually asks for details of what is in your CV - your past experience (especially the previous 1 or 2 jobs) and why you left your previous job. He may ask you to solve some algorithmic puzzles to test your general “intelligence”. He may even ask ridiculous questions like “Try to estimate how many litres of paint are used every year to renovate buildings in London”, believing that your response and way of thinking will be some indicator of whether you will be a good employee. But most interviewers also ask real, technical questions, checking your knowledge of the domain you will work with. Among these, I noticed a few questions occurring most frequently. Here they are, together with a draft of the correct response:

1. About C++

1.1. What does the inline keyword do? Answer: It suggests that the function should be inlined by the compiler in each place where it is called, instead of leaving a function call in the compiled code. It’s only a hint; modern compilers decide which functions to inline on their own.

1.2. What does virtual keyword do? How does it work? Answer: It lets you implement polymorphism when using class inheritance. When an object of a derived class is passed by a pointer or a reference to a base class, calling such a function will call the version from the derived class - the one that the passed object “really is”. It works thanks to each object having an additional, hidden pointer to the virtual function table, which dispatches calls to such functions to the appropriate version.

1.3. What does the volatile keyword do? Answer: It tells the compiler that a variable marked with this modifier can change its value outside of the compiler's control (e.g. by a different thread, the operating system, or hardware), so it should not optimize access to it by caching its value in CPU registers or assuming it won’t change for some period, but should use its original location every time. Side note: This keyword is rarely used. It probably appears more frequently on job interviews than in real code :)

1.4. What does the mutable keyword do? Answer: It allows changing a class member variable marked with this modifier even if the member function and the current object are const. It might be useful e.g. to perform lazy evaluation - an object is passed as const& and appears to be unchanged, but its method GetSomething() calculates and caches the value of “something” on the first call. Side note: This keyword is rarely used. It probably appears more frequently on job interviews than in real code :)

2. About general programming

2.1. What is the difference between a process and a thread? Answer: A process is launched from a particular executable file. It has its own address space (the memory heap is common to the whole process), handles to open files, network sockets etc., separated from other processes in the operating system. It comprises a main thread and may have several additional threads. Each thread shares the same code and memory heap, but has its own stack, instruction pointer (the place in the code currently being executed), and values of CPU registers.

2.2. What is a mutex? Answer: It’s a synchronization object that allows multiple threads to access a common resource safely. If only one thread may access the resource at a time, the code accessing it (called a critical section) has to be surrounded by locking the mutex above it and unlocking the mutex below it. Then only one thread will be able to execute that code at any moment. Other threads have to wait.

2.3. What is a deadlock? How to prevent it? Answer: A deadlock is an error in multithreaded code occurring when two or more threads wait for each other and will never make progress. To prevent it, always remember to unlock your mutexes (even when doing an early return or break, throwing an exception, etc.). Also, when locking multiple mutexes, always lock them in the same order, never like: thread 1 locking A then B, thread 2 locking B then A.

3. About graphics programming

3.1. Describe the graphics pipeline in modern GPUs. Answer: Depending on how detailed you want to be, you can say that data is processed through the following stages: vertex fetch / input assembler (fetching vertices/triangles) --> vertex shader --> optional tessellation (hull shader aka tessellation control shader --> fixed-function tessellator --> domain shader aka tessellation evaluation shader) --> optional geometry shader --> triangle clipping and culling (incl. viewport culling, backface culling etc.) --> rasterizer (converting triangles to pixels) --> pixel shader aka fragment shader --> depth-stencil test --> writing to render targets with blending.

3.2. What’s the difference between forward and deferred shading? Answer: In traditional forward shading, each object is rendered already shaded by every light affecting it. In deferred shading, objects are rendered only once, with their parameters stored in intermediate render targets called the G-buffer - like albedo color (taken straight from the color texture), normal, depth (which allows reconstructing the full position), and material parameters (like roughness and metalness in the case of PBR). Then separate screen-space passes use this data to apply shading from particular lights. One could say forward shading has a complexity of O*L, where O is the number of objects and L is the number of lights, while deferred shading has O+L, which is better for a large number of lights. But deferred shading has its drawbacks: heavy G-buffers consume lots of bandwidth, and the algorithm doesn’t work well with translucent objects, MSAA, or many different materials.

There are many more questions you may encounter on a programming job interview, whether from the topics described above (e.g. from advanced C++: what is RAII, what is SFINAE) or any other topics important for a specific position, but in my experience the ones mentioned above are especially frequent, so preparing good answers to them may be the best investment of your time before the interview.

Comments | #career Share

Copyright © 2004-2019