November 2012


Sun, 25 Nov 2012, 18:16

C++ is Good for Fast Coding

Many people believe that C and C++ are languages suitable only for some special kinds of applications - low-level or high-performance ones. I think that's not 100% true. Here is a story: on 2011-11-11 we had a game development competition, organized in a forum thread on forum.warsztat.gd, called "Explosive Hamster Exhibition Compo". The name comes from the crazy game titles that can be generated by the Video Game Name Generator, which was used in this compo to generate a theme, unique for each participant. We had to develop a game in 3 hours. From the 3 themes generated for me I chose "Micro Sewer Plus" and made a game about closing sewers. (Download: Reg - MicroSewerPlus.7z - binary + source code, 479 KB.)

I managed to write this simple yet playable game in 3 hours and took 2nd place out of 10, even though my game was written in C++, while many others used "easier" or "quicker" technologies like Java, JavaScript, XNA or Game Maker. What I want to show here is that C++ is not necessarily a language in which coding is hard and slow. It's all about having a good framework - a library with a set of functions and classes that handles all the low-level stuff and lets you implement the game itself quickly, easily and directly, the way you think about it. You don't have to manually free all allocated memory if you have smart pointers. You don't have to write shaders and set up Direct3D render states if you have a Canvas class with methods like DrawSprite(x, y, color).

You can prepare a good library by yourself or download one of the many freely available on the Internet and use it just like in any other programming language. What you get in return when deciding on C++ is great flexibility in defining what the interface of your library looks like. Thanks to templates, operator overloading and all that stuff, you can create your own domain-specific language inside C++ (like the << operator used to write to stream objects). At the same time, thanks to compilation to native code, creating objects on the stack and other language features, you don't have to sacrifice performance. You don't have to use separate variables float x, y, z or dynamically allocate new Vector(x, y, z). You can define a vector structure with overloaded operators, use it conveniently, and the compiler will optimize the code so you can do thousands or millions of vector computations per second.
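Just to illustrate what I mean, here is a minimal sketch of such a vector structure (the names are mine, for illustration only, not taken from any particular library):

// A value-type vector that lives on the stack, so the compiler can keep it
// in registers and inline the arithmetic.
struct Vec3
{
    float x, y, z;
    Vec3(float x_, float y_, float z_) : x(x_), y(y_), z(z_) { }
};

inline Vec3 operator+(const Vec3 &a, const Vec3 &b) { return Vec3(a.x + b.x, a.y + b.y, a.z + b.z); }
inline Vec3 operator*(const Vec3 &v, float s) { return Vec3(v.x * s, v.y * s, v.z * s); }

// Usage reads the way you think about the math:
inline Vec3 Integrate(const Vec3 &pos, const Vec3 &velocity, float dt)
{
    return pos + velocity * dt;
}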


Mon, 12 Nov 2012, 20:34

Zjazd Twórców Gier 2012 - Review

At the request of Tomasz Cieślewicz "tdc", I've written a review of Zjazd Twórców Gier - the Game Developers Convention that took place on 27-28 October 2012. It is in Polish, published on the atarionline.pl website:

Zjazd Twórców Gier 2012 – relacja @ atarionline.pl


Fri, 09 Nov 2012, 21:58

Half-Pixel Offset in DirectX 11

There is a problem graphics programmers often have to face, known as the half-pixel/half-texel offset. In Direct3D 9, probably the most official article about it was Directly Mapping Texels to Pixels. In Direct3D 10/11 the way it works was changed, so it can be said that the problem is gone now. But the entry about SV_Position in the Semantics article does not explain it clearly, so here is my short explanation.

A pixel on a screen or a texel on a texture can be seen as a matrix cell, visualized as a square filled with some color. That's the way we treat it in 2D graphics, where we index pixels from the top-left corner using integer (x, y) coordinates.

But in 3D graphics, a texture can be sampled using floating-point coordinates and interpolation between texel colors can be performed. Texture coordinates in DirectX also start from the top-left corner, but position (0, 0) means the very corner of the texture, NOT the center of the first texel! Similarly, position (1, 1) on a texture means its bottom-right corner. So to get exactly the color of the second texel of an 8 x 8 texture, we have to sample with coordinates (1.5 / 8, 0.5 / 8).
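In general, the coordinates of the center of texel (x, y) are (x + 0.5, y + 0.5) divided by the texture size. A minimal sketch in C++ (the names are mine, just for illustration):

// Returns texture coordinates that hit the exact center of texel (x, y)
// on a texture of size width x height texels.
struct TexCoord { float u, v; };

inline TexCoord TexelCenterToUV(int x, int y, int width, int height)
{
    TexCoord uv = { (x + 0.5f) / width, (y + 0.5f) / height };
    return uv;
}

// E.g. the second texel of the first row of an 8 x 8 texture (x = 1, y = 0)
// gives (1.5 / 8, 0.5 / 8), as in the example above.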

Now, if we rendered a 3D scene onto a texture to perform e.g. deferred shading or some other screen-space postprocessing and we want to redraw it to the target back buffer, how do we determine the coordinates for sampling this texture based on the position of the rendered pixel? There is a system value semantic available at the pixel shader input called SV_Position, which gives the (x, y) pixel position. It is expressed in pixels, so it goes up to e.g. (1920, 1080) and not to (1, 1) like texture coordinates. But it turns out that in Direct3D 10/11, the value of SV_Position for the first pixels on the screen is not (0, 0), (1, 0), but (0.5, 0.5), (1.5, 0.5)!

Therefore, to sample a texture based on pixel position, it's enough to divide it by the screen resolution. There is no need to perform any half-pixel offset - to add or subtract (0.5, 0.5) - like it was done in Direct3D 9.

Texture2D g_Texture : register(t0);
SamplerState g_SamplerState : register(s0);

// The constant buffer needs a name ("Constants" here is arbitrary).
cbuffer Constants : register(b0) {
    float2 g_ScreenResolution;
};

// Pixel shader entry point. (Renamed from "PixelShader", which is a reserved word in HLSL.)
void MainPS(float4 Position : SV_Position, out float4 Color : SV_Target)
{
    // SV_Position already points at pixel centers ((0.5, 0.5) for the first pixel),
    // so dividing by the screen resolution gives ready-to-use texture coordinates.
    Color = g_Texture.Sample(g_SamplerState, Position.xy / g_ScreenResolution);
}


Sat, 03 Nov 2012, 20:56

Implementing ID3D10Include

DirectX shaders in HLSL can use the #include preprocessor directive, just like C and C++ code. When you compile a shader from a file using the D3DX11CompileFromFile function, it works automatically. But what if you keep your shader source in memory and compile it with D3DX11CompileFromMemory? DirectX doesn't know the location of the source file (if any), so it can't locate additional files to include.

But there is a solution. The parameter __in LPD3D10INCLUDE pInclude allows passing your own object implementing the ID3D10Include interface (a typedef of ID3DInclude) that can help DirectX load included files while preprocessing the shader source. We don't even have to deal with COM interfaces here. We can just implement and use a class like the one below. The code uses lots of stuff from my CommonLib, so it's not self-contained and ready to copy-paste, but it should give a clue about how to do it.

class CShaderInclude : public ID3DInclude
{
public:
    // shaderDir: Directory of current shader file, used by #include "FILE"
    // systemDir: Default shader includes directory, used by #include <FILE>
    CShaderInclude(const char* shaderDir, const char* systemDir) :
        m_ShaderDir(shaderDir),
        m_SystemDir(systemDir)
    {
    }

    HRESULT __stdcall Open(
        D3D_INCLUDE_TYPE IncludeType,
        LPCSTR pFileName,
        LPCVOID pParentData,
        LPCVOID *ppData,
        UINT *pBytes);
    
    HRESULT __stdcall Close(LPCVOID pData);

private:
    string m_ShaderDir;
    string m_SystemDir;
};

HRESULT __stdcall CShaderInclude::Open(
    D3D_INCLUDE_TYPE IncludeType,
    LPCSTR pFileName,
    LPCVOID pParentData,
    LPCVOID *ppData,
    UINT *pBytes)
{
    try
    {
        /*
        If pFileName is absolute: finalPath = pFileName.
        If pFileName is relative: finalPath = dir + "\\" + pFileName
        */
        string finalPath;
        switch(IncludeType)
        {
        case D3D_INCLUDE_LOCAL: // #include "FILE"
            common::RelativeToAbsolutePath(&finalPath, m_ShaderDir, pFileName);
            break;
        case D3D_INCLUDE_SYSTEM: // #include <FILE>
            common::RelativeToAbsolutePath(&finalPath, m_SystemDir, pFileName);
            break;
        default:
            assert(0);
        }

        common::FileStream fileStream(finalPath, common::FM_READ);
        uint32_t fileSize = fileStream.GetSize();

        if(fileSize)
        {
            char* buf = new char[fileSize];
            fileStream.MustRead(buf, fileSize);

            *ppData = buf;
            *pBytes = fileSize;
        }
        else
        {
            *ppData = nullptr;
            *pBytes = 0;
        }
        return S_OK;
    }
    catch(common::Error& err)
    {
        PrintError(err);
        return E_FAIL;
    }
}

HRESULT __stdcall CShaderInclude::Close(LPCVOID pData)
{
    // Here we must correctly free buffer created in Open.
    char* buf = (char*)pData;
    delete[] buf;
    return S_OK;
}

/*
Inputs:
    std::vector<char> shaderData
    const char* filePath
    const char* functionName
    const char* profile
Outputs:
    HRESULT hr
    ID3DBlob* shaderBlob
    ID3DBlob* errorBlob
*/

string shaderDir;
common::ExtractFilePath(&shaderDir, filePath);

CShaderInclude includeObj(shaderDir.c_str(), "Shaders/Include");

HRESULT hr = D3DX11CompileFromMemory(
    shaderData.data(),
    shaderData.size(),
    filePath,
    nullptr, // pDefines
    &includeObj,
    functionName,
    profile,
    0, // Flags1
    0, // Flags2
    nullptr, // pPump
    &shaderBlob,
    &errorBlob,
    nullptr); // pHResult


Fri, 02 Nov 2012, 23:50

Flat Shading in DirectX 11

I'm coding terrain heightmap rendering and today I wanted to make it look like in Darwinia. After preparing an appropriate texture, it turned out that the normals of my mesh used for lighting are smoothed. That's not a surprise - after all, there is a single vertex in every particular place, shared by the surrounding triangles and referred to multiple times from the index buffer.

How to make the triangles flat shaded? Make every vertex unique to its triangle so it can have a different normal? In Direct3D 9 there was a fixed-function pipeline state you could set with device->SetRenderState(D3DRS_SHADEMODE, D3DSHADE_FLAT); What about Direct3D 10/11?

A simple solution is to use interpolation modifiers. Fields of the structure passed from the vertex shader to the pixel shader can have these modifiers to tell the GPU how to interpolate (or not interpolate) a particular value over the surface of a triangle. nointerpolation means the value will be taken from one of the vertices. That solves the problem.

struct VS_TO_PS
{
    float4 Pos : SV_Position;
    nointerpolation float3 Normal : NORMAL;
    float2 TexCoord : TEXCOORD0;
};


Thu, 01 Nov 2012, 21:35

Poznań Game Arena and Zjazd Twórców Gier

Last weekend I was in Poznań at Poznań Game Arena and Zjazd Twórców Gier (that would be Game Developers Convention in English). Here is the gallery of my photos:

Poznań Game Arena was an event about gaming. It wasn't very big and I didn't see many game developers or publishers showing their new titles, but there was surely some interesting stuff, like PC hardware, driving simulators, Xbox and Kinect and, of course, beautiful hostesses. A real tractor was also inside, advertising the Farming Simulator game. Other interesting places were a shop with lots of T-shirts with gaming-related prints and a Retro Area with old-school computers like Atari or Commodore, available to anyone who wanted to play these old games.

One thing I like about this event is that it showed hardcore gaming is not dead. In the era of simple, casual, 2D games for smartphones, tablets and Facebook, everyone is talking about the decline of hardcore PC gaming. While at PGA, I had the thought that this is just the marketing and finance guys' point of view. They sell lots of mobile devices these days because many people are buying their first smartphone or tablet. But it doesn't mean we are going to throw away our PCs. Everyone has a PC or laptop. They just aren't able to sell us a new model every year or two like some time ago, so they talk about smaller money, a smaller market or whatever. Obviously, more people play games like Angry Birds than games like Call of Duty. But hardcore gaming is still there for those who like it, so it was nice to see all these PCs, powerful graphics cards, extreme overclocking and good-looking 3D games.

ZTG, on the other hand, was an event for game developers, taking place at the same time and in the building next to PGA. I went to ZTG 2 years ago and my experiences were mixed. This time I must say the organizers did their homework and made an event that could be called a conference, not just a convention. 2 days, 4 parallel sessions including talks, discussion panels and movie screenings - that's a lot of interesting stuff. In this year's edition everything was on time and just felt well prepared. One thing that remained is the "spirit" of the event, coming from the people who organize it. They come from the community creating games using middleware like RPG Maker or Multimedia Fusion, so ZTG is focused more on game design than programming (a talk about terrain rendering techniques was one exception) and more on becoming an indie game developer than on getting a job in a AAA game development studio.

While listening to the talks at ZTG, I reflected that there are two general types of talks at every gamedev conference. Both have pros and cons. The first is more scientific - presented by a student or a lecturer from a university. It can teach some subject or show the results of some novel research. But some may say it is often too theoretical and far from what is really used in the game industry, like when it is about neural networks, genetic algorithms or procedural content generation. The second is a talk from someone representing a game company. This one is often attractively presented and looks like the presenter knows a lot about the subject from his own practice. But it often doesn't say anything new or profound, as real developers from famous game companies are either not allowed to talk about the details of their work or just don't need to - it's their name in the agenda that matters, not what they say :) So it's easy to complain about either type of talk, but for me a good conference finds a balance between them. In my opinion, ZTG did this right.


Copyright © 2004-2017 Adam Sawicki