Tag: rendering

Entries for tag "rendering", ordered from most recent. Entry count: 165.

Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.


# Lower-Level Graphics API - What Does It Mean?

Sat, 06 Jun 2015

They say that the new, upcoming generation of graphics APIs (like DirectX 12 and Vulkan) will be lower-level, closer to the GPU. You may wonder what exactly that means and what the purpose of it is. Let me explain with a picture that I made a few months ago and have already shown in two of my presentations.

Row 1: Back in the early days of computer graphics (like on the Atari or Commodore 64), there were only applications (green rectangle), communicating directly with the graphics hardware (e.g. by setting hardware registers).

Row 2: Hardware and software became more complicated. Operating systems started to separate applications from direct access to hardware. To make applications work on the variety of devices available on the market, some standards had to be defined. Device drivers appeared as a separate layer (red rectangle).

A graphics API (Application Programming Interface), like every interface, is just a means of communication - a standardized, documented definition of functions and other constructs that are used on the application's side and implemented by the driver. The driver translates these calls into commands specific to the particular hardware.

Row 3: As games became more complex, it was no longer convenient to call the graphics API directly from game logic code. Another layer appeared, called a game engine (yellow rectangle). It is essentially a comprehensive library that provides some higher-level objects (like an entity, asset, material, camera, light) and implements them (in its graphical part) using the lower-level concepts of the graphics API (like a mesh, texture, shader).

Row 4: This is where we are now. Games, as well as game engines, constantly become more complex and expensive to make. Fewer and fewer game development studios make their own engine technology; more prefer to use existing, universal engines (like Unity or Unreal Engine) and just focus on gameplay. These engines have recently become available for free and on very attractive licenses, so this trend affects AAA as well as indie and amateur game developers.

Graphics drivers have become incredibly complex programs as well. You may not see it directly, but just take a look at the size of their installers. They are not games - they don't contain tons of graphics and music assets. So guess what is inside? A lot of code! They have to implement all the APIs (DirectX 9, 10, 11, OpenGL). In addition to that, these APIs have to be backward compatible and don't necessarily reflect how modern GPUs work, so the additional logic needed for that can introduce performance overhead or contain bugs.

Row 5: The future, with the new generation of graphics APIs. Note that the total width of the bars is not smaller than in the previous row. (Maybe it should be a bit smaller - see the comment below.) That is because, according to the concepts of accidental and essential complexity from the famous essay No Silver Bullet, the stuff that is really necessary has to be done somewhere anyway. So a lower-level API just means that the driver can be smaller and simpler, while the upper layers take on more responsibility for managing things manually instead of relying on automatic facilities provided by the driver (for example, there is no more DISCARD or NOOVERWRITE flag when mapping a resource in DirectX 12). It also means the API is again closer to the actual hardware. Thanks to all that, GPU usage can be optimized better, because the engine knows the higher-level details of the specific application.
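To illustrate that shift of responsibility, here is a rough sketch (not from the original picture) of how updating a dynamic buffer differs between Direct3D 11 and Direct3D 12; the context, constantBuffer, uploadBuffer and data variables are assumed to already exist:

// D3D11: the driver handles synchronization and buffer renaming for you.
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(constantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, &data, sizeof(data));
context->Unmap(constantBuffer, 0);

// D3D12: no DISCARD flag - the resource can stay persistently mapped, but it is
// up to the application to ensure the GPU is no longer reading the memory it
// overwrites (e.g. by keeping a ring of per-frame buffers guarded by fences).
void* ptr = nullptr;
D3D12_RANGE readRange = {0, 0}; // we do not intend to read
uploadBuffer->Map(0, &readRange, &ptr);
memcpy(ptr, &data, sizeof(data));
uploadBuffer->Unmap(0, nullptr);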

The question is: will that make graphics programming more difficult? Yes, it will, but these days it mostly affects a small group of programmers working directly on game engines or just passionate about this stuff (like myself), not the rest of game developers. Similarly, there may be a concern about potential fragmentation. Time will tell which APIs are more successful than others, but if none of them becomes a standard across all platforms (Vulkan is a good candidate) and GPU/OS vendors succeed in convincing developers to use their platform-specific ones, it will again complicate life only for those engine developers. Successful games have to be multiplatform anyway, and modern game engines do a good job of hiding many of the differences between platforms, so they can do the same with graphics.

Comments | #gpu #rendering #directx Share

# Nothing Renders - Why?

Sun, 24 May 2015

"I have a blank screen" or "nothing is rendered" is probably the most frequent bug in graphics programming. It's also quite hard to debug because there are many possible causes. Graphics pipeline is compilated, so there are multiple things that can be wrong at each stage. Few years ago I've written a short article about this, in Polish, titled Nic nie widać. This is translation of that article. It provides a list of questions you should ask yourself while considering the most frequent reasons for why nothing appears on the screen. It is dedicated for Direct3D 9, but it can also be applied to OpenGL (only some things are named differently) and, to some degree, to newer graphics API-s.

## It's black

First of all, clear your background to some color other than black, e.g. gray or blue. Maybe your geometry is rendered, but it is black. This is a frequent bug, especially if you have lighting enabled (and it is enabled by default) while you didn't set up any lights.
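For example, in Direct3D 9 (assuming device is your IDirect3DDevice9 pointer):

device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(100, 100, 100), 1.0f, 0); // clear to gray, not black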

## Matrices

Are you sure you have correctly set up all matrices - world, view and projection? Did you create them using the correct functions? Is the camera located in the right place and looking in the desired direction? Maybe your object is in the same position as the camera, or behind the camera, which is pointing backward?
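For the fixed-function pipeline, a minimal sketch of such a setup could look like this (the camera position, aspect ratio and Z range are arbitrary example values):

D3DXMATRIX world, view, proj;
D3DXMatrixIdentity(&world);
D3DXVECTOR3 eye(0.0f, 1.0f, -5.0f), at(0.0f, 0.0f, 0.0f), up(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&view, &eye, &at, &up);
D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f, 1280.0f / 720.0f, 0.5f, 100.0f);
device->SetTransform(D3DTS_WORLD, &world);
device->SetTransform(D3DTS_VIEW, &view);
device->SetTransform(D3DTS_PROJECTION, &proj);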

## Position

Are the size and position of your object correct? Is your object too close to or too far from the camera, relative to the minimum and maximum Z values set in the projection matrix? Is it perhaps too small to be visible?

## Errors

Do all the calls to DirectX functions return a value meaning success? Do you even check that value? Please also launch the "DirectX Control Panel", enable the Debug Layer for your application and analyze the Output for any error or warning messages.
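For example (a sketch only - SMyVertex, MY_VERTEX_FVF-style flags and vertexCount are placeholder names, not a complete program):

IDirect3DVertexBuffer9* vb = NULL;
HRESULT hr = device->CreateVertexBuffer(vertexCount * sizeof(SMyVertex),
    D3DUSAGE_WRITEONLY, D3DFVF_XYZ | D3DFVF_DIFFUSE, D3DPOOL_MANAGED, &vb, NULL);
if(FAILED(hr))
{
    // Don't silently continue - log the error and bail out.
}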

## Vertex Format

Do you use the correct vertex format? Did you define a structure describing your vertex correctly, compatible with the FVF/vertex declaration that you use? Are all the fields in the correct order and of the right type? Do you tell DirectX what vertex format you want to use by calling SetFVF/SetVertexDeclaration before rendering?
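A sketch of a matching structure and FVF (SMyVertex and MY_VERTEX_FVF are example names; the field order must follow the order DirectX expects - position, then diffuse color, then texture coordinates):

struct SMyVertex
{
    float x, y, z;  // D3DFVF_XYZ
    D3DCOLOR color; // D3DFVF_DIFFUSE
    float u, v;     // D3DFVF_TEX1
};
const DWORD MY_VERTEX_FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX1;

// ...later, before rendering:
device->SetFVF(MY_VERTEX_FVF);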

## Draw Call

Do you pass the correct parameters to the rendering function? In the most basic case, all offsets should be 0 and the "stride" is the size of your vertex structure in bytes, like sizeof(SMyVertex). Do you pass the correct number of primitives to render?
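For example, a basic draw of a non-indexed triangle list might look like this (vertexBuffer and triangleCount are assumed to exist):

device->SetStreamSource(0, vertexBuffer, 0, sizeof(SMyVertex)); // offset 0, stride = vertex size
device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount);    // primitive count, not vertex count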

## Buffers

Do you fill your vertex and (optional) index buffer correctly? Do they have the correct number of elements? Do you fill all of them? If you use transformed coordinates XYZRHW, the RHW component should be set to 1.0 and never to 0.0.

## Alpha Channel

Maybe your geometry is totally transparent. Is the alpha channel set to the maximum (1.0 or 0xFF, depending on the type), and not to the minimum, in all of these: vertices, texture, material (the last one only if you use lighting)?

## Backface Culling

Maybe the triangles you want to render are rejected as "back-facing" the camera, because they have the wrong winding (clockwise vs. counterclockwise)? Try to disable backface culling to check that.
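In Direct3D 9 this is a single render state (remember to restore it afterwards):

device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); // temporarily disable backface culling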

## States

Did you set up blending on all texture stages correctly? Did you correctly set up all the rest of the graphics pipeline states? Maybe the problem appears only when you render some objects in a specific order? That would mean states set before rendering one object remain in the pipeline and break the rendering of the next one.

## Advanced Effects

If you use some advanced rendering features, your graphics card may not support them. Set the reference software rasterizer during creation of the device object (D3DDEVTYPE_REF instead of HAL). Your program will run very slowly, but everything should be drawn as expected. Also query the device object for the capabilities of your GPU (device caps).
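A sketch of both, assuming d3d, hWnd, presentParams and device come from your existing initialization code:

// Create a reference device instead of a hardware (HAL) one:
d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_REF, hWnd,
    D3DCREATE_SOFTWARE_VERTEXPROCESSING, &presentParams, &device);

// Or check what the real GPU supports:
D3DCAPS9 caps;
device->GetDeviceCaps(&caps);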

## Z-Buffer

If you use a depth buffer, remember to clear it as well, together with the backbuffer. In the 3rd parameter of the Clear function, bitwise OR in the D3DCLEAR_ZBUFFER flag. Without it, you won't see anything on the screen, or you will see artifacts. The value to clear the Z-buffer to is 1.0f (not 0.0f).
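So the full clear from the beginning of this checklist becomes, for example:

device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(100, 100, 100), 1.0f, 0);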

Finally, there are ways to actually debug what the data and state look like at subsequent stages of the graphics pipeline while the buggy draw call is executed, using Graphics Diagnostics in Visual Studio or another GPU debugging tool.

See also: How not to render 3D graphics: 40 ways to get a blank black screen

Comments | #directx #rendering Share

# Rendering Video Special Effects in GLSL

Mon, 16 Jun 2014

Rendering real-time, hardware-accelerated 3D graphics is one aspect of computer graphics, but there are others too. Recently I became interested in video editing. I wanted to add some special effects to a video and was looking for a technology to do that. Of course, video editing software usually has some effects built in, like different filters, transition effects, borders or gradients. But I wanted something different. If I had, and knew how to use, software like Adobe After Effects, I'm sure that would be the best and easiest way to make any effect imaginable. But as I don't, I decided to use what I already know - to write a shader :)

1. To run a shader, some hosting app is needed. Of course I could write one in C++, but for the purpose of this work it was enough to use the Live Coding Compo Framework (a demoscene tool created by bonzaj, which was used during last year's WeCan demoparty). This simple and free package contains a rendering application and a preconfigured Visual Studio solution. Having VS installed (it works with the Express version as well), all I needed to do was edit the "Run.bat" file to point to the directory with the VS installation on my system. Next, I just executed "Run.bat", and two programs were launched. On the left monitor I had the fullscreen "Live Coding Preview", on the right: Visual Studio with a special solution opened. I could then edit any of the GLSL fragment shaders contained in the solution. Every time I hit Compile (Ctrl+F7), the shader was compiled and displayed in the preview.

2. Being able to render my effect in real time, next I needed to capture it to a video. Probably the most popular app for this is FRAPS. I ran it, set the Video Capture Settings to the frame rate that I was going to use in my final video (which was 29.97 fps) and then captured the appropriate period of time of my effect rendering, starting and stopping the recording with the F9 hotkey.

3. The video captured by FRAPS is in the full, original resolution and encoded with some strange codec, so next I needed to convert it to the desired format. To do this, I used VLC media player. Some may think that it's just a video player, but in fact it's incredibly powerful and flexible video transmitting and processing software. (I once had an opportunity to work with libVLC - its features exposed as a C library.) Its greatest advantage is that it has its own collection of codecs, so it doesn't care whether you have appropriate codecs installed in your system. To convert a video file, I selected: Media > Convert / Save..., selected my AVI file captured by FRAPS, pressed the "Convert / Save" button, selected Profile: "Video - H.264 + MP3 (MP4)", and customized it using the "Edit selected profile" button, selecting: Encapsulation = MP4/MOV, Video codec = MPEG-4 (on the Resolution tab, I could also set a new resolution to scale the content; my choice was 1280 x 720 px), Audio disabled, Subtitles disabled. Then, after pressing "Save", selecting the path to the destination file, pressing "Start" and waiting some time, I had my video converted to the more standard MPEG-4 format (and a file more than 5 times smaller than the original recorded by FRAPS).

4. Finally, I could insert this video onto a new track in my video editing software and enable blending with the underlying layer to achieve the desired effect (I used the "Overlay" blending mode and 50% opacity).

There are some details that I intentionally skipped here (like video bitrate) so as not to make this post even longer, but I hope you learned something new from it. My effect looked like this, and here is the source code: Low freq fx.glsl

By the way, here is another short tutorial about how to make a GIF like this from a video (using only free tools this time):

1. To capture video frames as images, use VLC media player.

2. To merge the images into an animated GIF, use GIMP.

Comments | #rendering #video #tools Share

# Fluorescence

Mon, 05 May 2014

The main and most general formula in computer graphics is the Rendering Equation. It can be simplified to say that the perceived color on an opaque surface is: LightColor * MaterialColor, where the variables are (R, G, B) vectors and (*) is per-component multiplication. According to this formula, for example, a red surface lit with white light appears red, while the same surface lit with pure green light appears black.
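As a tiny sketch in code (not part of the original article; vec3 stands for any 3-component float vector type, e.g. glm::vec3):

vec3 PerceivedColor(const vec3& lightColor, const vec3& materialColor)
{
    // Per-component multiplication: white light on a red material gives red,
    // pure green light on a red material gives black.
    return vec3(lightColor.x * materialColor.x,
                lightColor.y * materialColor.y,
                lightColor.z * materialColor.z);
}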

There are many phenomena that go beyond this model. One of them is subsurface scattering (SSS), where light penetrates the object and exits at a different place on the surface. Another one is fluorescence - a property of a material which absorbs light of one wavelength and emits a different wavelength in return. One particularly interesting kind of it is UV-activity - when a material absorbs UV light (also called blacklight, which is invisible to people) and emits some visible color. This way an object, when lit with UV light, looks like it's glowing in the dark, even though it has no LEDs or power source.

I've never seen a need to simulate fluorescence in computer graphics, but in real life it is used e.g. in decorations for psytrance parties, like this installation on the main stage of the Tree of Life 2012 festival in Turkey:

So what kinds of materials are fluorescent? It's not so simple that you can take any vividly colored object and it will glow under UV. Some of them do, some don't. You can take a very colorful T-shirt and it may not be visible under UV at all. On the other hand, some substances glow when you'd rather they didn't (like dandruff :) But there are some materials that are specially designed and sold as fluorescent, like the Fluor series of Montana MNT 94 paints I used to paint my origami decorations.

 

Comments | #art #psytrance #rendering Share

# Four primary colors

Sun, 04 May 2014

I've already posted about my origami decoration. My choice of colors is not random. Of course I could make it more colorful, but every paint costs some money, so I decided to buy just four: red, green, yellow and blue. Why?

That's because I still keep in mind the great article Color Wheels are wrong? How color vision actually works. It explains that although our eyes can see three colors - red, green and blue (RGB) - our perception is not that simple and direct. Our vision first computes the difference between R and G, so each color is perceived as more red or more green. Next, and more importantly, it computes the difference between R+G and B, so each color is either more yellow (or red, or green) - the so-called warm colors - or more blue, aka cool colors.

That's also how photo manipulation software works (e.g. Adobe Lightroom). Instead of sliders for RGB, you can find two sliders there: one to choose between more red and more green (called tint), and one between more yellow and more blue (called temperature).

That's why it could be said that for our vision, there are four primary colors: red, green, yellow and blue.

Comments | #rendering Share

# After WeCan 2013

Mon, 23 Sep 2013

Last weekend I was in Łódź at WeCan - a multiplatform demoparty. It was great! Well organized, full of interesting stuff to watch and participate in, as well as many nice people and of course a lot of beer :) Here is my small photo gallery from the event. In the evenings of both the first and the second day there were some concerts with various music (metal, drum'n'bass). ARM - one of the sponsors - delivered a talk about their mobile processors and GPUs. They talked about the tools they provide for game developers on their platform, like a performance profiler or an offline shader compiler. On Saturday there were competitions in different categories: music (chip, tracker, streaming), game, wild/anim, gfx (oldschool, newschool), intro (256B, 1k/4k/64k any platform) and of course demo (any platform - there were demos for PC and Android, but the winning one was for Amiga!) I think the full compo results and prods will soon be published on WeCan 2013 :: pouet.net.

But in my opinion, the most interesting part of the whole party was the real-time coding competition. There were 3 stages. In each stage, pairs of programmers had to write a GLSL fragment shader in a special environment similar to Shadertoy. They could use some predefined input - several textures and constants, including data calculated in real time from the music played by a DJ during the contest (an array with an FFT). Time was limited to 10-30 minutes for each stage. The goal was to generate some good-looking graphics and animation. Whoever got the louder applause at the end was the winner and advanced to the next stage, where he could continue improving his code. I didn't make it to the second stage, but it was fun to participate in this compo anyway.

Just as one could expect by looking at what is now state of the art in 4k intros, the winning strategy was to implement sphere tracing or something like that. Even if someone had just one sphere displayed on the screen after the first stage, from there he could easily make some amazing effects with interesting shapes, lighting, reflections etc. So it's not surprising that many participants took this strategy. The winner was w23 from Russia.

I think that this real-time coding compo was an amazing idea. I've never seen anything like it before. Now I think that such a competition is much better - more exciting and less time-consuming - than any 8-hour-long game development compo, which is traditional at Polish gamedev conferences. Of course, that's just a different thing. Not every game developer is a shader programmer. But at this year's WeCan, even those who don't code at all told me that the compo about real-time shader programming was very fun to watch.

Comments | #demoscene #events #competitions #rendering Share

# Mesh of Box

Mon, 21 Jan 2013

Too many times I've had to come up with the triangle mesh of a box to hardcode it in a program, writing it just from memory or with the help of a sheet of paper. It's easy to make a mistake and end up with a box that has one face missing, or something like that. So in case I, or somebody else, needs it in the future, here it is. Parameters:

- The box spans from (-1, -1, -1) to (+1, +1, +1).
- Vertices contain 3D positions and normals.
- Topology is a triangle strip, using the strip-cut index.
- Backface culling can be used; front faces are clockwise (using the Direct3D coordinate system).

// H file

#include <cstddef>   // for size_t
#include <cstdint>   // for uint16_t

// vec3 can be any 3-component float vector type, e.g. glm::vec3.
struct SVertex {
    vec3 Position;
    vec3 Normal;
};

const size_t BOX_VERTEX_COUNT = 6 * 4;
const size_t BOX_INDEX_COUNT  = 6 * 5;
extern const SVertex BOX_VERTICES[];
extern const uint16_t BOX_INDICES[];

// CPP file

const SVertex BOX_VERTICES[] = {
    // -X
    { vec3(-1.f, -1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    // -Z
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    // +X
    { vec3( 1.f, -1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    // +Z
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    // -Y
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    // +Y
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
};

const uint16_t BOX_INDICES[] = {
     0,  1,  2,  3, 0xFFFF, // -X
     4,  5,  6,  7, 0xFFFF, // -Z
     8,  9, 10, 11, 0xFFFF, // +X
    12, 13, 14, 15, 0xFFFF, // +Z
    16, 17, 18, 19, 0xFFFF, // -Y
    20, 21, 22, 23, 0xFFFF, // +Y
};
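For reference, a sketch of how this mesh could be drawn with Direct3D 11, assuming the vertex and index buffers have already been created from BOX_VERTICES/BOX_INDICES and a matching input layout is bound (with DXGI_FORMAT_R16_UINT, the 0xFFFF value acts as the strip-cut index):

UINT stride = sizeof(SVertex), offset = 0;
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
context->IASetVertexBuffers(0, 1, &boxVertexBuffer, &stride, &offset);
context->IASetIndexBuffer(boxIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
context->DrawIndexed((UINT)BOX_INDEX_COUNT, 0, 0);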

Comments | #rendering Share

# How to Flip Triangles in Triangle Mesh

Sat, 19 Jan 2013

Given a triangle mesh, as we use it in real-time rendering of 3D graphics, we say that each triangle has two sides, depending on whether its vertices are oriented clockwise or counterclockwise from a particular point of view. In Direct3D, by default, triangles oriented clockwise are considered front-facing and are visible, while triangles oriented counterclockwise are invisible, because they are discarded by the API feature called backface culling.

When we have backface culling enabled and we convert a mesh between coordinate systems, we sometimes need to "flip triangles". When the vertices of each triangle are stored separately, the algorithm is easy. We just need to swap the first and the third vertex of each triangle (or any other two vertices). So we can start implementing the flipping method like this:

class CMesh
{
    ...
    D3D11_PRIMITIVE_TOPOLOGY m_Topology;
    bool m_HasIndices;
    std::vector<SVertex> m_Vertices;
    std::vector<uint32_t> m_Indices;
};

void CMesh::FlipTriangles()
{
    if(m_Topology == D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
    {
        if(m_HasIndices)
            FlipTriangleListInArray<uint32_t>(m_Indices);
        else
            FlipTriangleListInArray<SVertex>(m_Vertices);
    }
    ...
}

Where the function template for flipping triangles in a vector is:

template<typename T>
void CMesh::FlipTriangleListInArray(std::vector<T>& values)
{
    for(size_t i = 0, count = values.size(); i + 2 < count; i += 3) // i + 2 < count avoids size_t underflow for fewer than 3 elements
        std::swap(values[i], values[i + 2]);
}

Simply reversing all the elements in the vector with std::reverse would also do the job. But things get complicated when we consider the triangle strip topology. (I assume here that you know how graphics APIs derive the orientation of triangles in a triangle strip.) Reversing the vertices works, but only when the number of vertices in the strip is odd. When it's even, the triangles keep the same orientation. For example, reversing the 4-vertex strip ABCD gives DCBA, which describes the same two triangles with unchanged winding, while reversing the 5-vertex strip ABCDE flips the winding of all three of its triangles.

I asked a question about this on forum.warsztat.gd (in Polish). User albiero proposed the following solution: just duplicate the first vertex. It will generate an additional degenerate (invisible) triangle, but thanks to this, all the following triangles will be flipped. It seems to work!

I also wanted to handle the strip-cut index (a special value of -1 which starts a new triangle strip), so the rest of my fully-featured algorithm for triangle flipping is:

    ...
    else if(m_Topology == D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP)
    {
        if(m_HasIndices)
        {
            size_t begIndex = 0;
            while(begIndex < m_Indices.size())
            {
                const size_t indexCount = m_Indices.size();
                while(begIndex < indexCount && m_Indices[begIndex] == UINT_MAX)
                    ++begIndex;
                if(begIndex == indexCount)
                    break;
                size_t endIndex = begIndex + 1;
                while(endIndex < indexCount && m_Indices[endIndex] != UINT_MAX)
                    ++endIndex;
                // m_Indices.size() can change here!
                FlipTriangleStripInArray<uint32_t>(m_Indices, begIndex, endIndex);
                begIndex = endIndex + 1;
            }
        }
        else
            FlipTriangleStripInArray<SVertex>(m_Vertices, 0, m_Vertices.size());
    }
}

Where function template for flipping triangles in selected part of a vector is:

template<typename T>
void CMesh::FlipTriangleStripInArray(std::vector<T>& values, size_t begIndex, size_t endIndex)
{
    const size_t count = endIndex - begIndex;
    if(count < 3) return;
    // Number of elements (and triangles) is odd: Reverse elements.
    if(count % 2)
        std::reverse(values.begin() + begIndex, values.begin() + endIndex);
    // Number of elements (and triangles) is even: Repeat first element.
    else
        values.insert(values.begin() + begIndex, values[begIndex]);
}

Comments | #directx #rendering #algorithms Share

