Entries for tag "rendering", ordered from most recent. Entry count: 176.

Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.

# After WeCan 2013

Mon, 23 Sep 2013

Last weekend I was in Łódź at **WeCan**, a multiplatform demoparty. It was great: well organized, full of interesting stuff to watch and participate in, as well as many nice people and of course a lot of beer :) Here is my small photo gallery from the event. On the evenings of the first and second day there were concerts with various music (metal, drum'n'bass). ARM, one of the sponsors, delivered a talk about their mobile processors and GPUs. They talked about the tools they provide for game developers on their platform, like a performance profiler and an offline shader compiler. On Saturday there were competitions in different categories: music (chip, tracker, streaming), game, wild/anim, gfx (oldschool, newschool), intro (256B, 1k/4k/64k any platform) and of course demo (any platform - there were demos for PC and Android, but the winning one was for Amiga!). I think the full compo results and prods will soon be published on WeCan 2013 :: pouet.net.

But in my opinion, the most interesting part of the whole party was the **real-time coding competition**. There were 3 stages. In each stage, pairs of programmers had to write a GLSL fragment shader in a special environment similar to Shadertoy. They could use some predefined input - several textures and constants, including data calculated in real time from the music played by a DJ during the contest (an array with the FFT). Time was limited to 10-30 minutes per stage. The goal was to generate some good looking graphics and animation. Whoever got the louder applause at the end won and advanced to the next stage, where he could continue to improve his code. I didn't pass to the second stage, but it was fun to participate anyway.

Just as one could expect by looking at the current state of the art in 4k intros, the winning strategy was to implement sphere tracing or something similar. Even if someone had just one sphere on the screen after the first stage, from there he could easily build amazing effects with interesting shapes, lighting, reflections etc. So it's not surprising that many participants took this approach. The winner was w23 from Russia.
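Sphere tracing itself is simple enough to sketch in a few lines. The compo shaders were GLSL, but the algorithm is the same everywhere; below is a minimal CPU-side version in C++ (all names are mine, just for illustration):

```cpp
#include <cmath>
#include <cstdio>

// Signed distance to a sphere of the given radius centered at the origin.
static float SphereSdf(float x, float y, float z, float radius)
{
    return std::sqrt(x*x + y*y + z*z) - radius;
}

// March a ray from 'origin' along normalized 'dir', stepping by the SDF value.
// Returns the distance to the hit, or a negative value if nothing was hit.
static float SphereTrace(const float origin[3], const float dir[3], float radius)
{
    float t = 0.f;
    for(int i = 0; i < 64; ++i)
    {
        const float d = SphereSdf(origin[0] + dir[0]*t,
                                  origin[1] + dir[1]*t,
                                  origin[2] + dir[2]*t, radius);
        if(d < 1e-4f) return t; // Close enough to the surface: report a hit.
        t += d;                 // Safe step: the SDF value never overshoots.
        if(t > 100.f) break;    // Ray escaped the scene.
    }
    return -1.f;
}
```

The same idea generalizes to any signed distance field: replace `SphereSdf` with a `min` of several primitive distances and you get unions of shapes, which is exactly why one sphere on screen is such a good starting point in a timed compo.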

I think this real-time coding compo was an amazing idea. I've never seen anything like it before. Such a competition is much better - more exciting and less time-consuming - than the 8-hour game development compos traditional at Polish gamedev conferences. Of course it's a different thing; not every game developer is a shader programmer. But at this year's WeCan, even those who don't code at all told me that the real-time shader programming compo was great fun to watch.

#demoscene #events #competitions #rendering

# Mesh of Box

Mon, 21 Jan 2013

Too many times I've had to come up with the triangle mesh of a box and hardcode it in my program, written from memory or with the help of a sheet of paper. It's easy to make a mistake and end up with a box with one face missing or something like that. So in case I or somebody else needs it in the future, here it is. Parameters:

Box spanning from (-1, -1, -1) to (+1, +1, +1). Contains 3D positions and normals. Topology is triangle strip, using a strip-cut index. Backface culling can be used; front faces are clockwise (using the Direct3D coordinate system).

```cpp
// H file
struct SVertex {
    vec3 Position;
    vec3 Normal;
};

const size_t BOX_VERTEX_COUNT = 6 * 4;
const size_t BOX_INDEX_COUNT  = 6 * 5;
extern const SVertex BOX_VERTICES[];
extern const uint16_t BOX_INDICES[];

// CPP file
const SVertex BOX_VERTICES[] = {
    // -X
    { vec3(-1.f, -1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    // -Z
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    // +X
    { vec3( 1.f, -1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    // +Z
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    // -Y
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    // +Y
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
};

const uint16_t BOX_INDICES[] = {
     0,  1,  2,  3, 0xFFFF, // -X
     4,  5,  6,  7, 0xFFFF, // -Z
     8,  9, 10, 11, 0xFFFF, // +X
    12, 13, 14, 15, 0xFFFF, // +Z
    16, 17, 18, 19, 0xFFFF, // -Y
    20, 21, 22, 23, 0xFFFF, // +Y
};
```

# How to Flip Triangles in Triangle Mesh

Sat, 19 Jan 2013

Given a triangle mesh, as we use it in real-time rendering of 3D graphics, we say that each triangle has two sides, depending on whether its vertices are oriented clockwise or counterclockwise from a particular point of view. In Direct3D, by default, triangles oriented clockwise are considered front-facing and are visible, while triangles oriented counterclockwise are invisible because they are discarded by the API feature called backface culling.

When we have backface culling enabled and we convert a mesh between coordinate systems, we sometimes need to "flip triangles". When the vertices of each triangle are separate, the algorithm is easy: we just swap the first and third vertex of each triangle (or any other two vertices). So we can start implementing the flipping method like this:

```cpp
class CMesh
{
    ...
    D3D11_PRIMITIVE_TOPOLOGY m_Topology;
    bool m_HasIndices;
    std::vector<SVertex> m_Vertices;
    std::vector<uint32_t> m_Indices;
};

void CMesh::FlipTriangles()
{
    if(m_Topology == D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
    {
        if(m_HasIndices)
            FlipTriangleListInArray<uint32_t>(m_Indices);
        else
            FlipTriangleListInArray<SVertex>(m_Vertices);
    }
    ...
}
```

Where the function template for flipping triangles in a vector is:

```cpp
template<typename T>
void CMesh::FlipTriangleListInArray(std::vector<T>& values)
{
    // Note: the condition is i + 2 < count rather than i < count - 2,
    // because count - 2 underflows size_t when the vector has fewer
    // than 3 elements.
    for(size_t i = 0, count = values.size(); i + 2 < count; i += 3)
        std::swap(values[i], values[i + 2]);
}
```

Simply reversing all elements of the vector with std::reverse would also do the job. But things get complicated when we consider the triangle strip topology. (I assume here that you know how graphics APIs derive the orientation of triangles in a triangle strip.) Reversing the vertices works, but only when the number of vertices in the strip is odd. When it's even, the triangles stay oriented the same way.

I asked a question about this on forum.warsztat.gd (in Polish). User albiero proposed the following solution: just duplicate the first vertex. It generates an additional degenerate (invisible) triangle, but thanks to it, all following triangles are flipped. It seems to work!

I also wanted to handle the strip-cut index (a special value, -1, which starts a new triangle strip), so the rest of my fully-featured triangle flipping algorithm is:

```cpp
    ...
    else if(m_Topology == D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP)
    {
        if(m_HasIndices)
        {
            size_t begIndex = 0;
            while(begIndex < m_Indices.size())
            {
                const size_t indexCount = m_Indices.size();
                while(begIndex < indexCount && m_Indices[begIndex] == UINT_MAX)
                    ++begIndex;
                if(begIndex == indexCount)
                    break;
                size_t endIndex = begIndex + 1;
                while(endIndex < indexCount && m_Indices[endIndex] != UINT_MAX)
                    ++endIndex;
                // m_Indices.size() can change here!
                FlipTriangleStripInArray<uint32_t>(m_Indices, begIndex, endIndex);
                begIndex = endIndex + 1;
            }
        }
        else
            FlipTriangleStripInArray<SVertex>(m_Vertices, 0, m_Vertices.size());
    }
}
```

Where the function template for flipping triangles in a selected part of a vector is:

```cpp
template<typename T>
void CMesh::FlipTriangleStripInArray(std::vector<T>& values, size_t begIndex, size_t endIndex)
{
    const size_t count = endIndex - begIndex;
    if(count < 3) return;
    // Number of elements (and triangles) is odd: reverse the elements.
    if(count % 2)
        std::reverse(values.begin() + begIndex, values.begin() + endIndex);
    // Number of elements (and triangles) is even: repeat the first element.
    else
        values.insert(values.begin() + begIndex, values[begIndex]);
}
```

#directx #rendering #algorithms

# DirectX 11 Renderer - a Screenshot

Wed, 16 Jan 2013

Here is what I've been working on in my free time recently. It's a renderer for PC, Windows, using DirectX 11.

It may not look spectacular here because I've just quickly put random stuff into this scene, but I already have lots of code that does useful things, like deferred shading with dynamic directional and point lights, a bunch of material parameters, mesh processing and loading from the OBJ file format, heightmap generation, particle effects and postprocessing (including bloom, of course :)

In the next posts I will describe some pieces of my technology and share some C++ code.

#rendering #productions #directx

# Particle System - How to Store Particle Age?

Wed, 12 Dec 2012

Particle effects are nice because they are simple and look interesting. Besides, coding them is fun, so I'm coding them again :) Particle systems can be stateful (when the parameters of each particle are calculated from their previous values and a time step) or stateless (when the parameters are always recalculated from scratch using a fixed function and the current time). My current particle system is stateful.

Today a question came to my mind: how to store the age of a particle so it can be deleted after some expiration time, determined by the emitter and unique for each particle? First, let's think for a moment about the operations we need to perform on this data. We need to: 1. increment the age by the time step, 2. check if the particle has expired and should be deleted.

If that was all, the solution would be simple. It would be enough to store just one number; let's call it **TimeLeft**. Initialized to the particle's life duration, it would be:

Step with dt: TimeLeft = TimeLeft - dt

Delete if: TimeLeft <= 0

But what if we additionally want to determine the progress of the particle's lifetime, e.g. to interpolate its color or other parameters based on it? The progress can be expressed in seconds (or whatever time unit we use) or as a fraction (0..1). My first idea was to simply store two numbers, expressed in seconds: **Age** and **MaxAge**. Age would be initialized to 0 and MaxAge to the particle's lifetime duration. Then:

Step with dt: Age = Age + dt

Delete if: Age > MaxAge

Progress: Age

Percent progress: Age / MaxAge

Looks OK, but determining the percent progress involves a costly division. So I came up with the idea of pre-dividing everything by MaxAge, thus defining new parameters: **AgeNorm** = Age / MaxAge (which goes from 0 to 1 during the particle's lifetime) and **AgeIncrement** = 1 / MaxAge. This gives:

Step with dt: AgeNorm = AgeNorm + dt * AgeIncrement

Delete if: AgeNorm > 1

Progress: AgeNorm / AgeIncrement

Percent progress: AgeNorm

This needs an additional multiplication during the time step and a division when we want the progress in seconds. But as I consider the percent progress more useful than the absolute progress in seconds, that's my solution of choice for now.

#math #rendering

# Half-Pixel Offset in DirectX 11

Fri, 09 Nov 2012

There is a problem graphics programmers often have to face, known as the half-pixel/half-texel offset. In Direct3D 9, probably the most official article about it was Directly Mapping Texels to Pixels. In Direct3D 10/11 they changed the way it works, so it can be said that the problem is gone now. But the entry about SV_Position in the Semantics article does not explain it clearly, so here is my short explanation.

A pixel on a screen or a texel on a texture can be seen as a matrix cell, visualized as a square filled with some color. That's the way we treat it in 2D graphics, where we index pixels from the top-left corner using integer (x, y) coordinates.

But in 3D graphics, a texture can be sampled using floating-point coordinates, with interpolation between texel colors. Texture coordinates in DirectX also start from the top-left corner, but position (0, 0) means the very corner of the texture, NOT the center of the first texel! Similarly, position (1, 1) means the bottom-right corner of the texture. So to get exactly the color of the second texel of an 8 x 8 texture, we have to sample with coordinates (1.5 / 8, 0.5 / 8).

Now if we have rendered a 3D scene to a texture to perform e.g. deferred shading or some other screen-space postprocessing and we want to redraw it to the target back buffer, how do we determine the coordinates for sampling this texture based on the position of the rendered pixel? There is a system-value semantic available as a pixel shader input, called SV_Position, which gives the (x, y) pixel position. It is expressed in pixels, so it goes up to e.g. (1920, 1080) and not to (1, 1) like texture coordinates. But it turns out that in Direct3D 10/11, the value of SV_Position for the first pixels on the screen is not (0, 0), (1, 0), but (0.5, 0.5), (1.5, 0.5)!

Therefore to sample a texture based on pixel position, it's enough to divide it by the screen resolution. There is no need to perform any half-pixel offset - adding or subtracting (0.5, 0.5) - like in Direct3D 9.

```hlsl
Texture2D g_Texture : register(t0);
SamplerState g_SamplerState : register(s0);

cbuffer Constants : register(b0) {
    float2 g_ScreenResolution;
};

void PixelShader(float4 Position : SV_Position, out float4 Color : SV_Target)
{
    Color = g_Texture.Sample(g_SamplerState, Position.xy / g_ScreenResolution);
}
```
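The mapping can be sanity-checked on the CPU with plain arithmetic. A hypothetical C++ helper, just to illustrate what the shader above computes:

```cpp
// For pixel (px, py) on a w x h render target, D3D10/11 reports
// SV_Position = (px + 0.5, py + 0.5). Dividing by the resolution yields
// texture coordinates that land exactly on that texel's center.
static void PixelToUV(int px, int py, int w, int h, float& u, float& v)
{
    const float svPosX = px + 0.5f; // what the GPU passes to the pixel shader
    const float svPosY = py + 0.5f;
    u = svPosX / w;
    v = svPosY / h;
}
```

For pixel (1, 0) on an 8 x 8 target this produces (1.5 / 8, 0.5 / 8), which is exactly the second-texel sampling position from the earlier paragraph; no extra offset needed.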

#directx #rendering

# Flat Shading in DirectX 11

Fri, 02 Nov 2012

I'm coding terrain heightmap rendering and today I wanted to make it look like **Darwinia**. After preparing an appropriate texture, it turned out that the normals of my mesh used for lighting are smoothed. That's no surprise - after all, there is a single vertex in each particular place, shared by the surrounding triangles and referenced multiple times from the index buffer.

How to make the triangles flat shaded? Make every vertex unique to its triangle so it can have a different normal? In Direct3D 9 there was a fixed-function pipeline state you could set with device->SetRenderState(D3DRS_SHADEMODE, D3DSHADE_FLAT);. What about Direct3D 10/11?

A simple solution is to use interpolation modifiers. Fields of the structure passed from the vertex shader to the pixel shader can have modifiers that tell the GPU how to interpolate (or not interpolate) a particular value over the surface of a triangle. nointerpolation means the value will be taken from one of the vertices. That solves the problem.

```hlsl
struct VS_TO_PS
{
    float4 Pos : SV_Position;
    nointerpolation float3 Normal : NORMAL;
    float2 TexCoord : TEXCOORD0;
};
```
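The heavier alternative mentioned above - making every vertex unique to its triangle - can be done on the CPU by un-indexing the mesh and computing one face normal per triangle from a cross product. A sketch with my own minimal vector type; whether the normal needs negating depends on your winding and handedness conventions:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y,
                                             a.z*b.x - a.x*b.z,
                                             a.x*b.y - a.y*b.x }; }
static Vec3 Normalize(Vec3 v)
{
    const float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Un-indexes a triangle-list mesh: every triangle gets 3 unique output
// vertices, all three sharing the face normal of that triangle.
static void BuildFlatShadedMesh(const std::vector<Vec3>& positions,
                                const std::vector<unsigned>& indices,
                                std::vector<Vec3>& outPositions,
                                std::vector<Vec3>& outNormals)
{
    for(size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const Vec3 p0 = positions[indices[i]];
        const Vec3 p1 = positions[indices[i + 1]];
        const Vec3 p2 = positions[indices[i + 2]];
        const Vec3 n = Normalize(Cross(Sub(p1, p0), Sub(p2, p0)));
        outPositions.push_back(p0); outPositions.push_back(p1); outPositions.push_back(p2);
        outNormals.push_back(n); outNormals.push_back(n); outNormals.push_back(n);
    }
}
```

The nointerpolation modifier is clearly the cheaper route - no extra vertices, no extra memory - but the un-indexing approach also gives you correct per-face normals instead of the smoothed ones, which matters if the "provoking vertex" normals aren't flat to begin with.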

#rendering #directx

# Developing Graphics Driver

Thu, 12 Jul 2012

Want to know what I do at Intel? Obviously all the details are secret, but generally, as a Graphics Software Engineer, I code the graphics driver for our GPU. What does this software do? When you write a game these days, you usually use some game engine, but I'm sure you know that at a lower level, everything ends up as a bunch of textured 3D triangles rendered with hardware acceleration by the GPU. To render them, the engine uses one of the standard graphics APIs. On Windows it can be DirectX or OpenGL, on Linux and Mac it is OpenGL, on mobile platforms it is OpenGL ES. On the other side, there are many hardware manufacturers - like NVIDIA, AMD, Intel or Imagination Technologies - that make discrete or embedded GPUs. These chips have different capabilities and instruction sets, so a graphics driver is needed to translate API calls (like IDirect3DDevice9::DrawIndexedPrimitive) and shader code into a form specific to the hardware.

Want to know more? Intel recently published documentation of the GPU in the new Ivy Bridge processor - see this news. You can find it on the **intellinuxgraphics.org** website. It consists of more than 2000 pages in 17 PDF files. For example, in the last volume (Volume 4 Part 3) you can see what the instructions of our programmable execution units look like. They are quite powerful :)

Copyright © 2004-2022