Entries for tag "rendering", ordered from most recent. Entry count: 187.
# Rendering Video Special Effects in GLSL
Mon
16
Jun 2014
Rendering real-time, hardware-accelerated 3D graphics is one aspect of computer graphics, but there are others too. Recently I became interested in video editing. I wanted to add some special effects to a video and was looking for a technology to do that. Of course, video editing software usually has some effects built in, like various filters, transition effects, borders or gradients. But I wanted something different. If I had software like Adobe After Effects and knew how to use it, I'm sure that would be the best and easiest way to make any effect imaginable. But as I don't, I decided to use what I already know - to write a shader :)
1. To run a shader, some hosting app is needed. Of course I could write one in C++, but for the purpose of this work it was enough to use Live Coding Compo Framework (a demoscene tool created by bonzaj, which was used during last year's WeCan demoparty). This simple and free package contains a rendering application and a preconfigured Visual Studio solution. With VS installed (it works with the Express version as well), all I needed to do was edit the "Run.bat" file to point to the directory with the VS installation on my system. Next, I just executed "Run.bat" and two programs were launched: on the left monitor a fullscreen "Live Coding Preview", on the right Visual Studio with a special solution opened. I could then edit any of the GLSL fragment shaders contained in the solution. Every time I hit Compile (Ctrl+F7), the shader was compiled and displayed in the preview.
2. Being able to render my effect in real time, I next needed to capture it to a video. Probably the most popular app for this is FRAPS. I ran it, set Video Capture Settings to the frame rate that I was going to use in my final video (which was 29.97 fps) and then captured the appropriate period of rendering my effect, starting and stopping the recording with the F9 hotkey.
3. Video captured by FRAPS is in full, original resolution and encoded with an unusual codec, so next I needed to convert it to the desired format. To do this, I used VLC media player. Some may think that it's just a video player, but in fact it's incredibly powerful and flexible video transmitting and processing software. (I once had an opportunity to work with libVLC - its features exposed as a C library.) Its greatest advantage is that it has its own collection of codecs, so it doesn't care whether you have the appropriate codecs installed in your system. To convert a video file, I selected Media > Convert / Save..., selected my AVI file captured by FRAPS, pressed the "Convert / Save" button, selected Profile: "Video - H.264 + MP3 (MP4)" and customized it using the "Edit selected profile" image button, selecting: Encapsulation = MP4/MOV, Video codec = MPEG-4 (on the Resolution tab, I could also set a new resolution to scale the content; my choice was 1280 x 720 px), Audio disabled, Subtitles disabled. Then after pressing "Save", selecting the path to the destination file, pressing "Start" and waiting some time, I had my video converted to the more standard MPEG-4 format (and more than 5 times smaller than the original recorded by FRAPS).
4. Finally, I could insert this video onto a new track in my video editing software and enable blending with the underlying layer to achieve the desired effect (I used the "Overlay" blending mode and 50% opacity).
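For reference, the "Overlay" blending mode mentioned above is commonly defined per channel as: multiply for dark base values, screen for bright ones. Here is a small C++ sketch of that commonly documented formula (not necessarily the exact math of any particular video editor), including mixing the result back at a given opacity:

```cpp
#include <cmath>

// Standard "Overlay" blend for a single channel in [0, 1]:
// dark base values get multiplied, bright base values get screened.
float Overlay(float base, float blend)
{
    return base < 0.5f
        ? 2.f * base * blend
        : 1.f - 2.f * (1.f - base) * (1.f - blend);
}

// Blend at partial opacity: linearly mix the blended result with the base.
float OverlayWithOpacity(float base, float blend, float opacity)
{
    return base + (Overlay(base, blend) - base) * opacity;
}
```

At 50% opacity, the effect video only nudges the underlying layer halfway toward the full Overlay result, which is why subtle shader effects work well this way.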
There are some details that I intentionally skipped here (like video bitrate) so as not to make this post even longer, but I hope you learned something new from it. My effect looked like this, and here is the source code: Low freq fx.glsl

By the way, here is another tutorial on how to make a GIF like this from a video (using only free tools this time):
1. To capture video frames as images, use VLC media player:
2. To merge images into animated GIF, use GIMP:
Comments | #rendering #video #tools Share
# Fluorescence
Mon
05
May 2014
The main and most general formula in computer graphics is the Rendering Equation. It can be simplified to say that the perceived color on an opaque surface is: LightColor * MaterialColor. The variables are (R, G, B) vectors and (*) is per-component multiplication. According to this formula, an object can only reflect light components that the light source actually contains.
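As a minimal sketch, the simplified formula above is just per-component multiplication:

```cpp
// Simplified shading from the text: perceived = LightColor * MaterialColor,
// computed per component. Color is an illustrative (R, G, B) struct.
struct Color { float r, g, b; };

Color Shade(const Color& light, const Color& material)
{
    return { light.r * material.r,
             light.g * material.g,
             light.b * material.b };
}
```

For example, a pure green material lit with pure red light comes out black - no channel survives the multiplication - which is exactly the limitation that phenomena like fluorescence break.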
There are many phenomena that go beyond this model. One of them is subsurface scattering (SSS), where light penetrates an object and exits at a different place on the surface. Another is fluorescence - a property of a material which absorbs some light wavelengths and emits different wavelengths in return. One particularly interesting kind of it is UV-activity - when a material absorbs UV light (also called blacklight, which is invisible to people) and emits some visible color. This way an object, when lit with UV light, looks like it's glowing in the dark, even though it has no LEDs or power source.
I've never seen a need to simulate fluorescence in computer graphics, but in real life it is used e.g. in decorations for psytrance parties, like this installation on the main stage of the Tree of Life 2012 festival in Turkey:
So what types of materials are fluorescent? It's not so simple that you can take any vividly colored object and it will glow under UV. Some do, some don't. You can take a very colorful T-shirt and it may not be visible under UV at all. On the other hand, some substances glow when you'd rather they didn't (like dandruff :) But there are materials that are specially designed and sold to be fluorescent, like the Fluor series of Montana MNT 94 paints I used to paint my origami decorations.
Comments | #art #psytrance #rendering Share
# Four primary colors
Sun
04
May 2014
I've already posted about my origami decoration. My choice of colors is not random. Of course I could make it more colorful, but every paint costs money, so I decided to buy just four: red, green, yellow and blue. Why?
That's because I still keep in mind the great article Color Wheels are wrong? How color vision actually works. It explains that although our eyes can see three colors: red, green and blue (RGB), our perception is not that simple and direct. Our vision first takes the difference between R and G, so each color is either more red or more green. Next, and more importantly, it takes the difference between RG and B, so each color is either more yellow (or red, or green) - the so-called warm colors - or more blue, aka cool colors.
That's also how photo manipulation software works (e.g. Adobe Lightroom). Instead of sliders for RGB, you can find there two sliders: one to choose between more red and more green (called tint), and one between more yellow and more blue (called temperature).

That's why it could be said that for our vision, there are four primary colors: red, green, yellow and blue.
# After WeCan 2013
Mon
23
Sep 2013
Last weekend I was in Łódź at WeCan - a multiplatform demoparty. It was great - well organized, full of interesting stuff to watch and participate in, as well as many nice people and of course a lot of beer :) Here is my small photo gallery from the event. On the evenings of both the first and second day there were concerts with various music (metal, drum'n'bass). ARM - one of the sponsors - delivered a talk about their mobile processors and GPUs. They talked about the tools they provide for game developers on their platform, like one for performance profiling or an offline shader compiler. On Saturday there were competitions in different categories: music (chip, tracker, streaming), game, wild/anim, gfx (oldschool, newschool), intro (256B, 1k/4k/64k any platform) and of course demo (any platform - there were demos for PC and Android, but the winning one was for Amiga!) I think the full compo results and prods will soon be published on WeCan 2013 :: pouet.net.
But in my opinion, the most interesting part of the whole party was the real-time coding competition. There were 3 stages. In each stage, pairs of programmers had to write a GLSL fragment shader in a special environment similar to Shadertoy. They could use some predefined input - several textures and constants, including data calculated in real time from the music played by a DJ during the contest (an array with FFT). Time was limited to 10-30 minutes per stage. The goal was to generate some good looking graphics and animation. Whoever got the louder applause at the end was the winner and advanced to the next stage, where he could continue to improve his code. I didn't pass to the second stage, but it was fun to participate in this compo anyway.
Just as one could expect by looking at what is now state-of-the-art in 4k intros, the winning strategy was to implement sphere tracing or something like it. Even if someone had just a single sphere displayed on the screen after the first stage, from there he could easily build some amazing effects with interesting shapes, lighting, reflections etc. So it's not surprising that many participants took this approach. The winner was w23 from Russia.
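The compo entries do this per pixel in a GLSL fragment shader; as an illustration, here is a minimal CPU-side sketch of the sphere tracing idea for a single hardcoded sphere (all names and constants are my assumptions):

```cpp
#include <cmath>

// Minimal sphere tracing (raymarching a signed distance field) on the CPU.
struct Vec3 { float x, y, z; };

float Length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Signed distance from point p to a sphere with center c and radius r.
float SphereSDF(const Vec3& p, const Vec3& c, float r)
{
    return Length({p.x - c.x, p.y - c.y, p.z - c.z}) - r;
}

// March along the ray origin + t * dir; returns hit distance t, or -1 on miss.
float SphereTrace(const Vec3& origin, const Vec3& dir)
{
    const Vec3 center = {0.f, 0.f, 5.f}; // hardcoded scene: one sphere
    const float radius = 1.f;
    float t = 0.f;
    for(int i = 0; i < 128; ++i)
    {
        Vec3 p = {origin.x + dir.x*t, origin.y + dir.y*t, origin.z + dir.z*t};
        float d = SphereSDF(p, center, radius);
        if(d < 1e-4f) return t; // close enough to the surface: hit
        t += d;                 // the SDF guarantees this step cannot overshoot
        if(t > 100.f) break;    // ray escaped the scene
    }
    return -1.f;
}
```

The appeal for a timed compo is clear: once this loop works, swapping the SDF for a more complex distance function (unions, repetitions, twists) immediately yields interesting shapes.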
I think that this real-time coding compo was an amazing idea. I've never seen anything like it before. Now I think that such a competition is much better - more exciting and less time-consuming - than any 8-hour-long game development compo, which is traditional at Polish gamedev conferences. Of course, it's just a different thing. Not every game developer is a shader programmer. But at this year's WeCan, even those who don't code at all told me that the compo about real-time shader programming was very fun to watch.
Comments | #demoscene #events #competitions #rendering Share
# Mesh of Box
Mon
21
Jan 2013
Too many times I've had to come up with the triangle mesh of a box to hardcode in my program, writing it just from memory or with the help of a sheet of paper. It's easy to make a mistake and end up with a box with one face missing or something like that. So in case I or somebody else needs it in the future, here it is. Parameters:
A box spanning from (-1, -1, -1) to (+1, +1, +1). Contains 3D positions and normals. Topology is triangle strip, using a strip-cut index. Backface culling can be used; front faces are clockwise (using the Direct3D coordinate system).
// H file
struct SVertex {
    vec3 Position;
    vec3 Normal;
};
const size_t BOX_VERTEX_COUNT = 6 * 4;
const size_t BOX_INDEX_COUNT = 6 * 5;
extern const SVertex BOX_VERTICES[];
extern const uint16_t BOX_INDICES[];
// CPP file
const SVertex BOX_VERTICES[] = {
    // -X
    { vec3(-1.f, -1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    // -Z
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    // +X
    { vec3( 1.f, -1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    // +Z
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    // -Y
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    // +Y
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
};
const uint16_t BOX_INDICES[] = {
     0,  1,  2,  3, 0xFFFF, // -X
     4,  5,  6,  7, 0xFFFF, // -Z
     8,  9, 10, 11, 0xFFFF, // +X
    12, 13, 14, 15, 0xFFFF, // +Z
    16, 17, 18, 19, 0xFFFF, // -Y
    20, 21, 22, 23, 0xFFFF, // +Y
};
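Since every face follows the same 4-vertices-plus-strip-cut pattern, the index buffer above can also be generated instead of hardcoded. A small sketch (the function name is mine):

```cpp
#include <cstdint>
#include <vector>

// Generate the box strip-cut index buffer programmatically: each of the
// 6 faces contributes 4 consecutive vertex indices followed by the
// strip-cut index 0xFFFF, which restarts the triangle strip.
std::vector<uint16_t> MakeBoxIndices()
{
    std::vector<uint16_t> indices;
    for(uint16_t face = 0; face < 6; ++face)
    {
        for(uint16_t i = 0; i < 4; ++i)
            indices.push_back(face * 4 + i);
        indices.push_back(0xFFFF);
    }
    return indices;
}
```

The result is identical to the hardcoded BOX_INDICES array (6 * 5 = 30 entries), which also serves as a quick sanity check of the constants.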
# How to Flip Triangles in Triangle Mesh
Sat
19
Jan 2013
Given a triangle mesh, as we use it in real-time rendering of 3D graphics, we say that each triangle has two sides, depending on whether its vertices are oriented clockwise or counterclockwise from a particular point of view. In Direct3D, by default, triangles oriented clockwise are considered front-facing and are visible, while triangles oriented counterclockwise are invisible, because they are discarded by the API feature called backface culling.
When we have backface culling enabled and we convert a mesh between coordinate systems, we sometimes need to "flip triangles". When the vertices of each triangle are separate, the algorithm is easy: we just need to swap the first vertex of each triangle with the third (or any other two vertices). So we can start implementing the flipping method like this:

class CMesh
{
    ...
    D3D11_PRIMITIVE_TOPOLOGY m_Topology;
    bool m_HasIndices;
    std::vector<SVertex> m_Vertices;
    std::vector<uint32_t> m_Indices;
};
void CMesh::FlipTriangles()
{
    if(m_Topology == D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
    {
        if(m_HasIndices)
            FlipTriangleListInArray<uint32_t>(m_Indices);
        else
            FlipTriangleListInArray<SVertex>(m_Vertices);
    }
    ...
}
Where the function template for flipping triangles in a vector is:
template<typename T>
void CMesh::FlipTriangleListInArray(std::vector<T>& values)
{
    // Note: "i + 2 < count" (rather than "i < count - 2") avoids size_t
    // underflow when the vector has fewer than 2 elements.
    for(size_t i = 0, count = values.size(); i + 2 < count; i += 3)
        std::swap(values[i], values[i + 2]);
}
Simply reversing all elements of the vector with std::reverse would also do the job. But things get complicated when we consider triangle strip topology. (I assume here that you know how graphics APIs derive the orientation of triangles in a triangle strip.) Reversing the vertices works, but only when the number of vertices in the strip is odd. When it's even, the triangles stay oriented the same way.

I asked a question about this on forum.warsztat.gd (in Polish). User albiero proposed the following solution: just duplicate the first vertex. It generates an additional degenerate (invisible) triangle, but thanks to this all the following triangles are flipped. It seems to work!

I also wanted to handle the strip-cut index (a special value -1 which starts a new triangle strip), so the rest of my fully-featured algorithm for triangle flipping is:
...
else if(m_Topology == D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP)
{
    if(m_HasIndices)
    {
        size_t begIndex = 0;
        while(begIndex < m_Indices.size())
        {
            const size_t indexCount = m_Indices.size();
            // Skip any leading strip-cut indices.
            while(begIndex < indexCount && m_Indices[begIndex] == UINT_MAX)
                ++begIndex;
            if(begIndex == indexCount)
                break;
            // Find the end of the current strip.
            size_t endIndex = begIndex + 1;
            while(endIndex < indexCount && m_Indices[endIndex] != UINT_MAX)
                ++endIndex;
            // m_Indices.size() can change here!
            FlipTriangleStripInArray<uint32_t>(m_Indices, begIndex, endIndex);
            begIndex = endIndex + 1;
        }
    }
    else
        FlipTriangleStripInArray<SVertex>(m_Vertices, 0, m_Vertices.size());
}
Where the function template for flipping triangles in a selected part of a vector is:
template<typename T>
void CMesh::FlipTriangleStripInArray(std::vector<T>& values, size_t begIndex, size_t endIndex)
{
    const size_t count = endIndex - begIndex;
    if(count < 3) return;
    // Number of elements (and triangles) is odd: reverse the elements.
    if(count % 2)
        std::reverse(values.begin() + begIndex, values.begin() + endIndex);
    // Number of elements (and triangles) is even: repeat the first element.
    else
        values.insert(values.begin() + begIndex, values[begIndex]);
}
Comments | #directx #rendering #algorithms Share
# DirectX 11 Renderer - a Screenshot
Wed
16
Jan 2013
Here is what I've been working on in my free time recently. It's a renderer for PC, Windows, using DirectX 11.
It may not look spectacular here because I've just quickly put random stuff into this scene, but I already have lots of code that can do useful things, like deferred shading with dynamic directional and point lights, a bunch of material parameters, mesh processing and loading from the OBJ file format, heightmap generation, particle effects and postprocessing (including bloom, of course :)
In the next posts I will describe some pieces of my technology and share some C++ code.
Comments | #rendering #productions #directx Share
# Particle System - How to Store Particle Age?
Wed
12
Dec 2012
Particle effects are nice because they are simple and look interesting. Besides, coding them is fun. So I'm coding them again :) Particle systems can be stateful (when the parameters of each particle are calculated from their previous values and a time step) or stateless (when the parameters are always recalculated from scratch using a fixed function and the current time). My current particle system is stateful.
Today a question came to my mind: how to store the age of a particle so it can be deleted after some expiration time, determined by the emitter and unique for each particle? First, let's think for a moment about the operations we need to perform on this data. We need to: 1. increment the age by the time step, 2. check if the particle has expired and should be deleted.
If that was all, the solution would be simple. It would be enough to store just one number, let's call it TimeLeft. Initialized to the particle's life duration at the beginning, it would be:
Step with dt: TimeLeft = TimeLeft - dt
Delete if: TimeLeft <= 0
But what if we additionally want to determine the progress of the particle's lifetime, e.g. to interpolate its color or other parameters depending on it? The progress can be expressed in seconds (or whatever time unit we use) or as a percentage (0..1). My first idea was to simply store two numbers, expressed in seconds: Age and MaxAge. Age would be initialized to 0 and MaxAge to the particle's lifetime duration. Then:
Step with dt: Age = Age + dt
Delete if: Age > MaxAge
Progress: Age
Percent progress: Age / MaxAge
Looks OK, but it involves a costly division. So I came up with the idea of pre-dividing everything by MaxAge, thus defining new parameters: AgeNorm = Age / MaxAge (which goes from 0 to 1 during the particle's lifetime) and AgeIncrement = 1 / MaxAge. Then it gives:
Step with dt: AgeNorm = AgeNorm + dt * AgeIncrement
Delete if: AgeNorm > 1
Progress: AgeNorm / AgeIncrement
Percent progress: AgeNorm
This needs an additional multiplication during the time step and a division when we want to determine the progress in seconds. But as I consider the percentage progress more useful than the absolute progress in seconds, that's my solution of choice for now.
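The normalized-age scheme can be sketched as follows. The parameter names come from the text; the struct and function names are my illustrative assumptions:

```cpp
#include <cmath>

// Normalized particle age: AgeNorm runs from 0 (birth) to 1 (expiration),
// AgeIncrement = 1 / MaxAge is precomputed once at spawn time.
struct Particle
{
    float AgeNorm;
    float AgeIncrement;
};

Particle Spawn(float maxAge)
{
    return { 0.f, 1.f / maxAge }; // the only division happens here
}

void Step(Particle& p, float dt)
{
    p.AgeNorm += dt * p.AgeIncrement; // per-frame path: multiply-add only
}

bool Expired(const Particle& p) { return p.AgeNorm > 1.f; }

// Percent progress is free; progress in seconds costs a division.
float PercentProgress(const Particle& p) { return p.AgeNorm; }
float ProgressSeconds(const Particle& p) { return p.AgeNorm / p.AgeIncrement; }
```

A particle spawned with MaxAge = 2 s and stepped by 0.5 s reaches AgeNorm = 0.25, i.e. 0.5 s of absolute progress - the same values the Age/MaxAge variant would give, but without dividing every frame.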