Tag: rendering

Entries for tag "rendering", ordered from most recent. Entry count: 167.

Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.


# Two Shader Compilers of Direct3D 12

Mon, 23 Dec 2019

If we write a game or other graphics application using DX12, we also need to write some shaders. We author these in a high-level language called HLSL and compile them before passing them to the DirectX API when creating pipeline state objects (ID3D12Device::CreateGraphicsPipelineState). There are currently two shader compilers available, both from Microsoft, each outputting a different binary format:

  1. old compiler “FXC”
  2. new compiler “DXC”

Which one to choose? The new compiler, called DirectX Shader Compiler, is more modern, based on LLVM/Clang, and open source. We must use it if we want to use Shader Model 6 or above. On the other hand, shaders compiled with it require a relatively recent version of Windows and graphics drivers, so they won’t work on systems that haven’t been updated for years.

Shaders can be compiled offline using a command-line program (a standalone executable compiler) and then bundled with your program in compiled binary form. That’s probably the best way to go for the release version, but for development and debugging it’s easier if we can change shader source just like CPU code - rebuild and run, or even reload a changed shader while the app is running. For this, it’s convenient to integrate the shader compiler into your program, which is possible through a compiler API.

This gives us 4 different ways of compiling shaders. This article is a quick tutorial for all of them.

1. Old Compiler - Offline

The standalone executable of the old compiler is called “fxc.exe”. You can find it bundled with the Windows SDK, which is installed together with Visual Studio. For example, on my system I located it at this path: “c:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64\fxc.exe”.

To compile a shader from HLSL source to the old binary format, issue a command like this:

fxc.exe /T ps_5_0 /E main PS.hlsl /Fo PS.bin

/T is the target profile
ps_5_0 means a pixel shader with Shader Model 5.0
/E is the entry point - the name of the main shader function, “main” in my case
PS.hlsl is the text file with the shader source
/Fo is the binary output file to be written

There are many more command-line parameters supported by this tool. You can display help about them by passing the /? parameter. Using appropriate parameters, you can change the optimization level and other compilation settings, provide additional #include directories and #define macros, preview intermediate data (preprocessed source, compiled assembly), or even disassemble an existing binary file.
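
For example, a hypothetical command like this (the macro name and include directory are made up for illustration) defines a macro, adds an include directory, disables optimizations, attaches debug information, and additionally writes an assembly listing:

fxc.exe /T ps_5_0 /E main /D USE_FOG=1 /I ShaderInclude /Od /Zi /Fc PS.asm PS.hlsl /Fo PS.bin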

2. Old Compiler - API

To use the old compiler as a library in your C++ program, #include <d3dcompiler.h> and link with “d3dcompiler.lib”.

Example:

#include <d3dcompiler.h> // D3DCompileFromFile; link with "d3dcompiler.lib"
#include <atlbase.h>     // CComPtr (alternatively use Microsoft::WRL::ComPtr)

CComPtr<ID3DBlob> code, errorMsgs;
HRESULT hr = D3DCompileFromFile(
    L"PS.hlsl", // pFileName
    nullptr, // pDefines
    nullptr, // pInclude
    "main", // pEntrypoint
    "PS_5_0", // pTarget
    0, // Flags1, can be e.g. D3DCOMPILE_DEBUG, D3DCOMPILE_SKIP_OPTIMIZATION
    0, // Flags2
    &code, // ppCode
    &errorMsgs); // ppErrorMsgs
if(FAILED(hr))
{
    if(errorMsgs)
    {
        wprintf(L"Compilation failed with errors:\n%hs\n",
            (const char*)errorMsgs->GetBufferPointer());
    }
    // Handle compilation error...
}

D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
// (...)
psoDesc.PS.BytecodeLength = code->GetBufferSize();
psoDesc.PS.pShaderBytecode = code->GetBufferPointer();
CComPtr<ID3D12PipelineState> pso;
hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));

The first parameter is the path to the file containing the HLSL source. If you want to provide the source in some other way, there is also a function that takes a buffer in memory: D3DCompile. The second parameter (optional) can specify preprocessor macros to be #define-d during compilation. The third parameter (optional) can point to your own implementation of the ID3DInclude interface that provides additional files requested via #include. The entry point and target profile are strings just like in the command-line compiler. Other options that have command-line equivalents (e.g. /Zi, /Od) can be specified as bit flags.
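
For example, a minimal sketch of my own (the USE_FOG macro is made up; D3D_COMPILE_STANDARD_FILE_INCLUDE requests the default handler that resolves #include relative to the source file):

const D3D_SHADER_MACRO defines[] = {
    { "USE_FOG", "1" },   // same as /D USE_FOG=1 on the command line
    { nullptr, nullptr }  // the array must be null-terminated
};
UINT flags1 = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION; // like /Zi /Od
CComPtr<ID3DBlob> code, errorMsgs;
HRESULT hr = D3DCompileFromFile(
    L"PS.hlsl", defines, D3D_COMPILE_STANDARD_FILE_INCLUDE,
    "main", "ps_5_0", flags1, 0,
    &code, &errorMsgs);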

The two objects returned from this function are just buffers of binary data. ID3DBlob is a simple interface that you can query for its size and a pointer to its data. In case of a successful compilation, the ppCode output parameter returns a buffer with the compiled shader binary. You pass its data to ID3D12PipelineState creation. After successful creation, the blob can be Release-d. The second buffer, ppErrorMsgs, contains a null-terminated string with error messages generated during compilation. It can be useful even if the compilation succeeded, as it then contains warnings.

Update: "d3dcompiler_47.dll" file is needed. Typically some version of it is available on the machine, but generally you still want to redistribute the exact version you're using from the Win10 SDK. Otherwise you could end up compiling with an older or newer version on an end-user's machine.

3. New Compiler - Offline

Using the new compiler in its standalone form is very similar to the old one. The executable is called “dxc.exe” and it’s also bundled with the Windows SDK, in the same directory. The documentation of the command-line syntax mentions parameters starting with "-", but the old "/" also seems to work. To compile the same shader using Shader Model 6.0, issue the following command, which looks almost the same as for "fxc.exe":

dxc.exe -T ps_6_0 -E main PS.hlsl -Fo PS.bin

Despite using a new binary format (called “DXIL”, based on LLVM IR), you can load it and pass it to D3D12 PSO creation the same way as before. There is a tricky issue though: you need to distribute the file “dxil.dll” with your program, otherwise PSO creation will fail! You can find this file in the Windows SDK, in a path like: “c:\Program Files (x86)\Windows Kits\10\Redist\D3D\x64\dxil.dll”. Just copy it to the directory with the target EXE of your project, or to the one you use as the working directory.
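
To automate this, you could for example add a post-build event in Visual Studio that copies the DLL next to the freshly built EXE (just a suggestion - adjust the SDK path to your installation):

xcopy /y /d "c:\Program Files (x86)\Windows Kits\10\Redist\D3D\x64\dxil.dll" "$(OutDir)"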

4. New Compiler - API

The new compiler can also be used programmatically as a library, but its usage is a bit more difficult. Just as with any C++ library, start by #include-ing its header "dxcapi.h" and linking with "dxcompiler.lib".

This time, though, you need to bundle an additional DLL with your program (next to “dxil.dll” mentioned above): “dxcompiler.dll”, found in the same “Redist\D3D\x64” directory. There is more code needed to perform the compilation. First create IDxcLibrary and IDxcCompiler objects. They can stay alive for the whole lifetime of your application, or for as long as you need to compile more shaders. Then, for each shader, load it from a file (or any source of your choice) to a blob, call the Compile method, and inspect the result - whether it’s an error plus a blob with error messages, or a success plus a blob with the compiled shader binary.

#include <dxcapi.h>  // DXC API; link with "dxcompiler.lib"
#include <atlbase.h> // CComPtr

CComPtr<IDxcLibrary> library;
HRESULT hr = DxcCreateInstance(CLSID_DxcLibrary, IID_PPV_ARGS(&library));
//if(FAILED(hr)) Handle error...

CComPtr<IDxcCompiler> compiler;
hr = DxcCreateInstance(CLSID_DxcCompiler, IID_PPV_ARGS(&compiler));
//if(FAILED(hr)) Handle error...

uint32_t codePage = CP_UTF8;
CComPtr<IDxcBlobEncoding> sourceBlob;
hr = library->CreateBlobFromFile(L"PS.hlsl", &codePage, &sourceBlob);
//if(FAILED(hr)) Handle file loading error...

CComPtr<IDxcOperationResult> result;
hr = compiler->Compile(
    sourceBlob, // pSource
    L"PS.hlsl", // pSourceName
    L"main", // pEntryPoint
    L"PS_6_0", // pTargetProfile
    NULL, 0, // pArguments, argCount
    NULL, 0, // pDefines, defineCount
    NULL, // pIncludeHandler
    &result); // ppResult
if(SUCCEEDED(hr))
    result->GetStatus(&hr);
if(FAILED(hr))
{
    if(result)
    {
        CComPtr<IDxcBlobEncoding> errorsBlob;
        hr = result->GetErrorBuffer(&errorsBlob);
        if(SUCCEEDED(hr) && errorsBlob)
        {
            wprintf(L"Compilation failed with errors:\n%hs\n",
                (const char*)errorsBlob->GetBufferPointer());
        }
    }
    // Handle compilation error...
}
CComPtr<IDxcBlob> code;
result->GetResult(&code);

D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
// (...)
psoDesc.PS.BytecodeLength = code->GetBufferSize();
psoDesc.PS.pShaderBytecode = code->GetBufferPointer();
CComPtr<ID3D12PipelineState> pso;
hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));

The Compile function also takes strings with the entry point and target profile, but this time as wide (Unicode) strings. The way to pass additional options has also changed. Instead of bit flags, the pArguments and argCount parameters take an array of strings with the same parameters you would pass to the command-line compiler, e.g. L"-Zi" to attach debug information or L"-Od" to disable optimizations.
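
For example, a small sketch of my own, reusing sourceBlob and compiler from the listing above and writing to a fresh result object:

const wchar_t* arguments[] = { L"-Zi", L"-Od" };
CComPtr<IDxcOperationResult> debugResult;
hr = compiler->Compile(
    sourceBlob, L"PS.hlsl", L"main", L"ps_6_0",
    arguments, _countof(arguments), // pArguments, argCount
    nullptr, 0, // pDefines, defineCount
    nullptr, // pIncludeHandler
    &debugResult); // ppResult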

Update 2020-01-05: Thanks @MyNameIsMJP for your feedback!

Comments | #rendering #directx Share

# DeepDream aka Inceptionism - Graphical Effect from Neural Networks

Fri, 03 Jul 2015

Convolutional neural networks are artificial intelligence algorithms used in image recognition. Two weeks ago, engineers from Google showed how they can be used to generate or modify images in a novel style. They called it "inceptionism". Since then, lots of people and websites have posted about it.

I would like to play around with this technology myself, but I don't have much knowledge about artificial intelligence algorithms and I don't have much free time right now, so at least I hope to follow developments in this new graphics technique and update this list of related links.

Update 2015-07-06:

I came back after the weekend and I can see that the algorithm keeps spreading across the Internet like crazy. After Google shared their code on GitHub, people started using it. It is not easy though. It is Python code with many dependencies - some libraries must be installed first. So even though using it does not require in-depth knowledge about programming or neural networks, it is not just an application that you can download and launch.

I've spent this evening trying to set this up on my Windows machine, but I didn't succeed. I can see that people on Reddit are trying to run this code, as they discuss in the Deepdream for non-programmers thread. It seems that at the moment no one has found a simple way to do it on Windows.

Someone managed to set it up as a web service at psychic-vr-lab.com/deepdream, but uploading images is no longer available. I guess the server was overloaded.

Some videos started appearing on YouTube - #deepdream.

It seems that "Deep Deam" with become the official name for this effect. Other candidate names were "Inceptionism" (proposed by Google engineers in their original blog post) and "Large Scale Deep Neural Net" (proposed by the studends form Ghent University, probably because of its acronym ;)

I'd like to be able to experiment with this algorithm to teach it to recognize (and generate) some other patterns. So far it usually just draws dog faces. I can imagine it would look cool if it drew some plants, leaves etc., or some abstract, geometrical patterns, like fractals.

Update 2015-07-07:

reddit.com/r/deepdream/ is the ultimate go-to page to stay up-to-date with the subject. See especially the first, pinned post for an introductory FAQ and a collection of important links.

Ryan Kennedy prepared a self-contained, virtual environment to run this software using Docker, but it's still not that easy to set up.

Update 2015-07-09:

I finally managed to run the program, thanks to the Newbie Guide for Windows, based on VirtualBox and Vagrant. Still not without problems though.

Understanding Neural Networks Through Deep Visualization - an article offering a nice overview of this algorithm.

Update 2015-07-14:

The pinned post on /r/deepdream lists more and more online services that offer processing of images into this specific style, full of dogs :) There is even a successful Indiegogo campaign to set up such a server - DreamDeeply.

Its "Tips & Tools" section also links to some description and illustration of specific layers, as well as deepdreamer - a tool for configuring the dreaming script.

DeepDream Group has been created on Facebook and it is very active.

I think the ultimate solution would be a standalone application or a Photoshop/GIMP plugin that applies this effect to images, but it seems that speeding up these calculations to anything less than minutes, or training your own neural network on something other than dogs, won't be easy. Here is some discussion on the latter.

Comments | #artificial intelligence #google #rendering Share

# Lower-Level Graphics API - What Does It Mean?

Sat, 06 Jun 2015

They say that the new, upcoming generation of graphics APIs (like DirectX 12 and Vulkan) will be lower-level, closer to the GPU. You may wonder what exactly that means, or what the purpose of it is. Let me explain with a picture that I made a few months ago and have already shown in two of my presentations.

Row 1: Back in the early days of computer graphics (like on Atari, Commodore 64), there were only applications (green rectangle), communicating directly with graphics hardware (e.g. by setting hardware registers).

Row 2: Hardware and software became more complicated. Operating systems started to separate applications from direct access to hardware. To make applications work on the variety of devices available on the market, some standards had to be defined. Device drivers appeared as a separate layer (red rectangle).

A graphics API (Application Programming Interface), like every interface, is just a means of communication - a standardized, documented definition of functions and other constructs that are used on the application's side and implemented by the driver. The driver translates these calls into commands specific to the particular hardware.

Row 3: As games became more complex, it was no longer convenient to call the graphics API directly from game logic code. Another layer appeared, called the game engine (yellow rectangle). It is essentially a comprehensive library that provides higher-level objects (like an entity, asset, material, camera, or light) and implements them (in its graphical part) using lower-level constructs of the graphics API (like a mesh, texture, or shader).

Row 4: This is where we are now. Games, as well as game engines, constantly become more complex and expensive to make. Fewer and fewer game development studios make their own engine technology; more prefer to use existing, universal engines (like Unity or Unreal Engine) and just focus on gameplay. These engines recently became available for free and on very attractive licenses, so this trend affects AAA as well as indie and amateur game developers.

Graphics drivers have become incredibly complex programs as well. You may not see it directly, but just take a look at the size of their installers. They are not games - they don't contain tons of graphics and music assets. So guess what is inside? A lot of code! They have to implement all the APIs (DirectX 9, 10, 11, OpenGL). In addition, these APIs have to be backward compatible and don't necessarily reflect how modern GPUs work, so the additional logic needed for that can introduce performance overhead or contain bugs.

Row 5: The future, with the new generation of graphics APIs. Note that the total width of the bars is not smaller than in the previous row. (Maybe it should be a bit smaller - see the comment below.) That is because, according to the concept of accidental versus essential complexity from the famous essay No Silver Bullet, the work that is really necessary has to be done somewhere anyway. So a lower-level API just means that the driver can be smaller and simpler, while the upper layers take more responsibility for managing things manually instead of relying on automatic facilities provided by the driver (for example, there is no DISCARD or NOOVERWRITE flag when mapping a resource in DirectX 12). It also means the API is again closer to the actual hardware. Thanks to all that, GPU usage can be optimized better at the engine level, which knows all the higher-level details of the specific application.

The question is: will that make graphics programming more difficult? Yes, it will, but these days it will affect mostly the small group of programmers working directly on game engines or just passionate about this stuff (like myself), not the rest of game developers. Similarly, there may be a concern about potential fragmentation. Time will show which APIs are more successful than others, but even if none of them becomes a standard across all platforms (Vulkan is a good candidate) and GPU/OS vendors succeed in convincing developers to use their platform-specific ones, it will complicate life only for those engine developers. Successful games have to be multiplatform anyway, and modern game engines do a good job of hiding many of the differences between platforms, so they can do the same with graphics.

Comments | #gpu #rendering #directx Share

# Nothing Renders - Why?

Sun, 24 May 2015

"I have a blank screen" or "nothing is rendered" is probably the most frequent bug in graphics programming. It's also quite hard to debug because there are many possible causes. Graphics pipeline is compilated, so there are multiple things that can be wrong at each stage. Few years ago I've written a short article about this, in Polish, titled Nic nie widać. This is translation of that article. It provides a list of questions you should ask yourself while considering the most frequent reasons for why nothing appears on the screen. It is dedicated for Direct3D 9, but it can also be applied to OpenGL (only some things are named differently) and, to some degree, to newer graphics API-s.

It's black

First of all, clear your background to some color other than black, e.g. gray or blue. Maybe your geometry is rendered, but it is black. It is a frequent bug, especially if you have lighting enabled (and it is enabled by default) while you didn't set up any lights.
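
In Direct3D 9 that could look like this (a sketch of my own, assuming device is your IDirect3DDevice9 pointer):

device->Clear(0, nullptr, D3DCLEAR_TARGET,
    D3DCOLOR_XRGB(100, 100, 100), // mid-gray background instead of black
    1.0f, 0);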

Matrices

Are you sure you correctly set up all the matrices - world, view, and projection? Did you create them using the correct functions? Is the camera located in the right place and looking in the desired direction? Maybe your object is in the same position as the camera or behind it, or the camera is pointing backward?

Position

Is the size and position of your object correct? Is your object too close to or too far from the camera, relative to the minimum and maximum Z values set in the projection matrix? Isn't it too small to be visible?

Errors

Do all the calls to DirectX functions return a value meaning success? Do you even check that value? Please also launch the "DirectX Control Panel", enable the Debug Layer for your application, and analyze the Output window for any error or warning messages.

Vertex Format

Do you use the correct vertex format? Did you define the structure describing your vertex correctly, so it is compatible with the FVF/vertex declaration that you use? Are all the fields in the correct order and of the right type? Do you tell DirectX what vertex format you want to use by calling SetFVF/SetVertexDeclaration before rendering?
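
For example, a matching pair could look like this (a sketch of my own; SMyVertex is a made-up name):

struct SMyVertex
{
    float x, y, z;  // position      -> D3DFVF_XYZ
    D3DCOLOR color; // diffuse color -> D3DFVF_DIFFUSE
};
const DWORD MY_VERTEX_FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

device->SetFVF(MY_VERTEX_FVF);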

Draw Call

Do you pass correct parameters to the rendering function? In the most basic case, all offsets should be 0 and the "stride" should be the size of your vertex structure in bytes, like sizeof(SMyVertex). Do you pass the correct number of primitives to render?
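
A sketch of such a basic draw call (my own example, assuming vb holds triangleCount * 3 vertices of the SMyVertex type shown above):

device->SetStreamSource(0, vb, 0, sizeof(SMyVertex)); // stream 0, offset 0, stride in bytes
device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount); // start at vertex 0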

Buffers

Do you fill your vertex and (optional) index buffer correctly? Do they have the correct number of elements? Do you fill all of them? If you use transformed coordinates (XYZRHW), the RHW component should be set to 1.0 and never to 0.0.

Alpha Channel

Maybe your geometry is totally transparent. Is the alpha channel set to maximum (1.0 or 0xFF, depending on the type) and not to minimum in all of these: vertices, texture, material (only if you use lighting)?

Backface Culling

Maybe the triangles you want to render are ignored as "back-facing" the camera, because they have the wrong winding (clockwise versus counterclockwise)? Try disabling backface culling to check that.
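
In Direct3D 9 that is a single render state change (a sketch; remember to restore the default D3DCULL_CCW afterwards):

device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); // draw triangles regardless of winding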

States

Did you set up blending on all texture stages correctly? Did you correctly set up all the rest of the graphics pipeline state? Maybe the problem appears only when you render objects in a specific order? That means states set before rendering one object remain in the pipeline and break the rendering of the next one.

Advanced Effects

If you use some advanced rendering features, your graphics card may not support them. Set the reference software rasterizer during creation of the device object (D3DDEVTYPE_REF instead of D3DDEVTYPE_HAL). Your program will run very slowly, but everything should be drawn as expected. You can also query the device object for the capabilities of your GPU (device caps).

Z-Buffer

If you use a depth buffer, remember to clear it as well, together with the backbuffer. In the 3rd parameter of the Clear function, bitwise-OR the following flag: D3DCLEAR_ZBUFFER. Without it, you won't see anything on the screen or you will see artifacts. The value to clear the Z-buffer to is 1.0f (not 0.0f).
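
Continuing the earlier sketch, clearing both the color and the depth buffer could look like this:

device->Clear(0, nullptr,
    D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, // clear color and depth together
    D3DCOLOR_XRGB(100, 100, 100),
    1.0f, // depth is cleared to 1.0f - the far plane
    0);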

Finally, there are ways to actually inspect what the data and state look like at subsequent stages of the graphics pipeline while the bugged draw call is executed, using Graphics Diagnostics in Visual Studio or another GPU debugging tool.

See also: How not to render 3D graphics: 40 ways to get a blank black screen

Comments | #directx #rendering Share

# Rendering Video Special Effects in GLSL

Mon, 16 Jun 2014

Rendering real-time, hardware-accelerated 3D graphics is one aspect of computer graphics, but there are others too. Recently I became interested in video editing. I wanted to add some special effects to a video and was looking for a technology to do that. Of course, video editing software usually has some effects built in, like different filters, transition effects, borders or gradients. But I wanted something different. If I had, and knew how to use, software like Adobe After Effects, I'm sure that would be the best and easiest way to make any effect imaginable. But as I don't, I decided to use what I already know - to write a shader :)

1. To run a shader, some hosting app is needed. Of course I could write one in C++, but for the purpose of this work it was enough to use the Live Coding Compo Framework (a demoscene tool created by bonzaj, which was used during last year's WeCan demoparty). This simple and free package contains a rendering application and a preconfigured Visual Studio solution. Having VS installed (it works with the Express version as well), all I needed to do was edit the "Run.bat" file to point to the directory with the VS installation on my system. Next, I just executed "Run.bat", and two programs were launched: on the left monitor I had the fullscreen "Live Coding Preview", on the right, Visual Studio with a special solution opened. I could then edit any of the GLSL fragment shaders contained in the solution. Every time I hit Compile (Ctrl+F7), the shader was compiled and displayed in the preview.

2. Being able to render my effect in real time, next I needed to capture it to a video. Probably the most popular app for this is FRAPS. I ran it, set Video Capture Settings to the frame rate that I was going to use in my final video (which was 29.97 fps), and then captured the appropriate period of time of my effect rendering, starting and stopping recording with the F9 hotkey.

3. Video captured by FRAPS is in the full, original resolution and encoded with some strange codec, so next I needed to convert it to the desired format. To do this, I used VLC media player. Some may think it's just a video player, but in fact it's incredibly powerful and flexible video transmitting and processing software. (I once had an opportunity to work with libVLC - its features exposed as a C library.) Its greatest advantage is that it has its own collection of codecs, so it doesn't care whether you have appropriate codecs installed in your system. To convert a video file, I selected Media > Convert / Save..., selected my AVI file captured by FRAPS, pressed the "Convert / Save" button, selected Profile: "Video - H.264 + MP3 (MP4)", and customized it using the "Edit selected profile" button, selecting: Encapsulation = MP4/MOV, Video codec = MPEG-4 (on the Resolution tab I could also set a new resolution to scale the content - my choice was 1280 x 720 px), Audio disabled, Subtitles disabled. Then, after pressing "Save", selecting the path to the destination file, pressing "Start" and waiting some time, I had my video converted to the more standard MPEG-4 format (and more than 5 times smaller than the original recorded by FRAPS).

4. Finally, I could insert this video onto a new track in my video editing software and enable blending with the underlying layer to achieve the desired effect (I used the "Overlay" blending mode and 50% opacity).

There are some details that I intentionally skipped here (like video bitrate) so as not to make this post even longer, but I hope you learned something new from it. My effect looked like this, and here is the source code: Low freq fx.glsl

By the way, here is another short tutorial on how to make a GIF like this from a video (using only free tools this time):

1. To capture video frames as images, use VLC media player:

2. To merge images into animated GIF, use GIMP:

Comments | #rendering #video #tools Share

# Fluorescence

Mon, 05 May 2014

The main and most general formula in computer graphics is the rendering equation. It can be simplified to say that the perceived color of an opaque surface is: LightColor * MaterialColor, where the variables are (R, G, B) vectors and (*) is per-component multiplication. According to this formula, a surface can only reflect the wavelengths that are present in the incoming light.
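
As a tiny illustration of my own (not from the original post), the formula in code is just a per-component multiplication:

struct Color { float r, g, b; };

Color PerceivedColor(Color light, Color material)
{
    return { light.r * material.r, light.g * material.g, light.b * material.b };
}
// E.g. green light (0,1,0) on a purely red material (1,0,0) gives (0,0,0) - black.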

There are many phenomena that go beyond this model. One of them is subsurface scattering (SSS), where light penetrates an object and exits at a different place on the surface. Another one is fluorescence - a property of a material that absorbs some light wavelengths and emits different wavelengths in return. One particularly interesting kind of it is UV-activity - when a material absorbs UV light (also called blacklight, which is invisible to people) and emits some visible color. This way an object, when lit with UV light, looks like it's glowing in the dark, even though it has no LEDs or power source.

I've never seen a need to simulate fluorescence in computer graphics, but in real life it is used e.g. in decorations for psytrance parties, like this installation on the main stage of the Tree of Life 2012 festival in Turkey:

So what types of materials are fluorescent? It's not so simple that you can take any vividly colored object and it will glow under UV. Some of them do, some don't. You can take a very colorful T-shirt and it may not be visible under UV at all. On the other hand, some substances glow when they'd better not (like dandruff :) But there are some materials that are specially designed and sold as fluorescent, like the Fluor series of Montana MNT 94 paints I used to paint my origami decorations.

 

Comments | #art #psytrance #rendering Share

# Four primary colors

Sun, 04 May 2014

I've already posted about my origami decorations. My choice of colors is not random. Of course I could make them more colorful, but every paint costs money, so I decided to buy just four: red, green, yellow and blue. Why?

That's because I still keep in mind the great article Color Wheels are wrong? How color vision actually works. It explains that although our eyes can sense three colors - red, green and blue (RGB) - our perception is not that simple and direct. Our vision first computes the difference between R and G, so each color is perceived as more red or more green. Next, and more importantly, it computes the difference between RG and B, so each color is either more yellow (or red, or green), also known as the warm colors, or more blue, aka the cool colors.

That's also how photo manipulation software works (e.g. Adobe Lightroom). Instead of sliders for RGB, you find two sliders there: one to choose between more red and more green (called tint), and one between more yellow and more blue (called temperature).

That's why it could be said that for our vision, there are four primary colors: red, green, yellow and blue.

Comments | #rendering Share

# After WeCan 2013

Mon, 23 Sep 2013

Last weekend I was in Łódź at WeCan, a multiplatform demoparty. It was great - well organized, full of interesting stuff to watch and participate in, with many nice people and, of course, a lot of beer :) Here is my small photo gallery from the event. On the evenings of both the first and the second day there were concerts with various music (metal, drum'n'bass). ARM, one of the sponsors, delivered a talk about their mobile processors and GPUs. They talked about the tools they provide for game developers on their platform, like one for performance profiling and an offline shader compiler. On Saturday there were competitions in different categories: music (chip, tracker, streaming), game, wild/anim, gfx (oldschool, newschool), intro (256B, 1k/4k/64k any platform) and of course demo (any platform - there were demos for PC and Android, but the winning one was for Amiga!) I think the full compo results and prods will soon be published on WeCan 2013 :: pouet.net.

But in my opinion, the most interesting part of the whole party was the real-time coding competition. There were 3 stages. In each stage, pairs of programmers had to write a GLSL fragment shader in a special environment similar to Shadertoy. They could use some predefined input - several textures and constants, including data calculated in real time from the music played by a DJ during the contest (an array with FFT). Time was limited to 10-30 minutes per stage. The goal was to generate some good-looking graphics and animation. Whoever got the louder applause at the end was the winner and advanced to the next stage, where he could continue to improve his code. I didn't make it to the second stage, but it was fun to participate in this compo anyway.

Just as one could expect by looking at what is now state of the art in 4k intros, the winning strategy was to implement sphere tracing or something similar. Even if someone had just one sphere displayed on the screen after the first stage, from there he could easily make some amazing effects with interesting shapes, lighting, reflections etc. So it's not surprising that many participants took this approach. The winner was w23 from Russia.

I think that this real-time coding compo was an amazing idea. I've never seen anything like it before. Now I think that such a competition is much better - more exciting and less time-consuming - than any 8-hour-long game development compo, which is traditional at Polish gamedev conferences. Of course, that's just a different thing. Not every game developer is a shader programmer. But at this year's WeCan, even those who don't code at all told me that the compo about real-time shader programming was very fun to watch.

Comments | #demoscene #events #competitions #rendering Share

