Tag: rendering

Entries for tag "rendering", ordered from most recent. Entry count: 158.

Warning! Some information on this page is older than 3 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.


19:10
Fri
03
Jul 2015

DeepDream aka Inceptionism - Graphical Effect from Neural Networks

Convolutional neural networks are artificial intelligence algorithms used in image recognition. Two weeks ago engineers from Google showed how they can also be used to generate or modify images in a novel style. They called it "inceptionism". Since then, lots of people and websites have posted about it.

I would like to play around with this technology myself, but I don't have much knowledge about artificial intelligence algorithms and I don't have much free time right now, so at least I hope to follow developments in this new graphics technique and update this list of related links.

Update 2015-07-06:

I came back after the weekend and I can see that the algorithm keeps spreading across the Internet like crazy. After Google shared their code on GitHub, people started using it. It is not easy though. It is Python code with many dependencies, and some libraries must be installed first. So even if using it does not require in-depth knowledge about programming or neural networks, it is not just an application that you can download and launch.

I've spent this evening trying to set it up on my Windows machine, but I didn't succeed. I can see that people on Reddit are also trying to run this code, as they discuss in the "Deepdream for non-programmers" thread. It seems that at the moment no one has found a simple way to do it on Windows.

Someone managed to set it up as a web service at psychic-vr-lab.com/deepdream, but uploading images is no longer available. I guess the server was overloaded.

Some videos started appearing on YouTube - #deepdream.

It seems that "Deep Deam" with become the official name for this effect. Other candidate names were "Inceptionism" (proposed by Google engineers in their original blog post) and "Large Scale Deep Neural Net" (proposed by the studends form Ghent University, probably because of its acronym ;)

I'd like to be able to experiment with this algorithm and teach it to recognize (and generate) some other patterns. So far it usually just draws dog faces. I can imagine it would look cool if it drew some plants, leaves etc., or some abstract, geometrical patterns, like fractals.

Update 2015-07-07:

reddit.com/r/deepdream/ is the ultimate go-to page to stay up-to-date with the subject. See especially the first, pinned post for an introductory FAQ and a collection of important links.

Ryan Kennedy prepared a self-contained virtual environment to run this software using Docker, but it's still not that easy to set up.

Update 2015-07-09:

I finally managed to run the program, thanks to the Newbie Guide for Windows, based on VirtualBox and Vagrant. Still not without problems though.

Understanding Neural Networks Through Deep Visualization - an article offering a nice overview of this algorithm.

Update 2015-07-14:

The pinned post on /r/deepdream lists more and more online services that offer processing of images into this specific style, full of dogs :) There is even a successful Indiegogo campaign to set up such a server - DreamDeeply.

Its "Tips & Tools" section also links to some description and illustration of specific layers, as well as deepdreamer - a tool for configuring the dreaming script.

A DeepDream group has been created on Facebook and it is very active.

I think that the ultimate solution would be to create a standalone application or a Photoshop/GIMP plugin that would apply this effect to images, but it seems that speeding these calculations up to anything less than minutes, or training your own neural network on something other than dogs, won't be easy. Here is some discussion on the latter.

Comments (3) | Tags: artificial intelligence google rendering | Author: Adam Sawicki

06:51
Sat
06
Jun 2015

Lower-Level Graphics API - What Does It Mean?

They say that the new, upcoming generation of graphics API-s (like DirectX 12 and Vulkan) will be lower-level, closer to the GPU. You may wonder what exactly that means and what the purpose of it is. Let me explain it with a picture that I made a few months ago and have already shown in two of my presentations.

Row 1: Back in the early days of computer graphics (like on Atari, Commodore 64), there were only applications (green rectangle), communicating directly with graphics hardware (e.g. by setting hardware registers).

Row 2: Hardware and software became more complicated. Operating systems started to separate applications from direct access to hardware. To make applications work on the variety of devices available on the market, some standards had to be defined. Device drivers appeared as a separate layer (red rectangle).

A graphics API (Application Programming Interface), like every interface, is just a means of communication - a standardized, documented definition of functions and other constructs that are used on the application's side and implemented by the driver. The driver translates these calls into commands specific to the particular hardware.

Row 3: As games became more complex, it was no longer convenient to call the graphics API directly from game logic code. Another layer appeared, called the game engine (yellow rectangle). It is essentially a comprehensive library that provides higher-level objects (like entity, asset, material, camera, light) and implements them (in its graphical part) using lower-level constructs of the graphics API (like mesh, texture, shader).

Row 4: This is where we are now. Games, as well as game engines, constantly become more complex and expensive to make. Fewer and fewer game development studios make their own engine technology; more of them prefer to use existing, universal engines (like Unity or Unreal Engine) and just focus on gameplay. These engines have recently become available for free and on very attractive licenses, so this trend affects AAA as well as indie and amateur game developers.

Graphics drivers became incredibly complex programs as well. You may not see it directly, but just take a look at the size of their installers. They are not games - they don't contain tons of graphics and music assets. So guess what is inside? That is a lot of code! They have to implement all the API-s (DirectX 9, 10, 11, OpenGL). In addition to that, these API-s have to be backward compatible and don't necessarily reflect how modern GPU-s work, so the additional logic needed for that can introduce some performance overhead or contain bugs.

Row 5: The future, with the new generation of graphics API-s. Note that the total width of the bars is not smaller than in the previous row. (Maybe it should be a bit smaller - see the comment below.) That is because, according to the concept of accidental and essential complexity from the famous essay No Silver Bullet, the work that is really necessary has to be done somewhere anyway. So a lower-level API just means that the driver can be smaller and simpler, while the upper layers take on more responsibility for managing things manually instead of relying on automatic facilities provided by the driver (for example, there is no more DISCARD or NOOVERWRITE flag when mapping a resource in DirectX 12). It also means the API is again closer to the actual hardware. Thanks to all that, GPU usage can be optimized better, because the engine level knows all the higher-level details about the specific application.
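To make that DISCARD example more concrete, here is a rough sketch (not from the original article) of what updating a dynamic buffer looks like in Direct3D 11 versus Direct3D 12. The function names and the buffer/context variables are made up; the point is only where the responsibility for synchronization lives:

#include <d3d11.h>
#include <d3d12.h>
#include <cstring> // memcpy

// D3D11: the driver handles the synchronization - DISCARD tells it to hand out
// a fresh piece of memory if the GPU is still reading the previous contents.
void UpdateBuffer11(ID3D11DeviceContext* ctx, ID3D11Buffer* buf,
                    const void* cpuData, size_t dataSize)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    ctx->Map(buf, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    memcpy(mapped.pData, cpuData, dataSize);
    ctx->Unmap(buf, 0);
}

// D3D12: Map just returns a pointer into an UPLOAD-heap resource. There is no
// DISCARD - the application itself must guarantee (e.g. with a fence or a
// per-frame ring buffer) that the GPU no longer reads the region being written.
void UpdateBuffer12(ID3D12Resource* buf, const void* cpuData, size_t dataSize)
{
    void* ptr = nullptr;
    D3D12_RANGE noRead = { 0, 0 }; // we are not going to read the mapped memory
    buf->Map(0, &noRead, &ptr);
    memcpy(ptr, cpuData, dataSize);
    buf->Unmap(0, nullptr);
}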

The question is: will that make graphics programming more difficult? Yes, it will, but these days it will affect mostly a small group of programmers working directly on game engines or just passionate about this stuff (like myself), not the rest of game developers. Similarly, there may be a concern about potential fragmentation. Time will tell which API-s will be more successful than others, but even if none of them becomes a standard across all platforms (Vulkan is a good candidate) and GPU/OS vendors succeed in convincing developers to use their platform-specific ones, it will complicate life only for those engine developers. Successful games have to be multiplatform anyway, and modern game engines already do a good job of hiding many of the differences between platforms, so they can do the same with graphics.

Comments (1) | Tags: gpu rendering directx | Author: Adam Sawicki

00:17
Sun
24
May 2015

Nothing Renders - Why?

"I have a blank screen" or "nothing is rendered" is probably the most frequent bug in graphics programming. It's also quite hard to debug because there are many possible causes. Graphics pipeline is compilated, so there are multiple things that can be wrong at each stage. Few years ago I've written a short article about this, in Polish, titled Nic nie widaŠ. This is translation of that article. It provides a list of questions you should ask yourself while considering the most frequent reasons for why nothing appears on the screen. It is dedicated for Direct3D 9, but it can also be applied to OpenGL (only some things are named differently) and, to some degree, to newer graphics API-s.

It's black

First of all, clear your background to some color other than black, e.g. gray or blue. Maybe your geometry is rendered, but it is black. This is a frequent bug, especially if you have lighting enabled (and it is enabled by default) but didn't set up any lights.

Matrices

Are you sure you correctly set up all the matrices - world, view and projection? Did you create them using the correct functions? Is the camera located in the right place and looking in the desired direction? Maybe your object is in the same position as the camera, or behind it, with the camera pointing away from it?
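As a quick sanity check, a minimal fixed-function setup of all three matrices could look like the sketch below (the camera values are made up; the object is assumed to sit at the origin):

#include <d3d9.h>
#include <d3dx9.h>

void SetupMatrices(IDirect3DDevice9* device, float aspectRatio)
{
    D3DXMATRIX world, view, proj;

    D3DXMatrixIdentity(&world); // object at the origin, no scaling

    D3DXVECTOR3 eye(0.f, 2.f, -5.f), at(0.f, 0.f, 0.f), up(0.f, 1.f, 0.f);
    D3DXMatrixLookAtLH(&view, &eye, &at, &up); // camera in front of the object, looking at it

    // 60 degree vertical FOV, near/far planes that actually contain the object.
    D3DXMatrixPerspectiveFovLH(&proj, D3DXToRadian(60.f), aspectRatio, 0.1f, 100.f);

    device->SetTransform(D3DTS_WORLD, &world);
    device->SetTransform(D3DTS_VIEW, &view);
    device->SetTransform(D3DTS_PROJECTION, &proj);
}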

Position

Is the size and position of your object correct? Is your object too close to or too far from the camera, relative to the minimum and maximum Z values set in the projection matrix? Isn't it too small to be visible?

Errors

Do all the calls to DirectX functions return a value meaning success? Do you even check the returned value? Also launch the "DirectX Control Panel", enable the Debug Layer for your application and analyze the Output window for any error or warning messages.
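A minimal habit that catches many of these problems is to actually check every returned HRESULT, for example like this sketch (the logging is just an illustration):

#include <d3d9.h>
#include <cstdio>

bool CheckHr(HRESULT hr, const char* call)
{
    if (FAILED(hr))
    {
        // In a real program, route this to your own logging system.
        fprintf(stderr, "%s failed, hr = 0x%08lX\n", call, static_cast<unsigned long>(hr));
        return false;
    }
    return true;
}

// Usage: if (!CheckHr(device->BeginScene(), "BeginScene")) return;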

Vertex Format

Do you use the correct vertex format? Did you define the structure describing your vertex correctly, compatible with the FVF/vertex declaration that you use? Are all the fields in the correct order and of the right types? Do you tell DirectX which vertex format you want to use by calling SetFVF/SetVertexDeclaration before rendering?
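For example, with FVF the C++ structure and the flags have to match field for field - a sketch (the names are made up):

#include <d3d9.h>

// The field order must match the order implied by the FVF flags:
// position (3 floats) first, then the diffuse color (DWORD).
struct SMyVertex
{
    float x, y, z;  // D3DFVF_XYZ
    DWORD color;    // D3DFVF_DIFFUSE
};
const DWORD MY_VERTEX_FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

// Before rendering: device->SetFVF(MY_VERTEX_FVF);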

Draw Call

Do you pass correct parameters to the rendering function? In the most basic case, all offsets should be 0 and the "stride" should be the size of your vertex structure in bytes, like sizeof(SMyVertex). Do you pass the correct number of primitives to render?
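A typical Direct3D 9 draw of a triangle list might then look like this sketch (device and vertexBuffer are assumed to exist; SMyVertex is the structure from the previous point):

#include <d3d9.h>

void DrawTriangleList(IDirect3DDevice9* device, IDirect3DVertexBuffer9* vertexBuffer, UINT triangleCount)
{
    // Offset 0, stride = size of one vertex in bytes.
    device->SetStreamSource(0, vertexBuffer, 0, sizeof(SMyVertex));
    device->SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE);
    // The last parameter is the number of primitives (triangles), not vertices.
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount);
}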

Buffers

Do you fill your vertex and (optional) index buffers correctly? Do they have the correct number of elements? Do you fill all of them? If you use transformed coordinates (XYZRHW), the RHW component should be set to 1.0, never to 0.0.

Alpha Channel

Maybe your geometry is totally transparent. Is the alpha channel set to maximum (1.0 or 0xFF, depending on the type) and not to minimum in all of these: vertices, texture, material (the last only if you use lighting)?

Backface Culling

Maybe the triangles you want to render are ignored as "back-facing" the camera because they have the wrong winding (clockwise vs. counterclockwise)? Try disabling backface culling to check that.
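In Direct3D 9 this is a single render state, so it's a cheap experiment (a sketch; device is assumed to be your IDirect3DDevice9*):

// Temporarily disable backface culling. If the geometry appears, the winding is wrong.
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
// The Direct3D 9 default is D3DCULL_CCW - counterclockwise triangles are culled.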

States

Did you set up blending on all texture stages correctly? Did you correctly set up all the rest of the graphics pipeline states? Maybe the problem appears only when you render objects in a specific order? That would mean states set before rendering one object remain in the pipeline and break the rendering of the next one.

Advanced Effects

If you use some advanced rendering features, your graphics card may not support them. Select the reference software rasterizer when creating the device object (D3DDEVTYPE_REF instead of D3DDEVTYPE_HAL). Your program will run very slowly, but everything should be drawn as expected. You can also query the device object for the capabilities of your GPU (device caps).

Z-Buffer

If you use a depth buffer, remember to clear it as well, together with the backbuffer. In the 3rd parameter of the Clear function, bitwise-OR the flag D3DCLEAR_ZBUFFER. Without it, you won't see anything on the screen, or you will see artifacts. The value to clear the Z-buffer to is 1.0f (not 0.0f).
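Putting this point and the very first one together, a typical clear at the beginning of a frame could look like this sketch (device is assumed to be your IDirect3DDevice9*):

// Clear both the render target (to gray, not black) and the depth buffer (to 1.0f).
device->Clear(0, nullptr,
    D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
    D3DCOLOR_XRGB(128, 128, 128), // gray background, so black geometry stays visible
    1.0f,                         // depth value - the far plane, not 0.0f
    0);                           // stencil value, unused here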

Finally, there are ways to actually inspect what the data and state look like at subsequent stages of the graphics pipeline while the buggy draw call is executed, using Graphics Diagnostics in Visual Studio or another GPU debugging tool.

See also: How not to render 3D graphics: 40 ways to get a blank black screen

Comments (1) | Tags: directx rendering | Author: Adam Sawicki

22:33
Mon
16
Jun 2014

Rendering Video Special Effects in GLSL

Rendering real-time, hardware-accelerated 3D graphics is one aspect of computer graphics, but there are others too. Recently I became interested in video editing. I wanted to add some special effects to a video and was looking for a technology to do that. Of course video editing software usually has some effects built in, like different filters or transition effects, some borders or gradients. But I wanted something different. If I had software like Adobe After Effects and knew how to use it, I'm sure that would be the best and easiest way to make any effect imaginable. But as I don't, I decided to use what I already know - to write a shader :)

1. To run a shader, some hosting app is needed. Of course I could write one in C++, but for the purpose of this work it was enough to use the Live Coding Compo Framework (a demoscene tool created by bonzaj, which was used during last year's WeCan demoparty). This simple and free package contains a rendering application and a preconfigured Visual Studio solution. Having VS installed (it works with the Express version as well), all I needed to do was edit the "Run.bat" file to point to the directory with the VS installation on my system. Next, I just executed "Run.bat" and two programs were launched. On the left monitor I had a fullscreen "Live Coding Preview", on the right Visual Studio with a special solution opened. I could then edit any of the GLSL fragment shaders contained in the solution. Every time I hit Compile (Ctrl+F7), the shader was compiled and displayed in the preview.

2. Being able to render my effect in real time, I next needed to capture it to a video. Probably the most popular app for this is FRAPS. I ran it, set Video Capture Settings to the frame rate that I was going to use in my final video (29.97 fps) and then captured an appropriate period of time of my effect rendering, starting and stopping the recording with the F9 hotkey.

3. The video captured by FRAPS is in full, original resolution and encoded with some strange codec, so next I needed to convert it to the desired format. To do this, I used VLC media player. Some may think it's just a video player, but in fact it's incredibly powerful and flexible video transmitting and processing software. (I once had an opportunity to work with libVLC - its features exposed as a C library.) Its greatest advantage is that it has its own collection of codecs, so it doesn't care whether you have the appropriate codecs installed in your system. To convert a video file, I selected Media > Convert / Save..., selected my AVI file captured by FRAPS, pressed the "Convert / Save" button, selected the profile "Video - H.264 + MP3 (MP4)" and customized it using the "Edit selected profile" button: Encapsulation = MP4/MOV, Video codec = MPEG-4 (on the Resolution tab I could also set a new resolution to scale the content - my choice was 1280 x 720 px), Audio disabled, Subtitles disabled. Then, after pressing "Save", selecting the path to the destination file, pressing "Start" and waiting some time, I had my video converted to the more standard MPEG-4 format (and more than 5 times smaller than the original recorded by FRAPS).

4. Finally, I could insert this video onto a new track in my video editing software and enable blending with the underlying layer to achieve the desired effect (I used the "Overlay" blending mode and 50% opacity).

There are some details that I intentionally skipped here (like video bitrate) so as not to make this post even longer, but I hope you learned something new from it. My effect looked like this, and here is the source code: Low freq fx.glsl

By the way, here is another tutorial about how to make a GIF like this from a video (using only free tools this time):

1. To capture video frames as images, use VLC media player:

2. To merge the images into an animated GIF, use GIMP:

Comments (1) | Tags: rendering video tools | Author: Adam Sawicki

22:47
Mon
05
May 2014

Fluorescence

The main and most general formula in computer graphics is the Rendering Equation. It can be simplified to say that the perceived color of an opaque surface is: LightColor * MaterialColor. The variables are (R, G, B) vectors and (*) is per-component multiplication. According to this formula, for example, white light on a red surface gives red, while pure red light on a pure blue surface gives black.
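A minimal sketch of that per-component multiplication (just an illustration with a simple 3-component structure):

struct Color { float r, g, b; };

// Perceived color of an opaque surface in the simplified model.
Color Shade(Color light, Color material)
{
    return { light.r * material.r,
             light.g * material.g,
             light.b * material.b };
}

// Shade({1,1,1}, {1,0,0}) -> {1,0,0} : white light on a red surface looks red.
// Shade({1,0,0}, {0,0,1}) -> {0,0,0} : red light on a blue surface looks black.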

There are many phenomena that go beyond this model. One of them is subsurface scattering (SSS), where light penetrates an object and exits from a different place on the surface. Another one is fluorescence - a property of a material which absorbs light at one wavelength and emits a different wavelength in return. One particularly interesting kind of it is UV-activity - when a material absorbs UV light (also called blacklight, which is invisible to people) and emits some visible color. This way an object, when lit with UV light, looks like it's glowing in the dark, even though it has no LED-s or power source.

I've never seen a need to simulate fluorescence in computer graphics, but in real life it is used e.g. in decorations for psytrance parties, like this installation on the main stage of the Tree of Life 2012 festival in Turkey:

So what types of materials are fluorescent? It's not so simple that you can take any vividly colored object and it will glow under UV. Some of them do, some don't. You can take a very colourful T-shirt and it may not be visible under UV at all. On the other hand, some substances glow when they'd better not (like dandruff :) But there are materials that are specially designed and sold as fluorescent, like the Fluor series of Montana MNT 94 paints I used to paint my origami decorations.

 

Comments (1) | Tags: rendering psytrance art | Author: Adam Sawicki

13:23
Sun
04
May 2014

Four primary colors

I've already posted about my origami decoration. My choice of colors is not random. Of course I could make it more colorful, but every paint costs some money, so I decided to buy just four: red, green, yellow and blue. Why?

That's because I still keep in mind the great article "Color Wheels are wrong? How color vision actually works". It explains that although our eyes sense three colors: red, green and blue (RGB), our perception is not that simple and direct. Our vision first takes the difference between R and G, so each color is either more red or more green. Next, and more importantly, it takes the difference between RG and B, so each color is either more yellow (or red, or green) - the so-called warm colors - or more blue, aka cool colors.

That's also how photo manipulation software works (e.g. Adobe Lightroom). Instead of sliders for RGB, you find two sliders there: one to choose between more red and more green (called tint) and one between more yellow and more blue (called temperature).
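As a toy illustration only (this is not the actual model used by our vision or by Lightroom, just the idea of the two axes), the two differences could be computed like this:

struct RGB { float r, g, b; };

// Toy opponent-style decomposition: a "red vs. green" axis and a
// "warm (yellowish) vs. cool (blue)" axis. Purely illustrative.
struct Opponent { float redGreen; float warmCool; };

Opponent Decompose(RGB c)
{
    Opponent o;
    o.redGreen = c.r - c.g;                 // positive: more red, negative: more green
    o.warmCool = (c.r + c.g) * 0.5f - c.b;  // positive: warm (towards yellow), negative: cool (towards blue)
    return o;
}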

That's why it could be said that for our vision, there are four primary colors: red, green, yellow and blue.

Comments (1) | Tags: rendering | Author: Adam Sawicki

23:40
Mon
23
Sep 2013

After WeCan 2013

Last weekend I was in Łódź at WeCan, a multiplatform demoparty. It was great - well organized, full of interesting stuff to watch and participate in, as well as many nice people and of course a lot of beer :) Here is my small photo gallery from the event. In the evenings of both the first and the second day there were concerts with various music (metal, drum'n'bass). ARM, one of the sponsors, delivered a talk about their mobile processors and GPU-s. They talked about the tools they provide for game developers on their platform, like one for performance profiling or an offline shader compiler. On Saturday there were competitions in different categories: music (chip, tracker, streaming), game, wild/anim, gfx (oldschool, newschool), intro (256B, 1k/4k/64k any platform) and of course demo (any platform - there were demos for PC and Android, but the winning one was for Amiga!). I think the full compo results and prods will soon be published on WeCan 2013 :: pouet.net.

But in my opinion, the most interesting part of the whole party was the real-time coding competition. There were 3 stages. In each stage, pairs of programmers had to write a GLSL fragment shader in a special environment similar to Shadertoy. They could use some predefined input - several textures and constants, including data calculated in real time from the music played by a DJ during the contest (an array with the FFT). Time was limited to 10-30 minutes per stage. The goal was to generate some good-looking graphics and animation. Whoever got the louder applause at the end was the winner and advanced to the next stage, where he could continue to improve his code. I didn't make it to the second stage, but it was fun to participate in this compo anyway.

Just as one could expect from what is now state-of-the-art in 4k intros, the winning strategy was to implement sphere tracing or something similar. Even if someone had just one sphere displayed on the screen after the first stage, from there he could easily build some amazing effects with interesting shapes, lighting, reflections etc. So it's not surprising that many participants took this approach. The winner was w23 from Russia.
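For reference, the core of sphere tracing is just a loop that repeatedly steps along the ray by the distance returned from a signed distance function. Here is a minimal sketch (written in C++ with GLM for brevity - in the compo this of course lives in a GLSL fragment shader, and the scene SDF is where all the creativity goes):

#include <glm/glm.hpp>
using glm::vec3;

// Signed distance to the scene - here just a single unit sphere at the origin.
float SceneSDF(vec3 p)
{
    return glm::length(p) - 1.0f;
}

// Marches a ray (origin, direction) and returns true if it hits the surface.
bool SphereTrace(vec3 origin, vec3 dir, float& tHit)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i)
    {
        float d = SceneSDF(origin + dir * t);
        if (d < 0.001f) { tHit = t; return true; } // close enough - treat as a hit
        t += d;                                    // safe step: nothing is closer than d
        if (t > 100.0f) break;                     // ray escaped the scene
    }
    return false;
}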

I think this real-time coding compo was an amazing idea. I've never seen anything like it before. Now I think such a competition is much better - more exciting and less time-consuming - than the 8-hour-long game development compos that are traditional at Polish gamedev conferences. Of course, that's just a different thing. Not every game developer is a shader programmer. But at this year's WeCan, even those who don't code at all told me that the real-time shader programming compo was very fun to watch.

Comments (0) | Tags: demoscene events competitions rendering | Author: Adam Sawicki

21:16
Mon
21
Jan 2013

Mesh of Box

Too many times I've had to come up with a triangle mesh of a box to hardcode in my program, writing it just from memory or with the help of a sheet of paper. It's easy to make a mistake and end up with a box with one face missing or something like that. So in case I or somebody else needs it in the future, here it is. Parameters:

The box spans from (-1, -1, -1) to (+1, +1, +1). Vertices contain 3D positions and normals. The topology is triangle strip, using a strip-cut (primitive restart) index. Backface culling can be used; front faces are clockwise (using the Direct3D coordinate system).

// H file
// Note: vec3 is assumed here to be glm::vec3 (or any equivalent 3-float vector type).
#include <cstddef>   // size_t
#include <cstdint>   // uint16_t
#include <glm/glm.hpp>
using glm::vec3;

struct SVertex {
    vec3 Position;
    vec3 Normal;
};

const size_t BOX_VERTEX_COUNT = 6 * 4;
const size_t BOX_INDEX_COUNT  = 6 * 5;
extern const SVertex BOX_VERTICES[];
extern const uint16_t BOX_INDICES[];

// CPP file - includes the H file above

const SVertex BOX_VERTICES[] = {
    // -X
    { vec3(-1.f, -1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    { vec3(-1.f,  1.f, -1.f), vec3(-1.f,  0.f,  0.f) },
    // -Z
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  0.f, -1.f) },
    // +X
    { vec3( 1.f, -1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 1.f,  0.f,  0.f) },
    // +Z
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  0.f,  1.f) },
    // -Y
    { vec3(-1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3(-1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f,  1.f), vec3( 0.f, -1.f,  0.f) },
    { vec3( 1.f, -1.f, -1.f), vec3( 0.f, -1.f,  0.f) },
    // +Y
    { vec3(-1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3(-1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f, -1.f), vec3( 0.f,  1.f,  0.f) },
    { vec3( 1.f,  1.f,  1.f), vec3( 0.f,  1.f,  0.f) },
};

const uint16_t BOX_INDICES[] = {
     0,  1,  2,  3, 0xFFFF, // -X
     4,  5,  6,  7, 0xFFFF, // -Z
     8,  9, 10, 11, 0xFFFF, // +X
    12, 13, 14, 15, 0xFFFF, // +Z
    16, 17, 18, 19, 0xFFFF, // -Y
    20, 21, 22, 23, 0xFFFF, // +Y
};
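As a usage example (not part of the original snippet), drawing this mesh with Direct3D 11 could look roughly like this; vertexBuffer and indexBuffer are assumed to have been created elsewhere from the arrays above, and with DXGI_FORMAT_R16_UINT the 0xFFFF value acts as the strip-cut index automatically:

#include <d3d11.h>

void DrawBox(ID3D11DeviceContext* context, ID3D11Buffer* vertexBuffer, ID3D11Buffer* indexBuffer)
{
    const UINT stride = sizeof(SVertex);
    const UINT offset = 0;
    context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0); // 16-bit indices, strip-cut value = 0xFFFF
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    context->DrawIndexed((UINT)BOX_INDEX_COUNT, 0, 0);
}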

Comments (2) | Tags: rendering | Author: Adam Sawicki


Copyright © 2004-2017 Adam Sawicki