Tag: graphics

Entries for tag "graphics", ordered from most recent. Entry count: 41.


18:27
Mon
07
Aug 2017

Understanding Vulkan objects

An important part of learning the Vulkan® API – just like any other API – is to understand what types of objects are defined in it, what they represent and how they relate to each other. To help with this, we’ve created a diagram that shows all of the Vulkan objects and some of their relationships, especially the order in which you create one from another.

Read more: Understanding Vulkan objects @ GPUOpen

Comments (0) | Tags: graphics vulkan | Author: Adam Sawicki | Share

12:40
Wed
26
Jul 2017

Slavic Game Jam 2017 and my talk

There are many game jams all around the world. Global Game Jam is probably the biggest and most popular one, but it is a global event that happens at many different sites. This weekend Slavic Game Jam takes place - the biggest game jam in Eastern Europe, held at a single site in Warsaw, Poland.

I will be there not only as a participant, but also to give a talk, because AMD is a sponsor of the event. My talk will be on Friday at 2 PM. Its title is "Rendering in Your Game - Debugging and Profiling". I will provide some basic information and show some tools useful for analyzing the performance of a game, including a live demo. This information may be useful whether you develop your own engine or use an existing one like Unity or Unreal. If you have a ticket for the event (tickets are already sold out), I invite you to come on Friday, earlier than the official start of the jam.

Comments (0) | Tags: teaching events competitions graphics | Author: Adam Sawicki | Share

13:03
Tue
11
Jul 2017

Vulkan Memory Allocator - a library on GPUOpen & GitHub

Vulkan is hard. One of the difficulties is that the developer is responsible for manually managing GPU memory. Various GPU vendors expose various sets of memory heaps and types, and you need to choose the right ones. It is also recommended to allocate larger memory blocks and assign parts of them to individual buffers and images. Now there is a library that simplifies these tasks - one that I developed as part of my job duties. It has just been announced on the GPUOpen blog and published on GitHub.

It is a single-header C++ library with a simple C, Vulkan-style interface, documented using Doxygen-style comments. It is available under the MIT license.
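
To give a feel for the interface, here is a minimal sketch of creating a buffer with the library. It is based on the documented usage pattern, so exact structure and function names may differ between library versions, and physicalDevice and device are assumed to be created already:

VmaAllocatorCreateInfo allocatorInfo = {}; // Newer library versions may require more members here.
allocatorInfo.physicalDevice = physicalDevice;
allocatorInfo.device = device;
VmaAllocator allocator;
vmaCreateAllocator(&allocatorInfo, &allocator);

VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY; // Let the library choose a device-local memory type.

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, NULL);

// ... use the buffer ...

vmaDestroyBuffer(allocator, buffer, allocation); // Frees both the buffer and its memory.
vmaDestroyAllocator(allocator);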

Comments (0) | Tags: graphics vulkan | Author: Adam Sawicki | Share

21:15
Mon
15
May 2017

Vulkan Bits and Pieces: Synchronization in a Frame

One of the things that you must implement when using Vulkan is the basic structure of a rendering frame, which consists of filling a command buffer, submitting it to a command queue, acquiring an image from the swapchain and finally presenting it (order not preserved here). Things happen in parallel, as the GPU works asynchronously with respect to the CPU, so you must explicitly synchronize all these things. In this post I’d like to present the basic structure of a rendering frame, with all the necessary synchronization.

There are actually two objects that we must take care of simultaneously. The first is the command buffer. At one time it is being filled on the CPU, using vkBeginCommandBuffer, vkEndCommandBuffer and everything in between (like starting render passes and posting all the vkCmd* commands). At another time (after being submitted to a queue using vkQueueSubmit) it waits for execution or is being executed on the GPU. These things cannot happen at the same time, so we need synchronization. Because it’s a GPU-to-CPU synchronization, we use a fence for that. It’s better to have multiple command buffers, together with their synchronization objects, stored in arrays and used in a round-robin fashion (with the help of an index variable), because this way one can be filled while another is consumed at the same time.

The second object is an image acquired from the swapchain. At one time it is being rendered to, during command buffer execution. At another time it is presented on the screen. Again, we need to synchronize it so these states happen in sequence. Because it’s a GPU-to-GPU synchronization, we use semaphores for that. There are multiple images as well, because the swapchain consists of several of them (not necessarily used in round-robin fashion - we need to obtain the index of the next image using the vkAcquireNextImageKHR function, so there is a separate imageIndex variable).

Assuming the following objects are already initialized:

VkDevice g_Device;
VkQueue g_GraphicsQueue;
VkSwapchainKHR g_Swapchain;
VkImageView g_SwapchainImageViews[MAX_SWAPCHAIN_IMAGES];

const uint32_t COUNT = 2;
uint32_t g_NextIndex = 0;
VkCommandBuffer g_CmdBuf[COUNT];
VkFence g_CmdBufExecutedFences[COUNT]; // Create with VK_FENCE_CREATE_SIGNALED_BIT.
VkSemaphore g_ImageAvailableSemaphores[COUNT];
VkSemaphore g_RenderFinishedSemaphores[COUNT];

The structure of a frame may look like this:

uint32_t index = (g_NextIndex++) % COUNT;
VkCommandBuffer cmdBuf = g_CmdBuf[index];

vkWaitForFences(g_Device, 1, &g_CmdBufExecutedFences[index], VK_TRUE, UINT64_MAX);

VkCommandBufferBeginInfo cmdBufBeginInfo = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
cmdBufBeginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
vkBeginCommandBuffer(cmdBuf, &cmdBufBeginInfo);

// Post some rendering commands to cmdBuf.

uint32_t imageIndex;
vkAcquireNextImageKHR(g_Device, g_Swapchain, UINT64_MAX, g_ImageAvailableSemaphores[index], VK_NULL_HANDLE, &imageIndex);

// Post those rendering commands that render to final backbuffer:
// g_SwapchainImageViews[imageIndex]

vkEndCommandBuffer(cmdBuf);

vkResetFences(g_Device, 1, &g_CmdBufExecutedFences[index]);

VkPipelineStageFlags submitWaitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
submitInfo.waitSemaphoreCount = 1;
submitInfo.pWaitSemaphores = &g_ImageAvailableSemaphores[index];
submitInfo.pWaitDstStageMask = &submitWaitStage;
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers = &cmdBuf;
submitInfo.signalSemaphoreCount = 1;
submitInfo.pSignalSemaphores = &g_RenderFinishedSemaphores[index];
vkQueueSubmit(g_GraphicsQueue, 1, &submitInfo, g_CmdBufExecutedFences[index]);

VkPresentInfoKHR presentInfo = { VK_STRUCTURE_TYPE_PRESENT_INFO_KHR };
presentInfo.waitSemaphoreCount = 1;
presentInfo.pWaitSemaphores = &g_RenderFinishedSemaphores[index];
presentInfo.swapchainCount = 1;
presentInfo.pSwapchains = &g_Swapchain;
presentInfo.pImageIndices = &imageIndex;
vkQueuePresentKHR(g_GraphicsQueue, &presentInfo);

At least that’s what I currently believe is the correct and efficient way. If you think I’m wrong, please leave a comment or write me an e-mail.

That’s just one aspect of using Vulkan. There are still more things to do before you can even render your first triangle, like transitioning swapchain image layout between VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL and VK_IMAGE_LAYOUT_PRESENT_SRC_KHR.
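
For completeness, here is a minimal sketch of such a transition, recorded into cmdBuf before vkEndCommandBuffer. It assumes g_SwapchainImages is an array of the VkImage handles obtained from vkGetSwapchainImagesKHR (not shown in the code above):

VkImageMemoryBarrier barrier = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER };
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
barrier.dstAccessMask = 0; // Access by the presentation engine is covered by the semaphore.
barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = g_SwapchainImages[imageIndex];
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
barrier.subresourceRange.baseMipLevel = 0;
barrier.subresourceRange.levelCount = 1;
barrier.subresourceRange.baseArrayLayer = 0;
barrier.subresourceRange.layerCount = 1;

vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    0,
    0, NULL, 0, NULL,
    1, &barrier);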

Comments (0) | Tags: vulkan graphics | Author: Adam Sawicki | Share

23:38
Thu
27
Apr 2017

I Implemented Video and GIF Playback with FFmpeg

I started my music visualization project with the possibility to load a collection of static images, accompanied by metadata describing their parameters, like dominant colors, whether an image uses alpha transparency, or whether it is tileable (a description of these parameters could be a topic for another blog post). I then render them as textures - moving, rotating and changing colors randomly, blended together, with the possibility of feedback from the previous frame. This method can sometimes generate quite interesting images, but it has its limitations and becomes boring after some time.

Generating some interesting graphics procedurally from scratch is my ultimate goal. But while writing shaders is fun and can give amazing results (as we can see on ShaderToy), it's also the hard way. So to be able to show something interesting, for now I've implemented playback of videos and animated GIFs, using the FFmpeg library.

FFmpeg is a free tool that contains its own codecs for various video and audio formats (so it doesn't use the codecs installed in Windows). It is known as a command-line program that can convert just about any video format, but it's also a software library that offers these encoding/decoding features to developers. I learned to use this library because I needed to implement video playback for one of my shows.

Later I discovered that it can play animated GIFs as well. This is a great feature, because having hundreds of such GIFs downloaded and being able to switch between them in an instant can make for quite interesting visuals. There are many abstract, geometric, psychedelic animations shared all around the Internet, like on the Op Art or Fractalgasm Facebook pages. At the same time, the ability to play all popular video formats is much more convenient than what Resolume offers, which requires converting all the footage to its own codec, called DXV (by the way, FFmpeg is able to play this as well).

I won't show any source code this time, but if you are a developer and consider implementing support for video playback or encoding, I recommend the FFmpeg library. Another option is libVLC - the library behind the popular VLC media player, which also has its own set of codecs. I used it some time ago as well. Playing animated GIFs is also possible through Windows Imaging Component (WIC), which is part of the standard Windows API.

Comments (0) | Tags: graphics video vj | Author: Adam Sawicki | Share

00:34
Thu
06
Apr 2017

Vulkan Bits and Pieces: Writing to an Attachment

One of the reasons why new generation graphics APIs (DirectX 12 and Vulkan) are so complicated is that they have so many levels of indirection in referring to anything. For example, when you render pixels to a color attachment (also known as “render target” in other APIs), the path is as follows:

  1. GLSL fragment shader writes to a variable with some NAME, e.g. “outColor”:
    outColor = vec4(1.0, 0.0, 0.0, 1.0);
  2. The variable is defined earlier in this shader as output and bound to a specific LOCATION, e.g. number 0:
    layout(location = 0) out vec4 outColor;
  3. Switching to C/C++ code, this location is actually an index into the array pointed to by VkSubpassDescription::pColorAttachments - a member of the structure that describes a rendering subpass.
  4. Each element of this array, in its member VkAttachmentReference::attachment, provides another INDEX, this time into the array pointed to by VkRenderPassCreateInfo::pAttachments - a member of the structure that describes a render pass.
  5. Elements of this array, of type VkAttachmentDescription, provide just a few parameters, like format.
  6. But this index also refers to elements of the array pointed to by VkFramebufferCreateInfo::pAttachments - a member of the structure filled when creating a framebuffer, which is then pointed to by VkRenderPassBeginInfo::framebuffer when starting actual execution of the render pass.
  7. The rest is business as usual. Elements of this array are of type VkImageView, so each of them is a VIEW of an image, pointed to by VkImageViewCreateInfo::image - a member of the structure used when creating the view.
  8. The IMAGE (type VkImage) is either obtained from the swap chain using the function vkGetSwapchainImagesKHR, or created manually using the function vkCreateImage.
  9. In the latter case, you must also allocate MEMORY (type VkDeviceMemory) with the function vkAllocateMemory and bind a fragment of it to the image using the function vkBindImageMemory. This is the memory that will actually be written to.
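
To make the chain more concrete, here is a minimal sketch that wires steps 3-7 together. The device and swapchainImageView variables are assumed to exist, and the concrete values are just illustrative:

VkAttachmentDescription attachmentDesc = {};
attachmentDesc.format = VK_FORMAT_B8G8R8A8_UNORM; // Step 5: just a few parameters, like format.
attachmentDesc.samples = VK_SAMPLE_COUNT_1_BIT;
attachmentDesc.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
attachmentDesc.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
attachmentDesc.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
attachmentDesc.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
attachmentDesc.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
attachmentDesc.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

VkAttachmentReference attachmentRef = {};
attachmentRef.attachment = 0; // Step 4: INDEX into VkRenderPassCreateInfo::pAttachments.
attachmentRef.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

VkSubpassDescription subpass = {};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments = &attachmentRef; // Step 3: location 0 in GLSL maps to element 0 here.

VkRenderPassCreateInfo renderPassInfo = { VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO };
renderPassInfo.attachmentCount = 1;
renderPassInfo.pAttachments = &attachmentDesc;
renderPassInfo.subpassCount = 1;
renderPassInfo.pSubpasses = &subpass;
VkRenderPass renderPass;
vkCreateRenderPass(device, &renderPassInfo, NULL, &renderPass);

VkFramebufferCreateInfo framebufferInfo = { VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO };
framebufferInfo.renderPass = renderPass;
framebufferInfo.attachmentCount = 1;
framebufferInfo.pAttachments = &swapchainImageView; // Steps 6-7: the same index selects this VkImageView.
framebufferInfo.width = 1920;
framebufferInfo.height = 1080;
framebufferInfo.layers = 1;
VkFramebuffer framebuffer;
vkCreateFramebuffer(device, &framebufferInfo, NULL, &framebuffer);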

Yeah, Vulkan is hard…

Comments (0) | Tags: vulkan graphics | Author: Adam Sawicki | Share

15:50
Sat
11
Mar 2017

How to change display mode using WinAPI?

If you write a graphics application or a game, you may want to make it fullscreen and set a specific screen resolution. In DirectX there are functions for that, but if you use OpenGL or Vulkan, you need another way to accomplish it. I've researched the topic recently and I've found that the Windows API supports enumerating display devices and modes with the functions EnumDisplayDevices and EnumDisplaySettings, as well as changing the mode with the function ChangeDisplaySettingsEx. It's programmatic access to more or less the same set of features that you can access manually by going to the "Display settings" system window.

I've prepared an example C program demonstrating how to use these functions:

DisplaySettingsTest - github.com/sawickiap

First you may want to enumerate available Adapters. To do this, call the function EnumDisplayDevices multiple times. Pass NULL as the first parameter (LPCWSTR lpDevice). As the second parameter, pass a subsequent DWORD Adapter index, starting from 0. Enumeration should continue as long as the function returns a nonzero BOOL. When it returns zero, it means the Adapter with the given index (and any higher index) could not be retrieved and there are no more Adapters.

For each successfully retrieved Adapter, a DISPLAY_DEVICE structure is filled by the function. It contains the following members: cb (structure size), DeviceName, DeviceString, StateFlags, DeviceID and DeviceKey.
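
A minimal sketch of this loop may look like this (assuming <windows.h> and <stdio.h> are included and UNICODE is defined):

DISPLAY_DEVICE adapter = {};
adapter.cb = sizeof(DISPLAY_DEVICE); // Must be set before each call.
for (DWORD adapterIndex = 0;
    EnumDisplayDevices(NULL, adapterIndex, &adapter, 0);
    ++adapterIndex)
{
    wprintf(L"Adapter %u: %s (%s)\n",
        adapterIndex, adapter.DeviceName, adapter.DeviceString);
}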

There is a second level: Adapters contain Display Devices. To enumerate them, use the same function EnumDisplayDevices, but this time pass the Adapter's DeviceName as the first parameter. This way you will enumerate the Display Devices belonging to that Adapter, described by the same DISPLAY_DEVICE structure. For example, my system returns DeviceName = "\\.\DISPLAY1\Monitor0", DeviceString = "Generic PnP Monitor".

The meaning of, and the difference between, "Adapter" and "Display Device" is not fully clear to me. You may think that an Adapter is a single GPU (graphics card), but it turns out not to be the case. I have a single graphics card, and yet my system reports 6 Adapters, each having 0 or 1 Display Devices. That could mean an Adapter is more like a single monitor output (e.g. HDMI, DisplayPort, VGA) on the graphics card. This seems true unless you have two monitors running in "Duplicate" mode - then two Display Devices are reported inside one Adapter.

Then there is the list of supported Display Settings (or Modes). You can enumerate them in a similar fashion using the EnumDisplaySettings function, which fills a DEVMODE structure. It seems that Modes belong to an Adapter, not a Display Device, so as the first parameter of this function you must pass the DISPLAY_DEVICE::DeviceName returned by EnumDisplayDevices(NULL, ...), not the one returned by EnumDisplayDevices(adapter.DeviceName, ...). The structure is quite complex, but the function fills only the following members: dmPelsWidth, dmPelsHeight, dmBitsPerPel, dmDisplayFlags and dmDisplayFrequency.
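
Again, a minimal sketch, with adapter coming from the previous loop:

DEVMODE mode = {};
mode.dmSize = sizeof(DEVMODE); // Must be set before each call.
for (DWORD modeIndex = 0;
    EnumDisplaySettings(adapter.DeviceName, modeIndex, &mode);
    ++modeIndex)
{
    wprintf(L"Mode %u: %u x %u, %u bpp, %u Hz\n",
        modeIndex, mode.dmPelsWidth, mode.dmPelsHeight,
        mode.dmBitsPerPel, mode.dmDisplayFrequency);
}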

I have a single graphics card (AMD Radeon RX 480) with two Full HD (1920 x 1080) monitors connected. You can see example output of the program from my system here: ExampleOutput.txt.

To change the display mode, use the function ChangeDisplaySettingsEx.
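
For example, a sketch of switching to 1280 x 720 may look like this. In a real program, the mode values should come from the enumeration above:

DEVMODE newMode = {};
newMode.dmSize = sizeof(DEVMODE);
newMode.dmPelsWidth = 1280;
newMode.dmPelsHeight = 720;
newMode.dmBitsPerPel = 32;
newMode.dmDisplayFrequency = 60;
newMode.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL | DM_DISPLAYFREQUENCY;
LONG result = ChangeDisplaySettingsEx(adapter.DeviceName, &newMode, NULL, CDS_FULLSCREEN, NULL);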

The function returns DISP_CHANGE_SUCCESSFUL if the display mode was successfully changed, and one of the other DISP_CHANGE_* constants if it failed.

To restore the original display mode, call the function like this:

ChangeDisplaySettingsEx(targetDeviceName, NULL, NULL, 0, NULL);

Unfortunately, a display mode changed in the way described here is not automatically restored after the user switches to some other application (e.g. using Alt+Tab), like it is in DirectX fullscreen mode, but you can handle that yourself. The good news is that if you pass the CDS_FULLSCREEN flag to ChangeDisplaySettingsEx, the previous mode is automatically restored by the system when your application exits or crashes.

Comments (0) | Tags: windows graphics | Author: Adam Sawicki | Share

14:07
Wed
21
Dec 2016

3 Rules to Make Your Image Look Good on a Projector

As I go to conferences (and sometimes present my slides), do music visualizations at parties and attend demoscene parties, I have started noticing what looks good and what doesn't when presented on a big screen using a projector. It's often not what you expect when preparing your content and viewing it on a monitor. The problem is that while monitors usually do a pretty good job of representing all shades of colors and brightnesses - and we adjust them so we see the image from the right distance and in good lighting conditions - with an image presented from a projector this is often not the case. You never know what to expect from the quality of the image before you actually enter the venue where you are going to present. That's why I have prepared three simple rules to follow if you want your content to look good on a projector:

1. Use High Contrast

White (or almost white) objects on black background are OK. Black (or almost black) objects on white background are also OK. Gray objects on gray background are not OK. In other words: don't rely on subtle differences in brightness.

Why? Because they may not be visible on the big screen. Sometimes the projector is not powerful enough to ensure sufficient contrast, so everything looks very dark. Sometimes the lighting conditions are such that everything looks very bright. Sometimes the gamma of the image is very different from what you experienced when preparing your content. (By gamma, I basically mean the curve showing how perceived brightness depends on the brightness value of a pixel.) As a result, all the dark colors may be indistinguishable and completely black, or all the bright colors may be indistinguishable and completely white, or conversely - a small difference in pixel brightness can make a large difference on the big screen.

So basically, use high contrast for all your images. Using subtle differences in pixel brightness to add some details is OK, but make sure the image looks good and conveys all the key information without them, relying only on the difference between black and white.

2. Don't Rely on Colors

Monitors can represent most of the colors from sRGB space more or less correctly, but you shouldn't expect the same from a projector. Some colors may seem brighter or darker than expected. Sometimes colors just look completely different on the big screen.

That's why you shouldn't rely on them. Showing some details using colors is OK, but make sure your image looks good and conveys all the key information also without them, relying on brightness alone.

I remember watching a business presentation where some data was shown on a graph using a yellow line on a white background. It was completely invisible on the big screen, so the presenter turned his laptop towards us so we could see it. Another time I needed to present on a screen that was lit by a blue lamp and there was no way to turn it off. All parts of the image looked more or less blue and there was no point in using any colors. I've even seen a situation where one of the RGB components didn't work at all because of a broken cable!

3. Use Big Objects

Use only large font sizes, prefer big objects of uniform brightness/color and make all lines thick. Don't show any important things as small objects or characters and don't use single-pixel-width lines.

Why? Because they may not be readable or visible at all. Sometimes the screen is too small or some viewers sit too far from it. Some people have bad vision and may have forgotten their glasses. Sometimes the image gets downscaled. Many projectors have low resolution. The image quality may just be bad - for example, a long VGA (D-sub) cable introduces some horizontal blur. I sometimes connect my laptop to an HDMI video switcher that presents itself to the laptop as a Full HD (1080p) display, while its output is actually connected to a projector with a native resolution of 1024 x 768. I once needed to present on an old projector that didn't have either a D-sub or HDMI input - only Composite, S-Video and... a DVD player. Not only did I need a special converter, but the image quality was also very bad, because a standard-definition PAL TV signal is only 576i.

For good examples, see demos from the Cocoon group (see the group on pouet.net, or search YouTube). I think they are not only masterpieces of both technology and art, but are also deliberately prepared to look great on a big screen, following rules similar to the ones I described here.

Comments (2) | Tags: graphics | Author: Adam Sawicki | Share


Copyright © 2004-2017 Adam Sawicki