Tag: graphics

Entries for tag "graphics", ordered from most recent. Entry count: 44.

Warning! Some information on this page is older than 5 years now. I keep it for reference, but it probably doesn't reflect my current knowledge and beliefs.


# Vulkan Bits and Pieces: Writing to an Attachment

00:34, Thu 06 Apr 2017

One of the reasons why the new generation of graphics APIs (DirectX 12 and Vulkan) is so complicated is that they have so many levels of indirection in referring to anything. For example, when you render pixels to a color attachment (known as a "render target" in other APIs), the path is as follows (a code sketch tying these steps together appears after the list):

  1. The GLSL fragment shader writes to a variable with some NAME, e.g. "outColor":
    outColor = vec4(1.0, 0.0, 0.0, 1.0);
  2. The variable is defined earlier in this shader as an output and bound to a specific LOCATION, e.g. number 0:
    layout(location = 0) out vec4 outColor;
  3. Switching to C/C++ code, this location is actually an index into the array pointed to by VkSubpassDescription::pColorAttachments - a member of the structure that describes a rendering subpass.
  4. Each element of this array, in its member VkAttachmentReference::attachment, provides another INDEX, this time into the array pointed to by VkRenderPassCreateInfo::pAttachments - a member of the structure that describes a render pass.
  5. Elements of this array, of type VkAttachmentDescription, provide just a few parameters, like format.
  6. But this index also refers to elements of the array pointed to by VkFramebufferCreateInfo::pAttachments - a member of the structure filled when creating a framebuffer, which is then pointed to by VkRenderPassBeginInfo::framebuffer when starting actual execution of the render pass.
  7. The rest is business as usual. Elements of this array are of type VkImageView, so each of them is a VIEW of an image, pointed to by VkImageViewCreateInfo::image - a member of the structure used when creating the view.
  8. The IMAGE (type VkImage) is either obtained from the swap chain using the function vkGetSwapchainImagesKHR, or created manually using the function vkCreateImage.
  9. In the latter case, you must also allocate MEMORY (type VkDeviceMemory) with the function vkAllocateMemory and bind a range of it to the image using the function vkBindImageMemory. This is the memory that will actually be written.
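
Here is a minimal sketch in C of how steps 3-6 translate to actual structures, assuming a single color attachment; the renderPass, imageView, width and height variables stand for handles and values created elsewhere:

VkAttachmentDescription attachment = {0};
attachment.format = VK_FORMAT_B8G8R8A8_UNORM;
attachment.samples = VK_SAMPLE_COUNT_1_BIT;
attachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
attachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
attachment.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
attachment.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

// VkAttachmentReference::attachment = 0 is an index into
// VkRenderPassCreateInfo::pAttachments below.
VkAttachmentReference colorRef = { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };

VkSubpassDescription subpass = {0};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
// pColorAttachments[0] corresponds to layout(location = 0) in the shader.
subpass.pColorAttachments = &colorRef;

VkRenderPassCreateInfo rpInfo = { VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO };
rpInfo.attachmentCount = 1;
rpInfo.pAttachments = &attachment;
rpInfo.subpassCount = 1;
rpInfo.pSubpasses = &subpass;

// The framebuffer binds the same index 0 to an actual VkImageView,
// which in turn points to a VkImage backed by VkDeviceMemory.
VkFramebufferCreateInfo fbInfo = { VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO };
fbInfo.renderPass = renderPass;
fbInfo.attachmentCount = 1;
fbInfo.pAttachments = &imageView;
fbInfo.width = width;
fbInfo.height = height;
fbInfo.layers = 1;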

Yeah, Vulkan is hard…


# How to change display mode using WinAPI?

15:50, Sat 11 Mar 2017

If you write a graphics application or a game, you may want to make it fullscreen and set a specific screen resolution. In DirectX there are functions for that, but if you use OpenGL or Vulkan, you need another way to accomplish it. I've researched the topic recently and found that the Windows API supports enumerating display devices and modes with the functions EnumDisplayDevices and EnumDisplaySettings, as well as changing the mode with the function ChangeDisplaySettingsEx. It's programmatic access to more or less the same set of features that you can reach manually by going to the "Display settings" system window.

I've prepared an example C program demonstrating how to use these functions:

DisplaySettingsTest - github.com/sawickiap

First you may want to enumerate the available Adapters. To do this, call the function EnumDisplayDevices multiple times. Pass NULL as the first parameter (LPCWSTR lpDevice). As the second parameter, pass a subsequent DWORD Adapter index, starting from 0. Enumeration should continue as long as the function returns a nonzero BOOL. When it returns zero, it means there are no more Adapters and that the Adapter with the given index or any higher one could not be retrieved.
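
A minimal sketch of this loop, assuming a Unicode build (so DISPLAY_DEVICE and the printed strings are wide):

#include <windows.h>
#include <stdio.h>

DISPLAY_DEVICE adapter = {0};
adapter.cb = sizeof(DISPLAY_DEVICE); // cb must be set before calling
for (DWORD i = 0; EnumDisplayDevices(NULL, i, &adapter, 0); ++i)
{
    wprintf(L"Adapter %lu: %ls (%ls)\n", i, adapter.DeviceName, adapter.DeviceString);
}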

For each successfully retrieved Adapter, the function fills a DISPLAY_DEVICE structure. It contains the following members: cb (structure size, to be set before the call), DeviceName, DeviceString (a human-readable description), StateFlags, DeviceID and DeviceKey.

There is a second level: Adapters contain Display Devices. To enumerate them, use the same function EnumDisplayDevices, but this time pass the Adapter's DeviceName as the first parameter. This way you will enumerate the Display Devices inside that Adapter, described by the same DISPLAY_DEVICE structure. For example, my system returns DeviceName = "\\.\DISPLAY1\Monitor0", DeviceString = "Generic PnP Monitor".

The meaning of, and the difference between, "Adapter" and "Display Device" is not fully clear to me. You might think an Adapter is a single GPU (graphics card), but it turns out not to be the case. I have a single graphics card, and yet my system reports 6 Adapters, each having 0 or 1 Display Devices. That suggests an Adapter is more like a single monitor output (e.g. HDMI, DisplayPort, VGA) on the graphics card. This seems to hold unless you have two monitors running in "Duplicate" mode - then two Display Devices are reported inside one Adapter.

Then there is the list of supported Display Settings (or Modes). You can enumerate them in a similar fashion using the EnumDisplaySettings function, which fills a DEVMODE structure. It seems that Modes belong to an Adapter, not to a Display Device, so as the first parameter of this function you must pass the DISPLAY_DEVICE::DeviceName returned by EnumDisplayDevices(NULL, ...), not the one returned by EnumDisplayDevices(adapter.DeviceName, ...). The structure is quite complex, but the function fills only the following members: dmPelsWidth, dmPelsHeight, dmBitsPerPel, dmDisplayFlags and dmDisplayFrequency.
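
Again a sketch, reusing the adapter variable from the previous loop:

DEVMODE mode = {0};
mode.dmSize = sizeof(DEVMODE);
for (DWORD i = 0; EnumDisplaySettings(adapter.DeviceName, i, &mode); ++i)
{
    wprintf(L"Mode %lu: %lu x %lu @ %lu Hz, %lu bpp\n", i,
        mode.dmPelsWidth, mode.dmPelsHeight,
        mode.dmDisplayFrequency, mode.dmBitsPerPel);
}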

I have a single graphics card (AMD Radeon RX 480) with two Full HD (1920 x 1080) monitors connected. You can see example output of the program from my system here: ExampleOutput.txt.

To change the display mode, use the function ChangeDisplaySettingsEx.
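
For example, a call switching a given Adapter to 1920 x 1080 at 60 Hz could look like this (a sketch; targetDeviceName stands for the DISPLAY_DEVICE::DeviceName of the chosen Adapter):

DEVMODE mode = {0};
mode.dmSize = sizeof(DEVMODE);
mode.dmPelsWidth = 1920;
mode.dmPelsHeight = 1080;
mode.dmDisplayFrequency = 60;
// dmFields tells the function which members of the structure to use.
mode.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_DISPLAYFREQUENCY;
LONG result = ChangeDisplaySettingsEx(targetDeviceName, &mode, NULL, CDS_FULLSCREEN, NULL);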

The function returns DISP_CHANGE_SUCCESSFUL if the display mode was successfully changed, or one of the other DISP_CHANGE_* constants if it failed.

To restore the original display mode, call the function like this:

ChangeDisplaySettingsEx(targetDeviceName, NULL, NULL, 0, NULL);

Unfortunately, a display mode changed in the way described here is not automatically restored when the user switches to some other application (e.g. using Alt+Tab), as it is with DirectX fullscreen mode, but you can handle that yourself. The good news is that if you pass the CDS_FULLSCREEN flag to ChangeDisplaySettingsEx, the previous mode is automatically restored by the system when your application exits or crashes.


# 3 Rules to Make Your Image Look Good on a Projector

14:07, Wed 21 Dec 2016

As I go to conferences (and sometimes present my slides), do music visualizations at parties and attend demoscene parties, I've started noticing what looks good and what doesn't when shown on a big screen using a projector. It's often not what you expect when preparing your content and viewing it on a monitor. The problem is that while monitors usually do a pretty good job of representing all shades of colors and brightness, and we adjust them so we see the image from the right distance and in good lighting conditions, with an image presented by a projector that's often not the case. You never know what quality of image to expect before you actually enter the venue where you are going to present. That's why I have prepared three simple rules to follow if you want your content to look good on a projector:

1. Use High Contrast

White (or almost white) objects on black background are OK. Black (or almost black) objects on white background are also OK. Gray objects on gray background are not OK. In other words: don't rely on subtle differences in brightness.

Why? Because they may not be visible on the big screen. Sometimes the projector is not powerful enough to ensure sufficient contrast, so everything looks very dark. Sometimes the lighting conditions are such that everything looks very bright. Sometimes the gamma of the image is very different from what you experienced when preparing your content. (By gamma, I basically mean the curve showing how perceived brightness depends on the brightness value of a pixel.) As a result, all the dark colors may be indistinguishable and completely black, or all the bright colors may be indistinguishable and completely white, or conversely - a small difference in pixel brightness can make a large difference on the big screen.

So basically, use high contrast in all your images. Using subtle differences in pixel brightness to add some details is OK, but make sure the image looks good and conveys all the key information without them, relying only on the difference between black and white.

2. Don't Rely on Colors

Monitors can represent most of the colors of the sRGB space more or less correctly, but you shouldn't expect the same from a projector. Some colors may appear brighter or darker than expected. Sometimes colors just look completely different on the big screen.

That's why you shouldn't rely on them. Showing some details using colors is OK, but make sure your image looks good and conveys all the key information without them, relying on brightness alone.

I remember watching a business presentation where some data was shown on a graph as a yellow line on a white background. It was completely invisible on the big screen, so the presenter turned his laptop around so we could see it. Another time I needed to present on a screen that was lit by a blue lamp, and there was no way to turn it off. All parts of the image looked more or less blue and there was no point in using any colors. I've even seen a situation where one of the RGB components didn't work at all because of a broken cable!

3. Use Big Objects

Use only large font sizes, prefer big objects of uniform brightness/color, and make all lines thick. Don't show any important things as small objects or characters, and don't use single-pixel-wide lines.

Why? Because they may not be readable, or visible at all. Sometimes the screen is too small, or some viewers sit too far from it. Some people have bad vision and may have forgotten their glasses. Sometimes the image gets downscaled. Many projectors have low resolution. The image quality may just be bad - for example, a long VGA (D-sub) cable introduces some horizontal blur. I sometimes connect my laptop to an HDMI video switcher that presents itself to the laptop as a Full HD (1080p) display, while its output is actually connected to a projector with a native resolution of 1024 x 768. I once needed to present on an old projector that had neither D-sub nor HDMI input, only Composite, S-Video and... a DVD player. Not only did I need a special converter, but the image quality was also very bad, because a standard-definition PAL TV signal is only 576i.

For good examples, see the demos of the Cocoon group (see the group on pouet.net, or search YouTube). I think they are not only masterpieces of both technology and art, but are also deliberately prepared to look great on a big screen, following rules similar to the ones I described here.


# Color Temperature of Your Lighting

14:54, Sun 02 Oct 2016

In photography, video and all graphics in general, there are many more parameters to consider than just exposure, meaning a lighter or darker image. One of them is color temperature, or white balance. It's about what we consider "white" - a frame of reference, especially concerning the light source and thus all the objects lit by it. It's not a real temperature, but we measure it in Kelvins. Paradoxically, lower color temperature values (like 3000K) mean colors that we call "warmer" - more towards yellow, orange or red. Higher temperatures (like 7000K) mean "cooler" colors - more towards blue. Values like 6500K are considered the equivalent of sunlight during the day, while light bulbs usually emit around 3000K. The color temperature of your lighting matters when you work with colors on a computer; the common recommendation is to use a 6500K light source for that purpose.

I decided to make an experiment. Below you can see photos of part of my room, with a test screen displayed on my monitor, a piece of my wall behind it (supposed to be white) and a piece of furniture on the right (which should also be white). The monitor is an LG 23MA73D-PZ with an IPS panel, calibrated to what I believe should be around 6500K (setting Colour Temperature = Warm2).

The left column shows a photo taken in the middle of the night, with lighting from LED lamps having a 3000K color temperature. The middle column is the same scene lit by different LED lamps having 6500K. Finally, the right column shows a photo taken during the day, using only sunlight.

The only remaining variable is the white balance of the photo itself. That's why I introduced two rows. The top row shows all three photos calibrated to the same white balance = 6500K. As you can see, the image on the screen looks pretty much the same in all of them, because the monitor emits its own light. But the wall and the furniture, lit by a specific light source, look orange or reddish in the first photo, while in the other photos they are more or less gray.

Our eyes, as well as cameras, can adjust to a changing color temperature to compensate for it and make everything look neutral white again. So the second row shows the same photos calibrated to the white balance of the specific light source. Now the wall and the furniture look neutral gray in all of them, but notice what happened to the image on the screen when the light was 3000K - it completely changed the colors of the picture, making everything look blue.

That's why it's so important to consider the color temperature of your light sources when working on color correction and grading of photos, videos or other graphics. Otherwise you can produce an image that looked good at the time of making, but turns out to have a color cast when seen under different lighting conditions. Of course, if you just work with text or code, it doesn't matter that much. Then it is more important to have pleasant lighting that doesn't cause eye strain, which would probably be something more like the 3000K lamps.


# Blender 2.5 - Programming Export Plugin

21:24, Fri 22 Oct 2010

Today I've been familiarizing myself with Blender 2.5. This new version is still in Beta stage, but as far as I can see, everything is already in place and the program works OK. I only had to solve one problem at the beginning - my Blender was crashing at startup, because it was reading the PYTHONPATH environment variable and thus trying to use the old version of Python that I had installed, while the new Blender needs Python 3.1. The solution is to clear this environment variable so Blender can use its bundled Python distribution. I've created a batch script to do this, called "Run Blender.bat":

set PYTHONPATH=
"c:\Program Files\Blender Foundation\Blender\blender.exe"

Many changes have been made in the new Blender version. Fundamental concepts stay the same, so we still have Object Mode and Edit Mode, select objects with the right mouse button, grab with G, rotate with R and scale with S. The user interface is still nonstandard, with all its non-overlapping and non-modal philosophy. But they introduced some new features and made quite big changes to the interface, so it took me some time to play around and discover where everything is now and how it works. Readings on this topic I'd recommend are the new manual for Blender 2.5 and especially Blender 2.5 Changes - a document that highlights what's new.

The function I spent a lot of time searching for was showing normals, so in case you also need it and can't find it: 1. You must be in Edit Mode. 2. Click the small (+) symbol in the top-right corner of the 3D View to expand a panel with some parameters. 3. In this panel, look for the Mesh Display / Normals section and select the checkboxes you want.

When it comes to writing plugins, the Python API has been completely redesigned. Information about it can be found in the Introduction, the Manual and finally the Reference. I couldn't find any article about writing an export plugin for Blender 2.5, but based on the information I could find and the source code of existing plugins, I've managed to code one. Here it is: io_export_json.py. It exports some of the data (scenes, objects, meshes, vertices, faces, edges, normals, texture coordinates, vertex colors) in JSON format.

BTW, do you know any good program to view JSON documents as a tree? The best I've found so far is JSON Viewer, but it's not ideal. For example, it can't open files, so the document has to be pasted via the clipboard.

Blender keeps Python plugins in the user's directory. On my Windows 7 it is C:\Users\%MY_LOGIN%\AppData\Roaming\Blender Foundation\Blender\2.54\scripts\addons. To start using a new plugin, you have to:

  1. Copy the .py file to this directory.
  2. Restart Blender or hit F8 to reload scripts.
  3. Go to File / User Preferences.
  4. Navigate to the Add-Ons tab.
  5. Find the add-on you want to use. You can filter add-ons by type with the buttons on the left.
  6. Select the checkbox next to the add-on.
  7. It should now be available in the File / Export menu.


# How to Equally Blend Multiple Images in GIMP

19:55, Sun 01 Aug 2010

During today's walk around Warsaw I took many different photos. Some of them were series of photos of a road taken from exactly the same place, with the purpose of merging them into a single photo. My idea was to simply average the color of each pixel, so it should be like:

out = 1/4 * (img[0] + img[1] + img[2] + img[3])

But how to do this in GIMP? There is no such filter, AFAIK. An obvious solution is to use blending - the Mode and Opacity settings for layers - but it turns out not to be so simple. Each layer blends with the merged image from beneath it, so instead of the average formula above we have to refer to the formula for blending (linear interpolation), which is:

out[i] = t[i] * img[i] + (1-t[i]) * out[i+1]

Where:
i is a layer index 0..(n-1), counted from the topmost layer,
t[i] is the Opacity parameter of layer i.

After expanding it for 4 layers, the formula becomes:

out =
t[0] * img0 + (1-t[0]) * (
  t[1] * img1 + (1-t[1]) * (
    t[2] * img2 + (1-t[2]) * (
      t[3] * img3 + (1-t[3]) * 0
    )
  )
)

Finally, after doing some math with pen and paper, I calculated that to equally blend (average) n images in GIMP, you should set the Opacity of layer i to 1/(n-i): the topmost layer gets 1/n, the next one 1/(n-1), and so on, down to 1/1 = 100% for the bottom layer. A quick check of this rule is sketched below.
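
Here is a small C program verifying that these opacities give each of the n layers the same effective weight of 1/n:

#include <stdio.h>

int main(void)
{
    const int n = 5;
    double remaining = 1.0; // product of (1 - t[j]) over the layers above
    for (int i = 0; i < n; ++i)
    {
        double t = 1.0 / (n - i);      // Opacity of layer i, topmost first
        double weight = remaining * t; // effective contribution of img[i]
        printf("layer %d: opacity %3.0f%%, weight %.3f\n", i, t * 100.0, weight);
        remaining *= 1.0 - t;
    }
    return 0;
}

Every line it prints ends with weight 0.200, i.e. 1/5.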

For example, I have 5 photos, so I give them Opacity: 20, 25, 33, 50, 100. Here is the result: an image that could be called "Road of Ghosts" :)

By the way, I've found a website where panorama photos can be uploaded for free and viewed interactively using just Flash. Here is my profile: reg | Panogio.


# Color Names in .NET - CheatSheet

20:59, Mon 05 Jul 2010

Some color values used in computing have names, like "Red" (#FF0000) or "Navy" (#000080). You probably know them if you've written anything in HTML. But there are more of them than just the several most popular ones made of the values 0x00, 0x80 and 0xFF. I've prepared (or rather, to be honest, copied from the MSDN Library) a table of the color names available in the .NET standard library, as static members of the System.Drawing.Color, System.Drawing.Pens and System.Drawing.Brushes classes. Here is my "Color Names in .NET" CheatSheet:

Color_Names_in_DotNet.pdf
Color_Names_in_DotNet.odt


# Tiny Planet

11:51, Sat 03 Jul 2010

Tiny Planet is an interesting effect that can be made from a photo or drawing. I saw it for the first time on Wojciech Toman's devlog. Here are some tiny planets I made recently:

Warsaw - Chomiczówka:

My house:

To make a tiny planet, you first have to take a 360-degree panoramic photo of some landscape. For stitching photos into a single panoramic one, I recommend a free application from Microsoft Research called Microsoft Image Composite Editor (ICE). Of course there are many others available. Then the process involves some manual graphics work and/or smart usage of some filters, where the crucial one is converting the image to polar coordinates. In GIMP you can find the appropriate menu command in Filters / Distorts / Polar Coordinates.
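
The core of such a filter is just a mapping of every output pixel back to the panorama through polar coordinates. Here is a rough sketch in C of that idea, assuming a square output image and leaving out clamping for brevity - the general principle, not GIMP's exact implementation:

#include <math.h>

#define PI 3.14159265358979323846

// Map output pixel (x, y) to source pixel (u, v) in a srcW x srcH panorama;
// outSize is the width/height of the square output image.
void tinyPlanetMapping(int x, int y, int outSize, int srcW, int srcH,
                       int* u, int* v)
{
    double cx = x - outSize / 2.0;
    double cy = y - outSize / 2.0;
    double radius = sqrt(cx * cx + cy * cy); // distance from the center
    double angle = atan2(cy, cx);            // direction around it, -PI..PI
    // The angle selects the horizontal position in the panorama; the radius
    // selects the vertical one, so the bottom edge collapses into the center.
    *u = (int)((angle + PI) / (2.0 * PI) * (srcW - 1));
    *v = (int)((1.0 - radius / (outSize / 2.0)) * (srcH - 1));
}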

The biggest question when making such images seems to be how to fill the inside and the outside of the circle that forms the surface of the planet. Do you have any ideas better than the ones I used here?


