How to Correctly Interpolate Vertex Attributes on a Parallelogram Using Modern GPUs?

Thu, 13 Feb 2020

This is probably the first guest post ever on my blog. It was written by my friend Ɓukasz Izdebski, Ph.D.

In a nutshell, today’s graphics cards render meshes using only triangles, as shown in the picture below.

I don’t want to describe the whole graphics pipeline or how all of it is done (a lot of information on this topic can easily be found on the Internet), but as a short recap: in the first programmable part of the rendering pipeline, the vertex shader receives a single vertex (together with the data assigned to it, which I will come back to later) and outputs one transformed vertex. Usually the transformation is from a 3D coordinate system to Normalized Device Coordinates (NDC) (information about NDC can be found in Coordinate Systems). After primitive clipping, perspective divide, and the viewport transform, vertices end up projected onto the 2D screen that is drawn to the monitor output.
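As a minimal illustration of this stage, a GLSL vertex shader can look like this (a sketch; the uniform block and matrix name are my own):

/////////////////////////////////////////////////////////////////////
//GLSL Vertex Shader - minimal sketch of the transform described above
/////////////////////////////////////////////////////////////////////
#version 450

layout(binding = 0) uniform TRANSFORMS
{
    mat4 ModelViewProj; // combined world-view-projection matrix (assumed)
} transforms;

layout(location = 0) in vec3 Position; // vertex position in 3D model space

void main()
{
    // Output a clip-space position; the fixed-function stages that follow
    // perform clipping, perspective divide (to NDC), and the viewport transform.
    gl_Position = transforms.ModelViewProj * vec4(Position, 1.0);
}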

But this is only half the story. So far I have described what happens to vertices, but at the beginning I mentioned triangles, and three vertices form one triangle. After vertex processing comes Primitive Assembly, and the next stage is Rasterization. This stage is very important because it generates the fragments (pixels) that lie inside the triangle, as shown in the picture below.

How are the colors of the pixels inside the triangle generated? Each vertex can carry not just one piece of data - its 3D coordinates in the virtual world - but also additional data called attributes. These attributes can be the color of the vertex, a normal vector, texture coordinates, etc.

How, then, are the rainbow colors in the picture above generated, when, as I said, only one color can be set at each of the three vertices of the triangle? The answer is interpolation. As we can read on Wikipedia, interpolation in mathematics is a type of estimation: a method of constructing new data points between a discrete set of known data points. In the described problem, it is about generating the interpolated colors inside the rendered triangle.

The way to achieve this is to use barycentric coordinates. (These coordinates can be used not only for interpolation but also to decide whether a fragment lies inside the triangle. More on this topic can be read in Rasterization: a Practical Implementation.) In short, barycentric coordinates are a set of three weights λ1, λ2, λ3, with λ1 + λ2 + λ3 = 1 and λ1, λ2, λ3 ≥ 0 for points inside the triangle. When a triangle is rasterized, the proper barycentric coordinates are calculated for every fragment of the triangle. The color C of the fragment can then be calculated as the weighted sum of the colors C1, C2, C3 at the three vertices, where the weights are of course the barycentric coordinates: C = C1 * λ1 + C2 * λ2 + C3 * λ3
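To make this concrete, here is a small GLSL sketch that computes the barycentric coordinates of a 2D point from signed triangle areas and uses them as interpolation weights (the function names are mine):

// Barycentric coordinates of point p with respect to triangle (a, b, c),
// computed as ratios of signed sub-triangle areas.
vec3 Barycentric2D(vec2 p, vec2 a, vec2 b, vec2 c)
{
    float areaABC = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    float lambda1 = ((b.x - p.x) * (c.y - p.y) - (c.x - p.x) * (b.y - p.y)) / areaABC;
    float lambda2 = ((c.x - p.x) * (a.y - p.y) - (a.x - p.x) * (c.y - p.y)) / areaABC;
    return vec3(lambda1, lambda2, 1.0 - lambda1 - lambda2);
}

// C = C1 * lambda1 + C2 * lambda2 + C3 * lambda3
vec4 InterpolateColor(vec3 lambda, vec4 c1, vec4 c2, vec4 c3)
{
    return c1 * lambda.x + c2 * lambda.y + c3 * lambda.z;
}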

In this way not only color but any vertex attribute can be interpolated. (This functionality can be disabled by using a proper Interpolation Qualifier, like flat, on an attribute in the shader source code.)
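For example, declaring a fragment shader input like this makes every fragment of the triangle receive the un-interpolated value of the provoking vertex:

// The flat qualifier disables interpolation for this input.
layout(location = 0) flat in vec4 Color;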

When dealing with triangle meshes, this way of interpolating attributes gives correct results, but when rendering 2D sprites or font glyphs, some disadvantages can show up under specific circumstances. When we want to render a gradient that starts in one of the corners of a sprite (see the picture below), we get quite ugly results :(

This happens because the interpolation is performed on the two triangles independently of each other. Graphics cards can work only on triangles, not quads. In this case we want the interpolation to happen across the quad, not per triangle, as pictured below:

How can we trick the graphics card and force quadrilateral attribute interpolation? One way to render this type of gradient is to use tessellation - subdividing the quad geometry. Tessellation shaders are available starting from DirectX 11, OpenGL 4.0, and Vulkan 1.0. A simple example of how the result looks depending on different tessellation parameters (more details about tessellation can be found in Tessellation Stages) can be seen in the animated picture below.

As we can see, when the quad is subdivided into more than 16 pieces, we get the desired visual result, but as my high school teacher used to say: “Don’t shoot a fly with a cannon” - using tessellation to render such a simple thing is overkill.
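Still, for reference, the core of such a tessellation setup could look roughly like this GLSL tessellation evaluation shader, which evaluates the subdivided quad bilinearly (a minimal sketch; the in/out declarations and the control point order are my assumptions):

/////////////////////////////////////////////////////////////////////
//GLSL Tessellation Evaluation Shader - quad subdivision (sketch)
/////////////////////////////////////////////////////////////////////
#version 450

layout(quads, equal_spacing, ccw) in;

layout(location = 0) in vec4 Color[];   // per-control-point colors, corners ordered around the quad
layout(location = 0) out vec4 OutColor;

void main()
{
    vec2 uv = gl_TessCoord.xy;

    // Bilinear evaluation of position and color across the patch; with
    // enough subdivisions the per-triangle linear artifact disappears.
    vec4 p0 = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, uv.x);
    vec4 p1 = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, uv.x);
    gl_Position = mix(p0, p1, uv.y);

    vec4 c0 = mix(Color[0], Color[1], uv.x);
    vec4 c1 = mix(Color[3], Color[2], uv.x);
    OutColor = mix(c0, c1, uv.y);
}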

This is why I developed a new technique that helps achieve this goal. First, we need access to the barycentric coordinates in the fragment shader. DirectX 12 HLSL gives us those coordinates through SV_Barycentrics. In Vulkan, those coordinates are available through the AMD VK_AMD_shader_explicit_vertex_parameter extension and the Nvidia VK_NV_fragment_shader_barycentric extension. Maybe in the near future they will be available in the core Vulkan spec and, more importantly, from all hardware vendors.
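For example, with the Nvidia extension, the standard interpolation can be reproduced by hand in GLSL roughly like this (a sketch based on GL_NV_fragment_shader_barycentric; the input names are mine):

/////////////////////////////////////////////////////////////////////
//GLSL Fragment Shader - built-in barycentrics (Nvidia extension, sketch)
/////////////////////////////////////////////////////////////////////
#version 450
#extension GL_NV_fragment_shader_barycentric : require

// pervertexNV gives raw, non-interpolated access to the three per-vertex values.
layout(location = 0) pervertexNV in vec4 Color[3];

layout(location = 0) out vec4 OutColor;

void main()
{
    // gl_BaryCoordNV holds the perspective-corrected barycentric
    // coordinates (lambda1, lambda2, lambda3) of the current fragment.
    OutColor = Color[0] * gl_BaryCoordNV.x +
               Color[1] * gl_BaryCoordNV.y +
               Color[2] * gl_BaryCoordNV.z;
}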

If we are not fortunate enough to have those coordinates built in, we can generate them ourselves by adding some extra data: one new vertex attribute and one new uniform (constant) value. Here are the details of this solution. Consider a quadrilateral built from four vertices and two triangles, as shown in the picture below.

The additional Barycentric attribute in the vertices is a 2D vector and should contain the following values:

A = (1,0) B = (0,0) C = (0,1) D = (0,0)

The next step is to calculate extra constant data for the parameter to be interpolated (in this case the color attribute), as shown in the picture above, using the equation:

ExtraColorData = -ColorAtVertexA + ColorAtVertexB - ColorAtVertexC + ColorAtVertexD
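For completeness, the matching vertex shader can simply pass the new attribute through to the interface block read by the fragment shader below (a minimal sketch; the vertex input layout and the position pass-through are my assumptions):

/////////////////////////////////////////////////////////////////////
//GLSL Vertex Shader (sketch)
/////////////////////////////////////////////////////////////////////
#version 450
#extension GL_ARB_separate_shader_objects : enable

layout(location = 0) in vec3 Position;
layout(location = 1) in vec4 Color;
layout(location = 2) in vec2 Barycentric; // A=(1,0), B=(0,0), C=(0,1), D=(0,0)

layout(location = 0) out block
{
    vec4 Color;
    vec2 Barycentric;
} VSOutput;

void main()
{
    // Position is assumed to be pre-transformed here; a real shader
    // would multiply by the usual transformation matrices.
    gl_Position = vec4(Position, 1.0);
    VSOutput.Color = Color;
    VSOutput.Barycentric = Barycentric;
}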

Why does this work? On each of the two triangles, the linearly interpolated product Barycentric.x * Barycentric.y is exactly the term by which bilinear interpolation over the quad differs from the per-triangle linear interpolation, so adding that product multiplied by ExtraColorData reconstructs the bilinear result. The fragment shader that renders the interpolation we are looking for looks like this:

/////////////////////////////////////////////////////////////////////
//GLSL Fragment Shader
/////////////////////////////////////////////////////////////////////
#version 450
#extension GL_ARB_separate_shader_objects : enable

// ExtraColorData = -ColorAtVertexA + ColorAtVertexB - ColorAtVertexC + ColorAtVertexD
layout(binding = 0) uniform CONSTANT_BUFFER
{
    vec4 ExtraColorData;
} cbuffer;

// Note: the input block needs an explicit location to match the vertex shader.
layout(location = 0) in block
{
    vec4 Color;       // interpolated linearly per triangle by the hardware
    vec2 Barycentric; // the additional attribute: A=(1,0), B=(0,0), C=(0,1), D=(0,0)
} PSInput;

layout(location = 0) out vec4 SV_TARGET;

void main()
{
    // Add the correction term that turns the per-triangle linear
    // interpolation into bilinear interpolation over the parallelogram.
    SV_TARGET = PSInput.Color + PSInput.Barycentric.x * PSInput.Barycentric.y * cbuffer.ExtraColorData;
}

/////////////////////////////////////////////////////////////////////
//HLSL Pixel Shader
/////////////////////////////////////////////////////////////////////
// ExtraColorData = -ColorAtVertexA + ColorAtVertexB - ColorAtVertexC + ColorAtVertexD
cbuffer CONSTANT_BUFFER : register(b0)
{
    float4 ExtraColorData;
};

struct PSInput
{
    float4 color : COLOR;           // interpolated linearly per triangle by the hardware
    float2 barycentric : TEXCOORD0; // the additional attribute: A=(1,0), B=(0,0), C=(0,1), D=(0,0)
};

float4 PSMain(PSInput input) : SV_TARGET
{
    // Add the correction term that turns the per-triangle linear
    // interpolation into bilinear interpolation over the parallelogram.
    return input.color + input.barycentric.x * input.barycentric.y * ExtraColorData;
}

That’s all. As we can see, it should not add a big performance overhead. And once built-in barycentric coordinates become more widely available, the memory overhead will be minimal as well.

The reader will probably ask whether this also looks good in a 3D perspective scenario, not only when the triangle is parallel to the screen.

As shown in the picture above, the hardware properly interpolates the data using additional computation, as described here (Rasterization: a Practical Implementation), so this new method works with perspective as it should.
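That additional computation is perspective-correct interpolation: each attribute is divided by the clip-space w of its vertex, interpolated linearly in screen space, and the result is divided by the similarly interpolated 1/w. A rough GLSL illustration of the idea (the function name is mine):

// Perspective-correct interpolation of a color across a triangle.
// lambda are the screen-space barycentric coordinates; w are the
// clip-space w values of the three vertices.
vec4 PerspectiveCorrect(vec3 lambda, vec4 c1, vec4 c2, vec4 c3, vec3 w)
{
    vec3 invW = vec3(1.0) / w;
    float interpInvW = dot(lambda, invW); // interpolated 1/w
    vec4 numerator = c1 * (lambda.x * invW.x) +
                     c2 * (lambda.y * invW.y) +
                     c3 * (lambda.z * invW.z);
    return numerator / interpInvW;
}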

Does this method give proper results only on squares?

This is a good question! The solution described above works on all parallelograms. I’m now working on a solution for all convex quadrilaterals.

What else can we use this method for?

One more use comes to mind: a post-process fullscreen quad. As I mentioned earlier, graphics cards do not render quads, only triangles. To get proper interpolation of attributes, 3D engines render one BIG triangle that covers the whole screen. With this new approach, a quad built from two triangles becomes a viable option, and the attributes that need quadrilateral interpolation can be computed as shown above.
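For reference, that big-triangle trick is commonly implemented with a vertex shader like this (a minimal sketch assuming Vulkan-style gl_VertexIndex and no vertex buffer):

/////////////////////////////////////////////////////////////////////
//GLSL Vertex Shader - single fullscreen triangle (sketch)
/////////////////////////////////////////////////////////////////////
#version 450

layout(location = 0) out vec2 TexCoord;

void main()
{
    // Vertices 0, 1, 2 expand to (0,0), (2,0), (0,2) in UV space,
    // producing one triangle that covers the whole screen.
    vec2 uv = vec2((gl_VertexIndex << 1) & 2, gl_VertexIndex & 2);
    TexCoord = uv;
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
}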
