Wednesday, May 20, 2015

Aliasing: Part 2

Geometry aliasing is well known, and it is a complex (and terrible) monster that can take different forms. Let's see them:

The staircase form


We have seen that texture filtering and mipmapping can "avoid" aliasing by blurring the input signal. Unfortunately, with geometry, we don't have such a luxury (at least not directly at the hardware level). Here's a simple example:

Let's rasterize a triangle:


Then shade it in red:


Here it is: the staircase monster!

To solve it we usually blur along the edges as a post-process. Here are two very common algorithms for it:
FXAA (not very expensive, but blurs too much)
SMAA (works better, but is more expensive)
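To give a rough idea of how such a post-process works, here is a minimal, FXAA-flavored sketch (not the actual FXAA or SMAA algorithms): it measures the local luminance contrast and, on high-contrast pixels, blends toward the neighborhood average to soften the staircase. The threshold and blend amount are arbitrary values chosen for illustration.

#include <algorithm>

struct Color { float r, g, b; };

static float luma(Color c) { return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b; }

static Color lerp(Color a, Color b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Center pixel c and its 4 direct neighbors (north, south, east, west).
Color antialiasPixel(Color c, Color n, Color s, Color e, Color w)
{
    float lC = luma(c), lN = luma(n), lS = luma(s), lE = luma(e), lW = luma(w);
    float lMin = std::min(lC, std::min(std::min(lN, lS), std::min(lE, lW)));
    float lMax = std::max(lC, std::max(std::max(lN, lS), std::max(lE, lW)));

    // Low local contrast: we are inside a surface, leave the pixel untouched.
    if (lMax - lMin < 0.1f)      // threshold value is an arbitrary choice
        return c;

    // High contrast: we are on an edge, pull the pixel toward its
    // neighborhood average to soften the staircase.
    Color avg = { (n.r + s.r + e.r + w.r) * 0.25f,
                  (n.g + s.g + e.g + w.g) * 0.25f,
                  (n.b + s.b + e.b + w.b) * 0.25f };
    return lerp(c, avg, 0.5f);   // blend amount is a simplification too
}

Real implementations are much smarter than this: they estimate the edge direction and blend along it, instead of blindly averaging the whole neighborhood.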


The diffuse lighting form

So we fought well and defeated the staircase form without too much damage to our scene, but aliasing is still there, lurking in the dark. It waits, then pops up as a fully lit pixel, just to hide in the dark again in the next frame!

This form of aliasing is trickier: it needs the camera to be in motion to be clearly visible. Here is an example:

* Let's say that we want to simulate the breathing of the main character with the camera. The player stays still. The view remains "almost" the same; however, it is translated a little bit back and forth every frame, from A to B.

* Let's now imagine that there is a bevel in the scene, defined with geometry only.

* Finally, let's apply simple diffuse lighting, assuming the light is above the bevel (local normal dotted with the up vector in our case).




When the camera creates the A projection, the pixel on the bevel is purely red.
When the camera creates the B projection, the pixel on the bevel is almost black!

By going back and forth, the camera is thus creating temporal aliasing.
So even something as simple as diffuse lighting is exposed to aliasing :(
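Here is a tiny, hypothetical numerical sketch of that scenario: a bevel narrower than a pixel, shaded with the "normal dot up" diffuse term from above. A sub-pixel camera offset decides whether the pixel center lands on the bevel (bright) or on the steep face (almost black). All positions and normals are made-up values, just to show the flip.

#include <algorithm>
#include <cstdio>

struct Vec2 { float x, y; };

// Diffuse term used in the post: local normal dotted with the up vector (0, 1).
static float diffuse(Vec2 n)
{
    return std::max(n.y, 0.0f);
}

// Made-up scene: along screen-space x, a bevel narrower than a pixel sits in
// [0.40, 0.45]; everything else is a steep, almost vertical face.
static Vec2 normalAt(float x)
{
    if (x >= 0.40f && x <= 0.45f)
        return {0.7071f, 0.7071f};   // bevel: catches the light
    return {0.9962f, 0.0872f};       // steep face: almost no diffuse
}

int main()
{
    const float pixelCenter  = 0.43f; // where the pixel center samples in projection A
    const float breathOffset = 0.04f; // tiny camera translation for projection B

    printf("projection A: %.2f (bright)\n",       diffuse(normalAt(pixelCenter)));
    printf("projection B: %.2f (almost black)\n", diffuse(normalAt(pixelCenter + breathOffset)));
    return 0;
}

The shading itself is perfectly smooth; the flicker comes entirely from where the single sample per pixel happens to land.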

Will vertex interpolation save the day?

One could say that I have oversimplified the problem; actually yes :P. Most of the time vertices will not be duplicated, and vertex interpolation will happen between faces, thus making pixel shading smoother:

blue lines = vertices and associated normals
yellow lines = interpolated normals
green = blue + yellow lines


Yeah, vertex interpolation saves the day!
Hmm, actually not: it pretty much kills the lighting... not really a solution then :(
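To see why, here is a small sketch (reusing the made-up normals from the previous example) of what interpolation does: the normal is linearly interpolated between the steep face and the bevel and renormalized, so the bevel's crisp highlight gets smeared into a broad, dim gradient across the whole face.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

static Vec2 normalizeV(Vec2 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

int main()
{
    const Vec2 faceNormal  = {0.9962f, 0.0872f}; // steep face, almost unlit
    const Vec2 bevelNormal = {0.7071f, 0.7071f}; // bevel, fully lit

    // Walk across the surface: t = 0 on the face, t = 1 at the bevel vertex.
    for (float t = 0.0f; t <= 1.001f; t += 0.25f)
    {
        Vec2 n = normalizeV({ faceNormal.x + (bevelNormal.x - faceNormal.x) * t,
                              faceNormal.y + (bevelNormal.y - faceNormal.y) * t });
        float diffuse = std::max(n.y, 0.0f); // dot with the up vector (0, 1)
        printf("t = %.2f  diffuse = %.2f\n", t, diffuse);
    }
    return 0;
}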


Let's try to add more vertices!

blue lines = vertices and associated normals
yellow lines = interpolated normals
green = blue + yellow lines

Nope, still not good. Temporal aliasing is still lurking :(

Actually, getting aliasing from more tessellated geometry is one of the inherent flaws of the rendering pipeline: the sampling rate is too low compared to the frequency of the information carried by the geometry, so we cannot reconstruct the initial signal properly.

A possible help here is to feed the current shading with the one(s) from the previous frame(s); this is called temporal antialiasing. It can be combined with geometry antialiasing (FXAA/SMAA/...) as well.

TXAA from NVIDIA is such a technique.
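The core of the idea is just an exponential blend between the freshly shaded frame and a history buffer. Here is a minimal sketch of that accumulation step (deliberately ignoring reprojection and history rejection, which any real temporal antialiasing, TXAA included, needs in order to handle camera motion correctly):

#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

static Color lerp(Color a, Color b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Blend the freshly shaded frame into a persistent history buffer
// (both buffers are assumed to have the same resolution).
// A small blendFactor keeps most of the history, smoothing out pixels that
// flicker between bright and dark from one frame to the next.
void temporalAccumulate(std::vector<Color>& history,
                        const std::vector<Color>& currentFrame,
                        float blendFactor = 0.1f)
{
    for (std::size_t i = 0; i < history.size(); ++i)
        history[i] = lerp(history[i], currentFrame[i], blendFactor);
}

With a blend factor of 0.1, a pixel that flickers between bright and dark converges toward their average instead of popping every frame.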

A word about specular aliasing

Specular aliasing can be awful and thus tends to be better known than the diffuse one. Fortunately, specular shading usually relies on a gloss map that can be pre-filtered in correlation with the normal's rate of change (i.e. its variance).

In other words, the specular can be made less glossy where the normal changes a lot in the neighborhood of a shaded pixel. This is very effective, but it's still an approximation.
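One well-known recipe along these lines is Toksvig's: mip-filter the normal map without renormalizing, and use the length of the averaged normal, which shrinks where the normals disagree, to shorten the specular exponent. A sketch of just that adjustment:

// Sketch of Toksvig's adjustment: the length of the averaged (mip-filtered,
// non-renormalized) normal encodes how much the normals vary; the shorter it
// is, the more the specular exponent is reduced.

// len   : length of the averaged normal fetched from the normal map mip chain
// power : original Blinn-Phong specular exponent (from the gloss map)
float toksvigPower(float len, float power)
{
    float ft = len / (len + power * (1.0f - len)); // Toksvig factor, in [0, 1]
    return ft * power;                             // anti-aliased exponent
}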

Here is an excellent article by Stephen Hill about it:
Specular-showdown


However, in the real world we often approximate this. In theory, the normal variance should be known offline and the gloss map linked to it. In practice, normal maps are often stored in tangent space and are used on many different meshes with different geometry. Furthermore, the variance of detail maps should be taken into account if they are used.



