Wednesday, May 20, 2015

Aliasing: Part 2

Geometry aliasing is well known, and it is a complex (and terrible) monster that can take different forms. Let's look at them:

The staircase form


We have seen that texture filtering and mipmapping can "avoid" aliasing by blurring the input signal. Unfortunately, with geometry, we have no such luxury (at least not directly at the hardware level). Here's a simple example:

Let's rasterize a triangle:


Then shade it in red:


Here it is: the staircase monster!
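
To make the staircase concrete, here is a toy rasterizer sketch (not how real GPUs iterate, and all coordinates are made up): each pixel's center is tested against the triangle's edges, so a pixel is either fully covered or not covered at all.

```python
# A toy rasterizer sketch: one coverage test at each pixel center.
# Because a pixel is all-or-nothing, a slanted edge becomes a staircase.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: > 0 if P is on the left of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside(tri, px, py):
    (ax, ay), (bx, by), (cx, cy) = tri
    return (edge(ax, ay, bx, by, px, py) >= 0 and
            edge(bx, by, cx, cy, px, py) >= 0 and
            edge(cx, cy, ax, ay, px, py) >= 0)

tri = [(0.5, 0.5), (15.5, 2.5), (3.5, 9.5)]  # counter-clockwise triangle
for y in range(10):
    # Sample at pixel centers (x + 0.5, y + 0.5).
    print("".join("#" if inside(tri, x + 0.5, y + 0.5) else "."
                  for x in range(16)))
```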

To fight it, we usually blur along the edges as a post-process (a naive sketch of the idea follows the list). Here are two very common algorithms for this:
FXAA (cheap, but blurs too much)
SMAA (works better, but is more expensive)
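
Neither algorithm fits in a few lines, but here is a deliberately naive sketch of such a post-process (far cruder than FXAA or SMAA, and with made-up values): find pixels whose luminance jumps compared to a neighbor, and blend the two.

```python
# A naive edge-softening post-process sketch (far cruder than FXAA/SMAA):
# where a pixel's luminance jumps compared to its right or bottom neighbor,
# blend the two pixels to soften the step.

def smooth_edges(img, threshold=0.5):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            for nx, ny in ((x + 1, y), (x, y + 1)):
                if nx < w and ny < h and abs(img[y][x] - img[ny][nx]) > threshold:
                    avg = 0.5 * (img[y][x] + img[ny][nx])
                    out[y][x], out[ny][nx] = avg, avg
    return out

# A hard staircase edge (1 = red triangle, 0 = background):
img = [[1, 1, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [1, 0, 0, 0, 0]]
for row in smooth_edges(img):
    print(row)
```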


The diffuse lighting form

So we fought well and defeated the staircase form without too much damage to our scene, but aliasing is still there, lurking in the dark. It waits, then pops up as a fully lit pixel, just to hide in the dark again in the next frame!

This form of aliasing is trickier: it needs the camera to be in motion to be clearly visible. Here is an example:

* Let's say we want to simulate the main character's breathing with the camera. The player stays still, and the view remains "almost" the same; however, it is translated back and forth a little every frame, from A to B.

* Let's now imagine that there is a bevel in the scene, and that this bevel is defined with geometry only.

* Finally, let's apply simple diffuse lighting, assuming the light is above the bevel (local normal dotted with the up vector, in our case).




When the camera creates the A projection, the pixel on the bevel is purely red.
When the camera creates the B projection, the pixel on the bevel is almost black!

By going back and forth, the camera creates temporal aliasing.
Even something as simple as diffuse lighting is thus exposed to aliasing :(
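
To illustrate, here is a minimal sketch of the bevel setup (all numbers are made up): a 1D surface profile with a narrow 45-degree bevel, shaded with N · up, sampled at one pixel center whose position shifts slightly between camera positions A and B.

```python
import math

# A minimal sketch of the bevel example (all values are made up). The scene
# is a 1D surface profile: a flat top, a narrow 45-degree bevel, then a
# vertical face. Shading is just max(0, N . up).

def normal_at(x):
    """Surface normal of the profile at horizontal position x."""
    if x < 1.0:
        return (0.0, 1.0)                        # flat top: points up
    elif x < 1.1:
        return (math.sqrt(0.5), math.sqrt(0.5))  # narrow 45-degree bevel
    else:
        return (1.0, 0.0)                        # vertical face: points sideways

def diffuse(n, light=(0.0, 1.0)):
    return max(0.0, n[0] * light[0] + n[1] * light[1])

# We shade only the pixel center, and the breathing camera shifts where
# that center lands on the surface between positions A and B.
center_a = 1.05   # frame A: the center lands on the bevel -> lit (red)
center_b = 1.12   # frame B: it lands on the vertical face -> unlit (black)
print(diffuse(normal_at(center_a)))  # ~0.707
print(diffuse(normal_at(center_b)))  # 0.0
```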

Will vertex interpolation save the day?

One could say that I have oversimplified the problem. Actually, yes :P. Most of the time vertices will not be duplicated, and vertex interpolation will happen between faces, making pixel shading smoother:

blue lines = vertices and associated normals
yellow lines = interpolated normals
green = blue + yellow lines


Yeah, vertex interpolation saves the day!
Hmm, actually not: it pretty much kills the lighting... not really a solution then :(
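
For reference, here is roughly what that interpolation does, sketched in a few lines (names and values are mine): the rasterizer blends the vertex normals across the face, and the renormalized result is used for lighting.

```python
import math

# A sketch of per-pixel normal interpolation: the rasterizer linearly
# blends the vertex normals across the face, and the renormalized result
# is used for lighting.

def normalize(v):
    length = math.sqrt(v[0] * v[0] + v[1] * v[1])
    return (v[0] / length, v[1] / length)

def lerp_normal(n0, n1, t):
    return normalize((n0[0] * (1 - t) + n1[0] * t,
                      n0[1] * (1 - t) + n1[1] * t))

# Shared vertex normals at the two ends of the bevel region: up vs sideways.
n_top, n_side, up = (0.0, 1.0), (1.0, 0.0), (0.0, 1.0)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    n = lerp_normal(n_top, n_side, t)
    print(t, max(0.0, n[0] * up[0] + n[1] * up[1]))
# The lighting now falls off gradually instead of jumping, but the crisp
# highlight on the bevel is gone: the lighting is smoothed away.
```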


Let's try to add more vertices!

blue lines = vertices and associated normals
yellow lines = interpolated normals
green = blue + yellow lines

Nope, still not good. Temporal aliasing is still lurking :(

Actually, getting aliasing from highly tessellated geometry is one of the inherent flaws of the rendering pipeline: the sampling rate is too low compared to the frequency of the information carried by the geometry, so we cannot reconstruct the initial signal properly.

A possible remedy is to feed the current shading with the result(s) from the previous frame(s); this is called temporal antialiasing. It can be combined with geometry antialiasing (FXAA/SMAA/...) as well.

TXAA from NVIDIA is one such technique.
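
As an illustration of the temporal accumulation idea (this is not NVIDIA's actual TXAA; real implementations also reproject the history with motion vectors and clamp it to limit ghosting), here is a minimal sketch:

```python
# A minimal sketch of temporal accumulation: blend the current frame with
# an exponential history buffer. Reprojection and history clamping are
# omitted for brevity.

def temporal_blend(history, current, alpha=0.1):
    """Per-pixel exponential moving average over frames."""
    return [h * (1.0 - alpha) + c * alpha for h, c in zip(history, current)]

# The flickering bevel pixel from before: red one frame, black the next.
history = [0.0]
for frame in range(20):
    current = [0.707 if frame % 2 == 0 else 0.0]
    history = temporal_blend(history, current)
print(history)  # settles near the average (~0.35) instead of flickering
```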

A word about Specular aliasing

Specular aliasing can be awful and thus tends to be better known than the diffuse one. Fortunately, specular shading usually relies on a gloss map that can be pre-filtered according to the normals' rate of change (i.e. their variance).

In other words, the specular response can be made less glossy where the normal changes a lot in the neighborhood of a shaded pixel. This is very effective, but it is still an approximation.
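
As an illustration, here is a small sketch in the spirit of Toksvig's trick (covered in the article linked just below; the function names are mine): averaging unit normals shortens the result where they disagree, and that shortening can drive down the specular power stored in the pre-filtered gloss map.

```python
import math

# Averaging unit normals (e.g. while building a mip level) yields a vector
# that gets shorter where the normals disagree. That shortening measures
# variance and can be used to lower the stored specular power.

def average_normal(normals):
    n = [sum(c) / len(normals) for c in zip(*normals)]
    return n  # deliberately NOT renormalized: its length encodes variance

def toksvig_gloss(avg_normal, spec_power):
    """Anti-aliased specular power from the averaged normal's length."""
    len_n = math.sqrt(sum(c * c for c in avg_normal))
    ft = len_n / (len_n + spec_power * (1.0 - len_n))
    return ft * spec_power

flat = [(0.0, 0.0, 1.0)] * 4                   # all normals agree
bumpy = [(0.6, 0.0, 0.8), (-0.6, 0.0, 0.8),
         (0.0, 0.6, 0.8), (0.0, -0.6, 0.8)]    # normals disagree
print(toksvig_gloss(average_normal(flat), 64.0))   # ~64: gloss preserved
print(toksvig_gloss(average_normal(bumpy), 64.0))  # ~3.8: much less glossy
```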

Here is an excellent article by Stephen Hill about it:
Specular-showdown


However, in the real world we often approximate this. The theory requires the normal variance to be known offline and the gloss map to be linked to it. In practice, normal maps are often in tangent space and are reused on many different meshes with different geometry. Furthermore, the variance of detail maps should be taken into account if they are used.




Friday, May 8, 2015

Aliasing

Hi,

I'm starting a series of posts about the different forms of aliasing. Geometry aliasing is a well-known subject; however, aliasing is much more than that.

Aliasing draws its evilness from one of the most fundamental notions of realtime rendering: rasterization. Let's see why.

Rasterization


When rendering an image, we do it pixel by pixel, usually shading only once per pixel and using the center of the pixel as the sample point:


The trouble is that pixels are not points at all. If they were infinitely small, that would be a perfectly fine approach. However, even in 1080p, pixels are quite big. But quite big compared to what?

* Aliasing is certainly not worse on a larger TV.
* On the other hand, aliasing gets worse if we reduce the resolution.

Let's view the scene as a collection of varying signals that we can sample at any pixel position (a toy sketch follows the list):
* Z position
* albedo
* normal
* etc
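
Here is that view in code (everything here is hypothetical): the scene as a function we could evaluate anywhere, which rasterization will only sample at discrete pixel centers.

```python
import math

# A toy illustration: the scene as a set of signals we can evaluate at any
# continuous position; the rasterizer samples them only at pixel centers.

def sample_scene(x, y):
    return {
        "z":      10.0 + 0.1 * x,                              # depth
        "albedo": (0.5 + 0.5 * math.sin(x * 12.0), 0.0, 0.0),  # red stripes
        "normal": (0.0, 0.0, 1.0),
    }

# Pixel centers live at half-integer coordinates.
print(sample_scene(0.5, 0.5))
print(sample_scene(1.5, 0.5))
```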

Now let's target one of these values (albedo, for example) and see how it is sampled:

Aliasing from texture sampling


Let's say our shading simply sets the pixel's value to the color read from the texture:




Seems great!

But what happens if the red channel changes faster?




Something strange is happening here! A low-frequency pattern appears on our pixels while the source was actually very high frequency! Bad, bad!!!

In other words: when sampling a signal, if the sampling frequency is too low (this is called undersampling), we can't reconstruct it properly.
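
Here is a minimal sketch of that effect (with made-up numbers): a stripe pattern repeating every 1.1 pixels, sampled once per pixel, comes out as a slow beat of roughly 11 pixels.

```python
import math

# A minimal sketch of undersampling: a stripe pattern that repeats every
# 1.1 pixels, sampled once per pixel at pixel centers.

def texture_red(x):
    """High-frequency red channel: one full stripe every 1.1 pixels."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * x / 1.1)

samples = [round(texture_red(px + 0.5), 2) for px in range(12)]
print(samples)
# The samples drift slowly from bright to dark over ~11 pixels: a LOW
# frequency pattern, even though the source repeats every 1.1 pixels.
# That slow beat is the alias.
```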

Here is another example (the well-known moiré pattern):

High enough sampling

Undersampled


In our case, the sampling frequency is driven by our resolution, while the signal frequency is driven by the input data: geometry, texture, texel ratio, and sampling method (from point to anisotropic sampling).

That seems like a complex problem; however, the Nyquist-Shannon sampling theorem is here to help us:

In short, it says that we can properly reconstruct a signal if the sampling frequency is at least twice the signal's maximum frequency (for proper intensity reconstruction, target more like 4x).
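
Written as a formula, with \(f_s\) the sampling frequency and \(f_{max}\) the highest frequency present in the sampled signal:

```latex
f_s \ge 2 \, f_{max}
```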


To simplify, let's agree that:
* triangles will always be far bigger than a pixel
* the texel ratio (texels per pixel) will never exceed one

Then our remaining worst-case scenario is a "screen-space oriented" triangle with a texel ratio of one.

If we go back to our red-shading case, the finest pattern the texture can hold has a period of 2 pixels (alternating texels), which is exactly the Nyquist limit: any pattern repeating every 2 pixels or less can cause problems.

In real life we will probably have cases with a texel ratio > 1, subpixel triangles, and/or complex shading.

However, all is not lost: texture filtering will help a lot, as mipmapping softens the input signal. Finally, this form of aliasing is mostly visible on repeating patterns (for example, tiled detail maps).
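
To see why mip selection softens the signal, here is a simplified sketch (the real GPU computation uses screen-space derivatives of the texture coordinates; this version just takes the texel ratio directly):

```python
import math

# Each mip level halves the texture resolution, and thus halves the
# highest frequency the texture can carry.

def mip_level(texels_per_pixel):
    """Pick the mip whose pre-filtered texels roughly match one pixel."""
    return max(0.0, math.log2(texels_per_pixel))

print(mip_level(1.0))  # 0.0: the full-resolution texture is fine
print(mip_level(4.0))  # 2.0: use the mip pre-filtered down by a factor of 4
```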

Additional food for thought (thanks Jean-Michel!): to reconstruct the input signal properly (without losing its intensity), one should sample at 3 to 4 times the input signal's frequency. Furthermore, nearest sampling can cause undersampling even if a lot of pixels are shaded, the texture already being discrete information itself.

Geometry undersampling


For the sake of sanity, we have agreed that:
* triangles will always be far bigger than a pixel
* the texel ratio will never exceed one

Unfortunately, this is not always the case.

We have seen that a texture's texel ratio and content (frequency) can create aliasing. However, this is not limited to textures: every input signal can create aliasing if it is undersampled.

Geometry is a very good candidate, especially now that triangles are becoming smaller and smaller with the new generation! That's gonna be the subject of the next post :)