Florent's Tech Blog
About lighting and stuff

Tuesday, January 26, 2021
Real-time style transfer in Unity using deep neural networks
This time at Unity, it's with the awesome folks from the Grenoble ML team that we wrote a blog post, about real-time, on-the-fly style transfer.
Check it out : https://blogs.unity3d.com/2020/11/25/real-time-style-transfer-in-unity-using-deep-neural-networks/
Monday, May 20, 2019
Unity GPU lightmapper: A technical deep dive.
The awesome folks at the Lighting team at Unity and I have been working on a blog post about the GPU lightmapper.
Check it out : https://blogs.unity3d.com/2019/05/20/gpu-lightmapper-a-technical-deep-dive/
Wednesday, May 20, 2015
Aliasing: Part 2
Geometry aliasing is well known, and it is a complex (and terrible) monster that can take different forms. Let's see them:
The staircase form
We have seen that texture filtering and mipmapping can "avoid" aliasing by blurring the input signal. Unfortunately, with geometry, we don't have such a luxury (at least not directly at the hardware level). Here's a simple example :
Let's rasterize a triangle :
Then shade it in red :
Here it is: The staircase monster !
To solve it we usually blur along the edges as a post-process. Here are two very common algorithms for it :
FXAA (not very expensive, but blurs too much)
SMAA (works better, but more expensive)
The diffuse lighting form
So we fought well and defeated the staircase form without too much damage to our scene, but aliasing is still there, lurking in the dark. It waits, then pops up as a fully lit pixel, just to hide in the dark again in the next frame !
This form of aliasing is more tricky: it needs the camera to be in motion to be clearly visible. Here is an example :
* Let's say that we want to simulate the breathing of the main character with the camera. The player stays still, and the view remains "almost" the same; however, it is translated a little bit back and forth every frame, from A to B.
* Let's now imagine that there is a bezel in the scene, defined with geometry only.
* Finally, let's apply simple diffuse lighting, assuming the light is above the bezel (local normal dot up vector in our case).
When the camera creates the A projection, the pixel on the bezel is purely red.
When the camera creates the B projection, the pixel on the bezel is almost black !
By going back and forth, the camera is thus creating temporal aliasing.
Thus, even something as simple as diffuse lighting is exposed to aliasing :(
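To see this flicker in numbers, here is a toy 1D sketch (all the scene values are invented for illustration): a small bezel tilted away from the light is sampled once per pixel centre, and a sub-pixel camera move makes the shaded value jump between frames.

```python
import math

# Toy 1D scene (numbers invented for illustration): a small bezel between
# x = 10.25 and x = 10.75 whose normal is tilted 80 degrees away from the
# light; everywhere else the surface faces straight up, toward the light.
BEZEL_NORMAL = (math.sin(math.radians(80.0)), math.cos(math.radians(80.0)))

def surface_normal(x):
    if 10.25 <= x < 10.75:
        return BEZEL_NORMAL
    return (0.0, 1.0)          # flat surface: normal = up

def diffuse(normal, light=(0.0, 1.0)):
    # Simple clamped N.L diffuse, light straight above (as in the post).
    return max(0.0, normal[0] * light[0] + normal[1] * light[1])

def shade_pixel(pixel_index, camera_offset):
    # One shading sample per pixel, at the pixel centre: this is the
    # undersampling that causes the trouble.
    x = pixel_index + 0.5 + camera_offset
    return diffuse(surface_normal(x))

frame_a = shade_pixel(10, 0.0)   # sample lands on the bezel: dark-ish
frame_b = shade_pixel(10, 0.3)   # sub-pixel move: sample misses it: bright
```

The bezel is much thinner than a pixel, so whether the single sample hits it is essentially random from frame to frame: that's the temporal flicker.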
Will vertex interpolation save the day?
One could say that I have oversimplified the problem, and actually yes :P. Most of the time vertices will not be duplicated, and vertex interpolation will happen between faces, thus making pixel shading smoother:
(blue lines = vertices and associated normals, yellow lines = interpolated normals, green = blue + yellow lines)
Yeah, vertex interpolation saves the day !
Hmm, actually not: it pretty much kills the lighting... not really a solution then :(
Let's try to add more vertices !
(blue lines = vertices and associated normals, yellow lines = interpolated normals, green = blue + yellow lines)
Nope, still not good. Temporal aliasing is still lurking :(
Actually, getting aliasing from more tessellated geometry is one of the inherent flaws of the rendering pipeline. The sampling rate is too low compared to the frequency of the information carried by the geometry, so we cannot recreate the initial signal properly.
A possible help here could be to feed the current shading with the one(s) from the previous frame(s); this is called temporal antialiasing. It can be combined with geometry antialiasing (FXAA/SMAA/...) as well.
TXAA from NVidia is such a technique
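The core of temporal accumulation is a per-pixel exponential history blend. A minimal sketch (the 0.1 blend factor is an arbitrary assumption):

```python
def temporal_blend(history, current, alpha=0.1):
    # Exponential moving average: keep most of the accumulated history,
    # take a small fraction of the new frame.
    return history * (1.0 - alpha) + current * alpha

# A pixel that flickers between 0.0 and 1.0 every frame settles near its
# temporal average (~0.5) instead of flashing.
value = 0.0
for frame in range(200):
    value = temporal_blend(value, float(frame % 2))
```

Real TAA also reprojects the history buffer with motion vectors and rejects stale samples; this sketch only shows why blending tames the frame-to-frame flicker.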
A word about Specular aliasing
Specular aliasing can be awful and thus tends to be better known than the diffuse one. Fortunately, it usually relies on a gloss map that can be pre-filtered in correlation with the normal rate of change (i.e. variance). In other words, specular can be made less glossy when the normal changes a lot in the neighborhood of a shaded pixel. This is very effective, but it's still an approximation.
Here is an excellent article by Stephen Hill about it :
Specular-showdown
However, in the real world we often approximate this. The theory requires the normal variance to be known offline and the gloss map to be linked to it. In reality, normal maps are often in tangent space and will be used on many different meshes with different geometry. Furthermore, detail map variance should be taken into account if detail maps are used.
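One concrete instance of this pre-filtering idea is Toksvig's trick (from the mipmapping-normal-maps line of work mentioned above): average the normals over the filter footprint, and use the shortened length of the average to lower the specular power. A hedged sketch; the exact gloss mapping varies per engine:

```python
import math

def toksvig_power(normals, spec_power):
    # Average the unit normals over the filter footprint; bumpy normals
    # partially cancel each other, so the average gets shorter than 1.
    avg = [sum(n[i] for n in normals) / len(normals) for i in range(3)]
    na = math.sqrt(sum(c * c for c in avg))
    # Toksvig factor: the rougher the footprint (shorter average normal),
    # the more we widen the lobe by lowering the specular power.
    ft = na / (na + spec_power * (1.0 - na))
    return ft * spec_power

flat  = toksvig_power([(0.0, 0.0, 1.0)] * 4, 64.0)          # unchanged
bumpy = toksvig_power([(0.7071, 0.0, 0.7071),
                       (-0.7071, 0.0, 0.7071)], 64.0)       # much lower
```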
Friday, May 8, 2015
Aliasing
Hi,
I'm starting a series of posts about the different forms of aliasing. Geometry aliasing is a well known subject; however, aliasing is much more than that.
Aliasing takes its evilness from one of the most fundamental notions of realtime rendering : rasterization. Let's see why.
Rasterization
When rendering an image we do it pixel by pixel, usually doing the shading only once for every pixel, and using the center of the pixel as the point to render to:
The trouble is that pixels are not points at all. Were they infinitely small, that would be a perfectly fine approach. However, even in 1080p, pixels are quite big. But quite big compared to what ?
* Aliasing is certainly not worse on a larger TV.
* On the other hand, aliasing is worse if we reduce the resolution.
Let's view the scene as a collection of varying information that we can sample at any pixel position :
* Z position
* albedo
* normal
* etc
And let's now target one of these values (albedo for example) and see how it is sampled :
Aliasing from texture sampling
Let's say our shading will simply set the pixel's value to the color from the texture :
Seems great !
But what happens if the red channel changes faster ?
Something strange is happening here ! A low-frequency pattern is appearing in our pixels while the source was actually very high frequency ! Bad, bad !!!
In other words : when sampling a signal, if the sampling frequency is too low (this is called undersampling), we can't reconstruct it properly.
Here is another example (the well known moiré pattern) :
High enough sampling |
Undersampled |
In our case the sampling frequency is driven by our resolution, and the signal frequency is driven by the input data -> geometry, texture, texel ratio and sampling method (from point to anisotropic sampling).
That seems a complex problem, however the Nyquist-Shannon sampling theorem is here to help us:
In short, it says that we can properly reconstruct a signal if the sampling frequency is at least twice the maximum frequency of the sampled signal (for proper intensity reconstruction, target more like 4).
Full details here : Wikipedia Nyquist-Shannon sampling theorem
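A quick numerical check of the theorem, as a 1D toy: a sine at 0.9 cycles per sample, well above the Nyquist limit of 0.5, produces exactly the same samples as its 0.1-cycle alias.

```python
import math

F_SIGNAL = 0.9   # cycles per sample: above the Nyquist limit of 0.5
N = 100

samples = [math.sin(2.0 * math.pi * F_SIGNAL * n) for n in range(N)]
# The alias frequency is F_SIGNAL - 1.0 = -0.1 cycles per sample.
alias = [math.sin(2.0 * math.pi * (F_SIGNAL - 1.0) * n) for n in range(N)]

# The two sample sets are numerically identical: once undersampled, the
# high-frequency signal is indistinguishable from the low-frequency one.
max_diff = max(abs(a - b) for a, b in zip(samples, alias))
```

This is exactly the "low frequency pattern appearing from a high frequency source" of the red-channel example above.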
To simplify, let's agree that :
* triangles will always be far bigger than a pixel
* the texel ratio (texel/pixel) will never exceed one
Then our remaining worst case scenario is a "screenspace oriented" triangle with a texel ratio of one.
If we go back to our red shading case, the smallest pattern we can properly reconstruct is thus 2 pixels wide. Meaning any pattern of 2 pixels or less can cause problems.
In real life we will probably have cases with a texel ratio > 1, subpixel triangles, and/or complex shading.
However, all is not lost : texture filtering will help a lot, as mipmapping softens the input signal. Finally, this form of aliasing is mostly visible on repeating patterns (for example tiled detail maps).
Additional food for thought (thanks Jean-Michel!) : to be sure to reconstruct the input signal properly (without losing its intensity), one should sample at 3 to 4 times the input signal frequency. Furthermore, using nearest sampling can have the effect of undersampling even if a lot of pixels are shaded, the texture being already discrete information itself.
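The softening effect of mipmapping can be sketched with a simple box filter (actual hardware filters differ): each mip level averages 2x2 texels, removing the frequencies the lower resolution cannot carry.

```python
def downsample(image):
    # One mip level down: average each 2x2 block (a simple box filter).
    return [[(image[y][x] + image[y][x + 1]
              + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
             for x in range(0, len(image[0]), 2)]
            for y in range(0, len(image), 2)]

# A 1-texel checkerboard (worst-case high-frequency content) averages out
# to flat grey: the pattern the lower mip cannot represent is removed
# instead of aliasing.
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]
mip1 = downsample(checker)
```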
Geometry undersampling
For the sake of sanity we have agreed that :
* triangles will always be far bigger than a pixel
* the texel ratio will never exceed one
Unfortunately, this is not always the case.
We have seen that the texture texel ratio and content (frequency) can create aliasing. However, this is not limited to textures: every input signal can create aliasing if it's undersampled.
Geometry is a very good candidate, especially now that triangles are becoming smaller and smaller with the new generation ! That's gonna be the subject of the next post :)
Wednesday, April 15, 2015
Links collection
Hi.
This post is a collection of links gathered over time for reference; it's mostly about rendering. Hopefully it can be useful to someone else :).
PS : More links will be added in the future.
Tools
Equation writer and doc (use $$ to start an equation)
GPU benchmark
CPU benchmark
Renderdoc (DX11 debugger)
GRemedy (Opengl debugger)
Indexed file search
Disk space visualizer
Text editors color schemes
Gaussian kernel calculator
Mercurial timelapse view
A programming language for Visualisation
Chart/Diagram web app
Regular expression nice tool
Blogs
A very good review of existing blogs/forums
c0de517e Blog
Diary Of A Graphics Programmer
Sebastien Lagarde blog
The ryg blog
Beyond3D blog
Casual-effects blog
Filmicworlds blog
Filmicgames blog
RememberMe rendering
RememberMe rendering 2
Simon Tech Blog
Paper collections
Siggraph advanced rendering course 2018
Siggraph advanced rendering course 2017
Siggraph advanced rendering course 2016
Siggraph advanced rendering course 2015
Kesen Siggraph/i3d/Eurographics/etc links (huge collections)
Stephen Hill Siggraph 2018 links
Stephen Hill Siggraph 2017 links
Stephen Hill Siggraph 2016 links
Stephen Hill Siggraph 2015 links
Stephen Hill Siggraph 2014 links
Stephen Hill Siggraph 2013 links
Stephen Hill Siggraph 2012 links
HPG 2018
Jare GDC 2014 links
Jare GDC 2013 links
Humus papers
Dice papers
Square-Enix papers
Michal Valient papers
AMD Papers
Crytek papers
Magnus Wrenninge (volume rendering) papers
Blur comparison by intel
Eurographics 2016
Free paper search engine
Books
Realtime rendering the redux book
GraphicGems books
GpuPro books
Realtimerendering book
GPUGems1
GPUGems2
GPUGems3
Raytracing (in a weekend, the next week, the rest of your life)
Physically Based Rendering:From Theory To Implementation
Ray Tracing gems
Immersive Math : interactive online book
Deep Learning / Machine Learning
Deep Neural Network From Scratch
DeepLearning.ai youtube channel
The 9 DL network you need to know about
Open Image Denoiser (OIDN)
General video course
Backpropagation explained
Overview of gradient descent optimization algorithm
Shallow ML for everyday programming (GDC2019)
Dimensionality reduction
How NN understand images
Color palettes for data
Google dataset search
Understanding CNN with visualizations
State-of-the-Art Survey (January 2019)
Deep learning note from Andrew NG Course
Guide to convolution arithmetic for deep learning
Synthetic data to train D: A review
Neurips 2020
Deep Learning optimisation
Discrete FFT for 3x3 convolution paper
Implementing Strassen’s Algorithm with CUTLASS on NVIDIA Volta GPUs
SGEMM optimized at assembly level on Maxwell hardware
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Shader libraries
Shadertoy
Geexlab code_samples
Glsl sandbox
Maths
Inigo quilez math tutorials
Frustum planes from the projection matrix
Math tutorials
2d Interpolations
Barycentric tetrahedrons interpolation
Color conversion
Color conversion 2
Stupid Spherical Harmonics tricks
BitHacks
Bicubic upscaler
Fast tonemapping/inverse tonemapping
Fast tonemapping/inverse tonemapping v2
Oblique projection matrices
Oblique projection matrices in Unity
Efficient depth linearization of oblique projection matrices
Good write up on SH
Point on hemisphere computed in shaders code
Building an orthonormal basis
Building an orthonormal basis with improved accuracy
A Collection of math formula for GI
Deringing spherical harmonics
SH visualization
Even faster math functions
The matrix cookbook (including derivatives)
Vector, Matrix and Tensor derivative
APIs
OpenGL specification
CgTutorial Book
Yaldex opengl reference
Matrix transformations explained
DX9 hacks
DX9 Half pixel offset explained
DX9 Half pixel offset explained 2
DX ShaderModel asm reference
DX12 video from microsoft
DX12, Vulkan, etc
Vulkan example
DX11 hacks
DX11.3 Specification
DX12 Specification
OpenCL 1.2 specs
OpenCL 1.2 ref
OpenCL 1.2 quick ref card
OpenCL 2.0 specs
OpenCL 2.0 C specs
OpenCL 2.0 ref
OpenCL 2.0 quick ref card
Intel checklist for OpenCL optimizations
OpenCL AMD optimizations guides and also
Realtime ray tracing
RTX on battlefield V (GDC2019)
RTX in metro exodus (GDC2019)
RTX Ray traced water caustics (GDC2019)
Optimization
NVidia compute shader optim
Cuda bank conflict optim
Low level gpu doc
AMD GCN performance hints
Shader optimization
Shader optimization2 (GCN/DX11)
Optimal grid rendering
Depth Upsampling
Math lib for GCN: ShaderFastLib
GPU Scalarization
Buffers/Textures
ZBuffer precision
Unit vectors encoding
Normals encoding
Get rid of tangent and binormal
BCN formats explained
Normal map blending
Tightening the Precision of Perspective Rendering
Depth-precision-visualized
Tiled/clustered lighting
Light indexed deferred rendering
Tiled shading
Clustered shading
Practical Clustered Shading
Practical Clustered 2013
Nvidia tiled demo (see download links)
Tiled based rendering
Tiled based rendering & Deferred
Improved culling for tiled and clustered rendering
Shadow
Dual paraboloid
Dual paraboloid 2
Nvidia PCSS
Shadow acne/filtering
Shadow acne 2 (normal offset)
Shadow techniques
Shadow techniques 2
AMD ShadowFX
VSM
RealtimeShadows the book
Soft shadow from spherical harmonics
A good collection of links
Adaptive depth bias
Playing with realtime shadows, Crytek
Beyond MSM
Improved MSM (soft shadows and scattering)
Caustics
A scalable real-time many-light shadowing system
Global illumination
Voxel cone tracing
Light Probe Interpolation Using Tetrahedral
GI on the cloud
FC3 GI
GI on GPU
Lightmap compression on metal
Packing lightmap
Spherical harmonic lighting
Various link on ray tracing
Correlated Multi-Jittered sampling
Overview of sampling techniques
Antialiasing
Overview of antialiasing
Siggraph course
MLAA
SMAA
MSAA
MSAA 2
Temporal antialiasing
TAA jitter pattern
Spec antialias : LEAN mapping
Spec antialias : mipmapping normal maps
Spec antialiasing methods
Text antialiasing
Checkerboard rendering explained
Dashed line antialiasing
Cubemap/Images Effects
Parallax corrected cubemap
IBL and Parallax correct cubemap
Envmap lighting
Being more wrong parallax corrected
SSLR implementation from GPU Pro 5
Removing banding (p122)
SSRR interleaved
PostFX in bounds
Intel ASSAO
Auto exposure
SSLR in kill zone 2
Physically Based Rendering
Minute Physics: Linear color space
Maths of PBR
Understanding the shading shadow function
Specular brdf reference
UE4 PBR
Bioshock PBR
Unity PBR
Moving Frostbite to PBR
Specular occlusion hack
Specular occlusion hack v2 p77 listing 26
History of the RGB color model
Horizon based specular occlusion
Separable Subsurface scattering (skin)
Fast subsurface scattering approx by Pixar
Good tutorial
GGX for half/medium
PBR Encyclopedia
Allegorithmic PBR guide vol. 1
Google filament renderer
Painterly & non realistic rendering
Gloss Perception in Painterly and Cartoon Rendering
Painterly Rendering with Curved Brush Strokes of Multiple Sizes
Customizing Painterly Rendering Styles Using Stroke Processes
Paint by relaxation
Abstract Painting with Interactive Control of Perceptual Entropy
Gpu image flow
Stable dithering in return of the Obra Dinn
Fluid Simulation
Intel fluid simulation
Intel fluid simulation 2
Fluidic code repository
Fluid simulation for dummies
Multithreading
Lockfree
Data oriented design
GPU Driven pipeline
Rainbow6 Siege
Ubi GPU Pipe
Ubi GPU Pipe 2
Tesselation/Compute
A survey of tessellation techniques
GPU Voxelisation
Summed area table
Parallel reduction
Parallel reduction 2
GPU Sorting
Compute shader pipe explained 1
Compute shader pipe explained 2
Compute shader pipe explained 3
Realtime ray tracing
NVidia intro to real raytracing
Effectively integrating rtx ray tracing for realtime
Texture Level-of-Detail Strategies for Real-Time Ray Tracing
Color spaces
Color interpolation in various spaces
GLSL Shader code to convert color from various space
Color for Graphics Programmers
Volume & scattering
Misc
Debugging optimized code in VS studio 2012 and up
Rendering of insight
Coding pirates material with Unity
Skin shading
Parallax occlusion mapping
White Noise in GLSL
High quality capture of eyes
Eye rendering
A tour of the graphics pipeline
Double VS triple buffering android
How to bake normal maps for unity
Thursday, April 9, 2015
A little history : from Phong to BRDF
In this post I will start from the well known Phong reflection model and explain why we are now (mostly) moving to physically based rendering.
Phong reflection model
This shading model was proposed by Bui Tuong Phong, who published it in his 1973 Ph.D. thesis at the University of Utah.
Where :
Ambient = AmbientColor * Albedo
Diffuse = clamp(N.L, 0, 1) * Albedo
Spec = clamp(V.R,0,1)^shininess
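The three terms above can be transcribed almost literally. A minimal sketch (vectors assumed unit length, colours reduced to scalars for brevity, names are mine):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def clamp01(x):
    return max(0.0, min(1.0, x))

def reflect(l, n):
    # Reflection of the light direction around the normal: R = 2(N.L)N - L
    d = 2.0 * dot(n, l)
    return tuple(d * nc - lc for nc, lc in zip(n, l))

def phong(n, l, v, albedo, ambient_color, shininess):
    # Direct transcription of Ambient + Diffuse + Spec from the post.
    ambient = ambient_color * albedo
    diffuse = clamp01(dot(n, l)) * albedo
    spec = clamp01(dot(v, reflect(l, n))) ** shininess
    return ambient + diffuse + spec
```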
It's a very very efficient model and yet a good approximation. There is genius behind it !
Blinn-Phong
A few years later, in 1977, came Blinn-Phong.
It's really close to the Phong model, except for the specular :
Where:
Spec = clamp(N.H,0,1)^shininess
It is more accurate at steep angles & faster. Neat !
Furthermore, it was implemented in hardware up to DX 9c.
This model was thus very efficient and heavily used at the time :)
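The only change from Phong is the half vector: N.H replaces V.R. As a sketch:

```python
import math

def blinn_phong_spec(n, l, v, shininess):
    # Half vector between the (unit) light and view directions.
    hx, hy, hz = (l[0] + v[0], l[1] + v[1], l[2] + v[2])
    m = math.sqrt(hx * hx + hy * hy + hz * hz)
    h = (hx / m, hy / m, hz / m)
    # N.H replaces Phong's V.R: no reflection vector to compute.
    n_dot_h = n[0] * h[0] + n[1] * h[1] + n[2] * h[2]
    return max(0.0, min(1.0, n_dot_h)) ** shininess
```

The half vector is cheaper than the reflection vector for directional lights (it can be computed once per light instead of per pixel), which is part of why fixed-function hardware adopted it.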
Diffuse VS Specular
So Phong and Blinn-Phong both make a clear separation between Ambient, Diffuse and Specular. Let's explain that a little :
Ambient : the light that has bounced around so much that you lose any directional information.
Diffuse : the light that comes from the light source and gets reflected in all directions.
Specular : the light that comes from the light source and gets reflected "more" in the direction of the reflection vector.
(From "perfect" specular to diffuse.)
That separation may seem a little bold ? However, it's not a bad approximation. To be better convinced, let's take a look at John Hable's work:
And
Impressive isn't it ?
Moral:
- Specular is much more present than we think
- Specular might be colored
- Blinn-Phong is a lie (ok it was more than expected)
Light is uber-complex
So obviously light is much more complex :) Here are some examples :
Scattering
When light travels inside a solid and lights it "from the inside".
Can be seen in wax, human skin, clouds in the sky, ice, etc. !
Caustics
When light is reflected/refracted by a curved surface, creating cool "drawings" of concentrated light.
Can be seen with glass constructs, ice, basically anything curved and semi-transparent.
Bounce
When light bounces around, its color is altered (some wavelengths get absorbed) based on the materials it bounces on. Here is the ultra-famous Cornell box, where the effect can be seen.
Take a look at the right face of the right box for example.
And there are a lot more ! Especially if you consider the complexity of human perception/eyes on top of the complexity of light.
So :
Do we want scattering ? Yes, sometimes (usually faked) !
Do we want caustics ? No, it's way too expensive !
Do we want bounces ? Yes (but that's another story)
Do we want more cool stuff ? Depends on the cost.
Improving over Blinn-Phong specular model
So we cannot even hope to simulate light correctly at runtime, but we still want something cool ! At this point we need to choose where we want to spend our computation power.
Caustics and scattering are a far fetch, and post effects are off subject, so for the purpose of this blog post let's say that we want a good reflection model first.
So what is a good reflection model ?
It's one that works for you !
If you want to go cartoon shading, why not ! If you want to go realistic, that's good too ! However, in a lot of cases you will probably want a good specular reflection. As we have seen, it's very important to define the shape and details of your objects.
PS : Actually, the rise of more complex BRDFs is linked to the evolution of the hardware too : GPUs are getting more and more powerful, while memory is not following at the same speed. In other words, the ratio of available math per parameter has risen.
Fresnel
The first step to improve the specular is to take Fresnel into account. What is that ? Please follow the link ! Another awesome post from John Hable !
Everything has fresnel
By now you should be convinced about the need to add some fresnel to our lighting.
First option : add some factor/power of N.V, for example :
Shading += (1 - (N.V)^FresnelHack) * Color
This has a few advantages :
- Better than nothing if tweaked correctly
- May be used to hack your BRDF for funky materials (velvet for example)
And a few drawbacks :
- The specular will need to be tweaked down to compensate
- It scraps your lighting, as grazing angles become emissive
- This is actually not Fresnel at all, even if it is sometimes called so (see below)
Second option : use the Fresnel-Schlick approximation
Fresnel = F0 + (1 - F0) * (1 - V.H)^5
where F0 is the reflectance of the material when looking at it perpendicularly.
Specular *= Fresnel
This has a few advantages :
- F0 is a well known value from the real world and can be measured !
- It will work no matter what the lighting is (no emissive stuff).
And a few drawbacks :
- You may end up adding a hacky N.V term anyway because of "velvet like" materials...
The second option is thus much more interesting; furthermore, it's energy conservative...
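The Schlick approximation is one line (sketch; F0 = 0.04 is a typical dielectric value, used here only for illustration):

```python
def fresnel_schlick(f0, v_dot_h):
    # F = F0 + (1 - F0) * (1 - V.H)^5
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

head_on = fresnel_schlick(0.04, 1.0)   # looking straight at the surface
grazing = fresnel_schlick(0.04, 0.0)   # grazing angle
# Head-on we get plain F0; at grazing angles the reflectance rises to 1
# whatever the material, which is what measured Fresnel curves show.
```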
Energy conservation ?
When light bounces on a material it can be either absorbed or reflected. Right ?
Thus, for a given point on a surface, the amount of reflected light should be less than or equal to the amount of received light. Right ?
To go further in that direction let's first introduce the "BRDF"
(BRDF function)
For “Bidirectional Reflectance Distribution Function”.
It's a function taking the incoming and outgoing vectors (each described by two angles) that allows computing the lighting of a pixel.
It's the “reflected light vs incoming light” function.
Is Blinn-Phong energy conservative ? The short answer is no:
- Diffuse and Specular are additive, and we get no guarantee that their sum is below L.
- Diffuse and Specular are not even energy conservative individually !
Can we transform Blinn-Phong to be energy conservative ? Yes !
- For Diffuse, it's actually a matter of dividing the albedo by Pi ! Easy enough !
- For Specular, what we are going to do is normalize it. In other words, no matter the specular power, it will always return the amount of light defined by the specular color ! Useful !
- The final step is to ensure albedo and specular color don't sum to more than one !
I promised not to get into the math, but if you want them you can read this very good post by Rory Driscoll : Energy conservation in games
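Putting the two fixes together, a normalized Blinn-Phong sketch (the (power + 2) / (2 * pi) factor is one common normalization; see Driscoll's post for the derivation):

```python
import math

def normalized_blinn_phong(n_dot_l, n_dot_h, albedo, spec_color, power):
    # Diffuse: albedo / pi, so a fully white surface reflects no more
    # than it receives.
    diffuse = albedo / math.pi
    # Specular: the (power + 2) / (2 * pi) factor keeps the integrated
    # lobe constant as the power (sharpness) changes.
    specular = spec_color * (power + 2.0) / (2.0 * math.pi) \
               * max(0.0, n_dot_h) ** power
    return (diffuse + specular) * max(0.0, n_dot_l)
```

Note how a sharper lobe (higher power) gets a proportionally higher peak: the energy is concentrated, not created.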
Some pictures:
(Top row: Blinn-Phong specular and diffuse. Bottom row: normalized Blinn-Phong specular and diffuse.)
(Energy conservative Specular and Diffuse, where albedo + specular color <= 1.)
Ok, but what about the Fresnel tweak we have done before ?
It's fine, as it can only reduce the amount of specular ! Phew :)
Physically Based Rendering
Let’s recap important concepts :
- Diffuse
- It's the light that goes to the object and bounces in any direction
- Specular
- It's the light that goes to the object and bounces "more" toward the reflection vector
- Fresnel
- It adjusts the specular to take grazing angles into account.
- Energy conservation
- It ensures a material does not emit more light than it receives.
- BRDF (bidirectional)
- It's the actual function that is responsible for all of that.
PBR is all about finding a consistent (i.e. math/physically based) way of simulating light !
Micro-facet theory
The basis of physically based rendering is to see the surface as a lot of little faces, called micro-facets.
These micro-facets are so numerous and so tiny that we will not define them one by one using textures. Instead, we will define them using two probability functions called D and G.
Of course, these probability functions will themselves be defined using parameters fetched from textures.
Here is the general equation : F(Fresnel) * G * D / (4 * N.L * N.V )
- D is the distribution term, or “how the micro-facet normals are distributed around a given direction”; it can be seen as the mirrorness.
Distribution term
- G is the geometry/visibility term (it can be named both), or “how much a micro-facet is blocked by other micro-facets”; it can be seen as the roughness.
There are many BRDFs built around the micro-facet theory ! For example :
GGX, Ashikhmin-Shirley, Beckmann, Cook-Torrance, Blinn-Phong...
PBR Blinn-Phong
What ? Blinn-Phong ? Actually yes ! If we take the building blocks we have seen above, our Blinn-Phong is really not far from being physically based, and it can be made to match the micro-facet theory !
Let's say that :
D = (specularPower+2)/(2*Pi) * (N.H)^specularPower
G = N.L*N.V
F = fresnel term as seen before
Then you have a PBR Blinn-Phong ! There are different ways to write it; here is one :
Diffuse = clamp(N.L, 0, 1) * Albedo / Pi
Specular = (specularPower+2)/(2*Pi) * (N.H)^specularPower * F * specularColor
With (Albedo + SpecularColor <= 1)
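Putting it all together, here is a hedged Python sketch of that PBR Blinn-Phong (a scalar version: the dot products are assumed to be precomputed, and the f0 default is just a common dielectric value):

```python
import math

def saturate(x):
    return max(0.0, min(1.0, x))

def fresnel_schlick(f0, v_dot_h):
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def pbr_blinn_phong(n_dot_l, n_dot_h, v_dot_h,
                    albedo, specular_color, specular_power, f0=0.04):
    # Diffuse = saturate(N.L) * Albedo / Pi
    diffuse = saturate(n_dot_l) * albedo / math.pi
    # Specular = (n+2)/(2*Pi) * (N.H)^n * F * specularColor
    d = (specular_power + 2.0) / (2.0 * math.pi) * saturate(n_dot_h) ** specular_power
    specular = d * fresnel_schlick(f0, v_dot_h) * specular_color
    return diffuse + specular

# Light, view and normal aligned, with albedo + specular color <= 1:
print(pbr_blinn_phong(1.0, 1.0, 1.0, 0.5, 0.4, 32.0))
```

Porting this to a shader is mostly a matter of replacing the scalars by float3 colors and computing the dot products from the actual vectors.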
Finally, here is a little WebGL shader to play with, feel free to experiment !
PS : You will need a WebGL-enabled browser (Chrome, for example).
PS2 : When you can, the latest drivers are a must.
PS3 : Be sure to take a look at all the awesome shaders on Shadertoy !!! :)
Tuesday, March 31, 2015
Intro
Hi,
I recently had the pleasure to give a talk about physically based rendering to a wonderful group of video game students. (Hello DDJV 10th cohort !) It made me realize something :
* Rendering knowledge evolves all the time.
* It usually evolves on top of already complex stuff.
* Some of it gets deprecated over time.
* But more is added than deprecated over time.
Finally, the first steps into realtime rendering can be harsh. There is a lot of knowledge to absorb upfront !
For sure, there is a lot of excellent documentation on the internet; however, it is mostly written for industry professionals or for people with a good background in rendering.
I'm thus creating this blog and targeting it at students/newcomers to the wonderful world of realtime rendering. I will post about current techniques while trying to simplify the approach as much as possible.
See you soon for the first post !