PBR math
Empirical illumination models like the Phong reflection model have been used in real-time graphics for a long time due to their simplicity, convincing look and affordable performance. Before the programmable pipeline was introduced, graphics cards implemented Gouraud shading as part of fixed-function Transformation & Lighting (T&L) hardware blocks. Nowadays, however, the numerous trade-offs of this simplicity (such as lighting partially baked into object material properties) have pushed developers towards Physically-Based Rendering (PBR) illumination models.
PBR models try to fit surface shading formulas into the constraints of the physical laws of light propagation, absorption and reflection - hence the name "physically-based". There are two main categories of PBR illumination:
- Non-real-time renderer (cinematic).
- Real-time renderer.
The main objective of a cinematic renderer is uncompromised quality, so it relies on a ray-tracing (path-tracing) rendering pipeline. Although the performance of current graphics hardware does not allow using a computationally intensive path-tracing renderer in real-time graphics, it can be used in an interactive fashion.
"Physically-based" does not necessarily mean physically-correct/precise. The main objective of real-time PBR renderer is to be fast enough even on low-end graphics hardware. So that in contrast, it hardly relies on rasterization rendering pipeline, various approximations and tricks making it applicable in real-time, while looking good enough and preserving some physical properties.
OCCT 3D Viewer provides both kinds of PBR renderers, and although they have some details in common, this article is devoted to the real-time PBR metallic-roughness illumination model. It describes the math underneath PBR shading in OCCT 3D Viewer and its GLSL programs. However, it does not cover the related high-level APIs nor PBR material creation pipelines, as this is another topic.
| Symbol | Meaning |
| ------ | ------- |
| $n$ | normal (on surface) |
| $v$ | view direction |
| $l$ | light |
| $h$ | half vector |
| $m$ | metallic factor |
| $r$ | roughness factor |
| $ior$ | index of refraction |
| $c$ | albedo color |
The main goal of an illumination model is to calculate the outgoing light radiance.
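In the standard formulation this radiance is expressed as an integral over the hemisphere $H$ above the surface point, of the incoming radiance $L_i$ weighted by the BRDF $f(v,l)$ and by the cosine of the incidence angle:

$$L_o(v) = \int\limits_H f(v,l)\, L_i(l)\, \cos\theta_l \,\mathrm{d}l$$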
where $\theta_l$ is the angle between the light direction $l$ and the surface normal. A physically plausible BRDF $f(v,l)$ has the following properties:
- Positivity: $f(v,l) \geq 0$
- Helmholtz reciprocity: $f(v,l) = f(l, v)$ (follows from the 2nd law of thermodynamics)
- Energy conservation: $\displaystyle \forall v,\; \int\limits_H f(v,l) \cos\theta_l \,\mathrm{d}l \leq 1$ (in order not to reflect more light than came)
It is worth mentioning that
Where
PBR theory is based on the Cook-Torrance specular BRDF [Cook81]. It models a surface as a set of perfectly reflecting micro-facets distributed over an area in different orientations, which is a fairly good approximation of real-world materials. If this area is small enough that separate micro-surfaces cannot be distinguished, the result becomes a kind of averaging or mixing of every micro-facet's illumination contribution. At that level it is possible to work with the micro-facets in a statistical manner, manipulating only probability distributions of micro-surface parameters such as normals, height, pattern, orientation etc. In computer graphics pixels are the units of an image, and a pixel usually covers a relatively large area of a surface, so the micro-facets can be considered indistinguishable. Going back to the BRDF, the Cook-Torrance approach has the following expression:
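In its standard form (see Cook81 and Karis13) the specular micro-facet BRDF is a product of three terms over a normalization factor:

$$f_{specular}(v,l) = \frac{D(h)\,G(v,l)\,F(v,h)}{4\,\cos\theta_v\,\cos\theta_l}$$

where $D$ is the normal distribution of the micro-facets, $G$ is the geometric attenuation (shadowing-masking) factor and $F$ is the Fresnel factor.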
The three parts presented in the numerator each have their own meaning, but can have different implementations with various levels of complexity and physical accuracy.
In this paper only one particular implementation is used. The
Where
The second
Where
The last component
Here
For pure dielectrics with
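To make these components concrete, here is an illustrative GLSL sketch of one widespread choice of the three terms: the GGX normal distribution, the Smith geometric attenuation in its Schlick-GGX form, and the Schlick approximation of the Fresnel factor (function names and the exact variants of $G$ and its $k$ parameter are assumptions; the OCCT shaders may use slightly different forms):

```glsl
const float PI = 3.141592653589793;

// GGX (Trowbridge-Reitz) normal distribution, alpha = roughness^2.
float distributionGGX (float theCosH, float theAlpha)
{
  float aDenom = theCosH * theCosH * (theAlpha * theAlpha - 1.0) + 1.0;
  return theAlpha * theAlpha / (PI * aDenom * aDenom);
}

// Smith geometric attenuation (Schlick-GGX form), applied for view and light directions.
float geometrySchlickGGX (float theCos, float theK)
{
  return theCos / (theCos * (1.0 - theK) + theK);
}

float geometrySmith (float theCosV, float theCosL, float theRoughness)
{
  float aK = theRoughness * theRoughness * 0.5;
  return geometrySchlickGGX (theCosV, aK) * geometrySchlickGGX (theCosL, aK);
}

// Schlick approximation of the Fresnel factor.
vec3 fresnelSchlick (float theCosVH, vec3 theF0)
{
  return theF0 + (1.0 - theF0) * pow (1.0 - theCosVH, 5.0);
}

// Cook-Torrance specular BRDF assembled from the three components.
vec3 cookTorrance (float theCosV, float theCosL, float theCosH, float theCosVH,
                   float theRoughness, vec3 theF0)
{
  float aD = distributionGGX (theCosH, theRoughness * theRoughness);
  float aG = geometrySmith   (theCosV, theCosL, theRoughness);
  vec3  aF = fresnelSchlick  (theCosVH, theF0);
  return aD * aG * aF / max (4.0 * theCosV * theCosL, 1.0e-4);
}
```

Here $F_0$ stands for the reflectance at normal incidence, which for metals equals the albedo color and for dielectrics is derived from the index of refraction.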
The BRDF described above has one important property that makes computations easier: isotropy. Isotropy in this case means independence from rotation about the normal, resulting from the assumption of a uniform micro-facet distribution in every direction along the surface. It allows simplifying random sample generation during Monte-Carlo integration and reducing the dimensionality of some lookup tables, which will be discussed in the following chapters. Of course, isotropic materials form only a subset of all real-world materials, but this subset covers the majority of cases. There are special models considering anisotropic traits of surfaces (such as brushed metal) with a dependency on rotation about the normal; these models require special calculation tricks and additional parameters and are out of scope of this paper.
The only thing left to do is to define
This part of the incoming light is assumed to be refracted into the depth of the surface, where a variety of events may happen. A sequence of absorptions, reflections and re-emissions more or less leads to subsurface scattering of light. Some part of this scattered light can go back outside, but in a modified form and at fairly unpredictable directions and positions. For opaque materials this part is noticeable and forms their own color. If the subsurface light paths are short enough and the exit points are distributed locally around the entry point, it is possible to work in a statistical way similar to the micro-facets. This assumption covers a large amount of real-world opaque materials. Other materials like skin, milk etc., where the effect of subsurface scattering is noticeable and usually appears as partial translucency and a kind of self-emission, have more widely distributed exit points and require more accurate and complicated ways of modeling, possibly borrowing theory and techniques from volumetric rendering. A simple but visually sufficient assumption for the statistically driven type of materials is just the same radiance in every direction. It results in the Lambertian BRDF:
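In its normalized form the Lambertian BRDF is simply the albedo divided by $\pi$:

$$f_{Lambertian}(v,l) = \frac{c}{\pi}$$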
Where
So all parts described above can be combined into a unified diffuse BRDF:
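One common way to assemble this combination in the metallic-roughness model (an assumed form following e.g. Guy18; the exact OCCT expression may differ slightly) is:

$$f_{diffuse}(v,l) = (1 - F(v,h))\,(1 - m)\,\frac{c}{\pi}$$

where the $(1-m)$ factor removes diffuse reflection for metals and the $(1-F)$ factor accounts for the energy already reflected specularly.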
In this chapter one possible implementation of an illumination model reflecting the main PBR principles has been defined. The next step is using it in practice.
It is time to apply the deduced illumination model in practice. The first step is the separation of direction-based light sources from the illumination integral. The directional nature of such light sources means that their influence on a surface point can be calculated using only one direction and an intensity. Usually sources of this type do not have a physical size and are represented only by a position in space (for point or spot lights) or by a direction itself (directional lights, imagined as point sources located infinitely far away, like the sun). This is just an abstraction, since real-world light emitters have noticeable sizes. Sources with a realistic form and size cannot be represented in discrete terms and require continuous integral calculations or special approximations in order to be accurately injected into the model. In most cases direction-based light sources in the form of emitting points in space or just fixed directions are good approximations and are enough to start with. Having a finite discrete number of them in the scene and considering only a single direction for each of these lights, the integral is transformed into a sum:
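With $N$ such sources the direct part of the outgoing radiance becomes a simple sum (a standard form; the symbol used here for the per-light intensity is illustrative):

$$L_o^{direct}(v) \approx \sum_{j=1}^{N} f(v, l_j)\, L_j\, \cos\theta_{l_j}$$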
Where
It includes the influence of light reflected or scattered from other points as well as the environment's contribution. It is impossible to achieve photorealistic results without this component, but it is also very difficult to compute. While inter-point light interaction cannot be calculated in a simple way (especially in real-time rendering), the environment illumination can be realized via precomputation before visualization. But right now let's summarize the practical application of the illumination model. At this moment the output radiance is represented as:
Where
The next goal after
Substituting the BRDF by its expression allows splitting the indirect illumination into diffuse and specular components:
This splitting does not seem to simplify the calculation, but these two parts will be computed in slightly different ways later. Let's write them down separately:
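Written out, the indirect illumination splits into two hemisphere integrals (restating the split in standard notation):

$$L^{indirect}(v) = \int\limits_H f_{diffuse}(v,l)\,L_i(l)\cos\theta_l\,\mathrm{d}l + \int\limits_H f_{specular}(v,l)\,L_i(l)\cos\theta_l\,\mathrm{d}l$$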
Further transformations of these expressions require an understanding of the numerical way to compute a hemisphere integral and also of its performance optimization techniques. That is the topic of the next chapter.
Monte-Carlo integration is one of the numerical methods for computing integrals.
It is based on the idea of mathematical expectation calculation.
In one dimensional case if
This expectation can also be approximated statistically using a certain sequence of samples of the random variable
It can be used in general definite integral calculations.
Just valid
Where
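In its standard form, for a probability density $p(x)$ that is non-zero wherever the integrand is non-zero, the Monte-Carlo estimator reads:

$$\int f(x)\,\mathrm{d}x = \int \frac{f(x)}{p(x)}\,p(x)\,\mathrm{d}x \approx \frac{1}{N}\sum_{i=1}^{N}\frac{f(x_i)}{p(x_i)},\qquad x_i \sim p(x)$$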
The main questions are choosing
Where
So that
Where
Where:
So the joint probability density in the new variables looks like:
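For example, if directions $\omega$ on the hemisphere are distributed with density $p(\omega)$ per unit solid angle, the spherical parametrization $(\theta, \phi)$ has the Jacobian $\sin\theta$, so the change-of-variables rule gives:

$$p(\theta,\phi) = p(\omega)\,\sin\theta$$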
This variable transfer rule for the Probability Density Function (PDF) will be useful in the following chapters, when integral calculation optimization techniques are discussed.
Having
The final step is sequence generation itself.
In order to be able to generate values with arbitrary distributions, it is helpful to start from uniformly distributed numbers in the range of
Let's find the CDF for
The CDF maps
If we substitute values uniformly distributed in the range
That is the key to this random value generation technique. All steps described above can also be done for the hemisphere:
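As a short worked example (an illustration under the assumption of directions distributed uniformly over the hemisphere with respect to solid angle), the marginal density of the polar angle, its CDF and the inverted mapping from a uniform value $\xi \in [0, 1)$ are:

$$p(\theta) = \sin\theta,\qquad F(\theta) = \int\limits_0^{\theta}\sin\theta'\,\mathrm{d}\theta' = 1 - \cos\theta,\qquad \theta = F^{-1}(\xi) = \arccos(1 - \xi),$$

while the azimuth is simply $\phi = 2\pi\xi'$ for a second uniform value $\xi'$.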
Monte-Carlo integration cannot guarantee an exact estimation of convergence speed when using randomly generated samples.
There is only a probabilistic estimation of it.
But this algorithm is quite universal and relatively simple to apply to almost any function using at least uniformly distributed points.
Moreover special
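One widely used low-discrepancy sequence in GPU implementations of such integration (an illustrative choice; the exact sequence used by the OCCT shaders is not restated here) is the Hammersley set built from the Van der Corput radical inverse:

```glsl
// Van der Corput radical inverse in base 2 via bit reversal.
float radicalInverseVdC (uint theBits)
{
  theBits = (theBits << 16u) | (theBits >> 16u);
  theBits = ((theBits & 0x55555555u) << 1u) | ((theBits & 0xAAAAAAAAu) >> 1u);
  theBits = ((theBits & 0x33333333u) << 2u) | ((theBits & 0xCCCCCCCCu) >> 2u);
  theBits = ((theBits & 0x0F0F0F0Fu) << 4u) | ((theBits & 0xF0F0F0F0u) >> 4u);
  theBits = ((theBits & 0x00FF00FFu) << 8u) | ((theBits & 0xFF00FF00u) >> 8u);
  return float(theBits) * 2.3283064365386963e-10; // 1.0 / 2^32
}

// i-th point of an N-point 2D Hammersley set in [0, 1)^2.
vec2 hammersley (uint theIndex, uint theCount)
{
  return vec2 (float(theIndex) / float(theCount), radicalInverseVdC (theIndex));
}
```

Each returned pair can then be mapped to a direction on the hemisphere with the inverse transform formulas discussed above.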
Let's go back to the image-based lighting and to the specular component. As was defined before, it is a hemisphere integral with the following expression:
The Monte-Carlo integration algorithm can be directly applied:
Where the second pair of brackets represents an approximation of an integral, so the expression can be rewritten as:
This integral is exact
The remaining part can also be saved to an image. Let's unroll its expression:
This integral is not actually a scalar.
That is RGB value due to only
This form may not look easier, but it has several advantages.
The first one is independence from globally defined
Current result for
The current goal is to speed up the Monte-Carlo integration of Cook-Torrance-like integrals with the following expression:
Where
Frankly speaking
First of all this factor must be positive:
But the total area of the micro-facet landscape is at least equal to the original surface's area, and is even bigger in general:
This trait does not allow using
This means that the total area of the micro-facets projected onto any direction must be the same as the projected area of the original macro surface.
It is pretty tricky trait in
So that
$$F(\theta_h) = \int\limits_0^{\theta_h} \frac{2 \alpha^2 \cos\theta'_h \sin\theta'_h}{(\cos^2\theta'_h(\alpha^2-1) + 1)^2}\,\mathrm{d}\theta'_h = \int\limits_{\theta_h}^{0} \frac{\alpha^2}{(\cos^2\theta'_h(\alpha^2-1) + 1)^2}\,\mathrm{d}(\cos^2\theta'_h) = \frac{\alpha^2}{\alpha^2-1}\int\limits_0^{\theta_h} \mathrm{d}\frac{1}{\cos^2\theta'_h(\alpha^2-1)+1} = \frac{1-\cos^2\theta_h}{\cos^2\theta_h(\alpha^2-1)+1}$$
In order to apply inverse transform sampling, the CDF found above has to be inverted.
So there are no more obstacles to generating the sample directions.
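Inverting the CDF against a uniform value $\xi$ gives $\cos\theta_h = \sqrt{\frac{1-\xi}{\xi(\alpha^2-1)+1}}$, with the azimuth chosen uniformly as $\phi_h = 2\pi\xi'$. An illustrative GLSL sketch of this sampling (the function and parameter names are assumptions, not the exact OCCT code) looks as follows:

```glsl
const float PI = 3.141592653589793;

// Generates a half vector following the GGX normal distribution
// for a given 2D low-discrepancy point and alpha = roughness^2.
vec3 importanceSampleGGX (vec2 theXi, vec3 theNormal, float theAlpha)
{
  // inverse transform sampling of the GGX CDF derived above
  float aCosTheta = sqrt ((1.0 - theXi.y) / (1.0 + (theAlpha * theAlpha - 1.0) * theXi.y));
  float aSinTheta = sqrt (1.0 - aCosTheta * aCosTheta);
  float aPhi = 2.0 * PI * theXi.x;

  // half vector in tangent space
  vec3 aHalf = vec3 (aSinTheta * cos (aPhi), aSinTheta * sin (aPhi), aCosTheta);

  // build an orthonormal basis around the normal and transform into its frame
  vec3 anUp       = abs (theNormal.z) < 0.999 ? vec3 (0.0, 0.0, 1.0) : vec3 (1.0, 0.0, 0.0);
  vec3 aTangent   = normalize (cross (anUp, theNormal));
  vec3 aBitangent = cross (theNormal, aTangent);
  return normalize (aTangent * aHalf.x + aBitangent * aHalf.y + theNormal * aHalf.z);
}
```

The returned vector is a half vector; the corresponding light direction is obtained by reflecting the view direction about it, $l = 2(v \cdot h)h - v$.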
That is the practical side of light direction generation.
But the theoretical one needs to be resolved in order to calculate the sum.
It is time to find
Where
And finally
Let's go back to the Monte-Carlo sum and insert the found result into it:
Here
Summarizing the importance sampling strategy described above: the convergence of Monte-Carlo integration can be improved by using a special PDF correlated with the integrated function.
In case of BRDF with normal distribution functions
The situation with BRDF part of
As was mentioned, this sum must be calculated for every normal direction using the same sample generation principles as in the numeric integration computation.
These principles require two scalar parameters
It is not a complete solution, but practice shows that it is enough to get plausible illumination, at the cost of losing the lengthy reflections at grazing angles which would appear if everything were honestly computed.
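At shading time such a two-dimensional table is typically applied in the split-sum manner of Karis13 (a sketch; the texture layout and names here are assumptions rather than the exact OCCT code): the table provides a scale and a bias for the reflectance at normal incidence, which are combined with the prefiltered environment color.

```glsl
// Split-sum specular image-based lighting.
// thePrefiltered - color fetched from the prefiltered specular environment map;
// theBrdfLUT     - 2D look-up table parameterized by cos(theta_v) and roughness;
// theF0          - reflectance at normal incidence.
vec3 specularIBL (vec3 thePrefiltered, sampler2D theBrdfLUT,
                  vec3 theF0, float theCosV, float theRoughness)
{
  vec2 aScaleBias = texture (theBrdfLUT, vec2 (theCosV, theRoughness)).rg;
  return thePrefiltered * (theF0 * aScaleBias.x + aScaleBias.y);
}
```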
The problem is that for
Anyway, just two dimensions remain in the radiance look-up table.
The rest one with
The first one is using textures of smaller resolution for larger roughness values.
The point is that smaller
After reducing the pixel count, it is the turn of the sample count.
And again, a correlation with roughness can be noticed.
For example, the map for completely mirror-like materials with
It is an approximate expression representing only an estimated general proportionality, so the cases of
In addition to the optimizations mentioned before, another one can be applied in order to help reduce the number of samples, like the previous one does.
Using fewer samples produces image noise due to the discrete nature of the Monte-Carlo approximation.
But it can be slightly smoothed using some prefiltering.
The idea is that for directions with a small PDF, or in other words for rare directions, samples nearby are unlikely to be generated.
So such a direction represents the averaged illumination from a relatively big area of the hemisphere, but approximates it by just a constant.
It would be better to get from such a direction an environment value already averaged over a bigger area.
This can be achieved using mip levels of the original
Circumstance of
Of course, all divisions by zero must be avoided, by clamping for example. After that the area covered by one pixel of the environment map is calculated. In the case of a cube map it looks like:
Where
The mathematics connected with mip level sizes lies behind it, but this is out of scope of this paper.
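For reference, in the filtered importance sampling approach (Colbert07) these quantities are commonly written as follows (the exact constants and clamping used in OCCT may differ):

$$\Omega_p \approx \frac{4\pi}{6\,w^2},\qquad \Omega_s = \frac{1}{N\,p(l_k)},\qquad lod = \frac{1}{2}\log_2\frac{\Omega_s}{\Omega_p}$$

where $w$ is the edge size of a cube map face in pixels, $N$ is the number of samples and $p(l_k)$ is the PDF value of the generated direction.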
In combination with the previous optimization technique this approach allows
These are not all possible optimization tricks, but at least these three significantly reduce the computational effort and bring the radiance map calculation within reach of weak devices, or even of dynamic environments updated in real time, within reasonable limits.
In that way
Let's go back to the diffuse indirect illumination component represented by the following formula:
Of course, the Monte-Carlo algorithm can be applied directly and the hemisphere integral can be precalculated for every normal direction
but the dependence on
It differs from the original one and loses a little bit of accuracy, but now there is no light direction inside
so it can be considered as a kind of screen-space defined Fresnel factor (
The resulting expression without
Where
A function basis with such properties is known and is described by the following formulas:
Here
It is important not to miss the fact that all calculations right now happen over a sphere and not over a hemisphere. That was an example of spherical function decomposition, but not yet a solution of the original task, which looks like:
First of all, let's transform this integral to be defined not over the hemisphere but over the whole sphere:
Where
The resulting expression can be considered as a convolution in terms of spherical functions, where
Where
Starting from about the third
All
Finally, the expression of the irradiance map approximation can be defined:
Where
Summarizing all the mathematics above, the spherical harmonics decomposition boils the irradiance map down to only 9 values, which need to be precalculated as integrals. As practice shows, this is a very good approximation of the diffuse indirect illumination component.
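As an illustration of how compact this representation is, the irradiance for a given normal can be reconstructed in a shader from the 9 RGB coefficients using the closed-form polynomial from Ramamoorthi01 (the uniform and function names below are assumptions, not the exact OCCT code):

```glsl
// 9 spherical harmonics coefficients of the environment (bands 0..2), one RGB triple each.
uniform vec3 uSHCoeffs[9];

// Reconstructs irradiance for a unit normal with the closed-form
// polynomial from Ramamoorthi01.
vec3 shIrradiance (vec3 theNormal)
{
  const float c1 = 0.429043, c2 = 0.511664, c3 = 0.743125, c4 = 0.886227, c5 = 0.247708;
  float x = theNormal.x, y = theNormal.y, z = theNormal.z;
  return c1 * uSHCoeffs[8] * (x * x - y * y)
       + c3 * uSHCoeffs[6] * z * z
       + c4 * uSHCoeffs[0]
       - c5 * uSHCoeffs[6]
       + 2.0 * c1 * (uSHCoeffs[4] * x * y + uSHCoeffs[7] * x * z + uSHCoeffs[5] * y * z)
       + 2.0 * c2 * (uSHCoeffs[3] * x + uSHCoeffs[1] * y + uSHCoeffs[2] * z);
}
```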
TODO
TODO
Duvenhage13 | Bernardt Duvenhage, Kadi Bouatouch, D.G. Kourie, "Numerical Verification of Bidirectional Reflectance Distribution Functions for Physical Plausibility", Proceedings of the South African Institute for Computer Scientists and Information Technologists Conference, October 2013. |
Cook81 | Robert Cook, Kenneth Torrance, "A Reflectance Model for Computer Graphics", SIGGRAPH '81: Proceedings of the 8th annual conference on Computer graphics and interactive techniques, August 1981, pp. 307-316. |
Karis13 | Brian Karis, "Real Shading in Unreal Engine 4", SIGGRAPH 2013 Presentation Notes. |
Heitz14 | Eric Heitz, "Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs", Journal of Computer Graphics Techniques, Vol. 3, No. 2, 2014. |
Schlick94 | Christophe Schlick, "An inexpensive BRDF model for physically-based rendering", Computer Graphics Forum 13, 1994, pp. 233-246. |
Lazanyi05 | Istvan Lazanyi, Laszlo Szirmay-Kalos, "Fresnel term approximations for Metals", January 2005. |
Lagarde13 | Sebastien Lagarde, "Memo on Fresnel equations", Blog post: https://seblagarde.wordpress.com/2013/04/29/memo-on-fresnel-equations/. |
Walter07 | Bruce Walter, Stephen Marschner, Hongsong Li, Kenneth Torrance, "Microfacet Models for Refraction through Rough Surfaces", Proceedings of Eurographics Symposium on Rendering, 2007. |
Cao15 | Jiayin Cao, "Sampling microfacet BRDF", November 1, 2015, Blog post: https://agraphicsguy.wordpress.com/2015/11/01/sampling-microfacet-brdf/. |
Schutte18 | Joe Schutte, "Sampling techniques for GGX with Smith Masking-Shadowing: Part 1", March 7, 2018, Blog post: https://schuttejoe.github.io/post/ggximportancesamplingpart1/. |
Colbert07 | Mark Colbert, Jaroslav Krivanek, "GPU-Based Importance Sampling", NVIDIA GPU Gems 3, Chapter 20, 2007. |
Guy18 | Romain Guy, Mathias Agopian, "Physically Based Rendering in Filament", Part of Google's Filament project documentation: https://google.github.io/filament/. |
Aguilar17 | Orlando Aguilar, "Spherical Harmonics", Blog post: http://orlandoaguilar.github.io/sh/spherical/harmonics/irradiance/map/2017/02/12/SphericalHarmonics.html. |
Ramamoorthi01 | Ravi Ramamoorthi, Pat Hanrahan, "An Efficient Representation for Irradiance Environment Maps", SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, August 2001, pp. 497-500. |
© 2024 Open CASCADE Technology. All rights reserved.