Shading is at the centre of many 3D rendering components. After surface intersections have been found along viewing rays, a colour needs to be computed based on the [geometry's material](/12/rendering/geometry/) at the intersection and the [lights](/15/rendering/lights/) in the scene. Shading is the process that links materials and lights, solving or approximating the [rendering equation](https://en.wikipedia.org/wiki/Rendering_equation). While this is where the name comes from, it is often generalized to mean computing colour in a rendered image, which is not always a physically based operation.
The rendering equation describes how light interacts at a point on a surface, being absorbed, reflected, refracted and sometimes emitted. It includes a bidirectional reflectance distribution function (BRDF), which is the primary part of most materials. For a given incoming light colour and direction, the BRDF gives the amount of light reflected in every other direction. Typically this function is sampled once, i.e. with incoming light from a light source and an outgoing direction towards the camera. The function itself can take a number of forms. It may be a physically based model that approximates certain lighting effects, which is often very efficient, for example the Lambertian diffuse shading below. It may instead be measured from a real material, which involves taking many photos from different directions under controlled lighting conditions; the memory requirements for this are quite large, and it is used more for offline rendering. The two approaches can also be combined, measuring the BRDF by rendering a detailed model of a surface instead of taking photos.
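As a minimal sketch of the analytic case, a Lambertian BRDF is simply a constant, the surface albedo divided by π, regardless of the two directions it is evaluated for (names here are illustrative, not from any particular API):

```python
import math

def lambertian_brdf(albedo, incoming_dir, outgoing_dir):
    """Evaluate a Lambertian BRDF for one incoming/outgoing direction pair.

    The directions are unused precisely because a perfectly diffuse surface
    scatters incoming light equally in all directions; more complex BRDFs
    would vary with both arguments.
    """
    return [c / math.pi for c in albedo]  # divide by pi to conserve energy
```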
In real-time rendering with dedicated graphics processors, *shaders* are small programs executed on the GPU to compute shading. A colour may be computed for the vertices of a mesh and interpolated during [rasterization](/16/rendering/rasterization/), or alternatively computed after rasterization at the pixel level, hence the names *vertex* and *fragment* shaders. These shader programs are in turn executed on shader cores, the parallel processors of the GPU. In offline (non-real-time) or software rendering, the components which compute shading are also called shaders.
While shading in principle covers all possible light and material interactions, a monolithic physically based shader that can compute results for every combination is normally too expensive to run and too cumbersome to maintain. Shaders are instead small dedicated programs which handle just a few cases. When other cases occur, with different materials or lights, different shaders are used to compute the results. The colours, or light intensities, computed by many shaders can be summed to produce a more complete image.
# Diffuse Shading
[Diffuse](https://en.wikipedia.org/wiki/Diffuse_reflection) shading, which calculates the amount of directly scattered light, is the most common lighting calculation a shader will do. The image below shows how light becomes spread over a larger area as a surface tilts away from the light. A widely used model is [Lambertian reflectance](https://en.wikipedia.org/wiki/Lambertian_reflectance), which scales the light intensity by the cosine of the angle between the surface normal and the light direction to give the amount of scattered light.
![Light from a source spreading over a larger area as the surface tilts away from it][1]
$$ \mathsf{colour} = \max(0, \hat{\mathbf{N}} \cdot \hat{\mathbf{L}}) \times \mathsf{light\,colour} \times \mathsf{material\,colour} $$
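Since the dot product of the two unit vectors is itself the cosine of the angle between them, the calculation is direct; a minimal sketch (helper and parameter names are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_shade(normal, light_dir, light_colour, material_colour):
    """Diffuse shading: scale the light by the cosine of the angle between
    the unit surface normal and the unit direction towards the light."""
    n_dot_l = max(0.0, dot(normal, light_dir))  # clamp: faces turned away get no light
    return [n_dot_l * lc * mc for lc, mc in zip(light_colour, material_colour)]

# Example: a surface tilted 60 degrees from the light receives half intensity.
print(lambert_shade((0.0, 1.0, 0.0), (0.0, 0.5, 0.8660254),
                    (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))
```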
# Material Effects
Diffuse shading models just one property of light interacting with geometry. Below is a list of other common graphics effects performed by shaders. These general effects are combined to create complex materials and approximate multiple surface layers. For example, combined diffuse and reflection effects can model a glazed surface, even though it is really a thin glass layer over ceramic.
## Specular Highlights
Modelling specular highlights is among the first effects commonly discussed after diffuse shading. Shiny objects are somewhat reflective, and the most noticeable reflections are those of the light sources themselves, which is what specular highlights model. Computing them is much cheaper than a full reflection, as only information about the light source is needed rather than potential secondary light from the entire scene. Many objects have slightly rough surfaces, giving a partly diffused look with larger, softer highlights, which are often adjusted via a "roughness" or "shininess" material attribute.
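The Blinn-Phong model is one common way to compute such a highlight; a minimal sketch, where the `shininess` exponent plays the role of the material attribute mentioned above:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

def blinn_phong_specular(normal, light_dir, view_dir, shininess):
    """Blinn-Phong highlight: brightest where the half-vector between the
    light and view directions lines up with the surface normal. A higher
    shininess exponent gives a smaller, sharper highlight."""
    half = normalize([l + v for l, v in zip(light_dir, view_dir)])
    return max(0.0, dot(normal, half)) ** shininess

# Example: light and view both 45 degrees off the normal, mirror-aligned,
# so the half-vector equals the normal and the highlight is at full strength.
print(blinn_phong_specular([0.0, 1.0, 0.0],
                           normalize([1.0, 1.0, 0.0]),
                           normalize([-1.0, 1.0, 0.0]), 32))
```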
## Reflection
Reflections are more complex than specular highlights as light comes from many parts of the scene, rather than just the light sources, i.e. diffused light from other objects is reflected as well as the direct light. A raytracer may simply continue tracing a reflected ray and use the colour of the surface that ray hits. A rasterizer only computes the first intersection, so reflections are often implemented by sampling an image pre-rendered from the point of view of the object.
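The reflected ray's direction follows from mirroring the incident direction about the surface normal; a minimal sketch, assuming unit vectors:

```python
def reflect(incident, normal):
    """Mirror an incident direction about a unit surface normal:
    r = d - 2(d.n)n. A raytracer continues tracing from the hit point
    along this new direction."""
    d_dot_n = sum(d * n for d, n in zip(incident, normal))
    return [d - 2.0 * d_dot_n * n for d, n in zip(incident, normal)]

# A ray heading diagonally down into a floor with normal (0, 1, 0)
# bounces back upward at the same angle.
print(reflect([0.7071, -0.7071, 0.0], [0.0, 1.0, 0.0]))
```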
The amount of light reflected changes based on the angle at which it hits the surface. For example, looking directly down you can see the bottom of a lake, but towards the horizon all you can see is the reflection of the sky. This effect is described by the [Fresnel equations](https://en.wikipedia.org/wiki/Fresnel_equations) and depends on the material's refractive index (loosely, how optically dense it is), as well as that of the air, or whatever medium the light travels through before hitting the boundary between the two.
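In shaders the full Fresnel equations are often replaced by Schlick's approximation; a sketch, where `n1` and `n2` are the refractive indices on either side of the boundary:

```python
def schlick_fresnel(cos_theta, n1, n2):
    """Schlick's approximation to the Fresnel reflectance.

    cos_theta is the cosine of the angle between the incoming ray and the
    surface normal. Reflectance rises sharply towards grazing angles,
    which is why a lake mirrors the sky near the horizon.
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2  # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Looking straight down into water (n ~ 1.33): only ~2% reflects.
print(schlick_fresnel(1.0, 1.0, 1.33))
# At a grazing angle, almost everything reflects.
print(schlick_fresnel(0.05, 1.0, 1.33))
```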
### Glossy Reflection
Reflective materials with rough or scratched surfaces produce a blurry-looking reflection: light mostly reflects, but the reflection direction is perturbed slightly. In the case of brushed metal, the scratches all run in one direction and the reflection is blurred much more perpendicular to the scratches. In this case the glossy effect is said to be anisotropic, which is often an adjustable attribute of the material. The same effect applies to refraction through rough surfaces.
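As a rough Monte Carlo sketch, averaging many slightly perturbed reflection directions produces the blur (the uniform jitter here is purely illustrative; production renderers sample microfacet distributions instead):

```python
import math
import random

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def glossy_reflect_dir(mirror_dir, roughness):
    """Perturb a mirror reflection direction by a random offset scaled by
    roughness. Averaging the colours traced along many such samples gives
    the blurry reflection; roughness 0 degenerates to a perfect mirror."""
    jitter = [random.uniform(-1.0, 1.0) * roughness for _ in range(3)]
    return normalize([m + j for m, j in zip(mirror_dir, jitter)])
```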
## Indirect Lighting
Indirect lighting, or indirect illumination, models light bouncing multiple times before being captured by a camera. For example, diffuse or reflected light from an object becomes a light source for another diffuse object, which in turn scatters light onto yet more objects, and so on. The term global illumination (GI) is also used, often to imply a more complete lighting system that includes many forms of indirect light. Indirect light is much more expensive to compute than direct light; until around 2010 it was not used in games and was typically a property of better-looking offline rendering.
A raytracer may model indirect illumination by generating secondary rays at diffuse surfaces. [Radiosity][2] is a rendering technique to model light transfer at the geometry level. Preprocessing the scene to create some form of indirect light field and then providing this information to a shader allows much faster rendering, although often the discretized storage gives reduced accuracy and introduces light "bleeding" issues. [Cascaded light propagation volumes](https://www.youtube.com/watch?v=vPQ3BbuYVh8) is one such technique.
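A minimal sketch of the secondary-ray approach is below; the `trace` callable stands in for the renderer's own ray-casting routine, and the sampling scheme is deliberately simple:

```python
import math
import random

def sample_hemisphere(normal):
    """Pick a uniformly random direction on the hemisphere around a unit
    normal, by rejection sampling directions on the unit sphere."""
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in d))
        if 0.0 < length <= 1.0:
            d = [x / length for x in d]
            if sum(a * b for a, b in zip(d, normal)) > 0.0:
                return d

def indirect_diffuse(point, normal, trace, samples=16):
    """Estimate indirect light at a point by firing secondary rays into
    the hemisphere and averaging whatever radiance they return. `trace`
    is a placeholder for the renderer's ray-casting function."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        direction = sample_hemisphere(normal)
        radiance = trace(point, direction)
        total = [t + r for t, r in zip(total, radiance)]
    return [t / samples for t in total]

# With a stand-in trace that returns constant ambient light:
print(indirect_diffuse([0.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                       lambda origin, direction: (0.2, 0.2, 0.2)))
```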
## Refraction
So far, purely opaque materials have been discussed. In the case of glass or water, light passes through the surface and travels inside the material. However, rather than simply continuing in a straight line, it bends a little, depending on the refractive indices of the materials on either side of the surface, e.g. air to glass or vice versa. The angle it bends by is given by [Snell's law](https://en.wikipedia.org/wiki/Snell's_law). Different coloured light refracts in slightly different directions, which is responsible for the colours in rainbows and for chromatic aberration in optical systems. Refraction is often paired with reflection, as the Fresnel equations linked above give the ratio between the two.
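A vector form of Snell's law, as a raytracer might use it; a sketch assuming unit vectors and a normal facing against the incident ray:

```python
import math

def refract(incident, normal, n1, n2):
    """Bend a unit incident direction crossing from refractive index n1
    into n2 using the vector form of Snell's law. Returns None on total
    internal reflection, where all the light reflects instead."""
    ratio = n1 / n2
    cos_i = -sum(d * n for d, n in zip(incident, normal))
    sin2_t = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return [ratio * d + (ratio * cos_i - cos_t) * n
            for d, n in zip(incident, normal)]

# A straight-on ray from air (1.0) into glass (1.5) passes through unchanged.
print(refract([0.0, -1.0, 0.0], [0.0, 1.0, 0.0], 1.0, 1.5))
```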
## Absorption
Partially transparent or see-through materials absorb light as it passes through them. [Beer's law](https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law) models the attenuation of the light, or how quickly it is absorbed over the distance it travels. For example, sand under the water at the beach looks green as the seawater absorbs more red and blue light. The deeper the water, the more light is absorbed and the darker it looks. The same effect occurs in coloured glass. In cases such as fog and smoke, where opaque particles are suspended in the material, absorption is often paired with scattering: although direct light is reduced, light is diffused by the particles and reaches other parts of the material through secondary paths.
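Beer's law itself is a simple exponential falloff per colour channel; a sketch, with hypothetical seawater-like coefficients:

```python
import math

def beer_lambert_transmittance(absorption_coeff, distance):
    """Fraction of light surviving after travelling `distance` through an
    absorbing material. The coefficient is per colour channel, so red,
    green and blue can be absorbed at different rates."""
    return [math.exp(-a * distance) for a in absorption_coeff]

# Hypothetical coefficients (per metre) absorbing red and blue faster than
# green, as in the seawater example: green dominates after two metres.
print(beer_lambert_transmittance([0.45, 0.05, 0.20], 2.0))
```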
### Transparency
Transparency, specifically alpha transparency as there are a number of interpretations, can be seen physically as an instantaneous absorption effect occurring at a surface. An alpha value specifies the ratio between light that passes through the surface and light that the surface contributes via the shading methods above.
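This corresponds to the standard "over" blending operation; a minimal sketch:

```python
def alpha_blend(src, alpha, dst):
    """'Over' compositing: a fraction `alpha` of the surface's own shaded
    colour, plus (1 - alpha) of whatever lies behind it."""
    return [alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst)]

# A 30%-opaque red surface composited over a white background.
print(alpha_blend([1.0, 0.0, 0.0], 0.3, [1.0, 1.0, 1.0]))
```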
## Scattering
Scattering is light deviating from its path as it travels through a material. For example, full-beam headlights shone into fog appear very bright, due to the light hitting suspended particles and being reflected back. Scattering of sunlight in the atmosphere (Rayleigh scattering) is responsible for the sky being blue.
Sub-surface scattering refers to scattering within solid objects, which are often dense enough that the scattering range is limited. A common example is skin, a notoriously tricky material to make look realistic, particularly as it has many layers which light penetrates and is scattered by before returning to the surface.
# Camera Effects
Light doesn't just interact with materials. Both real cameras and the human eye exhibit a number of optical phenomena. Sometimes these effects aren't desirable as they hide detail in the image but, like much of material shading, we're used to seeing them, which generally makes the final image more relatable. *Post-processing* shaders, so named because they run after the geometry has been rendered, operate on the final image.
## Exposure and White Balance
Cameras have light meters to estimate how long to expose the image for. Our eyes also adapt to the amount of light around us, but can take in a much wider range of intensities at once. A virtual scene can be constructed so that materials and light intensities always produce the right brightness for the fixed number and range of colours our display devices can show. However, the real world has much more contrast than this allows. Adjusting the image brightness after rendering to better show detail allows an animated sequence to portray high contrasts. The term *high dynamic range* (HDR) refers to storing a larger range of contrast than display devices have; dynamic exposure and tone mapping are used to display it.
A similar phenomenon occurs when there are large differences in colours and our eyes adjust to an average. In addition to altering the total brightness, a shift in the red, green and blue intensities can show more detail without losing realism.
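A minimal sketch combining exposure, white balance and tone mapping (the Reinhard operator x/(1+x) is just one of many tone curves; the gains and values here are illustrative):

```python
def expose_and_tone_map(hdr_pixel, exposure, white_balance_gains):
    """Scale a linear HDR colour by an exposure factor and per-channel
    white balance gains, then apply the simple Reinhard operator
    x / (1 + x) to compress it into the displayable [0, 1) range."""
    scaled = [c * exposure * g for c, g in zip(hdr_pixel, white_balance_gains)]
    return [c / (1.0 + c) for c in scaled]

# An overly warm, bright HDR value pulled back into displayable range.
print(expose_and_tone_map([8.0, 4.0, 1.5], 0.5, [0.9, 1.0, 1.2]))
```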
## Bloom
Light behaves like a wave: as it passes objects, even without intersecting them, it spreads out and *diffracts* (see the [double slit experiment](https://en.wikipedia.org/wiki/Double-slit_experiment)). This happens at the aperture of a camera, so even objects that are perfectly focused do not produce a perfectly sharp image. The distribution of light from a focused point produces an [Airy disk](https://en.wikipedia.org/wiki/Airy_disk) on the image, which actually extends indefinitely but falls off very quickly. Normally this isn't noticed as the diffracted light has such low relative intensity. However, when a light source is overexposed and bright enough, the diffracted light appears as a glow around the bright areas. The bloom effect models this behaviour by artificially blurring the bright parts of the image. HDR is often used to provide the large intensities needed, although tone mapping can be used at the cost of precision.
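A common implementation extracts the overexposed parts of the image, blurs them, and adds the result back; a rough sketch with NumPy, where a few small box-filter passes stand in for the wider Gaussian-style blurs real implementations use:

```python
import numpy as np

def bloom(hdr_image, threshold=1.0, blur_passes=4, strength=0.5):
    """Add a glow around overexposed areas of an (H, W, 3) linear HDR image.

    Bright-pass the image to keep only light above the threshold, soften it
    with repeated box-filter passes, then add it back at reduced strength."""
    bright = np.maximum(hdr_image - threshold, 0.0)  # keep only overexposed light
    for _ in range(blur_passes):
        padded = np.pad(bright, ((1, 1), (1, 1), (0, 0)), mode="edge")
        bright = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:] +
                  padded[1:-1, 1:-1]) / 5.0
    return hdr_image + strength * bright
```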
## Depth of Field
The depth of field is the range of distances at which objects appear acceptably in focus with a lens-based camera. As a graphics effect, however, the name often refers to the presence of out-of-focus, blurry areas in an image, which are expensive to compute.
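Post-process implementations typically blur each pixel by its *circle of confusion*, whose diameter follows from the thin-lens model; a sketch (parameter names are illustrative, all quantities in the same units):

```python
def circle_of_confusion(aperture, focal_length, focus_dist, subject_dist):
    """Diameter of the thin-lens blur circle: zero at the focus distance,
    growing as the subject moves away from it."""
    return (aperture * abs(subject_dist - focus_dist) / subject_dist
            * focal_length / (focus_dist - focal_length))

# A 50mm lens at f/2 (25mm aperture), focused at 2m, subject at 4m:
# roughly a 0.32mm blur circle on the sensor.
print(circle_of_confusion(25.0, 50.0, 2000.0, 4000.0))
```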
## Lens Flares
Rather than light purely refracting through a lens, lens flares are an artefact of light reflecting back and forth between camera lens elements (the individual lenses forming a lens system) and off the barrel of the lens. The effect is normally very weak, although for very bright light sources such as the sun it can be strong enough to be visible, or even to obscure parts of the image. Lens flares can add realism and, through their movement as the camera turns, a better sense of the light's direction.
[1]: /u/img/238650ddd4a5.svg
[2]: https://en.wikipedia.org/wiki/Radiosity_%28computer_graphics%29