A light in computer graphics is a light source, and a fundamental component of rendering. Its light bounces around a scene of objects and ultimately enters the camera to form a 2D image.
Lights have colour and brightness and are often categorized by their geometry into directional, point, spot, area and volume lights. In these cases lights are quite separate from geometry, affecting it without being visible themselves. Alternatively, a material (e.g. for surface or volume geometry) with an emissive property can also be a light source. Having geometry with emissive properties in the scene matches reality better, as light is emitted from matter; however, supporting such generalized light sources can be expensive, especially for rasterizers and especially when supporting [shadows](/17/rendering/shadows/).
## Directional Lights
Directional lights are lights with an infinitely distant position and no attenuation, or reduction of intensity, over distance. For example, the sun is so far away that relative changes in position are insignificant. Because of this approximation, a globally constant light direction vector can be used in the lighting calculations. In the first real-time graphics applications, small optimizations such as this were important, particularly for Blinn-Phong specular highlights, where both an infinite viewer and an infinite light allow a constant half-vector.
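As a sketch of why the constant half-vector helps: when both the light and view directions are global constants, the Blinn-Phong half-vector can be computed once per frame instead of once per shaded point. A minimal Python version (function names are illustrative, not from any particular API):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def half_vector(light_dir, view_dir):
    """Blinn-Phong half-vector: the normalized sum of the unit directions
    toward the light and toward the viewer. With a directional light and an
    infinitely distant viewer, both inputs are constant across the frame,
    so this runs once rather than per pixel."""
    l, v = normalize(light_dir), normalize(view_dir)
    return normalize(tuple(a + b for a, b in zip(l, v)))

def blinn_phong_specular(normal, h, shininess):
    """Specular term: max(0, N.H) raised to the shininess exponent."""
    n_dot_h = max(0.0, sum(a * b for a, b in zip(normalize(normal), h)))
    return n_dot_h ** shininess

# Computed once, shared by every shaded point in the frame.
h = half_vector((0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
spec = blinn_phong_specular((0.0, 1.0, 0.0), h, 32.0)
```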
## Point Lights
Point lights have a world space position, and a light direction vector is calculated at every shaded point. An attenuation is often applied to point lights, not because of any fog or absorption effects, but because the light is "spread thin" as it moves outwards from a point. Think of a sphere growing in size, being the wavefront of traveling light. As it grows, the same amount of light on its surface must cover a greater area. This area increases with the square of the distance, $d^2$. Likewise the intensity is scaled down by $\frac{1}{d^2}$.
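The inverse-square attenuation above is only a couple of lines in practice. A minimal sketch (the function name is illustrative):

```python
def point_light_intensity(light_pos, intensity, point):
    """Intensity arriving at `point` from a point light: the emitted
    intensity divided by the squared distance, because the expanding
    spherical wavefront's area grows with d^2."""
    d2 = sum((a - b) ** 2 for a, b in zip(light_pos, point))
    return intensity / d2

# Doubling the distance quarters the received intensity.
near = point_light_intensity((0.0, 0.0, 0.0), 100.0, (1.0, 0.0, 0.0))
far = point_light_intensity((0.0, 0.0, 0.0), 100.0, (2.0, 0.0, 0.0))
```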
## Spot Lights
A spot light is really just a point light with a cover so that light escapes in only one direction; the same effect could, more expensively, be modeled with geometry and shadows. Its attributes are a direction vector and an angle for the width of the beam. Rather than a sharp cutoff for points outside the angle, a ramp is often used to simulate soft shadows at the spotlight edges.
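One common way to implement the ramp is to define inner and outer cone angles and blend between them with a smoothstep, rather than a hard cutoff at a single angle. A hypothetical minimal version:

```python
import math

def spot_falloff(spot_dir, to_point, inner_deg, outer_deg):
    """Spot light falloff factor in [0, 1]: 1 inside the inner cone,
    0 outside the outer cone, and a smooth ramp between the two."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)
    # Cosine of the angle between the spot axis and the shaded point.
    cos_angle = sum(a * b for a, b in zip(norm(spot_dir), norm(to_point)))
    cos_inner = math.cos(math.radians(inner_deg))
    cos_outer = math.cos(math.radians(outer_deg))
    # Remap cos_angle from [cos_outer, cos_inner] to [0, 1] and clamp.
    t = (cos_angle - cos_outer) / (cos_inner - cos_outer)
    t = min(1.0, max(0.0, t))
    return t * t * (3.0 - 2.0 * t)  # smoothstep ramp
```

The falloff multiplies the point-light intensity; points on the spot axis get full brightness, points past the outer angle get none.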
## Area Lights
Area lights are an extension of point lights, forming a flat surface. They are much more expensive and produce soft shadows. A common area light is a disc, used to model the silhouette of a sphere such as the sun or a light bulb. Rather than coming from a single point, the light from them is the integral over the light's area. They could be approximated as a collection of point lights distributed over the area. A raytracer essentially does this, numerically integrating the light by tracing rays to many points over the area. An emissive material applied to a triangle mesh of a light bulb could be considered many area lights, one for each triangle.
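The "collection of point lights" approximation can be sketched as a Monte Carlo estimate: draw random sample points on a disc light and average their point-light contributions. This toy version (names illustrative) ignores visibility and the cosine terms a real renderer would include:

```python
import math
import random

def disc_light_estimate(center, radius, point, intensity, samples=1024, rng=None):
    """Monte Carlo estimate of light arriving at `point` from a disc area
    light in the z = center[2] plane: the integral over the light's area
    is approximated by averaging many point-light samples on the disc."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        # Uniform sample on the disc (sqrt warp avoids clustering at the center).
        r = radius * math.sqrt(rng.random())
        phi = 2.0 * math.pi * rng.random()
        sample = (center[0] + r * math.cos(phi),
                  center[1] + r * math.sin(phi),
                  center[2])
        d2 = sum((a - b) ** 2 for a, b in zip(sample, point))
        total += intensity / d2  # each sample behaves like a point light
    return total / samples
```

As the radius shrinks toward zero, the estimate converges to the plain point-light result; as it grows, samples land farther from the shaded point and the average drops.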
A volume light is the same thing, but with light coming from everywhere within a volume. An example may be the flame of a candle, where the gas is so hot it emits light.
To support arbitrary materials emitting light, a raytracer can employ multiple importance sampling, where rays are generated at higher density in the direction of known emissive surfaces, avoiding samples which do not affect the result.
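When samples from two strategies (e.g. sampling the emissive surface vs. sampling the material's BRDF) are combined, each sample is weighted so the combined estimator stays unbiased. A sketch of the standard weighting functions, assuming the per-strategy probability densities are known:

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS weight (balance heuristic) for a sample drawn from strategy A
    when strategy B could also have produced it: favours the strategy
    with the higher density while keeping the combination unbiased."""
    return pdf_a / (pdf_a + pdf_b)

def power_heuristic(pdf_a, pdf_b, beta=2.0):
    """Power heuristic: raising the densities to a power (beta = 2 is the
    common choice) further reduces variance when one strategy dominates."""
    a = pdf_a ** beta
    return a / (a + pdf_b ** beta)
```

Note the complementary weights for the two strategies always sum to one, which is what keeps the combined estimator unbiased.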
# Rendering
Multiple lights can be added to the scene by simply adding the colour/intensity from each one, i.e. summing the results of all lighting equations. Lighting can be quite expensive with many lights. An easy solution was to turn on only the most significant lights for a given view.
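The summation is a plain loop over lights. A minimal sketch with a Lambert diffuse term per point light (the lighting model and light record format here are illustrative):

```python
import math

def lambert(point, normal, light):
    """Diffuse term for a single point light:
    intensity * max(0, N.L) / d^2."""
    to_light = tuple(a - b for a, b in zip(light["pos"], point))
    d = math.sqrt(sum(x * x for x in to_light))
    l = tuple(x / d for x in to_light)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(normal, l)))
    return light["intensity"] * n_dot_l / (d * d)

def shade(point, normal, lights):
    """Light contributions are independent, so the per-light lighting
    equation results simply sum."""
    return sum(lambert(point, normal, light) for light in lights)
```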
Computing the effect of a light on geometry can happen during [shading](https://www.heuristic42.com/14/rendering/shading/) while rendering, known as *forward rendering*. Alternatively, materials and other information can be saved to compute lighting effects in a subsequent stage, called *deferred shading*.
A direct approach to lighting was to simply apply a lighting equation for every light during forward rendering. Of course, not all lights significantly affect all surfaces. Additionally, not all surfaces are visible.
Deferred shading typically stores only visible surface information. This avoids some unnecessary lighting computation. However, small lights may affect only a small part of the image. Rather than checking all lights per geometry sample, deferred shading allows computing per light: for each light, work out and apply lighting to only those pixels it affects. These pixels, and even depth ranges, can be found by rendering simple bounding geometry for each light.
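The per-light pass can be sketched on the CPU as follows, with the g-buffer as a map from pixel coordinates to stored surface data and each light's screen-space bounds assumed to come from rasterizing its bounding geometry (all names and signatures here are illustrative):

```python
def deferred_lighting(gbuffer, lights, shade_one, bounds):
    """Per-light deferred pass: instead of testing every light at every
    pixel, loop over lights and shade only the pixels inside each light's
    screen-space bounding rectangle."""
    image = {pixel: 0.0 for pixel in gbuffer}
    for light in lights:
        x0, y0, x1, y1 = bounds(light)  # e.g. from rasterized bounding geometry
        for (x, y), sample in gbuffer.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                image[(x, y)] += shade_one(sample, light)
    return image
```

Pixels outside every light's bounds are never shaded, which is the whole point: the cost scales with the screen area lights actually cover, not lights times pixels.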
A spatial data structure can be used during forward rendering to provide similar improvements to the above, where a search finds a list of lights affecting each geometry sample.
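One simple such structure is a uniform grid of per-cell light lists, a 2D sketch of the idea behind tiled/clustered approaches (the layout and record format here are assumptions for illustration):

```python
def build_light_grid(lights, cell_size):
    """Bin each light into every grid cell overlapped by its radius of
    influence, so shading a sample only needs the lights listed in that
    sample's cell."""
    grid = {}
    for i, light in enumerate(lights):
        lx, ly = light["pos"]
        r = light["radius"]
        for cx in range(int((lx - r) // cell_size), int((lx + r) // cell_size) + 1):
            for cy in range(int((ly - r) // cell_size), int((ly + r) // cell_size) + 1):
                grid.setdefault((cx, cy), []).append(i)
    return grid

def lights_at(grid, point, cell_size):
    """Indices of the lights that might affect `point`."""
    cell = (int(point[0] // cell_size), int(point[1] // cell_size))
    return grid.get(cell, [])
```

A shader then loops over `lights_at(...)` instead of every light in the scene, turning an all-lights loop into a nearby-lights loop.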
The basic idea is to avoid unnecessary work. Even checking whether more work should be done is itself work. This becomes very important with many lights.