Projection Matrix
A projection is fundamental to [cameras](/8/rendering/cameras/), mapping a 3D space onto a 2D image to render geometry. The projection [matrix](/9/rendering/matrices/) is widely used in computer graphics to encode such a transform between spaces. It is a linear transform preserving straight lines, which both looks natural and is important for fast rasterization, unlike, for example, a fisheye projection, which is non-linear. A projection matrix is a 4×4 homogeneous matrix and can be pre-multiplied with other transformation matrices. Multiplying a point in world space, of the form $(x, y, z, 1)$, by a projection matrix produces *clip space* coordinates. For some projection matrices, a "*perspective normalise*" divide is required to convert from clip space to normalised device coordinates (NDC). The $(x, y)$ coordinates can then be scaled by the image resolution to give coordinates in pixels. This process is included in the discussion of [spaces](/18/rendering/spaces/).

The projection matrix implicitly defines a viewing volume and image boundaries at the -1 to 1 planes in NDC. That is, the image is formed by $(x, y)$ points inside the -1 to 1 range after being transformed by the projection matrix. Geometry outside the range is "clipped", discussed later. $z$ may also be constrained to the -1 to 1 range for precision reasons with depth testing. When projected back into world space, these boundaries create the typical cube or frustum shaped viewing volume shown in many projection visualizations.

There are two common projection matrices used in 3D graphics: orthographic and perspective. An orthographic projection is commonly seen in mathematical diagrams as it preserves relative lengths in addition to straight lines. It is also useful in modelling packages to align geometry. The perspective projection is more natural: objects in the distance become smaller, just like with typical rectilinear lenses and the human eye.

# Orthographic

The orthographic matrix gives a cuboid viewing volume and is really just a scale matrix to frame the scene. The image is eventually formed by geometry in the -1 to 1 range after the projection matrix is applied. To define an orthographic projection, left, right, top and bottom distances ($L$, $R$, $T$, $B$) are chosen to map to the image borders. An orthographic matrix scales these down to the -1 to 1 range. It also performs a translation if the borders are not symmetric. Near and far distances ($N$, $F$) for the depth range are also chosen, particularly so that objects behind the camera are not drawn, but also to help hidden surface removal methods such as the depth buffer.

$$
\begin{bmatrix}
\frac{2}{R-L} & 0 & 0 & \frac{L+R}{L-R} \\
0 & \frac{2}{T-B} & 0 & \frac{B+T}{B-T} \\
0 & 0 & \frac{2}{N-F} & \frac{N+F}{N-F} \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

The component $\frac{2}{N-F}$ is negative, which inverts $z$ components so that the camera looks towards $-Z$. Because the bottom row is $(0, 0, 0, 1)$ the depth range is scaled linearly, unlike with the perspective matrix introduced later. This affects the depth buffer's precision.
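As a concrete, minimal sketch, the matrix above can be built directly from the chosen boundaries. The `Mat4` alias, row-major storage and plain C++ here are assumptions made for this example rather than anything the maths requires; OpenGL, for instance, conventionally expects column-major storage, so the matrix would need transposing before upload.

```cpp
#include <array>

// Row-major 4x4 matrix, m[row][col]. The layout is an assumption for this
// example; OpenGL conventionally expects column-major storage.
using Mat4 = std::array<std::array<float, 4>, 4>;

// Orthographic projection as in the matrix above: scale and translate the
// [L,R] x [B,T] x [N,F] box into the -1 to 1 NDC cube, with the z scale
// negative so the camera looks towards -Z.
Mat4 orthographic(float L, float R, float B, float T, float N, float F)
{
    Mat4 m = {{
        {2.0f / (R - L), 0.0f,           0.0f,           (L + R) / (L - R)},
        {0.0f,           2.0f / (T - B), 0.0f,           (B + T) / (B - T)},
        {0.0f,           0.0f,           2.0f / (N - F), (N + F) / (N - F)},
        {0.0f,           0.0f,           0.0f,           1.0f},
    }};
    return m;
}
```

The pixel-coordinate variant described next would then simply be `orthographic(0, width, height, 0, -1, 1)`.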
In some cases it is desirable to create a projection which matches world space units with the width and height in pixels of the final image, using the following values. Note $B=\mathsf{height}$ rather than $T$ so that $+Y$ is down. While this doesn't make much sense for computer graphics, it is essential for text alignment as we read from top to bottom.

$$
\begin{array}{ll}
L=0 & R=\mathsf{width} \\
B=\mathsf{height} & T=0 \\
N=-1 & F=1
\end{array}
$$

# Perspective

A [perspective projection](/8/rendering/cameras/#projection) computes intersections between a plane and lines from the origin to points in the 3D world, or, "projects" those points onto a plane. Imagine drawing what you see through a sheet of glass, held exactly one unit away from you. The maths is easy: divide points by their $z$ coordinate. An alternative interpretation is to shrink distant objects by scaling down their $x$ and $y$ coordinates by their distance, although that's more the effect than the intention of a projection. With the perspective divide convention, a perspective projection can be represented in a 4×4 matrix.

A perspective viewing volume is typically symmetric, defined by a field of view $\mathsf{fov}$, an aspect ratio $a$ (discussed later), and near and far ($N$, $F$) boundaries. A perspective projection is rarely asymmetric, and the more general frustum viewing volume is not provided here.

$$
f = \cot\left(\frac{\mathsf{fov}_y}{2}\right) = \frac{1}{\tan\left(\frac{\mathsf{fov}_y}{2}\right)}
$$

$$
a = \frac{\mathsf{width}}{\mathsf{height}}
$$

$$
\begin{bmatrix}
\frac{f}{a} & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & \frac{N+F}{N-F} & \frac{2NF}{N-F} \\
0 & 0 & -1 & 0
\end{bmatrix}
$$

Note the $(0, 0, -1, 0)$ bottom row which makes the $w$ component of a transformed vector dependent on $z$. This is what causes objects to become smaller in the distance, as $x$ and $y$ are divided by $w$ during the perspective divide, discussed later.
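Continuing the sketch from the orthographic section (and reusing the assumed `Mat4` alias), the symmetric perspective matrix follows the same pattern, with the field of view given in radians:

```cpp
#include <cmath>

// Symmetric perspective projection as in the matrix above. fovY is the
// vertical field of view in radians, aspect is width / height.
Mat4 perspective(float fovY, float aspect, float N, float F)
{
    float f = 1.0f / std::tan(fovY * 0.5f); // f = cot(fovY / 2)
    Mat4 m = {{
        {f / aspect, 0.0f, 0.0f,              0.0f},
        {0.0f,       f,    0.0f,              0.0f},
        {0.0f,       0.0f, (N + F) / (N - F), 2.0f * N * F / (N - F)},
        {0.0f,       0.0f, -1.0f,             0.0f},
    }};
    return m;
}
```

Unlike the orthographic case there is no translation; the viewing volume is assumed symmetric about the view axis, as stated above.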
# Clip Space and the Perspective Divide

After a vector is transformed by a projection matrix it is in [clip space](/27/rendering/spaces/clip_space/). It is a 4D space, and it is called clip space because this is where geometry is "clipped" at the borders of the image. Some form of clipping is necessary as it would be inefficient to perform computation on geometry outside the image. Clipping is more important to rasterizers, as triangles are transformed into image space for pixel--polygon intersection tests (although in practice the intersections are generated rather than individually tested), rather than testing in world space as a raytracer does. Exactly why clipping is performed in clip space becomes more apparent after the perspective divide is introduced.

The perspective divide "normalises" a clip space vector by dividing by its $w$ component so that the new $w$ is 1:

$$
v_\mathsf{NDC} = \frac{v_\mathsf{clip}}{v_{\mathsf{clip}_w}}
$$

As said earlier, this scales down objects in the distance in the case of a perspective projection.

## Precision

In addition to scaling $x$ and $y$, the perspective divide also affects $z$, creating a hyperbolic mapping which has some beneficial properties for precision when comparing $z$ values. When truncated to integers, the number 3.1 is not less than 3.2 as only the 3 is compared. The resolution of possible values is one, i.e. numbers have to be at least one apart before they are distinct. Using linear $z$, the resolution is the same for objects close to the camera as for those way off in the distance. However, high precision in the distance is often not necessary as geometry there is sparse, while detailed geometry is drawn up close. By scaling $z$ so that possible values are closer together near the camera, the precision is better optimized for typical scenes.

## Clipping

One method is to clip polygons at the image boundaries, as in the image below. However, the resulting shapes may be quads, and additional vertices and triangles are needed.

Clipping geometry is expensive. Alternatively, the rasterizer may simply not produce fragments for pixel positions outside the image. That is, triangles completely outside the image are culled, and those that intersect it are still sent to the rasterizer, which can efficiently ignore their area outside the image. This works fine for triangles bordering the $X$ and $Y$ boundaries, but there are a few problems in the $Z$ direction.

![Clipping triangles to the image borders][1]

The perspective projection preserves straight lines, with the exception of lines that cross the $z=0$ plane, for example a triangle with some vertices in front of the camera and some behind. The points behind will have their $(x, y)$ positions inverted and edge interpolation will be incorrect. This makes it impossible to perform triangle clipping in image space or NDC, and elevates the importance of clipping in clip space, hence the name. Triangles that bridge the $z=0$ and $z=N$ region and are within the $X$ and $Y$ image boundaries must be clipped to the near plane before the perspective divide. Triangles which intersect the near and far clipping planes may be rasterized without geometry clipping and clipped by discarding fragments outside the depth range. Clipping implementation is discussed in more detail by [Fabian Giesen at his blog](https://fgiesen.wordpress.com/2011/07/05/a-trip-through-the-graphics-pipeline-2011-part-5/).

# Aspect Ratio

The aspect ratio is the ratio between the width and height of the image, specifically width:height or $\mathsf{width}/\mathsf{height}$. For example, some common resolutions are 640×480, 1680×1050 and 1920×1080, with aspect ratios of 4:3, 16:10 and 16:9 respectively. The projection normally encodes the image aspect ratio so that, after the perspective divide, scaling NDC to the image resolution produces an undistorted image. A seeming alternative would be to apply the aspect ratio when scaling NDC, but clipping must be performed in clip space, so the aspect ratio must be applied beforehand.

[1]: /u/img/bcf7ab83539f.svg
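To tie the pieces together, here is a small, hedged end-to-end sketch reusing the `Mat4` and `perspective` helpers assumed above, plus a hypothetical `transform` function. It takes a point through the projection matrix, performs the perspective divide, and scales the NDC result by the image resolution to get pixel coordinates:

```cpp
#include <array>
#include <cstdio>

using Vec4 = std::array<float, 4>;

// Multiply the row-major Mat4 assumed above by a column vector.
Vec4 transform(const Mat4& m, const Vec4& v)
{
    Vec4 r = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row][col] * v[col];
    return r;
}

int main()
{
    const float width = 1920.0f, height = 1080.0f;   // 16:9 aspect ratio
    Mat4 proj = perspective(1.0f, width / height, 0.1f, 100.0f);

    Vec4 p = {0.5f, 0.25f, -2.0f, 1.0f};              // point in front of the camera (-Z)
    Vec4 clip = transform(proj, p);                   // clip space; w now holds -z

    // Perspective divide: clip space to normalised device coordinates.
    Vec4 ndc = {clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3], 1.0f};

    // Scale -1 to 1 NDC to pixel coordinates (origin at the bottom left here).
    float px = (ndc[0] * 0.5f + 0.5f) * width;
    float py = (ndc[1] * 0.5f + 0.5f) * height;
    std::printf("pixel: %.1f, %.1f\n", px, py);
}
```

The bottom-left pixel origin and the 0.5 offset convention are assumptions for the example; real APIs define their own viewport transform.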