# Vector Spaces
A number of vector spaces are discussed below and widely used in 3D graphics to place objects in a common scene. Transformation [matrices](/9/rendering/matrices/) both define and transform [vectors](/10/rendering/vectors/) between spaces. Either objects are projected into a discretized image for rasterization, or rays are projected from the image into the world for raytracing. See [projection matrix](https://www.heuristic42.com/11/rendering/matrices/projection/).

**Object space** is the name given to the coordinate system that objects, i.e. triangle mesh vertices, are defined in. Each mesh has its own object space, and the data typically remains static, with animation applied on the fly during vertex processing. Exceptions include deformable objects and dynamically defined meshes such as isosurfaces. The term *object space operations* (as opposed to image space, below) commonly refers to the use of triangle data without reference to a specific vector space, and can also imply more global operations on all instantiated objects in world space.

**World space** is the space the scene is constructed in, instantiating objects and placing them in relation to one another by defining an object-to-world space transformation. These and other transformations can be efficiently stored in 4 × 3 or 4 × 4 [homogeneous matrices](https://www.heuristic42.com/9/rendering/matrices/#homogeneous-matrices) and applied by multiplying vertex coordinates. By pre-multiplying and combining transformations, a vertex can be transformed to a desired space with a single operation. World space is common to all instances and provides view-independent relative positions for objects in the scene. With the scene defined, a viewer is needed to relate a 2D image to the 3D world. Thus, a virtual camera model is defined, somewhat implicitly in some cases, and is split into two parts: a view and a projection.
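The pre-multiplying of transformations mentioned above can be sketched in plain Python. This is an illustrative example, not code from the article: the row-major matrix layout, helper names, and example values are all assumptions.

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as nested row-major lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a 4D homogeneous column vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

# Object-to-world: rotate the mesh, then place it in the scene.
# Pre-multiplying combines both into a single matrix.
object_to_world = mat_mul(translation(5, 0, 0), rotation_z(math.pi / 2))

# A vertex position in object space (w = 1 for positions).
vertex = [1, 0, 0, 1]

# One multiply takes the vertex straight to world space.
world = transform(object_to_world, vertex)  # approximately [5, 1, 0, 1]
```

The same pattern extends to the full model-view-projection chain: combine once on the CPU, then apply the single combined matrix to every vertex.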
**Eye space**, or camera space, is given by the [camera](/8/rendering/cameras/)'s view, commonly a position and direction. The transformation can be thought of as moving the entire world/scene to provide the viewer's location and rotation, which despite appearances is mathematically inexpensive. This analogy works for other transforms but may be most intuitive here. Like the previous spaces, eye space is linear and infinite, with viewing volume bounds or projection yet to be defined.

**Clip space** exists after a projection matrix has been applied. As the name suggests, it is here that geometry is clipped to the viewing volume. Orthographic projections are supported with the same operations, but the purpose of clip space is much more apparent when discussing [perspective projections](/11/rendering/matrices/projection/#perspective). It is actually a 4D space, and the next step to *NDC* is a *perspective divide*, dividing transformed vectors by $w$. While the vector space is 4D, $w$ is linearly dependent on $z$, so it is not a *basis*. Clipping is necessary because vertex positions near or crossing the $w = 0$ boundary would otherwise produce incorrect results for perspective projections, due to a divide by zero or negative values that invert the $x$ and $y$ coordinates. One convenience is that the projection has already been applied, so there is no need to calculate viewing volume planes in eye space, as there would be were clipping to happen earlier. The viewing volume to clip to is always $-w \leq x \leq w$, $-w \leq y \leq w$ and $-w \leq z \leq w$. Another convenience is that the space is linear, so regular vector operations can, and must, be performed before the perspective divide. For example, $z$ becomes non-linear in *NDC*, although that's more of a depth buffer feature. Perspective-correct vertex interpolation must be performed in clip space, before normalization. The downside is that it takes some extra maths to work with $w$. See [Clip Space](/27/rendering/spaces/clip_space/).
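The clip test and perspective divide described above can be sketched as follows. This is a minimal illustration, assuming a vertex is stored as a list `[x, y, z, w]`; the helper names and sample values are hypothetical.

```python
def inside_clip_volume(v):
    """True if a clip-space vertex lies within -w <= x, y, z <= w."""
    x, y, z, w = v
    return all(-w <= c <= w for c in (x, y, z))

def perspective_divide(v):
    """Clip space -> normalized device coordinates (NDC)."""
    x, y, z, w = v
    return [x / w, y / w, z / w]

# A vertex safely inside the viewing volume.
clip = [0.5, -0.25, 0.9, 1.0]
ndc = perspective_divide(clip)  # [0.5, -0.25, 0.9]

# A vertex near the w = 0 boundary fails the clip test and must be
# clipped before the divide, otherwise the tiny w would blow up its
# coordinates (or flip them, were w negative).
near_boundary = [0.5, 0.5, -1.0, 0.001]
culled = not inside_clip_volume(near_boundary)
```

Real pipelines clip triangles against the volume's planes rather than discarding whole vertices, but the inside test is the same inequality.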
**Normalized device coordinates** (NDC) are the result of the [perspective divide](/11/rendering/matrices/projection/#clip-space-and-the-perspective-divide), where 4D homogeneous vectors are normalized, $\mathsf{NDC} = \mathsf{clip}_{xyz}/\mathsf{clip}_w$, after the perspective transformation to clip space. The 3D components of visible points commonly range between -1 and 1, although this depends on the projection matrix. A notable case is the difference in $z$ range between the projections created by OpenGL and DirectX. To undo a perspective divide when transforming from NDC to eye space, simply transform by the inverse projection matrix and re-normalize with the same divide, rather than attempting to compute and scale by $w$.

**Image space** linearly scales NDC to the pixel dimensions, for example $\frac{\mathsf{NDC}_{xy} + 1}{2} \mathsf{resolution}$. This is where rasterization is performed. Alternatively, points in image space can be projected into 3D rays in world space (from the near to the far clipping plane, for example) by multiplying by the inverse projection and view matrices for raytracing.
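The NDC-to-image mapping above is a simple linear rescale; as a sketch (the function name and resolution are illustrative assumptions):

```python
def ndc_to_image(ndc_x, ndc_y, width, height):
    """Linearly map NDC coordinates in [-1, 1] to pixel coordinates,
    computing (NDC + 1) / 2 * resolution per axis."""
    return ((ndc_x + 1) / 2 * width, (ndc_y + 1) / 2 * height)

center = ndc_to_image(0.0, 0.0, 800, 600)    # middle of the image
corner = ndc_to_image(-1.0, -1.0, 800, 600)  # pixel origin
```

Note that conventions vary: some APIs place the pixel origin at the top-left and flip $y$, and sampling at pixel centers adds a half-pixel offset, so the exact formula depends on the target API.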