==Rasterisation algorithms==
Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility and performance.

===Affine texture mapping===
[[File:Perspective correct texture mapping.svg|thumb|400px|Because affine texture mapping does not take into account the depth information about a polygon's vertices, where the polygon is not perpendicular to the viewer, it produces a noticeable defect, especially when rasterized as triangles.]]
'''Affine texture mapping''' linearly interpolates texture coordinates across a surface, and so is the fastest form of texture mapping. Some software and hardware (such as the original [[PlayStation (console)|PlayStation]]) [[3D projection|project]] vertices in 3D space onto the screen during rendering and [[Linear interpolation|linearly interpolate]] the texture coordinates ''in screen space'' between them. This may be done by incrementing [[Fixed point arithmetic|fixed point]] [[UV coordinates]], or by an [[incremental error algorithm]] akin to [[Bresenham's line algorithm]].

Unless the polygon is perpendicular to the viewer, this leads to noticeable distortion with perspective transformations (see figure: the checker box texture appears bent), especially for primitives near the [[3d camera coordinate system|camera]]. Such distortion may be reduced by subdividing the polygon into smaller ones.

For the case of rectangular objects, using quad primitives can look less incorrect than the same rectangle split into triangles, but because interpolating 4 points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the [[#Forward texture mapping|forward texture mapping]] used by the Nvidia [[NV1]], was able to offer efficient quad primitives. With perspective correction (see below) triangles become equivalent and this advantage disappears.

[[File:Affine texture mapping tri vs quad.svg|thumb|400px|For rectangular objects, especially when perpendicular to the view, linearly interpolating across a quad can give a superior affine result versus the same rectangle split into two affine triangles.]]
For rectangular objects that are at right angles to the viewer, like floors and walls, the perspective only needs to be corrected in one direction across the screen, rather than both. The correct perspective mapping can be calculated at the left and right edges of the floor, and then an affine linear interpolation across that horizontal span will look correct, because every pixel along that line is the same distance from the viewer.
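In outline, the incremental inner loop of an affine mapper can be written as follows. This is a simplified sketch with illustrative identifiers (a 256×256 texture and precomputed per-pixel steps are assumed), not any particular system's implementation:

<syntaxhighlight lang="c">
#include <stdint.h>

/* Illustrative affine span loop: u and v are stepped with constant
   16.16 fixed-point increments across a horizontal run of pixels.
   A 256x256 texel array is assumed; du and dv would be computed once
   per span from the endpoint texture coordinates, e.g. (u1 - u0) / count. */
void draw_affine_span(uint32_t *dest, const uint32_t *texture,
                      int count,
                      int32_t u, int32_t v,      /* 16.16 fixed point */
                      int32_t du, int32_t dv)    /* per-pixel steps   */
{
    while (count--) {
        int tx = (u >> 16) & 255;   /* wrap to the 256x256 texture */
        int ty = (v >> 16) & 255;
        *dest++ = texture[ty * 256 + tx];
        u += du;                    /* constant steps: no per-pixel divide */
        v += dv;
    }
}
</syntaxhighlight>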
===Perspective correctness===
'''Perspective correct''' texturing accounts for the vertices' positions in 3D space, rather than simply interpolating coordinates in 2D screen space.<ref name="NGen15">{{cite magazine|date=March 1996|title=The Next Generation 1996 Lexicon A to Z: Perspective Correction|url=https://archive.org/details/nextgen-issue-015/page/n39/mode/2up|magazine=[[Next Generation (magazine)|Next Generation]]|publisher=[[Imagine Media]]|issue=15|page=38}}</ref> This achieves the correct visual effect but it is more expensive to calculate.<ref name="NGen15"/>

To perform perspective correction of the texture coordinates <math>u</math> and <math>v</math>, with <math>z</math> being the depth component from the viewer's point of view, we can take advantage of the fact that the values <math>\frac{1}{z}</math>, <math>\frac{u}{z}</math>, and <math>\frac{v}{z}</math> are linear in screen space across the surface being textured. In contrast, the original <math>z</math>, <math>u</math> and <math>v</math>, before the division, are not linear across the surface in screen space. We can therefore linearly interpolate these reciprocals across the surface, computing corrected values at each pixel, to produce a perspective correct texture mapping.

To do this, we first calculate the reciprocals at each vertex of our geometry (3 points for a triangle). For vertex <math>n</math> we have <math>\frac{u_n}{z_n}, \frac{v_n}{z_n}, \frac{1}{z_n}</math>. Then, we linearly interpolate these reciprocals between the <math>n</math> vertices (e.g., using [[Barycentric coordinate system|barycentric coordinates]]), resulting in interpolated values across the surface. At a given point, this yields the interpolated <math>u_i, v_i</math>, and <math>zReciprocal_i = \frac{1}{z_i}</math>. Note that this <math>u_i, v_i</math> cannot yet be used as our texture coordinates, because the division by <math>z</math> altered their coordinate system.

To correct back to the <math>u, v</math> space, we first calculate the corrected <math>z</math> by again taking the reciprocal <math>z_{correct} = \frac{1}{zReciprocal_i} = \frac{1}{\frac{1}{z_i}}</math>. Then we use this to correct our <math>u_i, v_i</math>: <math>u_{correct} = u_i \cdot z_{correct}</math> and <math>v_{correct} = v_i \cdot z_{correct}</math>.<ref>{{Cite web|url=http://www.lysator.liu.se/~mikaelk/doc/perspectivetexture/|title=Perspective Texturemapping|last=Kalms|first=Mikael|date=1997|website=www.lysator.liu.se|access-date=2020-03-27}}</ref>

This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

:Affine texture mapping directly interpolates a texture coordinate <math>u^{}_{\alpha}</math> between two endpoints <math>u^{}_0</math> and <math>u^{}_1</math>:
::<math>u^{}_{\alpha}= (1 - \alpha ) u_0 + \alpha u_1</math> where <math>0 \le \alpha \le 1</math>
:Perspective correct mapping interpolates after dividing by depth <math>z^{}_{}</math>, then uses its interpolated reciprocal to recover the correct coordinate:
::<math>u^{}_{\alpha}= \frac{ (1 - \alpha ) \frac{ u_0 }{ z_0 } + \alpha \frac{ u_1 }{ z_1 } }{ (1 - \alpha ) \frac{ 1 }{ z_0 } + \alpha \frac{ 1 }{ z_1 } }</math>

3D graphics hardware typically supports perspective correct texturing.
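Expressed as code, a perspective-correct span loop might look as follows. This is a simplified sketch with illustrative identifiers (a 256×256 texture is assumed), directly following the reciprocal interpolation described above:

<syntaxhighlight lang="c">
#include <stdint.h>

/* Illustrative perspective-correct span loop: u/z, v/z and 1/z are
   linear in screen space, so they are stepped with constant increments,
   and the true u and v are recovered with one division per pixel. */
void draw_perspective_span(uint32_t *dest, const uint32_t *texture,
                           int count,
                           float u_over_z, float v_over_z, float one_over_z,
                           float du_over_z, float dv_over_z, float d_one_over_z)
{
    while (count--) {
        float z = 1.0f / one_over_z;          /* recover depth            */
        int tx = (int)(u_over_z * z) & 255;   /* u_correct = (u/z) * z    */
        int ty = (int)(v_over_z * z) & 255;   /* v_correct = (v/z) * z    */
        *dest++ = texture[ty * 256 + tx];
        u_over_z   += du_over_z;              /* linear in screen space   */
        v_over_z   += dv_over_z;
        one_over_z += d_one_over_z;
    }
}
</syntaxhighlight>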
Various techniques have evolved for rendering texture mapped geometry into images with different quality/precision trade-offs, which can be applied to both software and hardware. Classic software texture mappers generally did only simple mapping with at most one lighting effect (typically applied through a [[lookup table]]), and perspective correctness was about 16 times more expensive.

===Restricted camera rotation===
[[File:Freedoom 2018.png|300px|thumb|right|The [[Doom engine]] did not permit sloped floors or slanted walls, so perspective correction was needed only once per horizontal or vertical span, rather than per pixel.]]
The ''[[Doom engine]]'' restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would have a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. After performing one perspective correction calculation for the depth, the rest of the line could use fast affine mapping. Some later renderers of this era simulated a small amount of camera [[Pitch (orientation)|pitch]] with [[Shear (transformation)|shearing]], which allowed the appearance of greater freedom whilst using the same rendering technique.

Some engines were able to render texture mapped [[heightmaps]] (e.g. [[Nova Logic]]'s [[Voxel Space]], and the engine for [[Outcast (video game)|Outcast]]) via [[Bresenham's line algorithm|Bresenham]]-like incremental algorithms, producing the appearance of a texture mapped landscape without the use of traditional geometric primitives.<ref>"[https://web.archive.org/web/20131113094653/http://www.codermind.com/articles/Voxel-terrain-engine-building-the-terrain.html Voxel terrain engine]", introduction. In a coder's mind, 2005 (archived 2013).</ref>

===Subdivision for perspective correction===
Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals: keeping the arithmetic pipeline busy at all times, and amortising the cost of the expensive perspective division over many pixels.

====World space subdivision====
For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. The reason this technique works is that the distortion of affine mapping becomes much less noticeable on smaller polygons. The [[Sony PlayStation]] made extensive use of this because it only supported affine mapping in hardware but had a relatively high triangle throughput compared to its peers.

====Screen space subdivision====
[[File:Texturemapping subdivision.svg|thumb|200px|Screen-space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z]]
Software renderers generally preferred screen-space subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again reduce the overhead (affine texture mapping also does not fit into the small number of registers of the [[x86]] CPU; the [[68000]] or any [[RISC]] design is much better suited).

A different approach was taken for ''[[Quake (video game)|Quake]]'', which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor.<ref>Abrash, Michael. ''Michael Abrash's Graphics Programming Black Book Special Edition.'' The Coriolis Group, Scottsdale Arizona, 1997. {{ISBN|1-57610-174-6}} ([http://www.gamedev.net/reference/articles/article1698.asp PDF] {{Webarchive|url=https://web.archive.org/web/20070311022026/http://www.gamedev.net/reference/articles/article1698.asp |date=2007-03-11 }}) (Chapter 70, pg. 1282)</ref> The polygons are rendered independently, so it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the [[polygon normal]] to achieve a more constant z, but the effort seems not to be worth it.
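A simplified sketch of this scanline scheme follows (illustrative identifiers, assuming a 256×256 texture; this is not Quake's actual implementation, which additionally overlapped the divide with the affine inner loop on the floating-point unit):

<syntaxhighlight lang="c">
#include <stdint.h>

/* Exact perspective-correct texture coordinates are computed only at
   16-pixel boundaries; the pixels between them are stepped affinely
   in 16.16 fixed point, so only one divide is needed per segment. */
void draw_subdivided_span(uint32_t *dest, const uint32_t *texture, int count,
                          float u_over_z, float v_over_z, float one_over_z,
                          float du_over_z, float dv_over_z, float d_one_over_z)
{
    float z = 1.0f / one_over_z;                     /* exact at span start */
    int32_t u = (int32_t)(u_over_z * z * 65536.0f);
    int32_t v = (int32_t)(v_over_z * z * 65536.0f);

    while (count > 0) {
        int run = count < 16 ? count : 16;
        count -= run;

        /* one divide per 16-pixel segment instead of one per pixel */
        u_over_z   += du_over_z    * run;
        v_over_z   += dv_over_z    * run;
        one_over_z += d_one_over_z * run;
        z = 1.0f / one_over_z;
        int32_t u_end = (int32_t)(u_over_z * z * 65536.0f);
        int32_t v_end = (int32_t)(v_over_z * z * 65536.0f);

        int32_t du = (u_end - u) / run;              /* affine steps inside */
        int32_t dv = (v_end - v) / run;              /* the segment         */
        for (int i = 0; i < run; i++) {
            *dest++ = texture[((v >> 16) & 255) * 256 + ((u >> 16) & 255)];
            u += du;
            v += dv;
        }
        u = u_end;                   /* avoid rounding drift between segments */
        v = v_end;
    }
}
</syntaxhighlight>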
====Other techniques====
Another technique was approximating the perspective with a faster calculation, such as a polynomial (see the sketch below). Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided,<ref>{{cite patent | inventor-last = Spackman | inventor-first = John Neil | issue-date = 1998-04-14 | title = Apparatus and method for performing perspectively correct interpolation in computer graphics | patent-number = 5739818 | country-code = US }}</ref> but the amount of bookkeeping makes this method too slow on most systems. Finally, the [[Build engine]] extended the constant distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.
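As an illustration of the polynomial idea, the following sketch (hypothetical names, not from any specific renderer) fits a quadratic through the exact perspective-correct coordinate at the start, middle and end of a span, then evaluates it with forward differences so that each pixel costs only two additions:

<syntaxhighlight lang="c">
/* Approximate the hyperbolic perspective-correct u(t) across a span
   with a quadratic q(t) = a + b*t + c*t^2 that matches the exact
   values at t = 0, 0.5 and 1 (three divides total), then evaluate
   it incrementally by forward differencing. The same is done for v. */
void approx_span_u(float *out, int n,
                   float u0, float um, float u1) /* exact u at t=0, 0.5, 1 */
{
    float a = u0;
    float c = 2.0f * (u0 + u1 - 2.0f * um);
    float b = u1 - u0 - c;
    float h = 1.0f / (float)n;       /* parameter step per pixel */

    float val = a;
    float d   = b * h + c * h * h;   /* first forward difference   */
    float dd  = 2.0f * c * h * h;    /* constant second difference */
    for (int i = 0; i <= n; i++) {
        out[i] = val;                /* approximate u at pixel i   */
        val += d;
        d   += dd;
    }
}
</syntaxhighlight>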
===Hardware implementations===
Texture mapping hardware was originally developed for simulation (e.g. as implemented in the [[Evans and Sutherland]] ESIG and Singer-Link Digital Image Generator (DIG) machines), professional [[Workstation#Graphics workstations|graphics workstations]] such as those from [[Silicon Graphics]], and broadcast [[digital video effect]]s machines such as the [[Ampex ADO]], and later appeared in [[Arcade cabinet]]s, consumer [[video game console]]s, and PC [[video card]]s in the mid-1990s. In [[flight simulation]], texture mapping provided important motion and altitude cues necessary for pilot training that were not available on untextured surfaces. It was also in flight simulation applications that texture mapping was implemented for real-time processing, with prefiltered texture patterns stored in memory for real-time access by the video processor.<ref>{{cite journal |last1=Yan |first1=Johnson |title=Advances in Computer-Generated Imagery for Flight Simulation |journal=IEEE Computer Graphics and Applications |date=August 1985 |volume=5 |issue=8 |pages=37–51 |doi=10.1109/MCG.1985.276213 |url=https://ieeexplore.ieee.org/document/4056245 |url-access=subscription}}</ref>

Modern [[graphics processing unit]]s (GPUs) provide specialised [[fixed function unit]]s called ''texture samplers'', or [[texture mapping unit|''texture mapping units'']], to perform texture mapping, usually with [[trilinear filtering]] or better multi-tap [[anisotropic filtering]], and hardware for decoding specific formats such as [[S3 Texture Compression|DXTn]]. As of 2016, texture mapping hardware is ubiquitous as most [[System on a chip|SOC]]s contain a suitable GPU.

Some hardware combines texture mapping with [[hidden-surface determination]] in [[Tiled rendering|tile based deferred rendering]] or [[scanline rendering]]; such systems only fetch the visible [[Texel (graphics)|texels]] at the expense of using greater workspace for transformed vertices. Most systems have settled on the [[Z-buffering]] approach, which can still reduce the texture mapping workload with front-to-back [[Sorting algorithm|sorting]].

Among earlier graphics hardware, there were two competing paradigms of how to deliver a texture to the screen:
* '''Forward texture mapping''' iterates through each texel on the texture, and decides where to place it on the screen.
* '''Inverse texture mapping''' instead iterates through pixels on the screen, and decides what texel to use for each.
Inverse texture mapping is the method which has become standard in modern hardware.

====Inverse texture mapping====
With this method, a pixel on the screen is mapped to a point on the texture. Each vertex of a [[rendering primitive]] is projected to a point on the screen, and each of these points is [[UV mapping|mapped to a u,v texel coordinate]] on the texture. A rasterizer will interpolate between these points to fill in each pixel covered by the primitive.

The primary advantage is that each pixel covered by a primitive will be traversed exactly once. Once a primitive's vertices are transformed, the amount of remaining work scales directly with how many pixels it covers on the screen.

The main disadvantage versus forward texture mapping is that the [[memory access pattern]] in the [[texture space]] will not be linear if the texture is at an angle to the screen. This disadvantage is often addressed by [[texture cache|texture caching]] techniques, such as the [[swizzled texture]] memory arrangement.

The linear interpolation can be used directly for simple and efficient [[#Affine texture mapping|affine]] texture mapping, but can also be adapted for [[#Perspective correctness|perspective correctness]].

====Forward texture mapping====
Forward texture mapping maps each texel of the texture to a pixel on the screen. After transforming a rectangular primitive to a place on the screen, a forward texture mapping renderer iterates through each texel on the texture, splatting each one onto a pixel of the [[frame buffer]]. This was used by some hardware, such as the [[3DO Interactive Multiplayer|3DO]], the [[Sega Saturn]] and the [[NV1]].

The primary advantage is that the texture will be accessed in a simple linear order, allowing very efficient caching of the texture data. However, this benefit is also its disadvantage: as a primitive gets smaller on screen, it still has to iterate over every texel in the texture, causing many pixels to be overdrawn redundantly.

This method is also well suited for rendering quad primitives rather than reducing them to triangles, which provided an advantage when perspective correct texturing was not available in hardware. This is because the affine distortion of a quad looks less incorrect than the same quad split into two triangles (see [[#Affine texture mapping|affine texture mapping]] above). The NV1 hardware also allowed a quadratic interpolation mode to provide an even better approximation of perspective correctness.
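The contrast between the two traversal orders can be summarised with a deliberately simplified sketch for an axis-aligned rectangle (nearest-neighbour sampling, hypothetical names; real hardware rasterises arbitrary transformed primitives and handles filtering and clipping):

<syntaxhighlight lang="c">
#include <stdint.h>

/* Inverse mapping: iterate over the w x h screen pixels and fetch a
   texel for each. Every covered pixel is written exactly once, but
   texture reads jump around when the mapping is not axis-aligned. */
void inverse_map(uint32_t *screen, int w, int h,
                 const uint32_t *tex, int tw, int th)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            screen[y * w + x] = tex[(y * th / h) * tw + (x * tw / w)];
}

/* Forward mapping: iterate over the tw x th texels and splat each onto
   the screen. Texture reads are perfectly linear, but a shrunken
   primitive overdraws many pixels redundantly, and a magnified one
   leaves holes unless texel writes are scaled or duplicated. */
void forward_map(uint32_t *screen, int w, int h,
                 const uint32_t *tex, int tw, int th)
{
    for (int ty = 0; ty < th; ty++)
        for (int tx = 0; tx < tw; tx++)
            screen[(ty * h / th) * w + (tx * w / tw)] = tex[ty * tw + tx];
}
</syntaxhighlight>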
These forward texture mapping implementations did not provide effective [[UV coordinates|UV coordinate mapping]], which became an important technique for 3D modelling and assists in [[Clipping (computer graphics)|clipping]] the texture correctly when the primitive goes over the edge of the screen. These shortcomings could have been addressed with further development, but GPU design has since mostly moved toward inverse mapping.

{{clear}}