{{short description|Portion of random-access memory containing a bitmap that drives a video display}}
{{Use American English|date=November 2024}}
[[File:Sun sbus cgsix framebuffer.jpg|thumb|Sun TGX Framebuffer]]
A '''framebuffer''' ('''frame buffer''', or sometimes '''framestore''') is a portion of [[random-access memory]] (RAM)<ref>{{cite web|url=http://www.webopedia.com/TERM/F/frame_buffer.html|title=What is frame buffer? A Webopedia Definition|work=webopedia.com|date=June 1998 }}</ref> containing a [[bitmap]] that drives a video display. It is a [[memory buffer]] containing data representing all the [[pixel]]s in a complete [[video frame]].<ref>{{cite web |url=http://www.sunhelp.org/faq/FrameBuffer.html#00 |title=Frame Buffer FAQ |access-date=14 May 2014 }}</ref> Modern [[video card]]s contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a [[video signal]] that can be displayed on a computer monitor.

In [[computing]], a '''screen buffer''' is a part of [[computer memory]] used by a computer application for the representation of the content to be shown on the [[computer display]].<ref name="google">{{cite book|title=.NET Framework Solutions: In Search of the Lost Win32 API|author=Mueller, J.|date=2002|publisher=Wiley|isbn=9780782141344|url=https://books.google.com/books?id=XYQruTc6_44C|page=160|access-date=2015-04-21}}</ref> The screen buffer may also be called the '''video buffer''', the '''regeneration buffer''', or '''regen buffer''' for short.<ref name="smartcomputing">{{cite web|url=http://www.smartcomputing.com/editorial/dictionary/detail.asp?searchtype=2&DicID=10421&RefType=Dictionary&guid=|archive-url=https://web.archive.org/web/20120324192310/http://www.smartcomputing.com/editorial/dictionary/detail.asp?searchtype=2&DicID=10421&RefType=Dictionary&guid= |archive-date=2012-03-24 |url-status=dead|title=Smart Computing Dictionary Entry - video buffer|access-date=2015-04-21}}</ref> Screen buffers should be distinguished from [[video memory]]. To this end, the term '''off-screen buffer''' is also used.

The information in the buffer typically consists of color values for every pixel to be shown on the display. Color values are commonly stored in 1-bit [[binary image|binary]] (monochrome), 4-bit [[palette (computing)|palettized]], 8-bit palettized, 16-bit [[high color]] and 24-bit [[Color depth#True color .2824-bit.29|true color]] formats. An additional [[alpha channel]] is sometimes used to retain information about pixel transparency. The total amount of memory required for the framebuffer depends on the [[Display resolution|resolution]] of the output signal, and on the [[color depth]] or palette size.
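As a rough illustration of that calculation, the following sketch computes the buffer size for two example modes; the resolutions and color depths shown are illustrative figures only, not values taken from any particular device:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Bytes of framebuffer memory needed to hold one frame:
   width x height pixels, each stored with bits_per_pixel bits. */
static unsigned long framebuffer_bytes(unsigned long width,
                                       unsigned long height,
                                       unsigned long bits_per_pixel)
{
    return width * height * bits_per_pixel / 8;
}

int main(void)
{
    /* Example modes only: an 8-bit palettized 640x480 display and
       a 24-bit true-color 1920x1080 display. */
    printf("640x480 @ 8 bpp:    %lu bytes\n", framebuffer_bytes(640, 480, 8));    /* 307,200 (300 KiB) */
    printf("1920x1080 @ 24 bpp: %lu bytes\n", framebuffer_bytes(1920, 1080, 24)); /* 6,220,800 (~5.9 MiB) */
    return 0;
}
</syntaxhighlight>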
== History ==
[[File:SWAC 003.jpg|thumb|Memory pattern on [[SWAC (computer)|SWAC]] Williams tube CRT in 1951]]
Computer researchers{{who|date=July 2017}} had long discussed the theoretical advantages of a framebuffer but were unable to produce a machine with sufficient [[computer memory|memory]] at an economically practicable cost.{{citation needed|date=August 2017}}<ref name="Gaboury">{{Cite journal|last=Gaboury|first=J.|date=2018-03-01|title=The random-access image: Memory and the history of the computer screen|journal=Grey Room|volume=70|url=https://escholarship.org/uc/item/0b3873pn|issue=70|pages=24–53|doi=10.1162/GREY_a_00233|s2cid=57565564|issn=1526-3819|hdl=21.11116/0000-0001-FA73-4|hdl-access=free}}</ref> In the late 1940s, the [[Manchester Baby]] computer used a [[Williams tube]], later known as the Williams–Kilburn tube, to store 1024 bits in [[cathode-ray tube|cathode-ray tube (CRT)]] memory, the contents of which could be displayed on a second CRT.<ref>{{Cite journal|last1=Williams|first1=F. C.|last2=Kilburn|first2=T.|date=March 1949|title=A storage system for use with binary-digital computing machines|url=https://ieeexplore.ieee.org/document/5241129|archive-url=https://web.archive.org/web/20190426011059/https://ieeexplore.ieee.org/document/5241129|url-status=dead|archive-date=April 26, 2019|journal=Proceedings of the IEE - Part III: Radio and Communication Engineering|volume=96|issue=40|pages=81–|doi=10.1049/pi-3.1949.0018}}</ref><ref>{{Cite web|url=http://curation.cs.manchester.ac.uk/digital60/www.digital60.org/birth/manchestercomputers/mark1/documents/report1947cover.html|title=Kilburn 1947 Report Cover Notes (Digital 60)|website=curation.cs.manchester.ac.uk|access-date=2019-04-26}}</ref> Other research labs were exploring these techniques, with [[MIT Lincoln Laboratory]] achieving a 4096 display in 1950.<ref name="Gaboury" />

A color scanned display was implemented in the late 1960s, called the [[Brookhaven National Laboratory|Brookhaven]] RAster Display (BRAD), which used a [[drum memory]] and a television monitor.<ref>{{citation |author1=D. Ophir |author2=S. Rankowitz |author3=B. J. Shepherd |author4=R. J. Spinrad |title=BRAD: The Brookhaven Raster Display |work=Communications of the ACM |volume=11 |number=6 |date=June 1968 |pages=415–416 |doi=10.1145/363347.363385|s2cid=11160780 |doi-access=free }}</ref> In 1969, A. Michael Noll of [[Bell Telephone Laboratories, Inc.]] implemented a scanned display with a frame buffer, using [[magnetic-core memory]].<ref>{{cite journal |last=Noll |first=A. Michael |title=Scanned-Display Computer Graphics |journal=Communications of the ACM |volume=14 |number=3 |date=March 1971 |pages=145–150 |doi=10.1145/362566.362567|s2cid=2210619 |doi-access=free }}</ref> A year or so later, the Bell Labs system was expanded to display an image with a color depth of three bits on a standard color TV monitor. The vector graphics used in the computer had to be converted to the scanned format of a TV display.

In the early 1970s, the development of [[MOS memory]] ([[metal–oxide–semiconductor]] memory) [[Integrated circuit|integrated-circuit]] chips, particularly [[large-scale integration|high-density]] [[DRAM]] (dynamic [[random-access memory]]) chips with at least 1{{nbsp}}[[kibibit|kb]] memory, made it practical to create, for the first time, a [[digital memory]] system with framebuffers capable of holding a standard video image.<ref name="Shoup_SuperPaint"/><ref>{{cite conference |last1=Goldwasser |first1=S.M. |title=Computer Architecture For Interactive Display Of Segmented Imagery |conference=Computer Architectures for Spatially Distributed Data |date=June 1983 |publisher=[[Springer Science & Business Media]] |isbn=9783642821509 |pages=75–94 (81) |url=https://books.google.com/books?id=8MuoCAAAQBAJ&pg=PA81}}</ref> This led to the development of the [[SuperPaint]] system by [[Richard Shoup (programmer)|Richard Shoup]] at [[Xerox PARC]] in 1972.<ref name="Shoup_SuperPaint">{{cite web |url=https://ohiostate.pressbooks.pub/app/uploads/sites/45/2017/09/Annals_final.pdf |archive-url=https://web.archive.org/web/20040612215245/http://accad.osu.edu/~waynec/history/PDFs/Annals_final.pdf |archive-date=2004-06-12 |title=SuperPaint: An Early Frame Buffer Graphics System |author=Richard Shoup |publisher=IEEE |work=Annals of the History of Computing |year=2001 |url-status=dead }}</ref> Shoup was able to use the SuperPaint framebuffer to create an early digital video-capture system. By synchronizing the output signal to the input signal, he was able to overwrite each pixel of data as it shifted in. Shoup also experimented with modifying the output signal using color tables. These color tables allowed the SuperPaint system to produce a wide variety of colors outside the range of the limited 8-bit data it contained. This scheme would later become commonplace in computer framebuffers.

In 1974, [[Evans & Sutherland]] released the first commercial framebuffer, the Picture System,<ref>{{citation |title=Picture System |url=http://s3data.computerhistory.org/brochures/evanssutherland.3d.1974.102646288.pdf |publisher=Evans & Sutherland |access-date=2017-12-31}}</ref> costing about $15,000. It was capable of producing resolutions of up to 512 by 512 pixels in 8-bit [[grayscale]], and became a boon for graphics researchers who did not have the resources to build their own framebuffer. The [[New York Institute of Technology]] would later create the first 24-bit color system using three of the Evans & Sutherland framebuffers.<ref name="NYIT-History">{{cite web |url=https://www.cs.cmu.edu/~ph/nyit/masson/nyit.html |title=History of the New York Institute of Technology Graphics Lab |access-date=2007-08-31}}</ref> Each framebuffer was connected to an [[RGB]] color output (one for red, one for green and one for blue), with a Digital Equipment Corporation PDP-11/04 [[minicomputer]] controlling the three devices as one.

In 1975, the UK company [[Quantel]] produced the first commercial full-color broadcast framebuffer, the Quantel DFS 3000. It was first used in TV coverage of the [[1976 Montreal Olympics]] to generate a [[picture-in-picture]] inset of the Olympic flaming torch while the rest of the picture featured the runner entering the stadium.

The rapid improvement of integrated-circuit technology made it possible for many of the home computers of the late 1970s to contain low-color-depth framebuffers. Today, nearly all computers with graphical capabilities utilize a framebuffer for generating the video signal. [[Amiga]] computers, created in the 1980s, featured special design attention to graphics performance and included a unique [[Hold-And-Modify]] framebuffer capable of displaying 4096 colors.

Framebuffers also became popular in high-end workstations and [[arcade system board]]s throughout the 1980s. [[Silicon Graphics|SGI]], [[Sun Microsystems]], [[Hewlett-Packard|HP]], [[Digital Equipment Corporation|DEC]] and [[IBM]] all released framebuffers for their workstation computers in this period.
These framebuffers were usually of a much higher quality than could be found in most home computers, and were regularly used in television, printing, computer modeling and 3D graphics. Framebuffers were also used by [[Sega]] for its high-end [[List of Sega arcade system boards|arcade boards]], which were also of a higher quality than on home computers.

== Display modes ==
[[Image:Sun sbus cgsix framebuffer2.jpg|thumb|A Sun cgsix framebuffer]]
Framebuffers used in personal and home computing often had sets of defined ''modes'' under which the framebuffer could operate. These modes reconfigure the hardware to output different resolutions, color depths, memory layouts and [[refresh rate]] timings.

In the world of [[Unix]] machines and operating systems, such conveniences were usually eschewed in favor of directly manipulating the hardware settings. This manipulation was far more flexible in that any resolution, color depth and refresh rate was attainable – limited only by the memory available to the framebuffer. An unfortunate side-effect of this method was that the [[display device]] could be driven beyond its capabilities. In some cases, this resulted in hardware damage to the display.<ref>{{cite web |url=http://tldp.org/HOWTO/XFree86-Video-Timings-HOWTO/overd.html |title=XFree86 Video Timings HOWTO: Overdriving Your Monitor}}</ref> More commonly, it simply produced garbled and unusable output.

Modern CRT monitors fix this problem through the introduction of protection circuitry. When the display mode is changed, the monitor attempts to obtain a signal lock on the new refresh frequency. If the monitor is unable to obtain a signal lock, or if the signal is outside the range of its design limitations, the monitor will ignore the framebuffer signal and possibly present the user with an error message.

LCD monitors tend to contain similar protection circuitry, but for different reasons. Since the LCD must digitally sample the display signal (thereby emulating an electron beam), any signal that is out of range cannot be physically displayed on the monitor.

== Color palette ==
Framebuffers have traditionally supported a wide variety of color modes. Due to the expense of memory, most early framebuffers used 1-bit (2 colors per pixel), 2-bit (4 colors), 4-bit (16 colors) or 8-bit (256 colors) color depths. The problem with such small color depths is that a full range of colors cannot be produced. The solution to this problem was [[indexed color]], which adds a [[lookup table]] to the framebuffer. Each value stored in framebuffer memory acts as a color index. The lookup table serves as a palette containing a limited number of distinct colors, while the framebuffer itself holds only the index values.

Here is a typical indexed 256-color image and its palette (shown as a rectangle of swatches):
:{| style="border-style: none" border="0" cellpadding="0"
|-
|| [[File:Adaptative 8bits palette sample image.png]] || || [[File:Adaptative 8bits palette.png]]
|}

In some designs it was also possible to write data to the lookup table (or switch between existing palettes) on the fly, making it possible to divide the picture into horizontal bars, each with its own palette, and thus render an image with a far wider effective palette. For example, in an outdoor photograph, the picture could be divided into four bars: the top one with emphasis on sky tones, the next with foliage tones, the next with skin and clothing tones, and the bottom one with ground colors. This required each palette to have overlapping colors but, done carefully, allowed great flexibility.
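The indexed-color arrangement described above can be sketched in a few lines of C. The buffer and palette sizes here are arbitrary examples, and the lookup is written as ordinary code purely to illustrate what the display hardware conceptually does for every pixel during scan-out:

<syntaxhighlight lang="c">
#include <stdint.h>

/* One palette (lookup table) entry: the full-precision color an index selects. */
struct rgb { uint8_t r, g, b; };

enum { WIDTH = 320, HEIGHT = 200, PALETTE_SIZE = 256 };

static struct rgb palette[PALETTE_SIZE];   /* the color lookup table */
static uint8_t indices[WIDTH * HEIGHT];    /* framebuffer: one palette index per pixel */

/* What the scan-out hardware conceptually does for each pixel:
   fetch the stored index and replace it with the palette color. */
static struct rgb pixel_color(unsigned x, unsigned y)
{
    return palette[indices[y * WIDTH + x]];
}

int main(void)
{
    palette[1] = (struct rgb){255, 0, 0};  /* index 1 means "red" in this example palette */
    indices[0] = 1;                        /* the top-left pixel refers to palette entry 1 */
    return pixel_color(0, 0).r == 255 ? 0 : 1;
}
</syntaxhighlight>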
== Memory access ==
While framebuffers are commonly accessed via a [[Memory-mapped I/O|memory mapping]] directly to the CPU memory space, this is not the only method by which they may be accessed. Framebuffers have varied widely in the methods used to access memory. Some of the most common are:
* Mapping the entire framebuffer to a given memory range.
* Port commands to set each pixel, range of pixels or palette entry.
* Mapping a memory range smaller than the framebuffer memory, then [[bank switching]] as necessary.
The framebuffer organization may be [[packed pixel]] or [[Planar (computer graphics)|planar]]. The framebuffer may be [[all points addressable]] or have restrictions on how it can be updated.

== RAM on the video card ==
{{see also|Video memory}}
Video cards always have a certain amount of RAM. A small portion of this RAM is where the bitmap of image data is "buffered" for display. The term ''frame buffer'' is thus often used interchangeably to refer to this RAM.

The CPU sends image updates to the video card. The video processor on the card forms a picture of the screen image and stores it in the frame buffer as a large bitmap in RAM. The bitmap in RAM is used by the card to continually refresh the screen image.<ref>{{cite web|url=http://karbosguide.com/hardware/module7b1.htm|title=An illustrated Guide to the Video Cards|work=karbosguide.com}}</ref>

== Virtual framebuffers ==
Many systems attempt to emulate the function of a framebuffer device, often for reasons of compatibility. The two most common [[Virtualization|virtual]] framebuffers are the [[Linux framebuffer]] device (fbdev) and the X Virtual Framebuffer ([[Xvfb]]). Xvfb was added to the [[X Window System]] distribution to provide a method for running X without a graphical framebuffer. The Linux framebuffer device was developed to abstract the physical method for accessing the underlying framebuffer into a guaranteed memory map that is easy for programs to access. This increases portability, as programs are not required to deal with systems that have disjointed memory maps or require [[bank switching]].

== Page flipping ==
A frame buffer may be designed with enough memory to store two frames' worth of video data. In a technique known generally as [[double buffering]] or more specifically as [[page flipping]], the framebuffer uses half of its memory to display the current frame. While that memory is being displayed, the other half of memory is filled with data for the next frame. Once the secondary buffer is filled, the framebuffer is instructed to display the secondary buffer instead. The primary buffer becomes the secondary buffer, and the secondary buffer becomes the primary. This switch is often done during the [[vertical blanking interval]] to avoid [[screen tearing]], where half the old frame and half the new frame are shown together. Page flipping has become a standard technique used by PC [[game programmer]]s.

== Graphics accelerators ==
{{See also|Video card|Graphics processing unit}}
As the demand for better graphics increased, hardware manufacturers created a way to decrease the amount of [[CPU]] time required to fill the framebuffer. This is commonly called ''graphics acceleration''. Common graphics drawing commands (many of them geometric) are sent to the graphics accelerator in their raw form. The accelerator then [[Rasterisation|rasterizes]] the results of the command to the framebuffer. This method frees the CPU to do other work.
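To make concrete what "filling the framebuffer" from the CPU involves, the following sketch maps a framebuffer into memory and clears it, combining the memory-mapped access model from the ''Memory access'' section with the Linux framebuffer device (fbdev) mentioned under ''Virtual framebuffers''. It is only a minimal sketch: it assumes the device is exposed as <code>/dev/fb0</code> with a packed-pixel layout, and it omits error reporting and pixel-format handling:

<syntaxhighlight lang="c">
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Open the first framebuffer device (assumed to be /dev/fb0). */
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    /* Ask the driver how much framebuffer memory there is and how it is laid out. */
    struct fb_fix_screeninfo finfo;
    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0 ||
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        close(fd);
        return 1;
    }

    /* Map the whole framebuffer into this process's address space. */
    uint8_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        close(fd);
        return 1;
    }

    /* Every byte written here changes what is scanned out to the display.
       Clearing the visible lines to zero blanks the screen. */
    for (uint32_t y = 0; y < vinfo.yres; y++)
        memset(fb + (size_t)y * finfo.line_length, 0, finfo.line_length);

    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}
</syntaxhighlight>

A graphics accelerator removes the need for the CPU to run pixel-filling loops like the one above: the application instead submits drawing commands, and the hardware writes the rasterized result into the same framebuffer memory.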
Early accelerators focused on improving the performance of 2D [[GUI]] systems. While retaining these 2D capabilities, most modern accelerators focus on producing 3D imagery in real time. A common design uses a [[graphics library]] such as [[OpenGL]] or [[Direct3D]], which interfaces with the graphics driver to translate received commands to instructions for the accelerator's [[graphics processing unit]] (GPU). The GPU uses those instructions to compute the rasterized results, which are then [[bit blit]]ted to the framebuffer. The framebuffer's signal is then produced in combination with built-in video overlay devices (usually used to produce the mouse cursor without modifying the framebuffer's data) and any final special effects that are produced by modifying the output signal. An example of such final special effects was the [[spatial anti-aliasing]] technique used by the [[3dfx Voodoo]] cards. These cards added a slight blur to the output signal that made aliasing of the rasterized graphics much less obvious.

At one time there were many manufacturers of graphics accelerators, including: [[3dfx Interactive]]; [[ATI Technologies|ATI]]; [[Hercules Computer Technology|Hercules]]; [[Trident Microsystems|Trident]]; [[Nvidia]]; [[Radius (hardware company)|Radius]]; [[S3 Graphics]]; [[Silicon Integrated Systems|SiS]] and [[Silicon Graphics]]. {{as of|2015}} the market for graphics accelerators for x86-based systems is dominated by Nvidia (which acquired 3dfx in 2002), [[AMD]] (which acquired ATI in 2006), and [[Intel]].

== Comparisons ==
With a framebuffer, the electron beam (if the display technology uses one) is commanded to perform a [[raster scan]], the way a [[television]] renders a broadcast signal. The color information for each point thus displayed on the screen is pulled directly from the framebuffer during the scan, creating a set of discrete picture elements, i.e. pixels.

Framebuffers differ significantly from the [[vector display]]s that were common prior to the advent of raster graphics (and, consequently, the concept of a framebuffer). With a vector display, only the [[vertex (geometry)|vertices]] of the graphics primitives are stored. The [[electron beam]] of the output display is then commanded to move from vertex to vertex, tracing a line across the area between these points.

Likewise, framebuffers differ from the technology used in early [[text mode]] displays, where a buffer holds codes for characters, not individual pixels. The video display device performs the same raster scan as with a framebuffer but generates the pixels of each character in the buffer as it directs the beam.
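The contrast between a character buffer and a pixel framebuffer can be sketched as two data layouts. The cell format below follows the common scheme of one character code plus one attribute byte per cell; the exact dimensions are illustrative only:

<syntaxhighlight lang="c">
#include <stdint.h>

/* Text mode: the buffer stores character codes (and, on many systems, an
   attribute byte), not pixels. The display hardware generates each glyph's
   pixels on the fly while performing the raster scan. */
struct text_cell {
    uint8_t character;   /* code in the hardware character set */
    uint8_t attribute;   /* e.g. foreground/background colors */
};
static struct text_cell text_screen[25][80];   /* 80x25 cells: 4,000 bytes */

/* Framebuffer: the buffer stores a color value for every individual pixel. */
static uint32_t pixels[480][640];              /* 640x480 true-color pixels: ~1.2 MB */

int main(void)
{
    text_screen[0][0] = (struct text_cell){ 'A', 0x07 };  /* one character cell */
    pixels[0][0] = 0x00FFFFFFu;                           /* one white pixel */
    return 0;
}
</syntaxhighlight>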
== See also ==
*[[Bit plane]]
*[[Scanline rendering]]
*[[Swap chain]]
*[[Tile-based video game]]
*[[Tiled rendering]]
*[[Tektronix 4050]] used a [[storage tube]] to eliminate the need for framebuffer memory

== References ==
{{Reflist}}
{{Refbegin}}
* {{cite web |url=http://accad.osu.edu/~waynec/history/PDFs/14_paint.pdf |title=Digital Paint Systems: Historical Overview |author=Alvy Ray Smith |work=Microsoft Tech Memo 14 |date=May 30, 1997 |url-status=dead |archive-url=https://web.archive.org/web/20120207124911/https://design.osu.edu/carlson/history/PDFs/14_paint.pdf |archive-date=February 7, 2012 }}
* {{cite web |url=http://accad.osu.edu/~waynec/history/lesson15.html |title=Hardware advancements |work=A Critical History of Computer Graphics and Animation |publisher=The Ohio State University |author=Wayne Carlson |year=2003 |url-status=dead |archive-url=https://web.archive.org/web/20120314015613/http://design.osu.edu/carlson/history/lesson15.html |archive-date=2012-03-14 }}
* {{cite web |url=http://accad.osu.edu/~waynec/history/PDFs/paint.pdf |title=Digital Paint Systems: An Anecdotal and Historical Overview |author=Alvy Ray Smith |publisher=IEEE Annals of the History of Computing |year=2001 |url-status=dead |archive-url=https://web.archive.org/web/20120205050230/https://design.osu.edu/carlson/history/PDFs/paint.pdf |archive-date=2012-02-05 }}
{{Refend}}

== External links ==
* [https://web.archive.org/web/20060211051810/http://www.acid.org/radio/arts-ep05-transcript.txt Interview with NYIT researcher discussing the 24-bit system]
* [http://www.sunhelp.org/faq/FrameBufferHistory.html History of Sun Microsystems' Framebuffers]

{{Graphics Processing Unit}}

[[Category:Computer graphics]]
[[Category:Computer memory]]
[[Category:Image processing]]
[[Category:User interfaces]]