== Graphics processor generations ==
{{Timeline of release years | title = Generations timeline | range1 = 1999–2025 | range1_color = #76B900 #619700 | 1999 = [[GeForce 256]] | 2000 = [[GeForce 2 series]] | 2001 = [[GeForce 3 series]] | 2002 = [[GeForce 4 series]] | 2003 = [[GeForce FX series]] | 2004 = [[GeForce 6 series]] | 2005 = [[GeForce 7 series]] | 2006 = [[GeForce 8 series]] | 2008a = [[GeForce 9 series]] | 2008b = [[GeForce 200 series]] | 2009a = [[GeForce 100 series]] | 2009b = [[GeForce 300 series]] | 2010a = [[GeForce 400 series]] | 2010b = [[GeForce 500 series]] | 2012 = [[GeForce 600 series]] | 2013 = [[GeForce 700 series]] | 2014a = [[GeForce 800M series]] | 2014b = [[GeForce 900 series]] | 2016 = [[GeForce 10 series]] | 2018 = [[GeForce 20 series]] | 2019 = [[GeForce 16 series]] | 2020 = [[GeForce 30 series]] | 2022 = [[GeForce 40 series]] | 2025 = [[GeForce 50 series]]}}

=== GeForce 256 ===
{{Main|GeForce 256}}{{Expand section|date=July 2024}}

=== GeForce 2 series ===
{{Main|GeForce 2 series}}
Launched in March 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a design with two texture processors per pipeline (4×2), doubling the texture fillrate per clock compared to the GeForce 256; the calculation below illustrates the effect. Nvidia later released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 at a fraction of the cost. The MX was a compelling value in the low- and mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series.
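The per-clock doubling follows directly from the pipeline layout, and multiplying by the core clock gives the absolute fillrate. A minimal worked example (the 200 MHz core clock of the GeForce2 GTS used here is an assumed figure, not stated in this article):

<math display=block>\text{texels per clock} = 4\ \text{pipelines} \times 2\ \text{TMUs per pipeline} = 8</math>
<math display=block>\text{texture fillrate} = 8 \times 200\ \text{MHz} = 1600\ \text{Mtexels/s}</math>

This is twice the four texels per clock of the GeForce 256's 4×1 design.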
=== GeForce 3 series ===
{{Main|GeForce 3 series}}
Launched in February 2001, the GeForce3 (NV20) introduced programmable [[vertex and pixel shaders]] to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts, although it never hit the midrange price point. The ''NV2A'' developed for the [[Microsoft]] [[Xbox (console)|Xbox]] game console is a derivative of the GeForce 3.

=== GeForce 4 series ===
{{Main|GeForce 4 series}}
Launched in February 2002, the then-high-end GeForce4 Ti (NV25) was mostly a refinement of the GeForce3. The biggest advancements included enhanced anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction that increased clock speeds. Another member of the GeForce 4 family, the budget GeForce4 MX, was based on the GeForce2 with the addition of some features from the GeForce4 Ti. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the [[Accelerated Graphics Port|AGP]] 4× interface, but a few began the transition to AGP 8×.

=== GeForce FX series ===
{{Main|GeForce FX series}}
Launched in 2003, the GeForce FX (NV30) was a major architectural departure from its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models such as the GeForce FX 5800 Ultra suffered from weak [[floating point]] shader performance and excessive heat, which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of the GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering".

=== GeForce 6 series ===
{{Main|GeForce 6 series}}
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family while correcting the weak floating point shader performance of its predecessor. It also implemented [[high-dynamic-range imaging]] and introduced [[Scalable Link Interface|SLI]] (Scalable Link Interface) and [[Nvidia PureVideo|PureVideo]] capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding, and fully accelerated video post-processing).

=== GeForce 7 series ===
{{Main|GeForce 7 series}}
The seventh-generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series to support the [[Accelerated Graphics Port|AGP]] bus. The design was a refined version of the GeForce 6, the major improvements being a widened pipeline and an increase in clock speed. The GeForce 7 also offers new transparency [[supersampling]] and transparency multisampling anti-aliasing modes (TSAA and TMAA), which were later enabled for the GeForce 6 series as well. The GeForce 7950 GT featured the highest-performance GPU with an AGP interface in the Nvidia line. This era began the transition to the PCI Express interface. A 128-bit, eight-[[render output unit]] (ROP) variant of the 7800 GTX, called the [[RSX Reality Synthesizer]], is used as the main GPU in the Sony [[PlayStation 3]].

=== GeForce 8 series ===
{{Main|GeForce 8 series}}
Released on November 8, 2006, the eighth-generation GeForce (originally called G80) was the first GPU to fully support [[Direct3D]] 10. Manufactured on a 90 nm process and built around the new [[Tesla (microarchitecture)|Tesla microarchitecture]], it implemented the [[unified shader model]]. Initially only the 8800GTX was launched; the GTS variant followed months into the product line's life, and it took nearly six months for mid-range and OEM/mainstream cards to be integrated into the 8 series. The die shrink to [[65 nanometer|65 nm]] and a revision of the G80 design, codenamed G92, were implemented into the 8 series with the 8800GS, 8800GT, and 8800GTS-512, first released on October 29, 2007, almost a full year after the initial G80 release.

=== GeForce 9 series and 100 series ===
{{Main|GeForce 9 series|GeForce 100 series}}
The first product was released on February 21, 2008.<ref>{{cite magazine | url = https://www.forbes.com/home/technology/forbes/2008/0107/092.html | magazine = Forbes.com | title = Shoot to Kill | author = Brian Caulfield | access-date = December 26, 2007 | date = January 7, 2008 | url-status = dead | archive-url = https://web.archive.org/web/20071224085947/http://www.forbes.com/home/technology/forbes/2008/0107/092.html | archive-date = December 24, 2007 | df = mdy-all }}</ref> Arriving less than four months after the initial G92 release, all 9-series designs are simply revisions of existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual-PCB configuration while still requiring only a single PCI Express ×16 slot. The 9800GX2 utilizes two separate 256-bit memory buses, one for each GPU and its respective 512 MB of memory, for a total of 1 GB of memory on the card, although the SLI configuration of the chips necessitates mirroring the frame buffer between the two GPUs, giving the card the effective usable capacity of a single 256-bit/512 MB configuration.
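A short calculation makes the effect of the mirroring concrete (the 2.0 GT/s effective GDDR3 data rate used here is the commonly cited figure for the card, assumed rather than stated in this article):

<math display=block>\text{bandwidth per GPU} = \frac{256\ \text{bit} \times 2.0\ \text{GT/s}}{8\ \text{bit per byte}} = 64\ \text{GB/s}</math>
<math display=block>\text{usable capacity} = \frac{2 \times 512\ \text{MB}}{2} = 512\ \text{MB}</math>

because every buffer must be duplicated in each GPU's local memory.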
The later 9800GTX features a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory.<ref>{{cite web | url = http://www.nvidia.com/object/geforce_9800gtx.html | title = NVIDIA GeForce 9800 GTX | access-date = May 31, 2008 | url-status = live | archive-url = https://web.archive.org/web/20080529170120/http://www.nvidia.com/object/geforce_9800gtx.html | archive-date = May 29, 2008 | df = mdy-all }}</ref> Prior to the release, little concrete information was known, except that officials claimed the next-generation products had close to 1 TFLOPS of processing power with the GPU cores still manufactured on the 65 nm process, and there were reports of Nvidia downplaying the significance of [[Direct3D]] 10.1.<ref>[http://www.dailytech.com/Crytek+Microsoft+Nvidia+Downplay+DirectX+101/article9656.htm DailyTech report] {{webarchive|url=https://web.archive.org/web/20080705162712/http://www.dailytech.com/Crytek+Microsoft+Nvidia+Downplay+DirectX+101/article9656.htm |date=July 5, 2008 }}: Crytek, Microsoft and Nvidia downplay Direct3D 10.1, retrieved December 4, 2007</ref> In March 2009, several sources reported that Nvidia had quietly launched a new series of GeForce products, namely the GeForce 100 series, which consists of rebadged 9 series parts.<ref>{{cite web | url = http://www.bit-tech.net/news/hardware/2009/03/23/nvidia-quietly-geforce-100-series/1 | title = Nvidia quietly launches GeForce 100-series GPUs | date = April 6, 2009 | url-status = live | archive-url = https://web.archive.org/web/20090326054142/http://www.bit-tech.net/news/hardware/2009/03/23/nvidia-quietly-geforce-100-series/1 | archive-date = March 26, 2009 | df = mdy-all }}</ref><ref>{{cite web | url = http://www.hardwaresecrets.com/news/3728 | title = nVidia Launches GeForce 100 Series Cards | date = March 10, 2009 | url-status = live | archive-url = https://web.archive.org/web/20110711151956/http://www.hardwaresecrets.com/news/3728 | archive-date = July 11, 2011 | df = mdy-all }}</ref><ref>{{cite web | url = http://www.wiyule.com/2009/03/24/nvidia-quietly-launches-geforce-100-series-gpus/ | title = Nvidia quietly launches GeForce 100-series GPUs | date = March 24, 2009 | url-status = live | archive-url = https://web.archive.org/web/20090521223059/http://www.wiyule.com/2009/03/24/nvidia-quietly-launches-geforce-100-series-gpus/ | archive-date = May 21, 2009 | df = mdy-all }}</ref> GeForce 100 series products were not available for individual purchase.<ref name="Nvdia Geforce" />

=== GeForce 200 series and 300 series ===
{{Main|GeForce 200 series|GeForce 300 series}}
Based on the GT200 graphics processor, which consists of 1.4 billion transistors and is codenamed Tesla, the 200 series was launched on June 16, 2008.<ref>{{cite web | url = http://benchmarkreviews.com/index.php?option=com_content&task=view&id=179&Itemid=1 | title = NVIDIA GeForce GTX 280 Video Card Review | date = June 16, 2008 | publisher = Benchmark Reviews | access-date = June 16, 2008 | url-status = dead | archive-url = https://web.archive.org/web/20080617212421/http://benchmarkreviews.com/index.php?option=com_content&task=view&id=179&Itemid=1 | archive-date = June 17, 2008 | df = mdy-all }}</ref> The 200 series took the card-naming scheme in a new direction, replacing the series number (such as 8800 for 8-series cards) with a GTX or GTS prefix (designations that had previously gone at the end of card names, denoting their "rank" among similar models), followed by model numbers such as 260 and 280.
The series features the new GT200 core on a [[65 nanometer|65 nm]] die.<ref>{{cite web | url = http://www.fudzilla.com/index.php?option=com_content&task=view&id=7364&Itemid=1 | publisher = Fudzilla.com | title = GeForce GTX 280 to launch on June 18th | access-date = May 18, 2008 | archive-url = https://web.archive.org/web/20080517141551/http://www.fudzilla.com/index.php?option=com_content&task=view&id=7364&Itemid=1 <!-- Bot retrieved archive --> | archive-date = May 17, 2008}}</ref> The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280.<ref>{{cite web | url = http://www.vr-zone.com/articles/Detailed_Geforce_GTX_280_Pictures/5826.html | title = Detailed GeForce GTX 280 Pictures | date = June 3, 2008 | publisher = VR-Zone | access-date = June 3, 2008 | url-status = dead | archive-url = https://web.archive.org/web/20080604011627/http://www.vr-zone.com/articles/Detailed_Geforce_GTX_280_Pictures/5826.html | archive-date = June 4, 2008 | df = mdy-all }}</ref> The GeForce 310, released on November 27, 2009, is a rebrand of the GeForce 210.<ref>{{cite web | url = http://www.hexus.net/content/item.php?item=21369 | title = News :: NVIDIA kicks off GeForce 300-series range with GeForce 310 : Page - 1/1 | publisher = Hexus.net | date = 2009-11-27 | access-date = 2013-06-30 | url-status = live | archive-url = https://web.archive.org/web/20110928053734/http://www.hexus.net/content/item.php?item=21369 | archive-date = September 28, 2011 | df = mdy-all }}</ref><ref>{{cite web | url = http://www.nvidia.com/object/product_geforce_310_us.html | title = Every PC needs good graphics | publisher = Nvidia | access-date = 2013-06-30 | url-status = live | archive-url = https://web.archive.org/web/20120213060347/http://www.nvidia.com/object/product_geforce_310_us.html | archive-date = February 13, 2012 | df = mdy-all }}</ref> The 300 series cards are rebranded DirectX 10.1-compatible GPUs from the 200 series and were not available for individual purchase.

=== GeForce 400 series and 500 series ===
{{Main|GeForce 400 series|GeForce 500 series}}
On April 7, 2010, Nvidia released<ref>{{cite web | url = http://www.anandtech.com/show/3642/nvidias-geforce-gtx-400-series-shows-up-early | title = Update: NVIDIA's GeForce GTX 400 Series Shows Up Early - AnandTech :: Your Source for Hardware Analysis and News | publisher = Anandtech.com | access-date = 2013-06-30 | url-status = live | archive-url = https://web.archive.org/web/20130523010320/http://www.anandtech.com/show/3642/nvidias-geforce-gtx-400-series-shows-up-early | archive-date = May 23, 2013 | df = mdy-all }}</ref> the GeForce GTX 470 and GTX 480, the first cards based on the new [[Fermi (microarchitecture)|Fermi architecture]], codenamed GF100; they were the first Nvidia GPUs to utilize 1 GB or more of [[GDDR5]] memory. The GTX 470 and GTX 480 were heavily criticized for high power use, high temperatures, and very loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card at its introduction. In November 2010, Nvidia released a new flagship GPU based on an enhanced GF100 architecture (GF110), called the GTX 580. It featured higher performance and lower power consumption, heat, and noise than the preceding GTX 480, and received much better reviews. Nvidia later also released the GTX 590, which packs two GF110 GPUs onto a single card.
=== GeForce 600 series, 700 series and 800M series ===
{{Main|GeForce 600 series|GeForce 700 series|GeForce 800M series}}
[[File:ASUS GTX-650 Ti TOP Cu-II PCI Express 3.0 x16 graphics card.jpg|thumb|[[Asus]] Nvidia GeForce GTX 650 Ti, a PCI Express 3.0 ×16 graphics card]]
In September 2010, Nvidia announced that the successor to the [[Fermi (microarchitecture)|Fermi microarchitecture]] would be the [[Kepler (microarchitecture)|Kepler microarchitecture]], manufactured with the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply its top-end GK110 cores for use in [[Oak Ridge National Laboratory]]'s [[Titan (supercomputer)|"Titan" supercomputer]], leading to a shortage of GK110 cores. After AMD launched its own annual refresh in early 2012, the Radeon HD 7000 series, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for the mid-range segment of the lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured only a slightly cut-down GK104 core and came very close to the GTX 680 in performance. In February 2013, Nvidia released the GTX Titan, a flagship card based on the GK110 core. With the GTX Titan, Nvidia also released GPU Boost 2.0, which allowed the GPU clock speed to increase until a user-set temperature limit was reached, without exceeding a user-specified maximum fan speed. The final GeForce 600 series release was the GTX 650 Ti BOOST, based on the GK106 core, in response to AMD's Radeon HD 7790 release.

At the end of May 2013, Nvidia announced the 700 series, which was still based on the Kepler architecture; however, it featured a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down Titan that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design but did not have the unlocked double-precision cores and was equipped with 3 GB of memory. At the same time, Nvidia announced [[Nvidia Shadowplay|ShadowPlay]], a screen-capture solution that used an integrated H.264 encoder built into the Kepler architecture that Nvidia had not revealed previously. It could be used to record gameplay without a capture card, with a negligible performance decrease compared to software recording solutions, and was available even on previous-generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and was not released until the end of October 2013. A week after the release of the GTX 780, Nvidia announced the GTX 770, a rebrand of the GTX 680. It was followed shortly after by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia announced G-Sync, another previously unmentioned feature of the Kepler architecture, which allowed the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (released in 2014) to combat tearing and judder. In October, however, AMD released the R9 290X, which came in at $100 less than the GTX 780.
In response, Nvidia slashed the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a fully enabled 2,880-core GK110 GPU even more powerful than the GTX Titan, along with enhancements to the power delivery system that improved overclocking, and managed to pull ahead of AMD's new release. The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.

=== GeForce 900 series ===
{{Main|GeForce 900 series}}
In March 2013, Nvidia announced that the successor to Kepler would be the [[Maxwell (microarchitecture)|Maxwell microarchitecture]]. Maxwell debuted in early 2014 with the GM10x chips, which emphasized the architecture's power-efficiency improvements in [[Original equipment manufacturer|OEM]] and low-[[Thermal design power|TDP]] products such as the [[Desktop computer|desktop]] GTX 750/750 Ti and the mobile GTX 850M/860M. Later that year, Nvidia pushed the TDP higher with the GM20x chips for power users, skipping the 800 series for desktop entirely and releasing the 900 series of GPUs in September 2014. This was the last GeForce series to support analog video output through [[DVI-I]], although analog display adapters exist that can convert a digital [[DisplayPort]], [[HDMI]], or [[DVI-D]] signal.

=== {{Anchor|NVLINK|PASCAL}} GeForce 10 series ===
{{Main|GeForce 10 series}}
In March 2014, Nvidia announced that the successor to Maxwell would be the [[Pascal (microarchitecture)|Pascal microarchitecture]]. The first cards based on it, the GTX 1080 and GTX 1070, were announced on May 6, 2016, and released several weeks later, on May 27 and June 10, respectively. Architectural improvements include the following:<ref name="nvidia-blog-20140325">{{cite web | last = Gupta | first = Sumit | url = http://blogs.nvidia.com/blog/2014/03/25/gpu-roadmap-pascal/ | title = NVIDIA Updates GPU Roadmap; Announces Pascal | publisher = Blogs.nvidia.com | date = 2014-03-21 | access-date = 2014-03-25 | url-status = live | archive-url = http://archive.wikiwix.com/cache/20140325074350/http://blogs.nvidia.com/blog/2014/03/25/gpu-roadmap-pascal/ | archive-date = March 25, 2014 | df = mdy-all }}</ref><ref>{{cite web | url = http://devblogs.nvidia.com/parallelforall/ | title = Parallel Forall | publisher = Devblogs.nvidia.com | work = NVIDIA Developer Zone | access-date = 2014-03-25 | url-status = dead | archive-url = https://web.archive.org/web/20140326025738/http://devblogs.nvidia.com/parallelforall/ | archive-date = March 26, 2014 | df = mdy-all }}</ref>
* In Pascal, an SM (streaming multiprocessor) consists of 128 CUDA cores. Kepler packed 192, Fermi 32, and Tesla only 8 CUDA cores into an SM; the GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA cores, an instruction buffer, a warp scheduler, 2 texture mapping units, and 2 dispatch units.
* [[GDDR5 SDRAM#GDDR5X|GDDR5X]]{{snd}}New memory standard supporting 10 Gbit/s data rates and an updated memory controller. Only the Nvidia Titan X (and Titan Xp), GTX 1080, GTX 1080 Ti, and GTX 1060 (6 GB version) support GDDR5X. The GTX 1070 Ti, GTX 1070, GTX 1060 (3 GB version), GTX 1050 Ti, and GTX 1050 use GDDR5.<ref>{{cite web|url=http://www.geforce.com/hardware/10series|title=GEFORCE GTX 10 SERIES|website=www.geforce.com|access-date=April 24, 2018|url-status=live|archive-url=https://web.archive.org/web/20161128103547/http://www.geforce.com/hardware/10series|archive-date=November 28, 2016|df=mdy-all}}</ref>
* Unified memory{{snd}}A memory architecture where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine".
* [[NVLink]]{{snd}}A high-bandwidth bus between the CPU and GPU, and between multiple GPUs, allowing much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.<ref>{{cite web | url = https://devblogs.nvidia.com/parallelforall/inside-pascal/ | title = Inside Pascal: NVIDIA's Newest Computing Platform | date = 2016-04-05 | url-status = live | archive-url = https://web.archive.org/web/20170507110037/https://devblogs.nvidia.com/parallelforall/inside-pascal/ | archive-date = May 7, 2017 | df = mdy-all }}</ref><ref>{{cite web | url = http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/ | title = NVLink, Pascal and Stacked Memory: Feeding the Appetite for Big Data | date = 2014-03-25 | access-date = 2014-07-07 | author = Denis Foley | website = nvidia.com | url-status = live | archive-url = https://web.archive.org/web/20140720130522/http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/ | archive-date = July 20, 2014 | df = mdy-all }}</ref>
* 16-bit ([[Half-precision floating-point format|FP16]]) floating-point operations can be executed at twice the rate of 32-bit floating-point operations ("single precision"),<ref>{{cite web | title = NVIDIA's Next-Gen Pascal GPU Architecture to Provide 10X Speedup for Deep Learning Apps | url = http://blogs.nvidia.com/blog/2015/03/17/pascal/ | website = The Official NVIDIA Blog | access-date = 23 March 2015 | url-status = live | archive-url = https://web.archive.org/web/20150402135434/http://blogs.nvidia.com/blog/2015/03/17/pascal/ | archive-date = April 2, 2015 | df = mdy-all }}</ref> while 64-bit floating-point operations ("double precision") are executed at half the rate of 32-bit floating-point operations (versus Maxwell's 1/32 rate);<ref>{{cite news | last1 = Smith | first1 = Ryan | date = 2015-03-17 | title = The NVIDIA GeForce GTX Titan X Review | url = http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/2 | newspaper = [[AnandTech]] | page = 2 | access-date = 2016-04-22 | quote = ...puny native FP64 rate of just 1/32 | url-status = live | archive-url = https://web.archive.org/web/20160505122454/http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/2 | archive-date = May 5, 2016 | df = mdy-all }}</ref> a worked example follows this list.
* More advanced process node: TSMC 16 nm instead of the older TSMC [[32 nm process|28 nm]].
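These rate ratios translate directly into theoretical peak throughput, counting a fused multiply-add as two operations. A worked example (the 3,584 FP32 cores and 1.48 GHz boost clock of the Tesla P100 used here are assumed figures, not stated in this article):

<math display=block>\text{FP32 peak} = 3584 \times 1.48\ \text{GHz} \times 2 \approx 10.6\ \text{TFLOPS}</math>
<math display=block>\text{FP16 peak} = 2 \times 10.6 \approx 21.2\ \text{TFLOPS}, \qquad \text{FP64 peak} = \tfrac{1}{2} \times 10.6 \approx 5.3\ \text{TFLOPS}</math>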
=== GeForce 20 series and 16 series ===
{{Main|GeForce 20 series|GeForce 16 series}}
In August 2018, Nvidia announced the GeForce successor to Pascal. The new microarchitecture's name was revealed as "[[Turing (microarchitecture)|Turing]]" at the SIGGRAPH 2018 conference.<ref>{{Cite news|url=https://www.anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture|title=NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More|date=2018-08-13|work=Anandtech|access-date=2018-08-13|language=en|archive-date=April 24, 2020|archive-url=https://web.archive.org/web/20200424020736/https://www.anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture|url-status=live}}</ref> The microarchitecture is designed to accelerate real-time [[Ray tracing (graphics)|ray tracing]] and AI inferencing. It features new ray-tracing units (RT Cores), which dedicate hardware to ray tracing, and supports the [[DirectX Raytracing|DXR]] extension in Microsoft DirectX 12. Nvidia claims the new architecture is up to six times faster than the older Pascal architecture.<ref name=":0">{{Cite news|url=https://www.engadget.com/2018/08/14/nvidia-ray-tracing-turing-siggraph/|title=NVIDIA's Turing-powered GPUs are the first ever built for ray tracing|work=Engadget|access-date=2018-08-14|language=en-US|archive-date=August 14, 2018|archive-url=https://web.archive.org/web/20180814211503/https://www.engadget.com/2018/08/14/nvidia-ray-tracing-turing-siggraph/|url-status=live}}</ref><ref>{{Cite web|url=https://www.nvidia.com/en-us/geforce/20-series/|title=NVIDIA GeForce RTX 20 Series Graphics Cards|website=NVIDIA|language=en-us|access-date=2019-02-12|archive-date=August 3, 2017|archive-url=https://web.archive.org/web/20170803191115/http://www.nvidia.com/page/8800_features.html|url-status=live}}</ref> A new Tensor Core design, introduced with [[Volta (microarchitecture)|Volta]], provides AI deep-learning acceleration and enables DLSS ([[Deep Learning Super Sampling]]), a new form of anti-aliasing that uses AI to provide crisper imagery with less impact on performance.<ref>{{Cite web|url=https://www.legitreviews.com/nvidia-deep-learning-super-sampling-dlss-shown-to-press_207461|title=NVIDIA Deep Learning Super-Sampling (DLSS) Shown To Press|website=www.legitreviews.com|date=August 22, 2018|language=en|access-date=2018-09-14|archive-date=September 14, 2018|archive-url=https://web.archive.org/web/20180914012412/http://www.legitreviews.com/nvidia-deep-learning-super-sampling-dlss-shown-to-press_207461|url-status=live}}</ref> Turing also changes the integer execution unit, which can now execute in parallel with the floating-point data path. A new unified cache architecture that doubles bandwidth compared with previous generations was also announced.<ref name="auto">{{Cite web|url=https://www.pcper.com/news/General-Tech/NVIDIA-Officially-Announces-Turing-GPU-Architecture-SIGGRAPH-2018|title=NVIDIA Officially Announces Turing GPU Architecture at SIGGRAPH 2018|publisher=PC Perspective|website=www.pcper.com|date=August 13, 2018|language=en|access-date=2018-08-14|archive-date=August 14, 2018|archive-url=https://web.archive.org/web/20180814155025/https://www.pcper.com/news/General-Tech/NVIDIA-Officially-Announces-Turing-GPU-Architecture-SIGGRAPH-2018|url-status=live}}</ref> The first GPUs revealed were the Quadro RTX 8000, Quadro RTX 6000, and Quadro RTX 5000.
The high-end Quadro RTX 8000 features 4,608 CUDA cores and 576 Tensor cores with 48 GB of VRAM.<ref name=":0" /> Later, during its [[Gamescom]] press conference, Nvidia CEO Jensen Huang unveiled the new GeForce RTX series, with the RTX 2080 Ti, 2080, and 2070 using the Turing architecture. The first Turing cards were slated to ship to consumers on September 20, 2018.<ref>{{cite web|url=http://nvidianews.nvidia.com/news/10-years-in-the-making-nvidia-brings-real-time-ray-tracing-to-gamers-with-geforce-rtx|title=10 Years in the Making: NVIDIA Brings Real-Time Ray Tracing to Gamers with GeForce RTX|first=NVIDIA|last=Newsroom|website=NVIDIA Newsroom Newsroom|access-date=February 6, 2019|archive-date=December 12, 2018|archive-url=https://web.archive.org/web/20181212100231/https://nvidianews.nvidia.com/news/10-years-in-the-making-nvidia-brings-real-time-ray-tracing-to-gamers-with-geforce-rtx|url-status=live}}</ref> Nvidia announced the RTX 2060 on January 6, 2019, at CES 2019.<ref>{{cite web|url=http://nvidianews.nvidia.com/news/nvidia-geforce-rtx-2060-is-here-next-gen-gaming-takes-off|title=NVIDIA GeForce RTX 2060 Is Here: Next-Gen Gaming Takes Off|first=NVIDIA|last=Newsroom|website=NVIDIA Newsroom Newsroom|access-date=February 6, 2019|archive-date=January 19, 2019|archive-url=https://web.archive.org/web/20190119140336/https://nvidianews.nvidia.com/news/nvidia-geforce-rtx-2060-is-here-next-gen-gaming-takes-off|url-status=live}}</ref> On July 2, 2019, Nvidia announced the GeForce RTX Super line of cards, a 20 series refresh comprising higher-spec versions of the RTX 2060, 2070, and 2080; the original RTX 2070 and 2080 were discontinued.

In February 2019, Nvidia announced the [[GeForce 16 series]]. It is based on the same Turing architecture used in the GeForce 20 series, but with the Tensor ([[artificial intelligence|AI]]) and RT ([[Ray tracing (graphics)|ray tracing]]) cores disabled, providing more affordable graphics cards for gamers while still attaining higher performance than comparable cards of previous GeForce generations. Like the RTX Super refresh, Nvidia announced the GTX 1650 Super and 1660 Super cards on October 29, 2019, which replaced their non-Super counterparts. On June 28, 2022, Nvidia quietly released the GTX 1630, aimed at low-end gamers.

=== GeForce 30 series ===
{{Main|GeForce 30 series}}
At its GeForce Special Event, Nvidia officially announced that the successor to the GeForce 20 series would be the 30 series, built on the [[Ampere (microarchitecture)|Ampere]] microarchitecture.
The GeForce Special Event took place on September 1, 2020, and set September 17 as the official release date for the RTX 3080, September 24 for the RTX 3090, and October 29 for the RTX 3070.<ref>{{cite web | url=https://nvidianews.nvidia.com/news/nvidia-delivers-greatest-ever-generational-leap-in-performance-with-geforce-rtx-30-series-gpus | title=NVIDIA Delivers Greatest-Ever Generational Leap with GeForce RTX 30 Series GPUs | access-date=September 3, 2020 | archive-date=January 13, 2021 | archive-url=https://web.archive.org/web/20210113035553/https://nvidianews.nvidia.com/news/nvidia-delivers-greatest-ever-generational-leap-in-performance-with-geforce-rtx-30-series-gpus | url-status=live }}</ref><ref>{{cite web | url=https://www.nvidia.com/en-us/geforce/special-event/ | title=Join us for an NVIDIA GeForce RTX: Game on Special Broadcast Event | access-date=August 16, 2020 | archive-date=September 2, 2020 | archive-url=https://web.archive.org/web/20200902190958/https://www.nvidia.com/en-us/geforce/special-event/ | url-status=live }}</ref> The final GPU launch of the series was the RTX 3090 Ti, the highest-end Nvidia GPU on the Ampere microarchitecture. It features a fully unlocked GA102 die built on the Samsung [[7 nm process|8 nm]] node, used due to supply shortages at [[TSMC]]. The RTX 3090 Ti has 10,752 CUDA cores, 336 Tensor cores and texture mapping units, 112 ROPs, 84 RT cores, and 24 gigabytes of [[GDDR6 SDRAM|GDDR6X]] memory with a 384-bit bus.<ref>{{Cite web |title=NVIDIA GeForce RTX 3090 Ti Specs |url=https://www.techpowerup.com/gpu-specs/geforce-rtx-3090-ti.c3829 |access-date=2022-05-12 |website=TechPowerUp |language=en |archive-date=January 23, 2023 |archive-url=https://web.archive.org/web/20230123200947/https://www.techpowerup.com/gpu-specs/geforce-rtx-3090-ti.c3829 |url-status=live }}</ref> Compared to the RTX 2080 Ti, the 3090 Ti has 6,400 more CUDA cores.
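These memory figures imply the card's theoretical peak bandwidth. A short calculation (the 21 Gbit/s per-pin GDDR6X data rate used here is the commonly cited specification, assumed rather than stated in this article):

<math display=block>\text{memory bandwidth} = \frac{384\ \text{bit} \times 21\ \text{Gbit/s}}{8\ \text{bit per byte}} = 1008\ \text{GB/s}</math>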
Due to the [[2020–present global chip shortage|global chip shortage]], the 30 series drew controversy as scalping and high demand drove prices up sharply for the 30 series and AMD's [[Radeon RX 6000 series|RX 6000]] series.

=== GeForce 40 series ===
{{Main|GeForce 40 series}}
On September 20, 2022, Nvidia announced its GeForce 40 series graphics cards.<ref>{{Cite web |last=Burnes |first=Andrew |date=2022-09-20 |title=NVIDIA GeForce News |url=https://www.nvidia.com/en-us/geforce/news/rtx-40-series-graphics-cards-announcements/ |access-date=2022-09-20 |website=NVIDIA |language=en-us |archive-date=September 20, 2022 |archive-url=https://web.archive.org/web/20220920193354/https://www.nvidia.com/en-us/geforce/news/rtx-40-series-graphics-cards-announcements/ |url-status=live }}</ref> The RTX 4090 launched on October 12, 2022, followed by the RTX 4080 on November 16, 2022, the RTX 4070 Ti on January 3, 2023, the RTX 4070 on April 13, 2023, the RTX 4060 Ti on May 24, 2023, and the RTX 4060 on June 29, 2023. The cards are built on the [[Ada Lovelace (microarchitecture)|Ada Lovelace]] architecture, with part numbers AD102, AD103, AD104, AD106, and AD107, and are manufactured on TSMC's 4N process node, custom-designed for Nvidia.

At the time of its release, the RTX 4090 was the fastest mainstream-market chip released by a major company, featuring 16,384 [[CUDA]] cores, boost clocks of 2.2/2.5 GHz, 24 GB of [[GDDR6 SDRAM|GDDR6X]], a 384-bit memory bus, 128 third-generation [[Ray tracing (graphics)|RT]] cores, 512 fourth-generation [[Deep learning super sampling|Tensor]] cores, [[Deep learning super sampling|DLSS 3.0]], and a TDP of 450 W.<ref>{{Cite web |title=NVIDIA GeForce RTX 4090 Graphics Cards |url=https://www.nvidia.com/en-us/geforce/graphics-cards/40-series/rtx-4090/ |access-date=2023-11-07 |website=NVIDIA |language=en-us}}</ref> From October to December 2024, the RTX 4090, 4080, 4070, and related variants were officially discontinued, marking the end of a two-year production run and freeing up production capacity for the coming RTX 50 series. Notably, a China-only edition of the RTX 4090 was released, named the RTX 4090D (Dragon). The RTX 4090D features a cut-down AD102 die with 14,592 CUDA cores, down from the original 4090's 16,384. This was primarily due to export restrictions on the RTX 4090 that the United States Department of Commerce began enacting in 2023, targeted mainly at China in an attempt to halt its AI development. The 40 series saw Nvidia re-release the "Super" variant of graphics cards, not seen since the 20 series, and was the first generation in Nvidia's lineup to combine the "Super" and "Ti" brandings. This began with the release of the RTX 4070 Super on January 17, 2024, followed by the RTX 4070 Ti Super on January 24, 2024, and the RTX 4080 Super on January 31, 2024.

=== GeForce 50 series (Current) ===
{{main|GeForce 50 series}}
The GeForce 50 series, based on the [[Blackwell (microarchitecture)|Blackwell]] microarchitecture, was announced at [[Consumer Electronics Show|CES]] 2025, with availability starting in January. Nvidia CEO [[Jensen Huang]] presented prices for the RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090.