==GPU forms==

===Terminology===
In the 1970s, the term "GPU" originally stood for ''graphics processor unit'' and described a programmable processing unit that worked independently from the CPU and was responsible for graphics manipulation and output.<ref>{{cite book |last1=Barron |first1=E. T. |title=Conference record of the 6th annual workshop on Microprogramming – MICRO 6 |last2=Glorioso |first2=R. M. |date=September 1973 |isbn=9781450377836 |pages=122–128 |language=en |chapter=A micro controlled peripheral processor |doi=10.1145/800203.806247 |doi-access=free |s2cid=36942876}}</ref><ref>{{cite journal |last1=Levine |first1=Ken |title=Core standard graphic package for the VGI 3400 |journal=ACM SIGGRAPH Computer Graphics |date=August 1978 |volume=12 |issue=3 |pages=298–300 |doi=10.1145/965139.807405 |url=https://dl.acm.org/doi/abs/10.1145/965139.807405}}</ref> In 1994, [[Sony]] used the term (now standing for ''graphics processing unit'') in reference to the [[PlayStation (console)|PlayStation]] console's [[Toshiba]]-designed [[PlayStation technical specifications#Graphics processing unit (GPU)|Sony GPU]].<ref name="gpu" /> The term was popularized by [[Nvidia]] in 1999, which marketed the [[GeForce 256]] as "the world's first GPU".<ref>{{cite web|title=NVIDIA Launches the World's First Graphics Processing Unit: GeForce 256|url=https://www.nvidia.com/object/IO_20020111_5424.html|publisher=Nvidia|access-date=28 March 2016|date=31 August 1999|url-status=live|archive-url=https://web.archive.org/web/20160412035751/https://www.nvidia.com/object/IO_20020111_5424.html|archive-date=12 April 2016}}</ref> It was presented as a "single-chip [[Processor (computing)|processor]] with integrated [[Transform, clipping, and lighting|transform, lighting, triangle setup/clipping]], and rendering engines".<ref>{{cite web|title=Graphics Processing Unit (GPU)|date=16 December 2009|url=https://www.nvidia.com/object/gpu.html|publisher=Nvidia|access-date=29 March 2016|url-status=live|archive-url=https://web.archive.org/web/20160408122443/https://www.nvidia.com/object/gpu.html|archive-date=8 April 2016}}</ref> Rival [[ATI Technologies]] coined the term "'''visual processing unit'''" or '''VPU''' with the release of the [[R300|Radeon 9700]] in 2002.<ref>{{cite web|last1=Pabst|first1=Thomas|title=ATi Takes Over 3D Technology Leadership With Radeon 9700|url=https://www.tomshardware.com/reviews/ati-takes-3d-technology-leadership-radeon-9700,491-3.html|publisher=Tom's Hardware|access-date=29 March 2016|date=18 July 2002}}</ref> The [[Xilinx|AMD Alveo MA35D]], introduced in 2023, features dual VPUs, each built on the [[5 nm process]].<ref name=AM_1>{{cite web| title=AMD Rolls Out 5 nm ASIC-based Accelerator for the Interactive Streaming Era| author=Child, J.| url=https://www.allaboutcircuits.com/news/amd-rolls-out-5-nm-asic-based-accelerator-for-the-interactive-streaming-era| publisher=EETech Media| date=6 April 2023| access-date=24 December 2023}}</ref>

In personal computers, there are two main forms of GPUs, each with many synonyms:<ref>{{cite web|title=Help Me Choose: Video Cards |url=https://www.dell.com/learn/us/en/19/help-me-choose/hmc-video-card-inspiron-lt |archive-url=https://archive.today/20160909115302/http://www.dell.com/learn/us/en/19/help-me-choose/hmc-video-card-inspiron-lt?stp=1 |access-date=2016-09-17 |archive-date=2016-09-09 |publisher=[[Dell]] |url-status=dead }}</ref>
* ''[[#Dedicated graphics processing unit|Dedicated graphics]]'', also called ''discrete graphics''.
* ''[[#Integrated graphics|Integrated graphics]]'', also called ''shared graphics solutions'', ''integrated graphics processors'' (IGP), or ''unified memory architecture'' (UMA).

====Usage-specific GPU====
Most GPUs are designed for a specific use, such as real-time 3D graphics or other mass calculations:
# Gaming
#* [[GeForce|GeForce GTX, RTX]]
#* [[Nvidia Titan]]
#* [[Radeon|Radeon HD, R5, R7, R9, RX, Vega and Navi series]]
#* [[Radeon VII]]
#* [[Intel Arc]]
# Cloud gaming
#* [[Nvidia GRID]]
#* Radeon Sky
# Workstation
#* [[Nvidia Quadro]]
#* [[Nvidia RTX]]
#* [[AMD FirePro]]
#* [[Radeon Pro|AMD Radeon Pro]]
#* [[Intel Arc]] Pro
# Cloud workstation
#* [[Nvidia Tesla]]
#* [[AMD FireStream]]
# Artificial intelligence training and cloud
#* [[Nvidia Tesla]]
#* [[Radeon Instinct|AMD Radeon Instinct]]
# Automated/driverless cars
#* Nvidia [[Drive PX-series|Drive PX]]

===Dedicated graphics processing unit===
{{see also|Video card}}
''Dedicated graphics processing units'' use [[random-access memory|RAM]] that is dedicated to the GPU rather than relying on the computer's main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card (see [[GDDR SDRAM|GDDR]]). Systems with dedicated ''discrete'' GPUs were sometimes called "DIS" systems, as opposed to "UMA" systems (see next section).<ref name=NO_1>{{cite web| title=Nvidia Optimus documentation for Linux device driver| url=https://nouveau.freedesktop.org/Optimus.html| publisher=freedesktop| date=13 November 2023| access-date=24 December 2023}}</ref>

Dedicated GPUs are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that [[graphics card]]s have RAM that is dedicated to the card's use, not to the fact that ''most'' dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.

Graphics cards with dedicated GPUs typically interface with the [[motherboard]] by means of an [[expansion slot]] such as [[PCI Express]] (PCIe) or [[Accelerated Graphics Port]] (AGP). They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use [[Peripheral Component Interconnect]] (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available.

Technologies such as [[Scan-Line Interleave]] by 3dfx, [[Scalable Link Interface|SLI]] and [[NVLink]] by Nvidia, and [[ATI CrossFire|CrossFire]] by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics.
These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them.<ref name=CSL_1>{{cite web| title=Crossfire and SLI market is just 300.000 units| author=Abazovic, F.| url=https://www.fudzilla.com/news/graphics/38134-crossfire-and-sli-market-is-just-300-000-units| publisher=fudzilla| date=3 July 2015| access-date=24 December 2023}}</ref><ref>{{Cite web | url=https://thetechaltar.com/is-multi-gpu-dead/ |title = Is Multi-GPU Dead?|date = 7 January 2018}}</ref><ref>{{Cite web | url=https://www.techradar.com/news/nvidia-sli-and-amd-crossfire-is-dead-but-should-we-mourn-multi-gpu-gaming | title=Nvidia SLI and AMD CrossFire is dead – but should we mourn multi-GPU gaming? | TechRadar| date=24 August 2019}}</ref> Multiple GPUs are still used on supercomputers (such as [[Summit (supercomputer)|Summit]]), on workstations to accelerate video (processing multiple videos at once)<ref>{{Cite web | url=https://devblogs.nvidia.com/nvidia-ffmpeg-transcoding-guide/ | title=NVIDIA FFmpeg Transcoding Guide| date=24 July 2019}}</ref><ref>{{cite web |url=https://documents.blackmagicdesign.com/ConfigGuides/DaVinci_Resolve_15_Mac_Configuration_Guide.pdf |title=Hardware Selection and Configuration Guide DaVinci Resolve 15 |publisher=BlackMagic Design |date=2018 |access-date=31 May 2022}}</ref><ref>{{Cite web|url=https://www.pugetsystems.com/recommended/Recommended-Systems-for-DaVinci-Resolve-187/Hardware-Recommendations|title=Recommended System: Recommended Systems for DaVinci Resolve|website=Puget Systems}} * {{Cite web | url=https://helpx.adobe.com/x-productkb/multi/gpu-acceleration-and-hardware-encoding.html |title = GPU Accelerated Rendering and Hardware Encoding}}</ref> and 3D rendering,<ref>{{Cite web |date=20 August 2019 |title=V-Ray Next Multi-GPU Performance Scaling |url=https://www.pugetsystems.com/labs/articles/V-Ray-Next-Multi-GPU-Performance-Scaling-1559/}} * {{Cite web |title=FAQ | GPU-accelerated 3D rendering software | Redshift |url=https://www.redshift3d.com/support/faq}} * {{Cite news |title=OctaneRender 2020™ Preview is here! |newspaper=Otoy |url=https://home.otoy.com/render/octane-render/faqs/}} * {{Cite web |date=8 April 2019 |title=Exploring Performance with Autodesk's Arnold Renderer GPU Beta |url=https://techgage.com/article/autodesk-arnold-render-gpu-beta-performance/}} * {{Cite web |title=GPU Rendering – Blender Manual |url=https://docs.blender.org/manual/en/latest/render/cycles/gpu_rendering.html}}</ref> for [[VFX]],<ref>{{Cite web | url=https://www.chaosgroup.com/vray/nuke | title=V-Ray for Nuke – Ray Traced Rendering for Compositors | Chaos Group}} * {{Cite web | url=https://www.foundry.com/products/nuke/requirements |title = System Requirements | Nuke | Foundry}}</ref> for [[GPGPU]] workloads and simulations,<ref>{{Cite web | url=https://foldingathome.org/faqs/gpu2-common/frequently-asked-questions-common-ati-nvidia-gpu2-clients-2/multi-gpu-support/ |title = What about multi-GPU support? – Folding@home}}</ref> and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs.
==={{anchor|INTEGRATED|Integrated graphics|iGPU}}Integrated graphics processing unit===
[[File:Motherboard diagram.svg|thumb|The position of an integrated GPU in a northbridge/southbridge system layout|616x616px]]
[[File:A790GXH-128M-Motherboard.jpg|thumb|An [[ASRock]] motherboard with integrated graphics, which has HDMI, VGA and DVI-out ports|400x400px]]
''Integrated graphics processing units'' (IGPU), ''integrated graphics'', ''shared graphics solutions'', ''integrated graphics processors'' (IGP), or ''unified memory architectures'' (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its [[Northbridge (computing)|northbridge]] chipset,<ref>{{Cite web|url=https://www.tomshardware.com/picturestory/693-intel-graphics-evolution.html|title = Evolution of Intel Graphics: I740 to Iris Pro|date = 4 February 2017}}</ref> or onto the same [[die (integrated circuit)|die]] as the CPU (as with [[AMD APU]]s or [[Intel HD Graphics]]). On certain motherboards,<ref>{{cite web|url=https://www.gigabyte.com/products/product-page.aspx?pid=3785#ov|title=GA-890GPA-UD3H overview|url-status=dead|archive-url=https://web.archive.org/web/20150415095629/https://www.gigabyte.com/products/product-page.aspx?pid=3785#ov|archive-date=2015-04-15|access-date=2015-04-15}}</ref> AMD's IGPs can use dedicated sideport memory: a separate fixed block of high-performance memory that is dedicated for use by the GPU. {{As of|2007|alt=As of early 2007}}, computers with integrated graphics account for about 90% of all PC shipments.<ref>{{cite web |author=Key |first=Gary |title=AnandTech – μATX Part 2: Intel G33 Performance Review |url=https://www.anandtech.com/mb/showdoc.aspx?i=3111&p=23 |url-status=live |archive-url=https://web.archive.org/web/20080531045027/https://www.anandtech.com/mb/showdoc.aspx?i=3111&p=23 |archive-date=2008-05-31 |work=anandtech.com}}</ref>{{update inline|date=February 2013}} They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004.<ref>{{cite web |author=Tscheblockov |first=Tim |title=Xbit Labs: Roundup of 7 Contemporary Integrated Graphics Chipsets for Socket 478 and Socket A Platforms |url=https://www.xbitlabs.com/articles/chipsets/display/int-chipsets-roundup.html |url-status=dead |archive-url=https://web.archive.org/web/20070526124817/https://www.xbitlabs.com/articles/chipsets/display/int-chipsets-roundup.html |archive-date=2007-05-26 |access-date=2007-06-03}}</ref> However, modern integrated graphics processors such as the [[AMD Accelerated Processing Unit]] and [[Intel Graphics Technology]] (HD, UHD, Iris, Iris Pro, Iris Plus, and [[Intel Xe#Xe-LP (Low Power)|Xe-LP]]) can handle 2D graphics or low-stress 3D graphics.

Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between its [[Video random access memory|VRAM]] and GPU core.
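As a rough illustration of where such figures come from, peak theoretical system-memory bandwidth is the product of channel count, channel width, and transfer rate; assuming a dual-channel, 64-bit-per-channel DDR5-6400 configuration (an illustrative example, not a figure cited in this article):

<math>2 \times 8\ \text{bytes} \times 6.4 \times 10^{9}\ \text{transfers/s} = 102.4\ \text{GB/s}</math>

Discrete cards reach far higher figures by pairing wider memory buses with faster graphics memory.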
This [[memory bus]] bandwidth can limit the performance of the GPU, though [[Multi-channel memory architecture|multi-channel memory]] can mitigate this deficiency.<ref name="Coelho">{{cite web |last1=Coelho |first1=Rafael |title=Does dual-channel memory make difference on integrated video performance? |url=https://www.hardwaresecrets.com/dual-channel-memory-make-difference-integrated-video-performance/ |website=Hardware Secrets |access-date=4 January 2019 |date=18 January 2016}}</ref> Older integrated graphics chipsets lacked hardware [[Transform, clipping, and lighting|transform and lighting]], but newer ones include it.<ref>{{cite web |author=Sanford |first=Bradley |title=Integrated Graphics Solutions for Graphics-Intensive Applications |url=https://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/Integrated_Graphics_Solutions_white_paper_rev61.pdf |url-status=live |archive-url=https://web.archive.org/web/20071128165723/https://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/Integrated_Graphics_Solutions_white_paper_rev61.pdf |archive-date=2007-11-28 |access-date=2007-09-02}}</ref><ref>{{cite web |author=Sanford |first=Bradley |title=Integrated Graphics Solutions for Graphics-Intensive Applications |url=https://www.techspot.com/news/46773-amd-announces-radeon-hd-7970-claims-fastest-gpu-title.html |url-status=live |archive-url=https://web.archive.org/web/20120107200529/https://www.techspot.com/news/46773-amd-announces-radeon-hd-7970-claims-fastest-gpu-title.html |archive-date=2012-01-07 |access-date=2007-09-02}}</ref>

On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics,<ref>{{Cite web |last=Shimpi |first=Anand Lal |title=AMD Outlines HSA Roadmap: Unified Memory for CPU/GPU in 2013, HSA GPUs in 2014 |url=https://www.anandtech.com/show/5493/amd-outlines-hsa-roadmap-unified-memory-for-cpugpu-in-2013-hsa-gpus-in-2014 |access-date=2024-01-08 |website=www.anandtech.com}}</ref> modern Intel processors with integrated graphics,<ref>{{Cite web |last=Lake |first=Adam T. |title=Getting the Most from OpenCL™ 1.2: How to Increase Performance by... |url=https://www.intel.com/content/www/us/en/developer/articles/training/getting-the-most-from-opencl-12-how-to-increase-performance-by-minimizing-buffer-copies-on-intel-processor-graphics.html |access-date=2024-01-08 |website=Intel |language=en}}</ref> Apple processors, and the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on need, without a large static split of the RAM. Thanks to zero-copy transfers, it also removes the need either to copy data over a [[bus (computing)|bus]] between physically separate RAM pools, or to copy between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data.

==={{Anchor|HYBRID}}Hybrid graphics processing===
Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI's [[HyperMemory]] and Nvidia's [[TurboCache]]. Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache to make up for the high [[Memory latency|latency]] of the system RAM. Technologies within PCI Express make this possible.
While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory.

==={{Anchor|SP-GPGPU}}Stream processing and general purpose GPUs (GPGPU)===
{{Main|GPGPU|Stream processing}}
It is common to use a [[GPGPU|general purpose graphics processing unit (GPGPU)]] as a modified form of [[stream processing|stream processor]] (or a [[vector processor]]), running [[compute kernel]]s. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "[[#Dedicated graphics processing unit|Dedicated graphics processing unit]]" above) GPU designers, [[AMD]] and [[Nvidia]], are pursuing this approach with an array of applications. Both Nvidia and AMD teamed with [[Stanford University]] to create a GPU-based client for the [[Folding@home]] distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.<ref>{{cite web |author=Murph |first=Darren |title=Stanford University tailors Folding@home to GPUs |date=29 September 2006 |url=https://www.engadget.com/2006/09/29/stanford-university-tailors-folding-home-to-gpus/ |url-status=live |archive-url=https://web.archive.org/web/20071012000648/https://www.engadget.com/2006/09/29/stanford-university-tailors-folding-home-to-gpus/ |archive-date=2007-10-12 |access-date=2007-10-04}}</ref><ref>{{cite web |author=Houston |first=Mike |title=Folding@Home – GPGPU |url=https://graphics.stanford.edu/~mhouston/ |url-status=live |archive-url=https://web.archive.org/web/20071027130116/https://graphics.stanford.edu/~mhouston/ |archive-date=2007-10-27 |access-date=2007-10-04}}</ref>

GPGPUs can be used for many types of [[embarrassingly parallel]] tasks, including [[ray tracing (graphics)|ray tracing]]. They are generally suited to high-throughput computations that exhibit [[data-parallelism]], which can exploit the wide-vector [[SIMD]] architecture of the GPU.

GPU-based high-performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.<ref>{{cite web |title=Top500 List – June 2012 | TOP500 Supercomputer Sites |url=https://www.top500.org/list/2012/06/100 |url-status=dead |archive-url=https://web.archive.org/web/20140113044747/https://www.top500.org/list/2012/06/100/ |archive-date=2014-01-13 |access-date=2014-01-21 |publisher=Top500.org}}</ref>

GPUs support API extensions to the [[C (programming language)|C]] programming language such as [[OpenCL]] and [[OpenMP]]. Furthermore, each GPU vendor introduced its own API that works only with its cards: [[AMD APP SDK]] from AMD, and [[CUDA]] from Nvidia. These allow functions called [[compute kernel]]s to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate.
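A minimal sketch of such a compute kernel, written here in CUDA, illustrates the pattern: each GPU thread processes one element of a large buffer, while the host CPU sets up the data and launches the work. The kernel name, problem size, and launch configuration below are illustrative choices, not taken from any vendor documentation.

<syntaxhighlight lang="cuda">
#include <cuda_runtime.h>

// Compute kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                    // one million elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);             // unified memory, visible to both CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }   // CPU fills the input buffers

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);          // kernel runs on the GPU's stream processors
    cudaDeviceSynchronize();                  // wait for the GPU before the CPU reads the result

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
</syntaxhighlight>

OpenCL expresses the same host-plus-kernel pattern through a vendor-neutral kernel language and runtime.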
CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.{{Citation needed|date=August 2017|reason=CUDA as first API}}

Since 2005, there has been interest in using the performance offered by GPUs for [[evolutionary computation]] in general, and for accelerating the [[Fitness (genetic algorithm)|fitness]] evaluation in [[genetic programming]] in particular. Most approaches compile [[linear genetic programming|linear]] or [[genetic programming|tree programs]] on the host PC and transfer the executable to the GPU to be run. Typically, a performance advantage is obtained only by running the single active program simultaneously on many example problems in parallel, using the GPU's [[SIMD]] architecture.<ref>{{cite web |author=Nickolls |first=John |title=Stanford Lecture: Scalable Parallel Programming with CUDA on Manycore GPUs |url=https://www.youtube.com/watch?v=nlGnKPpOpbE |url-status=live |archive-url=https://web.archive.org/web/20161011195103/https://www.youtube.com/watch?v=nlGnKPpOpbE |archive-date=2016-10-11 |website=[[YouTube]]|date=July 2008 }} * {{cite web |author=Harding |first1=S. |last2=Banzhaf |first2=W. |title=Fast genetic programming on GPUs |url=https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp07_harding.html |url-status=live |archive-url=https://web.archive.org/web/20080609231021/https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp07_harding.html |archive-date=2008-06-09 |access-date=2008-05-01}}</ref> However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU to be interpreted there.<ref>{{cite web |author=Langdon |first1=W. |last2=Banzhaf |first2=W. |title=A SIMD interpreter for Genetic Programming on GPU Graphics Cards |url=https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/langdon_2008_eurogp.html |url-status=live |archive-url=https://web.archive.org/web/20080609231026/https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/langdon_2008_eurogp.html |archive-date=2008-06-09 |access-date=2008-05-01}} * V. Garcia and E. Debreuve and M. Barlaud. [[arxiv:0804.1448|Fast k nearest neighbor search using GPU]]. In Proceedings of the CVPR Workshop on Computer Vision on GPU, Anchorage, Alaska, USA, June 2008.</ref> Acceleration can then be obtained by interpreting multiple programs simultaneously, running multiple example problems simultaneously, or a combination of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs.

===External GPU (eGPU)===
An external GPU is a graphics processor located outside the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, instead having a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing video games or for other graphically intensive tasks, such as editing video or 3D animation/rendering. Therefore, it is desirable to attach a GPU to some external bus of a notebook. [[PCI Express]] is the only bus used for this purpose.
The port may be, for example, an [[ExpressCard]] or [[PCI Express#PCI Express Mini Card|mPCIe]] port (PCIe ×1, up to 5 or 2.5 Gbit/s respectively), a [[Thunderbolt (interface)|Thunderbolt]] 1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively), a [[Thunderbolt_(interface)#USB4|USB4 port with Thunderbolt compatibility]], or an [[OCuLink]] port. Those ports are only available on certain notebook systems.<ref>{{cite web |author=Mohr |first=Neil |title=How to make an external laptop graphics adaptor |url=https://www.techradar.com/news/computing-components/graphics-cards/how-to-make-an-external-laptop-graphics-adaptor-915616 |url-status=live |archive-url=https://web.archive.org/web/20170626052950/https://www.techradar.com/news/computing-components/graphics-cards/how-to-make-an-external-laptop-graphics-adaptor-915616 |archive-date=2017-06-26 |work=TechRadar}}</ref> eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts.<ref>{{Cite web | url=https://www.gamingscan.com/best-external-graphics-card/ |title = Best External Graphics Card 2020 (EGPU) [The Complete Guide]|date = 16 March 2020}}</ref>