==={{Anchor|SP-GPGPU}}Stream processing and general purpose GPUs (GPGPU)===
{{Main|GPGPU|Stream processing}}

It is common to use a [[GPGPU|general purpose graphics processing unit (GPGPU)]] as a modified form of [[stream processing|stream processor]] (or a [[vector processor]]), running [[compute kernel]]s. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "[[#Dedicated graphics processing unit|Dedicated graphics processing unit]]" above) GPU designers, [[AMD]] and [[Nvidia]], are pursuing this approach with an array of applications. Both Nvidia and AMD teamed with [[Stanford University]] to create a GPU-based client for the [[Folding@home]] distributed computing project for protein folding calculations. In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.<ref>{{cite web |last=Murph |first=Darren |title=Stanford University tailors Folding@home to GPUs |date=29 September 2006 |url=https://www.engadget.com/2006/09/29/stanford-university-tailors-folding-home-to-gpus/ |url-status=live |archive-url=https://web.archive.org/web/20071012000648/https://www.engadget.com/2006/09/29/stanford-university-tailors-folding-home-to-gpus/ |archive-date=2007-10-12 |access-date=2007-10-04}}</ref><ref>{{cite web |last=Houston |first=Mike |title=Folding@Home – GPGPU |url=https://graphics.stanford.edu/~mhouston/ |url-status=live |archive-url=https://web.archive.org/web/20071027130116/https://graphics.stanford.edu/~mhouston/ |archive-date=2007-10-27 |access-date=2007-10-04}}</ref>

GPGPUs can be used for many types of [[embarrassingly parallel]] tasks, including [[ray tracing (graphics)|ray tracing]]. They are generally suited to high-throughput computations that exhibit [[data-parallelism]], which can exploit the wide-vector [[SIMD]] architecture of the GPU.

GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.<ref>{{cite web |title=Top500 List – June 2012 {{!}} TOP500 Supercomputer Sites |url=https://www.top500.org/list/2012/06/100 |url-status=dead |archive-url=https://web.archive.org/web/20140113044747/https://www.top500.org/list/2012/06/100/ |archive-date=2014-01-13 |access-date=2014-01-21 |publisher=Top500.org}}</ref>

GPUs support API extensions to the [[C (programming language)|C]] programming language such as [[OpenCL]] and [[OpenMP]]. Furthermore, each GPU vendor introduced its own API which works only with its cards: [[AMD APP SDK]] from AMD, and [[CUDA]] from Nvidia. These allow functions called [[compute kernel]]s to run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate.
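As an illustrative sketch only (not drawn from the sources cited above), the following CUDA program shows this pattern: a compute kernel, here a hypothetical <code>saxpy</code> function, is executed once per element across a large buffer by the GPU's stream processors, while the host C code allocates memory, copies data across, and launches the kernel before resuming normal CPU work.

<syntaxhighlight lang="cuda">
#include <cuda_runtime.h>
#include <stdio.h>

// Compute kernel: each GPU thread scales and accumulates one element,
// so the whole buffer is processed in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Host buffers, filled by the CPU.
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Device buffers; data is copied to GPU memory.
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel: 256 threads per block, enough blocks to cover n.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

    // Copy the result back; the CPU carries on as usual.
    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);           // expected 4.0

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}
</syntaxhighlight>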
CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.{{Citation needed|date=August 2017|reason=CUDA as first API}}

Since 2005 there has been interest in using the performance offered by GPUs for [[evolutionary computation]] in general, and for accelerating the [[Fitness (genetic algorithm)|fitness]] evaluation in [[genetic programming]] in particular. Most approaches compile [[linear genetic programming|linear]] or [[genetic programming|tree programs]] on the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is obtained only by running the single active program simultaneously on many example problems in parallel, using the GPU's [[SIMD]] architecture.<ref>{{cite web |last=Nickolls |first=John |title=Stanford Lecture: Scalable Parallel Programming with CUDA on Manycore GPUs |url=https://www.youtube.com/watch?v=nlGnKPpOpbE |url-status=live |archive-url=https://web.archive.org/web/20161011195103/https://www.youtube.com/watch?v=nlGnKPpOpbE |archive-date=2016-10-11 |website=[[YouTube]] |date=July 2008}}
* {{cite web |last1=Harding |first1=S. |last2=Banzhaf |first2=W. |title=Fast genetic programming on GPUs |url=https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp07_harding.html |url-status=live |archive-url=https://web.archive.org/web/20080609231021/https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp07_harding.html |archive-date=2008-06-09 |access-date=2008-05-01}}</ref> However, substantial acceleration can also be obtained by not compiling the programs and instead transferring them to the GPU to be interpreted there.<ref>{{cite web |last1=Langdon |first1=W. |last2=Banzhaf |first2=W. |title=A SIMD interpreter for Genetic Programming on GPU Graphics Cards |url=https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/langdon_2008_eurogp.html |url-status=live |archive-url=https://web.archive.org/web/20080609231026/https://www.cs.bham.ac.uk/~wbl/biblio/gp-html/langdon_2008_eurogp.html |archive-date=2008-06-09 |access-date=2008-05-01}}
* V. Garcia and E. Debreuve and M. Barlaud. [[arxiv:0804.1448|Fast k nearest neighbor search using GPU]]. In Proceedings of the CVPR Workshop on Computer Vision on GPU, Anchorage, Alaska, USA, June 2008.</ref> Acceleration can then be obtained by interpreting multiple programs simultaneously, running multiple example problems simultaneously, or a combination of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs.
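The "single active program, many example problems in parallel" scheme can be sketched as follows; this is an illustration only, and the names <code>candidate</code> and <code>fitness_cases</code> are hypothetical, standing in for one evolved program and its fitness evaluation. Each GPU thread evaluates the same candidate program on a different fitness case, and the host then reduces the per-case errors into a single fitness value.

<syntaxhighlight lang="cuda">
#include <cuda_runtime.h>
#include <stdio.h>

// One candidate program (hard-coded here as f(x) = x*x + x, purely for
// illustration) is applied to many fitness cases at once: each GPU thread
// handles a different training point.
__device__ float candidate(float x) {
    return x * x + x;
}

__global__ void fitness_cases(int n, const float *inputs,
                              const float *targets, float *errors) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float diff = candidate(inputs[i]) - targets[i];
        errors[i] = diff * diff;       // per-case squared error
    }
}

int main(void) {
    const int n = 4096;                // number of fitness cases (assumed)
    size_t bytes = n * sizeof(float);

    // Unified (managed) memory keeps the host code short.
    float *in, *tgt, *err;
    cudaMallocManaged(&in, bytes);
    cudaMallocManaged(&tgt, bytes);
    cudaMallocManaged(&err, bytes);
    for (int i = 0; i < n; ++i) {
        in[i] = (float)i / n;
        tgt[i] = in[i] * in[i];        // target function the program should fit
    }

    // Evaluate the single candidate program on all cases in parallel.
    fitness_cases<<<(n + 255) / 256, 256>>>(n, in, tgt, err);
    cudaDeviceSynchronize();

    // The host reduces per-case errors into one fitness value.
    float total = 0.0f;
    for (int i = 0; i < n; ++i) total += err[i];
    printf("fitness (sum of squared errors) = %f\n", total);

    cudaFree(in); cudaFree(tgt); cudaFree(err);
    return 0;
}
</syntaxhighlight>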