==Architectural paradigms==
To deal with high data rates, several architectural paradigms are commonly used:
* [[CPU pipeline|Pipeline]] of processors, in which each stage of the pipeline consists of a processor performing one of the functions listed above (a software sketch of this paradigm is given after this list).
* [[Parallel computing|Parallel processing]] with multiple processors, often including [[Multithreading (computer architecture)|multithreading]].
* Specialized [[microcode]]d engines to more efficiently accomplish the tasks at hand.
* With the advent of [[Multi-core (computing)|multicore]] architectures, network processors can be used for higher-layer ([[OSI model|L4-L7]]) processing.

Additionally, traffic management, which is a critical element in [[OSI model#Layer 2: Data link layer|L2]]-[[OSI model#Layer 3: Network layer|L3]] network processing and used to be executed by a variety of co-processors, has become an integral part of the network processor architecture, and a substantial part of its silicon area ("real estate") is devoted to the integrated traffic manager.<ref>{{Cite book|url=http://www.cse.bgu.ac.il/npbook/|title=Network Processors: Architecture, Programming, and Implementation|last=Giladi|first=Ran|publisher=Morgan Kaufmann|year=2008|isbn=978-0-12-370891-5|series=Systems on Silicon}}</ref>

Modern network processors are also equipped with low-latency, high-throughput on-chip interconnection networks optimized for the exchange of small messages among cores (a few data words). Such networks can be used as an efficient inter-core communication facility alongside the standard use of shared memory.<ref>{{cite conference|last1=Buono|first1=Daniele|last2=Mencagli|first2=Gabriele|date=21–25 July 2014|title=Run-time mechanisms for fine-grained parallelism on network processors: The TILEPro64 experience|url=http://pages.di.unipi.it/mencagli/publications/preprint-hpcs-2014.pdf|url-status=live|conference=2014 International Conference on High Performance Computing Simulation (HPCS 2014)|location=Bologna, Italy|pages=55–64|doi=10.1109/HPCSim.2014.6903669|isbn=978-1-4799-5313-4|archive-url=https://web.archive.org/web/20190327010533/http://pages.di.unipi.it/mencagli/publications/preprint-hpcs-2014.pdf|archive-date=27 March 2019}} [https://archive.org/details/RunTimeMechanismsForFineGrainedParallelism Alt URL]</ref>
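The pipeline paradigm can be illustrated with an ordinary software model. The following minimal sketch (illustrative only, not drawn from the cited sources) uses Go goroutines and channels to stand in for dedicated processing stages connected by hardware FIFOs; the stage names <code>parse</code>, <code>classify</code> and <code>forward</code> and the <code>packet</code> structure are hypothetical placeholders, not the interface of any real network processor.

<syntaxhighlight lang="go">
// Hypothetical sketch of the "pipeline of processors" paradigm:
// each stage runs concurrently and hands packets to the next stage
// over a channel, analogous to per-stage processors linked by FIFOs.
package main

import "fmt"

type packet struct {
	id      int
	dstPort int // filled in by the classify stage
}

// parse models a header-parsing stage that feeds the pipeline.
func parse(n int) <-chan packet {
	out := make(chan packet)
	go func() {
		defer close(out)
		for i := 0; i < n; i++ {
			out <- packet{id: i}
		}
	}()
	return out
}

// classify models a table-lookup/classification stage.
func classify(in <-chan packet) <-chan packet {
	out := make(chan packet)
	go func() {
		defer close(out)
		for p := range in {
			p.dstPort = p.id % 4 // trivial stand-in for a lookup
			out <- p
		}
	}()
	return out
}

// forward models the final queuing/forwarding stage.
func forward(in <-chan packet) {
	for p := range in {
		fmt.Printf("packet %d -> port %d\n", p.id, p.dstPort)
	}
}

func main() {
	forward(classify(parse(8)))
}
</syntaxhighlight>

In this model each stage works on a different packet at the same time, which is the source of the pipeline's throughput; replicating a stage across several goroutines would correspond to the parallel-processing paradigm listed above.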