==Real-world performance, users and impact==

The STAR-100's real-world performance was a fraction of its theoretical peak, for several reasons. First, the vector instructions, being "memory-to-memory", had a relatively long startup time, since the pipeline from memory to the functional units was very deep — much deeper than the register-based pipelined functional units of the 7600. The problem was compounded by the STAR's slower cycle time (40 ns versus the 7600's 27.5 ns). As a result, the STAR only ran faster than the 7600 at vector lengths of about 50 elements or more; on loops over smaller data sets, the time cost of setting up the vector pipeline exceeded the time saved by the vector instruction(s).

When the machine was released in 1974, it quickly became apparent that general performance was disappointing. Few programs can be effectively vectorized into a series of single instructions; nearly all calculations depend on the results of earlier instructions, yet those results had to clear the pipelines before they could be fed back in. Most programs were therefore forced to pay the high setup cost of the vector units, and the programs that did "work" were generally extreme examples. Worse, basic scalar performance had been sacrificed to improve vector performance, so any time a program had to run scalar instructions, the overall performance of the machine dropped dramatically. (See [[Amdahl's Law]].)

Two STAR-100 systems were eventually delivered to the [[Lawrence Livermore National Laboratory]] and one to NASA's [[Langley Research Center]].<ref name="PC2"/> In preparation for the STAR deliveries, LLNL programmers developed a [[Library (computing)|library]] of [[subroutine]]s, called ''STACKLIB'', on the 7600 to [[emulator|emulate]] the vector operations of the STAR.
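The startup-cost trade-off described above can be sketched with a toy timing model. Only the cycle times (40 ns vs 27.5 ns) and the roughly 50-element break-even point come from the text; the startup and per-element cycle counts below are illustrative assumptions chosen to reproduce that figure, not actual STAR-100 microarchitecture parameters.

```python
CYCLE_STAR = 40e-9    # STAR-100 cycle time, 40 ns (from the text)
CYCLE_7600 = 27.5e-9  # CDC 7600 cycle time, 27.5 ns (from the text)

STARTUP_CYCLES = 53          # assumed pipeline-fill cost (illustrative)
STAR_CYCLES_PER_ELEM = 1     # one result per cycle once the pipe is full
CDC7600_CYCLES_PER_ELEM = 3  # assumed scalar-loop cost per element

def star_vector_time(n):
    """Memory-to-memory vector op: large fixed startup, then streaming."""
    return (STARTUP_CYCLES + STAR_CYCLES_PER_ELEM * n) * CYCLE_STAR

def scalar_7600_time(n):
    """Register-based scalar loop: negligible startup, more cycles/element."""
    return CDC7600_CYCLES_PER_ELEM * n * CYCLE_7600

def break_even_length():
    """Smallest vector length at which the STAR model beats the 7600 model."""
    n = 1
    while star_vector_time(n) >= scalar_7600_time(n):
        n += 1
    return n
```

With these assumed costs the model's break-even lands at 50 elements, matching the figure above; the real crossover depended on the particular operation and operand layout.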
In the process of developing STACKLIB, they found that programs converted to use it ran faster than they had before, even on the 7600. This put further pressure on the STAR's relative performance. The STAR-100 was a disappointment to everyone involved. [[James E. Thornton|Jim Thornton]], formerly [[Seymour Cray]]'s close assistant on the [[CDC 1604]] and [[CDC 6600|6600]] projects and the chief designer of the STAR, left CDC to form [[Network Systems Corporation]].

An updated version of the basic architecture was released in 1979 as the [[CDC Cyber#Cyber 200 series|Cyber 203]],<ref name="PC2"/> followed by the [[CDC Cyber#Cyber 200 series|Cyber 205]] in 1980, but by then systems from [[Cray Research]] with considerably higher performance were on the market. The failure of the STAR pushed CDC from its former dominance of the supercomputer market, something it tried to address with the formation of [[ETA Systems]] in September 1983.<ref name="PC2"/>
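The scalar-performance penalty noted above is a textbook instance of Amdahl's Law: the un-vectorized fraction of a program bounds the overall speedup no matter how fast the vector units are. A minimal illustration (the workload fractions are hypothetical, not measured STAR-100 figures):

```python
def amdahl_speedup(vector_fraction, vector_speedup):
    """Overall speedup when only vector_fraction of the runtime is
    accelerated by vector_speedup; the remainder runs at scalar speed."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / vector_speedup)

# A program that is only half vectorizable can never run more than 2x
# faster overall, even with arbitrarily fast vector hardware:
# amdahl_speedup(0.5, 1_000_000) is just under 2.0
```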