IBM Future Systems project
== Technology ==

=== Data access ===
One design principle of FS was a "[[single-level store]]", which extended the idea of [[virtual memory]] (VM) to cover persistent data. In traditional designs, programs [[memory management|allocate memory]] to hold values that represent data. This data would normally disappear if the machine is turned off or the user logs out. To have the data available in the future, additional code is needed to write it to permanent storage like a [[hard drive]] and then read it back later. To ease these common operations, a number of [[database engine]]s emerged in the 1960s that allowed programs to hand data to the engine, which would then save it and retrieve it again on demand.

Another emerging technology at the time was the concept of virtual memory. In early systems, the amount of memory available for a program to allocate was limited by the amount of [[main memory]] in the system, which might vary as the program was moved from one machine to another, or as other programs allocated memory of their own. Virtual memory systems addressed this problem by defining a maximum amount of memory available to all programs, typically some very large number, much more than the physical memory in the machine. If a program asks to allocate memory that is not physically available, a block of main memory is written out to disk, and that space is used for the new allocation. If the program later requests data from that offloaded ("paged" or "spooled") memory area, it is invisibly loaded back into main memory.<ref>{{cite web |url=https://www.techtarget.com/searchstorage/definition/virtual-memory |title=virtual memory |first=Alexander |last=Gillis |website=TechTarget}}</ref>

A single-level store is essentially an expansion of virtual memory to all memory, internal or external.
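The single-level-store idea survives in modern systems as memory-mapped files, where the virtual-memory machinery, rather than explicit read and write calls on the payload, moves data between memory and disk. A minimal Python sketch of the effect (the file name and page size here are illustrative, not details of FS):

```python
import mmap
import os
import tempfile

# A scratch file standing in for the "backing store" (path is illustrative).
fd, path = tempfile.mkstemp()
os.close(fd)

PAGE = 4096
payload = b"persistent greeting"

# Size the backing file to one page.
with open(path, "wb") as f:
    f.write(b"\x00" * PAGE)

# Map the file and store data with ordinary memory writes: the
# virtual-memory machinery, not an explicit write() of the payload,
# carries it to disk -- the essence of a single-level store.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), PAGE)
    mem[:len(payload)] = payload
    mem.flush()
    mem.close()

# A fresh mapping sees the same bytes: the state was never explicitly
# "saved"; it simply lived in mapped memory.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), PAGE)
    data = bytes(mem[: len(payload)]).decode()
    mem.close()

os.remove(path)
print(data)  # persistent greeting
```

In FS the same transparency was to apply to ''all'' memory, not just explicitly mapped regions, so no mapping step would be visible to the programmer at all.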
VM systems invisibly write memory to a disk, which is the same task performed by a file system, so there is no reason a VM system cannot also serve as the file system. Instead of programs allocating memory from "main memory", which is then perhaps sent to some other [[backing store]] by the VM, ''all'' memory is immediately allocated by the VM. This means there is no need to save and load data; simply allocating it in memory has that effect, as the VM system writes it out. When the user logs back in, that data, along with the programs that were operating on it, which live in the same unified memory, is immediately available in the same state as before. The entire concept of loading and saving is removed; programs, and entire systems, pick up where they left off even after a machine restart.

This concept had been explored in the [[Multics]] system, where it proved to be very slow. That was a side effect of the available hardware, in which main memory was implemented in [[core memory|core]] with a far slower backing store in the form of a hard drive or [[memory drum|drum]]. With the introduction of new forms of [[non-volatile memory]], most notably [[bubble memory]],<ref name=hansen/> which worked at speeds similar to core but had [[memory density]] similar to a hard disk, it appeared a single-level store would no longer have any performance downside.

Future Systems planned on making the single-level store the key concept in its new operating systems. Instead of having a separate database engine that programmers would call, there would simply be calls in the system's [[application programming interface]] (API) to retrieve memory. And those API calls would be based on particular hardware or [[microcode]] implementations, which would only be available on IBM systems, thereby achieving IBM's goal of tightly tying the hardware to the programs that ran on it.<ref name=hansen/>

=== Processor ===
Another principle was the use of very high-level complex instructions implemented in [[microcode]].
As an example, one of the instructions, <code>CreateEncapsulatedModule</code>, was a complete [[linker (computing)|linkage editor]]. Other instructions were designed to support the internal data structures and operations of programming languages such as [[FORTRAN]], [[COBOL]], and [[PL/I]]. In effect, FS was designed to be the ultimate complex instruction set computer ([[Complex instruction set computer|CISC]]).<ref name=hansen/>

Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, [[operating system]] software, [[data base]] software and more would now be considered as making up one integrated system, with each and every elementary function implemented in one of many layers including circuitry, [[microcode]], and conventional [[software]]. More than one layer of microcode and code were contemplated, sometimes referred to as [[picocode]] or [[millicode]]. Depending on the people one was talking to, the very notion of a "machine" therefore ranged between those functions which were implemented as circuitry (for the hardware specialists) and the complete set of functions offered to users, irrespective of their implementation (for the systems architects).

The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC).
Meanwhile, [[John Cocke (computer scientist)|John Cocke]], one of the chief designers of early IBM computers, began a research project to design the first reduced instruction set computer ([[RISC]]).{{Citation needed |date=August 2013}} In the long run, the [[IBM 801]] RISC architecture, which eventually evolved into IBM's [[IBM POWER architecture|POWER]], [[PowerPC]], and [[Power ISA|Power]] architectures, proved to be vastly cheaper to implement and capable of achieving much higher clock rates.