==Sysplex exploitation== [[Image:CICS-TX-Sec.jpg|thumb|right|A diagram showing one site's relationship between z/OS and CICS, 2010]] At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of getting CICS to exploit the new z/OS [[IBM Parallel Sysplex|Sysplex]] mainframe line. The Sysplex was to be based on [[CMOS]] (complementary metal-oxide-semiconductor) rather than the existing [[emitter-coupled logic|ECL]] (emitter-coupled logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than that of CMOS, which was being developed by a ''[[keiretsu]]'' with high-volume use cases such as the Sony PlayStation to reduce the unit cost of each generation's CPUs. ECL was also expensive for users to run because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM<ref>{{Cite web|url=https://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV2137.html|title=IBM Archives: Thermal conduction module|date=2003-01-23|website=www-03.ibm.com|language=en-US|access-date=2018-06-01|archive-date=2016-07-20|archive-url=https://web.archive.org/web/20160720232359/http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV2137.html|url-status=dead}}</ref>) that had inert gas pistons and needed to be plumbed with high-volume chilled water for cooling. However, the air-cooled CMOS technology's CPU speed was initially much slower than that of ECL (notably the boxes available from the mainframe-clone makers [[Amdahl Corporation|Amdahl]] and [[Hitachi Data Systems|Hitachi]]). This was especially concerning to IBM in the CICS context, as almost all of the largest mainframe customers ran CICS, and for many of them it was the primary mainframe workload. To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload.
However, a CICS address space, due to its quasi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time{{snd}} even with the use of MVS sub-tasks. Without enhanced parallelism, customers would tend to move to IBM's competitors rather than use Sysplex as they scaled up their CICS workloads. There was considerable debate inside IBM as to whether the right approach was to break upward compatibility for applications and move to a model like [[IMS/DC]], which was fully reentrant, or to extend the approach customers had already adopted to more fully exploit a single mainframe's power{{snd}} multi-region operation (MRO). Eventually the second path was adopted after the CICS user community was consulted. The community vehemently opposed breaking upward compatibility, given that it had the prospect of Y2K to contend with at the time and did not see the value in rewriting and testing millions of lines of mainly COBOL, PL/I, or assembler code. The IBM-recommended structure for CICS on Sysplex placed at least one CICS Terminal Owning Region (TOR) on each Sysplex node, dispatching transactions to many Application Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources, they either used a Sysplex-exploiting datastore (such as [[IBM Db2]] or [[IBM Information Management System|IMS/DB]]) or concentrated, by function-shipping, the resource requests into one Resource Owning Region (ROR) per resource, including File Owning Regions (FORs) for [[Virtual Storage Access Method|VSAM]] and CICS Data Tables, and Queue Owning Regions (QORs) for [[IBM WebSphere MQ|MQ]], CICS Transient Data (TD), and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of the operational complexity of configuring and managing many CICS regions.
In subsequent releases and versions, CICS was able to exploit new Sysplex-exploiting facilities in VSAM/RLS,<ref>{{cite book |title=IMS |publisher=John Wiley & Sons, Ltd |isbn=9780470750001 |location=Chichester, UK |pages=1–39 |doi= 10.1002/9780470750001.ch1 |chapter= IMS Context |year= 2009}}</ref> MQ for z/OS,<ref>{{Cite web|url=https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.0.1/com.ibm.mq.csqzal.doc/fg10370_.htm|title=IBM Knowledge Center MQ for zOS|website=www.ibm.com|date=11 March 2014 |language=en-US|access-date=2018-06-01|archive-date=2016-08-07|archive-url=https://web.archive.org/web/20160807050826/http://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.0.1/com.ibm.mq.csqzal.doc/fg10370_.htm|url-status=live}}</ref> and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex, the [[Coupling Facility]] or CF, dispensing with the need for most RORs. The CF provides a mapped view of resources, including a shared timebase, buffer pools, locks, and counters, with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and more reliable (utilizing a semi-synchronized backup CF in case of failure). By this time, the CMOS line had individual boxes that exceeded the power of the fastest ECL box, with more processors per box. When these were coupled together, 32 or more nodes could scale to two orders of magnitude more total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes, driven by one shared CICS/DB2 workload to support the vast volume of pre-[[dot-com bubble|dotcom-bubble]] web client inquiry requests.
This cheaper, much more scalable CMOS technology base, together with the huge investment cost of both moving to 64-bit addressing and independently producing cloned CF functionality, drove the IBM-mainframe clone makers out of the business one by one.<ref>{{Cite news|url=https://www.computerworld.com/article/2588995/retail-it/amdahl-gives-up-on--mainframe-business.html|title=Amdahl gives up on mainframe business|last=Vijayan|first=Jaikumar|work=Computerworld|access-date=2018-06-01|language=en|archive-date=2018-11-03|archive-url=https://web.archive.org/web/20181103131245/https://www.computerworld.com/article/2588995/retail-it/amdahl-gives-up-on--mainframe-business.html|url-status=live}}</ref><ref>{{Cite news|url=https://www.theregister.co.uk/2017/05/24/hitachi_exits_mainframe_hardware/|title=Hitachi exits mainframe hardware but will collab with IBM on z Systems|access-date=2018-06-01|language=en|archive-date=2018-06-13|archive-url=https://web.archive.org/web/20180613060121/https://www.theregister.co.uk/2017/05/24/hitachi_exits_mainframe_hardware|url-status=live}}</ref>