Tuesday, February 26, 2008

IBM System z10 EC Announcements - A summary

After lots of speculation, the z10 was finally announced today: 'IBM System z10 Enterprise Class - The forward-thinking mainframe for the twenty-first century'. As usual, IBM launches the high-end model of its new generation first. Just look at the new design in the picture and the nice green accent. I'll try to walk you through some of the features listed below as I interpret them at the moment.



Update #1 : added paragraph on 'software impact'
Update #2 : all documentation references up to date

Although under the covers it has a similar build to the z9, with its 2 frames, 3 I/O cages and 4-book CEC, it is slightly taller and takes up some more floor space. This might call for some physical planning when replacing a z9 EC.

General
The z10 EC (machine type 2097) comes, just like its predecessor the z9 EC, in 5 models: the E12, E26, E40, E56 and the largest model, the E64. The digits indicate the number of CPs at the customer's disposal. Each MCM has 17 processors (20 on the E64), but some are reserved: 2 spares and 11 SAPs per System z10. The distribution across the four books accounts for the somewhat odd numbers of the various models.
The z10 EC has a minimum of 16GB of memory and a maximum of 1.5TB. Each book can hold 384GB but, as we will see further on, not all of it is always available to the customer.
Exit STI, enter InfiniBand: this raises the throughput towards the I/O cages from 2.7GB/s to 6GB/s.

HSA = 0 ?! and preplanning
Recent announcements always carried the same message: watch out, the HSA has grown. IBM has now changed strategy: a fixed 16GB is always reserved for the HSA. This 16GB falls outside the memory the customer buys for his own use. The HSA reserves space for 4 CSSs, 15 LPARs in each CSS, and in each CSS the Subchannel set-0 and Subchannel set-1 MAXDEVICES are set to their maximum. How is it implemented? The customer orders e.g. 112GB of memory, pays for 112GB and has 112GB at his disposal, but the z10 is shipped with 128GB (112GB + 16GB for the HSA).
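To make the memory arithmetic concrete, here's a minimal sketch of my own (not an IBM tool) of the rule that the 16GB HSA sits on top of whatever the customer purchases:

    # Toy illustration of the z10 EC fixed-HSA memory rule (my own sketch).
    HSA_GB = 16  # fixed Hardware System Area on the z10 EC

    def shipped_memory(purchased_gb: int) -> int:
        """Physical memory shipped with the machine: what the customer
        bought plus the fixed 16GB HSA carved out on top of it."""
        return purchased_gb + HSA_GB

    print(shipped_memory(112))  # 128 -> the customer still sees the full 112GB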
This is not the only way in which availability improves significantly. Other preplanning requirements have also been brought down to a minimum, so you can "seamlessly create logical partitions (LPARs), include logical subsystems, change logical processor definitions in an LPAR (...) without a power-on reset".

Expanded scalability
The 1-book model (E12) gains extra scalability with more subcapacity models next to the 12 full-capacity settings (701 to 712). It has 36 subcapacity settings (401 to 612), which makes smaller upgrades possible in the lower ranges. The 400 series has approximately 24% of the capacity of the full-engine series, the 500 series sits around 52% and the 600 series at about 70%. An example: the 402 (a subcapacity model with 2 engines) has 51 MSU, which is almost 24% of the 702 with its 215 MSU. MSU ratings can be found over here: just scroll down to the end of the page.
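As a quick sanity check of those percentages, a few lines of Python (purely my own back-of-the-envelope arithmetic, using the MSU figures quoted above):

    # Back-of-the-envelope check of the subcapacity ratio quoted above.
    msu = {"402": 51, "702": 215}      # MSU ratings from the announcement material

    ratio = msu["402"] / msu["702"]
    print(f"402 vs 702: {ratio:.1%}")  # -> 23.7%, i.e. roughly the 24% of the 400 series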

Processor
The following question has already been raised many times: "Does System z get the same Power6 processor as System p and System i?" The answer is very clear: NO. The processor was partially designed by the same team, but it still has to meet higher requirements concerning reliability, cache utilisation etc. This was not really news, as we already knew last year's presentation by Charles Webb (IBM Fellow): 'IBM z6 - The Next-Generation Mainframe Microprocessor'. The z6 processor (z10 in the meantime) is a quad-core processor which supports 1MB page frames (in addition to the usual 4K) and Hardware Decimal Floating Point.
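To show why decimal floating point matters at all (the classic motivation, nothing z10-specific), here's a small Python sketch contrasting binary floating point with decimal arithmetic; on the z10 this kind of decimal arithmetic can be done in hardware instead of in software libraries:

    # Why decimal floating point matters for commercial arithmetic:
    # binary floats cannot represent most decimal fractions exactly.
    from decimal import Decimal

    print(0.1 + 0.2)                        # 0.30000000000000004  (binary floating point)
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3                  (decimal arithmetic)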
We had our reasons for never talking much about processor speed on the mainframe. But it now more than doubles, from 1.7GHz on the z9 EC to 4.4GHz on the z10 EC. With its 4.4GHz the mainframe is now also well suited to CPU-intensive workloads. I'm thinking of WebSphere, Java workloads, Linux consolidation … These are workloads that take maximum profit from the full processing power of the specialty engines.

HiperDispatch (and LSPR)
This is a completely new function that's only available on the z10. It isn't described at length in the announcement, so here's only what I can make of the information I have so far on this feature: it's a combination of firmware and z/OS software that optimises CPU usage. The starting point is the fact that CPUs can have different 'distance to memory' attributes: a memory access can take a varying number of cycles depending on the cache level and on whether a local or remote repository is accessed. PR/SM exchanges this topology information with z/OS, and z/OS uses it to construct its dispatching queues. The entire z10 EC hardware/firmware/OS stack now collaborates tightly to get the hardware's full potential. The function can be turned on dynamically; by default it's turned off. But beware: on the z10 the LSPR Multi-Image for z/OS 1.8 Mixed Workload estimates are run with HiperDispatch = ON.
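To make the idea of topology-aware dispatching more tangible, here's a deliberately simplified Python toy of my own (it models neither PR/SM nor the real z/OS dispatcher): work is preferably redispatched on a CPU in the same book it last ran on, so it finds its cache lines 'closer by'.

    # Deliberately simplified toy of topology-aware dispatching (my own illustration,
    # not how PR/SM or the z/OS dispatcher actually works).
    CPU_TOPOLOGY = {0: "book0", 1: "book0", 2: "book1", 3: "book1"}  # cpu -> book

    def dispatch(work_last_book, idle_cpus):
        """Prefer an idle CPU in the book the work last ran on (warm cache,
        short 'distance to memory'); otherwise take any idle CPU."""
        for cpu in idle_cpus:
            if CPU_TOPOLOGY[cpu] == work_last_book:
                return cpu
        return idle_cpus[0]

    print(dispatch("book1", [0, 3]))  # -> 3: stays close to its data in book1
    print(dispatch("book1", [0, 1]))  # -> 0: falls back to a more 'distant' CPU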

InfiniBand architecture
The z10 introduces a new kind of connectivity for the mainframe: InfiniBand. The InfiniBand Trade Association was founded back in 1999 with steering members IBM, QLogic, Sun, Cisco … Key requirements were high bandwidth and low latency. If you want to find out more about the technical details, here's the InfiniBand site.
There are 2 implementations of this connectivity on the z10. Exclusively for the z10, there's the InfiniBand I/O bus: the STIs of the z9, with a throughput of 2.7GB/s, are replaced by the IFB I/O bus with a throughput of 6GB/s. The second implementation is called PSIFB: Parallel Sysplex using InfiniBand. This will introduce InfiniBand coupling links in Q2. These new coupling links bridge a distance of 150 meters with a throughput of 6GB/s in each direction. The connectivity is realised with 50 micron cables and new MPO connectors, and it is defined with a new CHPID type: CIB (Coupling using InfiniBand). On 2 physical ports you can define 16 CHPIDs, which allows the same physical coupling links to be used for multiple sysplexes (see the sketch below).
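Purely as an illustration of what that sharing could look like (all CHPID numbers and sysplex names below are invented by me, and only a handful of the 16 definable CHPIDs are shown):

    # Illustrative only: CIB CHPIDs on 2 physical ports shared by several sysplexes.
    # All CHPID numbers and sysplex names are made up for this sketch.
    cib_chpids = {
        "port1": {"80": "PLEXPROD", "81": "PLEXPROD", "82": "PLEXTEST", "83": "PLEXTEST"},
        "port2": {"90": "PLEXPROD", "91": "PLEXDEV"},
    }

    for port, chpids in cib_chpids.items():
        print(port, "serves sysplexes:", sorted(set(chpids.values())))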
Connectivity towards the z9 is only possible to a z9 running as a dedicated Coupling Facility, and throughput is then reduced to 3GB/s. PSIFB coupling links towards a non-dedicated z9 EC or BC are still only a Statement of Direction.

Upgrade paths and operating systems
You can upgrade to a z10 from any z990 or z9 EC. As on the z9, Token Ring is no longer supported. Only HMCs with FC 0079, 0081 and 0084 are supported, and only the HMC with FC 0084 can be ordered with a new z10.
Take into account that some other features are no longer supported: the first generation of OSA-Express cards and ICB-3 links. IBM also intends the System z10 to be the last server to support ICB-4 links.
The z10 requires at least z/OS 1.7, but if you want to use all functionality you have to run z/OS 1.9. For the other operating systems: z/VM 5.2 and z/VSE 3.1. For Linux on System z you can run Novell SUSE SLES9 and SLES10 or Red Hat RHEL4 and RHEL5.

On Demand Flexibility
Capacity on Demand has a couple of improvements and novelties. Up to now it was not possible to do a permanent upgrade while a temporary On Demand solution was active, and CBU and On/Off Capacity on Demand could not be activated simultaneously. These restrictions have now been lifted.
On/Off Capacity on Demand (OOCoD) and Capacity Back-Up (CBU) are two forms of temporary upgrades we already know. But some things have changed.
For CBU you can now order several options, and more than one offering can be activated simultaneously. You can now also choose how many tests you want to order: 5, 10 or 15, as opposed to 5 on the z9. CBU contracts, again unlike before, now have a specific expiration date "specified through ordering quantities of CBU years".
With OOCoD, as for all other temporary upgrades, no access to IBM/RETAIN is needed to activate it: all records are on the z10 itself. This must also be the reason why you can no longer do administrative tests for OOCoD. You are, however, entitled to one no-charge test with a maximum duration of 24 hours.
One kind of temporary upgrade that was missing up to now was a CBU-like offering for planned outages. CBU was exclusively for unplanned outages, which sometimes led to creative interpretations of the CBU activation. IBM has now introduced Capacity for Planned Events (CPE). This can be used e.g. for planned data center relocations, planned power outages … CPE is purchased for a one-time activation of three days; after that you have to buy a new CPE record. You can activate all resources on the system: rules concerning e.g. the number of zIIPs in relation to the number of CPs do not apply. After three days the CPE activation is automatically terminated when possible.

OSA Express3 10GbE card
A new OSA-Express3 10GbE card is announced for Q2, exclusive to the z10. The main differences from its predecessor: improved throughput, reduced latency, and the number of ports per card goes from 1 to 2. "The connector is new; it is now the small form factor, LC Duplex Connector".

Software impact
It wasn't in the original announcement, but I just found it in the FAQ: by adjusting the MIPS-to-MSU ratio once again, IBM offers the customer an attractive upgrade path. As with the z9, in many cases you can stay technologically up to date and still save costs by upgrading to the z10. This is definitely the case when you are still running on a z990: paying for 100 MSU on a z990 translates into paying for 90 MSU on a z9 and 81 MSU on a z10.
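In other words, each generation shaves roughly 10% off the MSU rating for the same workload. A minimal illustration of the figures quoted above (my own arithmetic, assuming a flat 10% reduction per generation):

    # Roughly 10% lower software MSU rating per generation for the same workload
    # (my own illustration of the 100 -> 90 -> 81 MSU figures quoted above).
    msu = 100.0                            # what you pay for on a z990
    for machine in ("z9", "z10"):
        msu *= 0.90                        # ~10% technology dividend per generation
        print(f"{machine}: {msu:.0f} MSU") # z9: 90, z10: 81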


Documentation
The Future Runs on System z (Announcement information page)
System z10 EC web page
IBM System z10 EC Data sheet
IBM System z10 EC Reference Guide
IBM System z10 EC FAQ
Redbook z10 EC Technical Introduction (SG24-7515-00)
Redbook z10 EC Technical Guide (SG24-7516-00)
Slides from Tuesday's presentation: 'The Future Runs on System z'
The manuals can already be downloaded from IBM Resource Link.

Key dates
z10 EC announcement + availability: February 26, 2008
Model upgrades within the z10 EC: May 26, 2008
Feature upgrades within the z10 EC: May 26, 2008
Upgrades from z990, z9 EC to z10 EC: February 26, 2008
OSA-Express3 10 GbE LR: planned 2Q08
InfiniBand Coupling Links for z10 EC, z9 EC and z9 BC: planned 2Q08
Redbook z10 EC Technical Introduction: February 26, 2008
Redbook z10 EC Technical Guide: February 26, 2008
Redbook z10 EC Capacity on Demand: March 2008
Redbook Getting Started with InfiniBand on z10 EC: May 2008
zPCR available for customers : Planned end of March 2008


I'm aware I didn't cover everything (e.g. cryptography, HiperSockets, HMC, FICON enhancements, energy efficiency), and there's also a topic that isn't mentioned in the announcement although it has been in the past: I find no reference to the Cell processor. Bob Hoey mentioned it in the webcast tonight, but I'm not sure when or how it will be introduced. We'll just have to wait and see, I guess.

I'll surely post more on some of these aspects, Statements of Direction, publications and other related announcements. Here's a summary of all announcements made today.

Enjoy the read!

2 comments:

Dave Bartek said...

Very nice blog. I was looking for comments on z10 outside of 'BIG BLUE' and found your blog entries very good! Thank..keep it up..Dave
dbartek@us.ibm.com

Anonymous said...

I had the chance to see the IBM z10 at CeBIT...it's a monster :). The explanation that I received there from an IBM representative was outstanding. So much power compacted in this racks.
Great article with lots of information!
Good job!