Friday, December 2, 2016

New: SCRT support - now using the Service Request PMR process like other software programs

This morning I received an email from the SCRT team about a change in SCRT support. And I think it's worth mentioning here as well.

"IBM is pleased to announce a major change in the way SCRT is supported. Effective immediately: Support for the SCRT program is now using the standard IBM PMR service request process for reporting problems. PMR support for SCRT has been requested by many customers over the years. Please see the information below on the PMR process if you are unfamiliar."

Mind you, the PMR service is for technical problems.

"PMR service request process for technical problems with SCRT

To open a service request, you must navigate to http://www.ibm.com/support/servicerequest and sign in. Once signed in, select "New service request", then you must select "I am having a problem with software" from the list of options.

z/OS Customers:
When asked to select a product / component, enter SCRT, Sub-capacity Reporting Tool, or 5752SCRT2. Any of these keywords will locate the Sub-Capacity Reporting Tool.

z/VSE Customers:
When asked to select a product / component, enter VSE.

You must then provide any additional information necessary and submit the request.

If you have already supplied IBM with diagnostic information in your note to the SCRT id, please open the service request and forward the service request number to dmalani@us.ibm.com and andrewsi@us.ibm.com. The information you provided will be loaded into the service request on your behalf.

For additional help opening a service request, please see: https://www-946.ibm.com/sr/help/index.html"

Thursday, December 1, 2016

A brief history of Software Pricing


General Introduction

It was about 13 years ago that they asked me if I was interested in doing presales for the mainframe business. I came from the 'other' side, since I'd always been engaged in application development as a programmer, an analyst and a DB2 admin. So, at that moment, I'd never seen any mainframe hardware whatsoever. But we had an experienced mainframe sales person at the time who taught me everything there was to learn about ESCON, CHPIDs, OSA-Express cards, CECs etc. And then there was this other part as well: if you wanted (and this hasn't really changed) to work out a business case, you had to know something about software pricing too.

I still vividly remember my first steps in 'Software Pricing' on the mainframe, or should I rather say my first stumbles. I just looked up the word in a dictionary.

Stumble [stuhm buh l]
1. the act of stumbling (to strike the foot against something, as in walking or running, so as to stagger or fall)
2. a slip or blunder

I can assure you, both meanings were very appropriate at the time.

Just one year later, our experienced sales person took on another opportunity and, just like that, I was no longer the rookie but the most experienced mainframe (pre)sales guy in the company. A new sales rep was appointed for mainframe and of course I was the one introducing him to all this magnificent stuff. Don't worry, he pulled through with flying colours and we're still working together.

Still, it's a bit of a crazy story, but we first met just a couple of hours before the official announcement of the z890 in April 2004 (yes, the 40th anniversary of the mainframe). There was a large IBM and BP event in Paris and he picked me up at my home for a 3-hour drive to Paris. Of course we started talking about the mainframe, comparing it with the AS400 (now System i) he knew from his background. And, actually, we never stopped talking for the next three days. After each session new questions popped up and were, as well as possible, answered. Until finally we came to the subject of 'software pricing'. What could I say? The only thing there was to say: "Well, you know, here's where it gets a bit complicated". Talk about a euphemism.

And this remained a constant during the years to come. Talking to customers, the questions I most often heard about software pricing were: "Could you refresh my memory on that, I'm a bit confused?", usually followed by "Could you explain that once again, I don't think I'm following anymore?"

I would really like to say that things have luckily gotten a lot less complicated over the years, but they haven't.
So, why am I telling you all this? On November 15, we hosted a meeting for the GSE Young Professionals Working Group at our HQ. These are all young mainframers we desperately want to get, or keep, aboard the mainframe ship. And I thought it was a good opportunity to write them a 'short' article on the evolutions we've seen over the years in software pricing on the mainframe.


General tendency

To put what we are talking about into perspective: we, as a business partner, are often engaged in negotiations on the hardware price of a new system. Mainframe is expensive, remember? But when the customer starts working out his business case, it's always decided on the software part. Software usually represents 70% or more of the cost when e.g. mapped out on a four-year basis. So any decrease in software cost when going to a new generation of mainframe might already make up for the hardware cost.

And this is the general tendency we have seen over the past years. Or should I say: tendencies. On the one hand, IBM has always put effort into making the mainframe a competitive platform as compared to other, distributed environments. For the moment, we're ignoring all the benefits you get from the platform as such and purely focusing on the price, let's say, on the CFO part of the business. You can make it competitive by making sure that the customer is only paying for what he is really using. On the other hand, IBM has made sure that with every new generation of the mainframe, prices were more attractive on the new system. This meant investing in new hardware paid off in the long run with economically more interesting software prices.

And there's one more tendency that predominantly directs software pricing: it is IBM's intention that you pay less for new workloads that you introduce to the mainframe, rather than installing them on the 'cheaper' distributed systems. This is a key factor in remaining competitive with the distributed systems.

Starting off with machine based pricing

When browsing through my (old) documentation, I keep lingering at a document from 2004, the year of the introduction of the z890: the 'z890 and z800 Software Pricing P-Guide'. One of the first pages headlines 'What is Sub-Capacity Pricing?'. In 2004, sub-capacity pricing was a new term for a pricing mechanism that had been introduced for the z900. I know I'm generalising heavily throughout this summary in order to give you a clear view of the bigger picture.

Up to then, software pricing was machine based. This meant there was a flat monthly pricing amount according to the model of e.g. your z800 box. But that was always a problem. You don't buy a mainframe every year, so customers made a three or four year projection of what they would need in the coming years and bought their systems accordingly. So, in the first years, they only used e.g. 60% of what they had bought, but they still had to pay for the entire machine.

But before I continue with this, I should give you some more terminology. Software pricing is/was mainly divided into two categories: MLC (Monthly License Charge) and OTC (One Time Charge) pricing.

MLC pricing is a monthly charge you pay in order to use IBM software. Examples of this are the operating system z/OS (OS/390 at the time) and lots of others like e.g. DB2, CICS, Cobol, IMS . . . MLC software is paid for in terms of numbers of MSUs (Million Service Units). An MSU is a measurement of the amount of processing work a computer can perform in one hour. This stands in close relationship to MIPS (Million Instructions Per Second).

OTC, now better known as IPLA (International Program License Agreement), stands for One Time Charge. You actually buy the software, and you can pay an additional Subscription & Support (S&S) fee that also gives you the right to implement future versions at no extra cost. Examples are the operating system z/VM and lots of 'tools' like TSM, the DB2 Utilities . . .

Enter sub-capacity pricing

With VWLC (Variable Workload License Charge), IBM introduced sub-capacity pricing for MLC. This meant that you came close to paying for what you were really using at any given moment. Let me elaborate a bit on this, since it hasn't fundamentally changed since then.
In order to determine what you are actually using, you need a reporting tool. That is SCRT (Sub-Capacity Reporting Tool). It registers your usage for an entire month based on SMF records. Some products, like z/OS and DB2, generate their own SMF records for this. Others are charged at the same level as e.g. z/OS. From that data SCRT determines, per product, where your peak is during that particular month. Now, you may argue: one test in acceptance that goes completely through the roof might cause an enormous peak, and then I'm penalized for that one moment during that month. Well, IBM took care of that.
The SCRT report determines the peak on a 4-hour rolling average. This means that it always takes an average over the past four hours to determine the level of the workload, so extreme peaks are levelled out. This is illustrated on the graph below for two LPARs. And if you want to make sure you're not going above a certain level, you can use a mechanism known as capping. You can indicate that a certain LPAR (or group of LPARs) must not go beyond a defined level.


In this illustration, partition A's rolling four-hour average is shown to peak at 73 MSUs during the month. Let's take z/OS as an example. If it were running solely in partition A, it would have its sub-capacity charges based on that 73 MSU value, although the machine capacity is 100 MSUs. Likewise, partition B's peak rolling four-hour average is recorded at 52 MSUs. A product running solely in partition B would have its sub-capacity charges based on that 52 MSU value. But since z/OS is running in both LPARs, it will be charged at the combined peak for those LPARs, i.e. 98 MSUs. Here we can also illustrate the capping mechanism again: if LPAR B is e.g. a test LPAR, you might put a cap of 45 MSUs on it and perhaps your combined peak might be lowered to 95 MSUs.
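For the technically curious, the rolling-average mechanism can be sketched in a few lines. This is purely illustrative and not the actual SCRT implementation: I'm assuming utilisation is sampled every 5 minutes, so a 4-hour window spans 48 samples, and capping is simulated by simply clipping the samples.

```python
def rolling_peak(samples, window=48):
    """Highest average over any full 4-hour window of MSU samples."""
    return max(
        sum(samples[i:i + window]) / window
        for i in range(len(samples) - window + 1)
    )

def capped(samples, cap):
    """Simulate LPAR capping: utilisation never exceeds the cap."""
    return [min(s, cap) for s in samples]

# A single extreme 5-minute spike barely moves the 4-hour average:
flat = [50.0] * 96          # eight hours of steady 50 MSUs
spiky = flat[:]
spiky[60] = 500.0           # one test run that goes through the roof
# rolling_peak(flat) -> 50.0, rolling_peak(spiky) -> only ~59.4
```

A combined peak for two LPARs, as in the z/OS example above, would simply apply rolling_peak to the element-wise sum of the two sample lists.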

As you can see, the reporting and therefore also the billing is based on MSUs.


The graph above also shows you the pricing levels for the MSUs. It indicates that you pay a high price for the first MSUs. This is an example of VWLC pricing as it was first introduced. The more MSUs you are reporting, the less you pay per MSU for the higher MSUs. As a matter of fact, there's such a graph for every system, and the steps for the smaller systems (z890 through z13s) tend to be much smaller. You have expensive MSUs 1-3, and from there on MSUs get less expensive in far smaller steps.
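To make that banded pricing concrete, here's a minimal sketch. The band boundaries and per-MSU prices below are invented for illustration only; real VWLC price tables differ per product and per system.

```python
# Hypothetical tiered MLC price table: (band upper limit in MSUs,
# monthly price per MSU in that band). Prices fall at higher bands.
BANDS = [
    (3, 1000.0),
    (45, 500.0),
    (175, 250.0),
    (float("inf"), 120.0),
]

def monthly_charge(reported_msus):
    """Sum the charge band by band up to the reported peak MSUs."""
    charge, lower = 0.0, 0
    for upper, price in BANDS:
        in_band = max(0, min(reported_msus, upper) - lower)
        charge += in_band * price
        lower = upper
        if reported_msus <= upper:
            break
    return charge
```

With this table, the average price per MSU drops as the reported peak grows, which is exactly the shape of the curve in the graph.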

IPLA software also has sub-capacity pricing, although some products are still machine based. Sub-capacity IPLA products are usually tied to the reporting of some MLC software. If you have a machine of 200 MSUs and you report only 150 MSUs for DB2, then it's sufficient to have bought the equivalent of those 150 MSUs for the DB2 Cloning Tools or for the DB2 Utilities.

General price decreases – Technology dividend

So, the stage is set for the following evolutions. The first action taken by IBM was a general one, and I already mentioned it before. Customers want to be on a current system, but on the other hand they often postpone buying new systems when there's no real reason to. So, with the z990, IBM introduced what is now commonly referred to as the technology dividend. In fact, it's a very simple maneuver that stimulates every customer to at least calculate whether it's beneficial for them to go to the next generation.

When I put 'price decrease' in the title, that's not entirely correct. For a specific generation, there's a correlation between the number of MIPS on a machine and its number of MSUs. This used to be very static, not really changing over the generations. But over the last ten years we started calling MSUs 'software MSUs', and the relation with MIPS became less evident. Let me just give you a small table and it will immediately be clear what I mean.

System   Pricing   MIPS   MSU
z890     EWLC      200    32
z9 BC    EWLC      200    30
z10 BC   EWLC      200    25
z114     AEWLC     200    25
zBC12    AEWLC     200    25
z13s     AEWLC     200    25

For the first three generations, from z890 to z10 BC, fewer MSUs were needed to cover the same amount of processing power. So, where you had to pay for 32 MSUs on a z890, this went down to 25 MSUs on a z10 BC. And I can assure you, for a mainframe customer this was a huge benefit on their software cost, well worth the investment in the new hardware. This technology dividend amounted to more or less a 5% decrease each time.

From then on, we see that the correlation between MSUs and MIPS stayed the same. But with the z114, a new pricing was introduced. With AEWLC pricing you paid less per MSU as compared to EWLC pricing on the z10 BC. Another technology dividend, but implemented in a different way. And for the last two generations, we saw the same pricing mechanism, but an extra reduction was given per technology step. So, if you look up the official price for 3 MSUs of z/OS, it hasn't changed since the introduction of the z114. But for the z13s you will get a price reduction on it of about 10%.

General price decreases – New Workloads

The above are pricing decreases every customer could/can enjoy when moving on to the next technology. But, as I already said, IBM is particularly keen on getting new workloads on the mainframe and has gone to great effort to do so.

For those who are familiar with the terms, this started off with z/OS.e and zNALC. zNALC is the better known of the two and is still in use nowadays. Let's focus on that one. zNALC is a pricing mechanism for new workloads that run on separate machines or in separate LPARs. The benefit is realized very simply: if you can prove that it's a new workload on your mainframe, meeting all the right criteria, your z/OS is priced at only a tenth of the normal pricing. This gives the customer a huge benefit on the software cost, since z/OS is one of the most expensive products in MLC pricing.

The major drawback is that it has to be a completely isolated workload in a separate environment. And that was always a bit the weak point of this solution. Say you have all your data on the mainframe, as so many customers do, and you develop, let's go modern, an app for your customers to consult their accounts. What about all the other applications, running in another LPAR, that also need access to that information? This is often very difficult to integrate into your existing environment.

New Workloads – from separated to integrated

That's why, over the years, IBM has made a lot of efforts to give you the best of both worlds: a better pricing for new workloads, but integrated into the existing environment. One of those efforts was e.g. to bring out a new pricing option for certain products like CICS and DB2. It's called the Value Unit Edition and it's basically an IPLA equivalent of that product. You buy it once and you pay a Subscription & Support fee for the 'maintenance' and the upgrades of the product. Every customer has to investigate whether this is beneficial for them or not. The advantage is that the workload remains integrated into your existing environment.

Another initiative that could have an impact on your software cost was the introduction of specialty engines. Here we can particularly focus on the zIIP. The zIIP (z Systems Integrated Information Processor) is an additional processor that is added to your system. Specific workloads are offloaded from z/OS to the zIIP. This can be specific DB2 workloads, Java, encryption or XML workloads (you can find a more extensive list over here). Here's the illustration that is often used to show how the zIIP works.


As you can see, the workload on the general processor is reduced, as part of it is now executed on the zIIP. There's an important caveat to be made here: this will only have an influence on your software cost if it happens during your peak period of the month. If so, here's another example of how the software cost can go down thanks to a hardware investment.
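That caveat is easy to show with a toy calculation (all numbers are made up): offloading MSUs to a zIIP only lowers the bill when the offloaded work would otherwise have contributed to the monthly peak interval.

```python
def billable_peak(gp_msus):
    """The sub-capacity bill follows the peak on the general CPs."""
    return max(gp_msus)

# Four 4hr-rolling-average readings for one month, general CPs only
# (specialty-engine work doesn't count towards MLC billing):
month = [80, 120, 150, 90]

# Offload 30 zIIP-eligible MSUs during the peak interval ...
with_ziip_at_peak = [80, 120, 120, 90]
# ... versus offloading the same 30 MSUs during a quiet interval:
with_ziip_off_peak = [50, 120, 150, 90]

assert billable_peak(month) == 150
assert billable_peak(with_ziip_at_peak) == 120   # the bill goes down
assert billable_peak(with_ziip_off_peak) == 150  # no effect on the bill
```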

New workloads today

Let's finish off by having a look at the enhancements we recently saw in sub-capacity pricing. The first one was announced about two years ago, the third about two months ago. They have in common that they, again, focus on new workloads like mobile or cloud.
-       Mobile Workload Pricing (MWP)
Used when IBM programs such as CICS, DB2, IMS, MQ, or WebSphere Application Server are processing mobile transactions from phones or tablets.
-       z Systems Collocated Application Pricing (zCAP)
Used when net new instances of IBM programs such as CICS, DB2, IMS, MQ, or WebSphere Application Server are added in support of a new application not previously running on a mainframe server.
-       z Systems Workload Pricing for Cloud (zWPC)
Used when IBM programs such as CICS, DB2, IMS, MQ, or WebSphere Application Server are processing transactions from net new public cloud applications.

The three have in common that they may reduce the cost of growth for these target applications by potentially reducing the reported peak capacity values for sub-capacity charges. This remains a constant throughout: it must have an impact on your peak reporting, otherwise you'll see no benefit for your software cost.

Integration of new workloads into your existing environment is one thing; distinguishing which part of the workload is new (e.g. mobile, "coming from phones or tablets") is another. So, just in case you started to think that I exaggerated at the beginning and that software pricing isn't all that complicated, let me explain how Mobile Workload Pricing can influence your reported peak and how workload is recognized as 'mobile' workload. This gives me the opportunity to illustrate all the mechanisms I talked about earlier.

Here’s an illustration on MWP


  1. Let's assume that we are talking about the 4hr rolling average peak for this LPAR. Normally your SCRT tool would report a peak 4hr rolling average of 1,500 MSUs: 1,500 for z/OS and 300 for CICS. Maybe this customer is using a zIIP, which could already have an influence on the reported peak that would otherwise perhaps have been 1,800 MSUs.
  2. For CICS, as we already indicated, 300 MSUs are reported. That means that for z/OS you would pay for 1,500 MSUs and for CICS you would pay for 300 MSUs. Other products like DB2 have their own reporting, but let's assume they report the same MSUs as z/OS.
  3. Now you have to determine which part of CICS is used for mobile transactions and which part is not. This can be particularly difficult, since the same transaction might be executed many times, but it can originate from a mobile app or perhaps also from a plain and simple terminal. This is something a customer has to agree upon with IBM, and it's based upon e.g. specific fields of SMF records.
  4. Based on that, you come to the conclusion that 200 out of the 300 MSUs reported by CICS actually have a mobile origin. Well, you get a reduction of 60% on that part of the CICS workload, reducing those 200 MSUs to 80 MSUs.
  5. The good news is that all the software in that LPAR, including z/OS and DB2, gets the same reduction. This means that the reported 1,500 MSUs are lowered to 1,380 MSUs.
  6. The rest is BAU (Business As Usual): you can use this to determine the peak for that month. It is e.g. possible that at another moment your z/OS reports 1,400 MSUs at a moment when no mobile workload enters the system. This would mean that for that month your peak will be at 1,400 MSUs.
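The arithmetic in the steps above can be condensed into a small sketch. The 60% relief factor and the MSU figures come from the example; the helper function and its name are mine, not part of any IBM tooling.

```python
MWP_RELIEF = 0.60  # MWP removes 60% of the mobile-originated MSUs

def mwp_adjusted(reported_msus, mobile_msus):
    """Reported MSUs minus 60% of the mobile share (hypothetical helper)."""
    return reported_msus - MWP_RELIEF * mobile_msus

# Steps 2-5: CICS reports 300 MSUs, 200 of which are mobile ...
cics = mwp_adjusted(300, 200)   # 300 - 120 = 180 MSUs for CICS
# ... and the whole LPAR, z/OS and DB2 included, gets the same relief:
lpar = mwp_adjusted(1500, 200)  # 1,500 -> 1,380 MSUs

# Step 6: the billable peak is still the highest adjusted interval,
# so a 1,400 MSU moment without mobile traffic becomes the peak:
monthly_peak = max(lpar, 1400)
```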
Well, this little example concludes our short walk through software pricing history. I might just add one source of information: there's a very elaborate IBM internet page that explains absolutely everything about software pricing you ever wanted or, perhaps, didn't want to know. It's all there for you.

(This was also published in our Realdolmen z Systems newsletter)

Wednesday, November 23, 2016

Red Alert - IMS V12, V13 or V14 - potential for IMS to write incorrect log data

Yesterday, IBM issued a Red Alert on IMS. Here's the info. I'm just taking over the text from the Red Alert.

Title:

IMS V12, V13 or V14 - potential for IMS to write incorrect log data and/or over-write 64-bit (key 7) common storage which might belong to another address space.

Users Affected:

Users of IMS that are on V12 or later,
  • who are using log buffers in above-the-bar storage (BUFSTOR=64), or
  • who have callers that supply an above-the-bar log record to the IMS Logger, such as:
      • the IMS 64-bit Fast Path buffer manager
      • installation-written or vendor-written software

Description:

The IMOVE macro (used by the IMS Logger) uses 32-bit instructions to advance the addresses of the source and target destinations. If the address being advanced is a 64-bit address which crosses a 4GB boundary, then the updated address is incorrect since only the low-half of the address is updated.

The two areas of IMS that are exposed to this are the IMS Logger and the 64-bit Fast Path buffer manager.

The IMS Logger is exposed to the problem if the storage allocated for the log buffers crosses a 4GB boundary. If this is the case, a log record will contain inaccurate data and an attempt will be made to inadvertently write to another location in memory. If this location is key 7, the data will be overwritten; if it is not, IMS will ABEND0C4.

The 64-bit Fast Path buffer manager is exposed to the problem if its BPND5 area crosses a 4GB boundary. To encounter the problem, not only would the area have to cross a 4GB boundary, but log record data must exist in the location where the boundary is crossed.

Recommended Actions:

(1) Determine exposure from IMS Logger or 64-bit Fast Path buffer manager
The IMS Support Center is shipping a diagnostic utility as a ++USERMOD which will determine the exposure of a given IMS system. This can help a customer gauge the urgency for which they should take action, if any. The utility runs as a stand-alone batch job and determines:
  1. If the log buffers are above the bar, and if the storage allocated for the log buffers crosses a 4GB boundary.
  2. If the 64-bit Fast Path buffer manager is enabled, and if the storage area which might contain a log record crosses a 4GB boundary
To download the utility, obtain it from:
Site: testcase.software.ibm.com
Directory: fromibm/im
File name: IM97624A.trs

The file contains ++USERMODs for IMS versions 12, 13, and 14. Instructions for running the utility are in a ++HOLD card along with its return codes.

If a customer cannot (or does not wish to) run the utility, instructions for determining the exposure via a series of /DIAGNOSE commands are in the ++HOLD card.

(2) Determine exposure from other programs which invoke the ILOG macro:
Installation-written or vendor-written programs can detect if they pass log records which reside above the bar by searching for the flag PRMLL64 which is set by the invoker of ILOG in the ILOG parameter list.

(3) Apply PTF if exposure is identified
If an exposure is identified, the exposed IMS should be:
  1. Brought down cleanly (for example, with the /CHE FREEZE or /CHE DUMPQ commands)

  2. Cold started with the PTF applied for the appropriate version. A cold start is necessary to prevent the reading of potentially inaccurate log data. The PTFs can be downloaded from ShopZ and are:
     • V12: UI42725 (APAR PI71688)
     • V13: UI42726 (APAR PI71701)
     • V14: UI42685 (APAR PI71702)

  3. After all previously exposed IMS systems in a data sharing group are successfully started with the PTF applied, if there is the possibility of needing to run a database recovery before regularly scheduled image copies will be taken, it is recommended that image copies be taken of all appropriate databases. This eliminates the possibility of reading potentially corrupt log data during a recovery.

If you haven't signed up to the Red Alerts by now, you really should do it. Just go over here.

Tuesday, November 22, 2016

Realdolmen z Systems e-zine 25


The 25th issue of our RealDolmen z Systems Newsletter was sent out today. You can download it over here. Just like the last few times, there's only an English version; no more Dutch or French versions. Since it's been a bit quiet over here on the blog lately, there's lots of new content in this issue. I think I'll post some of it over here as well.

The content? There is one major topic: software pricing. Usually we host the GSE z/OS Working Group at our HQ, but on November 15 we hosted the GSE Young Professionals Working Group. And for that occasion, I thought I'd write 'A brief history of IBM Software Pricing'. It's an introduction to the evolutions we saw in mainframe software pricing over the last 25 years.

We also focus on some announcements, as usual there are some hints and tips and of course the usual table of EOS dates of the operating systems is also present again.

One last note: if you're used to receiving our newsletter and you didn't receive it this time, just send me a mail and I'll take care of it. We apparently changed mailing systems and I just want to check that everything went all right.

Enjoy the read!

Tuesday, November 8, 2016

Realdolmen

Once in a while, I'm going off topic, but there's always a reason why. So today I would like to tell you a bit about our company :
Realdolmen is one of the biggest independent ICT experts in Belgium. With around 1,250 highly trained employees, we provide services to over 1,000 customers in Benelux. 
We strive to make ICT more personal, to make the most of your employees’ and your organisation’s potential in every collaboration we’re a part of. We do all this with the motto: To get there, together.
But I especially want to elaborate a bit on our rebranding. You might already have seen we have a new logo  


But there's more than this. It was inevitable: after helping numerous companies with their digital transformation in 2016, we also underwent a radical change ourselves. Our structure, our approach, our story: it all needed updating to better meet our customers' needs. Here's a video explaining our strategy and why we went through this rebranding exercise.




For this occasion, we also have a special edition of our simplICiTy magazine that you can download over here. A few of the topics discussed include:

  • The story behind the new logo and baseline
  • Why we – even though we’re still a technology business – are putting people at the centre of our approach
  • How we’ve updated our services to support our customers as much as possible at an operational, tactical and strategic level
  • The Internet of Things (IoT), artificial intelligence (AI), smart machines, the hybrid cloud, big data: what trends and hypes do you as an organisation need to take into account?
  • What role can ICT play in your organisation’s digital transformation process?
Well, let me know what you think of it !

Monday, November 7, 2016

Red Alert - Potential for loss of access to data on a volume due to erroneous 'End of File' written to the VTOC or VTOCIX on z/OS V2R2 DFSMS

Here's the information on the second Red Alert. It was first published on November 2 (link) with an update on November 4 (link). I'm giving you the info from the updated version.

Abstract:

Potential for loss of access to data on a volume due to erroneous 'End of File' written to the VTOC on z/OS V2R2 DFSMS (HDZ2220)

Users Affected:

Systems running z/OS V2R2 with DFSMS Secondary Space Reduction (SSR) function enabled (default)

Description:

For systems with DFSMS Device Manager Secondary Space Reduction enabled, the VTOC may be updated incorrectly if an AbendE37 occurs when no storage space is available while trying to extend to secondary extents on a new volume.

Recommended Actions:

Disable z/OS SSR function of Device Manager until fix for APAR OA51499 is available (instructions below).
Please note: If recent PTF UA82730 is applied then you must remove this PE PTF or apply ++APAR for OA51474 prior to disabling SSR. Please see APAR OA51499 Local Fix for additional details.
Disabling z/OS SSR:
  • Issue z/OS command F DEVMAN,DISABLE(SSR) for the current IPL
        -or-
  • Update the DEVSUPxx parmlib member to include DISABLE(SSR) and issue z/OS command SET DEVSUP=(xx,yy) or SET DEVSUP=xx on all systems to persist across IPLs.

If you haven't signed up to the Red Alerts by now, you really should do it. Just go over here.

Red Alert - z/OS V2R2 - Potential for z/OS V2R2 JES2 (HJE77A0) members unable to WARM start

Last week there were two important recommendations for z/OS V2R2 users. Here's the info on the first one. I'm just taking over the text from the Red Alert.

Abstract #1:

Potential for z/OS V2R2 JES2 (HJE77A0) members unable to WARM start.

Users Affected:

Systems running z/OS V2R2 that have the JES2 Checkpoint residing on DASD -and- running in DUAL mode (CKPTDEF MODE=DUAL)

Description:

For systems that meet the above criteria, JES2 uses a change log to contain a summary of updates since the previous checkpoint cycle. JES2 may encounter I/O errors ($HASP291) and/or abends (S18A) when attempting to read in a large valid checkpoint change log. A checkpoint I/O reconfiguration will be triggered in this case, however potential overlays of JES2 private storage from the original error may result in unpredictable errors after exiting the reconfiguration.
Any z/OS V2R2 JES2 member in the multi-access spool (MAS) will not be able to read the checkpoint containing a large valid change log without encountering abends. z/OS V2R1 JES2 members in the MAS will be able to read the checkpoint appropriately, but the z/OS V2R2 members will not be able to WARM start. If the entire MAS is comprised of z/OS V2R2 members, then no member will be able to process the checkpoint and this may result in a required COLD start.
Please see APAR OA51558 for additional details.

Recommended Actions:

For users that meet the above criteria, checkpoint should be switched to DUPLEX mode (CKPTDEF MODE=DUPLEX) to avoid the situation.

If you haven't signed up to the Red Alerts by now, you really should do it. Just go over here.

Thursday, October 27, 2016

Last call for MES upgrades on your z12 system(s)

Let me refresh your memory: in case you want a MES upgrade on your z12 system(s), like disruptively adding memory, books or I/O cards, you have until the end of the year. Or say you plan on acquiring an IDAA next year: if you stay on your zEC12 or zBC12, you will have to order the necessary 10GbE cards while it's still possible. Afterwards it will be a no-go. This was announced in mid-February this year, shortly after the announcement of the z13s (ZG16-0021).

During 2017, 'upgrades' that are delivered through a modification to the machine's Licensed Internal Code (LIC) will still be possible, e.g. adding additional CBU records or activating memory that's already in the machine. Here's an overview I made a couple of months ago.


Wednesday, October 26, 2016

Hardware withdrawal: IBM TS3500 Tape Library models L23 and L53 and select features

Do you remember when the TS3500 base frames were first announced? No?
I also had to look it up: the L23 and L53 were first announced in May 2006, and now their End of Marketing date has been announced: 'Hardware withdrawal: IBM TS3500 Tape Library models L23 and L53 and select features - Some replacements available (ZG16-0129)'.

Effective October 7, 2017, you will no longer be able to order the L23 (base frame for 3592) and L53 (base frame for LTO) frames. This seems like a logical evolution: we have had a replacement for some time now with the TS4500 L25 and TS4500 L55 frames.

So I wonder who would still want to order a new TS3500 Tape Library at this point, when the TS4500 is a valid replacement. But don't worry if you want to expand an existing TS3500: since the D23 and D53 frames are not mentioned, and no field installs are mentioned either, I assume that you can still order additional expansion frames for the TS3500 tape library.

Here's some information on the TS4500 Tape Library:
  • The IBM TS4500 Overview page where you can find a demo, the Data Sheet and an Infographic on 'Why TS4500 for z Systems'
  • The IBM Knowledge Center TS4500 documentation page

Tuesday, March 15, 2016

Potential exposure for undetected loss of data for z/OS users writing data to zEDC compressed data sets using QSAM

Here's a new Red Alert. I'm reproducing the content below.

Abstract:
Potential exposure for undetected loss of data for z/OS users writing data to zEDC compressed data sets using QSAM on releases z/OS 1.13, 2.1 & 2.2.

Description
Data loss may occur when writing to a zEDC compressed data set using QSAM when CLOSE is issued and there exists a partial QSAM buffer that has not yet been written, and all allocated space in the data set is filled. In this case, the unwritten partial buffer may not be written after CLOSE processing obtains the new extent for the data set.

The problem does not apply to BSAM or other access methods.

Please see APAR OA50061 for additional information.

Recommended Actions 
++APAR for OA50061 should be applied for all environments with zEDC compressed data sets. This requires an IPL to activate.

You can find the APAR information over here, where you can read some more detail:
"During creation of a new extent during CLOSE processing, any
user blocks that have not been compressed and written out to the
zedc compressed dataset yet will not be written out.

This does not apply to BSAM processing since all outstanding
WRITE requests must be CHECK'd for completion before CLOSE is
issued."
For the moment it also says: "++APAR will be available soon".

If you haven't signed up for the Red Alerts by now, you really should. Just go over here.

Thursday, February 25, 2016

New Cobol Compiler V6.1

At the same time as the announcement of the z13s, IBM also announced a new version of its Cobol compiler: 'IBM Enterprise COBOL for z/OS, V6.1 delivers new and enhanced COBOL statements for added function and flexibility (ZP16-0093)'.

Here's an overview from the announcement:
"IBM® Enterprise COBOL for z/OS®, V6.1 exploits the capabilities of IBM z/Architecture® and continues to add the following new features and enhancements:
  • The compiler internals are enhanced so that you can compile your large programs.
  • New and changed COBOL statements are included to provide you with additional functions.
  • New and changed compiler options are implemented to provide you with added flexibility. 
  • Support for generation of JSON texts
  • The ARCH compiler option allows you to exploit and tune your code to run on your choice of z/Architecture levels of the z/OS platform.
  • The OPTIMIZE compiler option allows you to select from multiple levels of increasing optimization for your code.
  • Support for the latest middleware includes IBM CICS®, IBM DB2®, and IBM IMS™."
Enterprise COBOL for z/OS V6.1 requires z/OS V2.1 (5650-ZOS) or later and becomes available on March 18, 2016. The program number is 5655-EC6. It runs on z9 systems and higher.

As with previous versions, there is also a trial version announced (ZP16-0094).
"IBM® Enterprise COBOL Developer Trial for z/OS®, V6.1, an evaluation edition of Enterprise COBOL for z/OS, is a no-charge offering and does not initiate a Single Version Charging (SVC) period.
Enterprise COBOL Developer Trial lets you assess the value that could be gained from migrating to Enterprise COBOL for z/OS, V6.1, before making a formal decision to upgrade. This trial enables the evaluation of the latest IBM COBOL capabilities, in a nonproduction environment, without the prerequisite time and resource commitments that are required for a full-production migration project. The trial period is 90 days."
Next to this, there's also a Value Unit Edition (ZP16-0095).
"IBM® Enterprise COBOL Value Unit Edition, V6.1 delivers a compiler with a one-time charge price metric that is based on Value Units. The Value Unit price metric supports both full-capacity and sub-capacity environments."
In other words, this is the IPLA (or OTC) version of Cobol V6.1, with one program number for the product (5697-V61) and one for the Subscription & Support (5697-ECS).

Well, that's about it for the new Cobol compiler. Check out the COBOL for z/OS documentation library for more information; I'm sure the 6.1 documentation will soon be available. If you need even more info, you can also have a look at this Enterprise Cobol for z/OS Resources Page.

Tuesday, February 23, 2016

Preview of IBM z/VM V6.4

On February 16, 2016, along with the z13s, IBM also made a preview announcement of z/VM 6.4: 'Delivering industry-proven advanced virtualization capabilities to support the increasing demands of your business (ZP16-0014)'.

z/VM 6.4 is planned to become available in the fourth quarter of 2016, and it will support the two LinuxONE versions and System z from the z114 and z196 onwards. Mind you: z/VM 6.3 still supported the z10 systems, but z/VM 6.4 will NOT!

Here's a short overview directly from the announcement:
"z/VM V6.4 enables extreme scalability, security, and efficiency, creating cost savings opportunities, and provides the foundation for cognitive computing on z Systems and LinuxONE.  z/VM V6.4 is planned to deliver:
  • Increased efficiency with HyperPAV paging that takes advantage of DS8000 features to increase the bandwidth for paging and allow for more efficient memory management of over-committed workloads.
  • Easier migration with enhanced upgrade-in-place infrastructure that provides an improved migration path from previous z/VM releases.
  • Improved operations with ease of use enhancements requested by clients, such as querying service applied to the running hypervisor and providing environment variables to allow programming automation based on systems characteristics and client settings.
  • Improved Small Computer System Interface (SCSI) support for guest attachment of disk and other peripherals, and hypervisor attachment of disk drives to z Systems and LinuxONE systems to:
    • Increase efficiency by allowing an IBM FlashSystem® to be directly attached to z/VM for system use without the need for an IBM System Storage® SAN Volume Controller (SVC).
    • Enable ease of use with enhanced management for SCSI devices to provide information needed about device configuration characteristics.
  • Increased scalability by exploiting Guest Enhanced DAT to allow guests to take advantage of large (1 MB) pages, decreasing the memory and overhead required to perform address translation.
  • Integration of new CMS Pipelines functionality which previously was not formally incorporated within the z/VM product, allowing a much more inclusive set of tools for application developers. "
Next to that, there's also support for simultaneous multithreading (SMT) and for the vector extension facility (SIMD) instructions on the z13 systems. The latter helps "accelerating Business Analytics workloads on z13, z13s, and LinuxONE".

Have a look at the z/VM page and the z/VM 6.4 Resources page; more information will certainly become available on those pages in time. And it's always interesting to follow the discussions on the z/VM discussion list.

Friday, February 19, 2016

Hardware withdrawal of zEC12, zBC12 and zBX Model 004

It's become customary by now: the announcement of one system is accompanied by the End of Marketing announcement of the previous generation, in this case the zEC12 and the zBC12: 'Hardware Withdrawal: IBM zEnterprise EC12, IBM zEnterprise BC12, and IBM z BladeCenter Extension (zBX) Model 004 - some replacements available (ZG16-0021)'

However, it's a bit more complicated this time, because what used to be a two-step operation is now somewhat of a three-step one. Remember with the z10 EC and z10 BC, or with the z196 and z114:
First there was the End of Marketing of the system itself and of all hardware MES upgrades, like disruptively adding memory, adding books or I/O cards.
And secondly, a year later, all LIC upgrades, like adding a processor within the same book or drawer or non-disruptively adding memory, were ended as well.
This is again the case with the zEC12 and zBC12, except that six months before the first step, all delivery of and upgrades to a z12 are no longer possible "for the listed RoHS Jurisdictions". This includes, I think, most of the European countries. So, this is how the withdrawals go:

June 30, 2016: RoHS jurisdictions
  • All models of the zEC12 and upgrades towards them
  • All models of the zBC12 and upgrades towards them
December 31, 2016: all countries
  • All models of the zEC12 and upgrades towards them
  • All models of the zBC12 and upgrades towards them
  • Model conversions and hardware MES features applied to an existing zEC12 or zBC12 server
December 31, 2017
  • Field-installed features and all associated conversions that are delivered solely through a modification to the machine's Licensed Internal Code (LIC)

You know I mentioned before that I thought the End of Marketing for previous generations was quite tight from a customer point of view. Here’s a brief overview.

Click on image for larger version

I also mentioned the zBX 004. Well, as of March 31, 2017, the last zBX Model 004 will have been marketed, and no more upgrades from previous models will be possible either. I guess no further comment is needed with regard to the zBX?

Thursday, February 18, 2016

Two Flash Alerts for DS8870

Well, life goes on, and this week I saw two flash alerts on the DS8870 passing by. You can find them here and here. They're not that long, so I'm just reproducing both of their contents.

DS8870 Global Mirror suspend caused by a Track Format Descriptor mismatch

Abstract
Global Mirror suspends caused by a microcode logic error introduced in R7.4 that results in a Track Format Descriptor mismatch. 

Content
Global Mirror suspends caused by a microcode logic error introduced in R7.4 that results in a Track Format Descriptor mismatch. Microcode is improperly setting a flag in a PPRC control block. This problem is pervasive in Global Mirror environments. R7.4 code levels below 87.41.44.0 and R7.5 levels below 87.51.23.4 are exposed to this issue.
Recommendation:
A mandatory ECA 712 is being released to the field, we recommend upgrade to code bundle R7.5 87.51.23.4

High Performance Flash Enclosure drive failures can cause loss of access

Abstract
IBM released microcode with improved error handling for DS8870 High Performance Flash Enclosure (HPFE) flash drive errors. 

Content
IBM has developed microcode enhancements for error handling in HPFE. DS8870 (R7.5 SP2.3 ) 87.51.23.4 microcode levels contain the changes to improve the reliability and availability of High Performance Flash Enclosures. This enhancement streamlines error detection and isolation when a failing Flash Drive exhibits excessive errors.

This change is designed to be concurrently installable on DS8870 presently running the R7.x families of microcode.

Recommendation:
A mandatory ECA 737 is being released to the field, we recommend upgrade to code bundle R7.5 SP2.3 87.51.23.4 as soon as possible.

ID: 312723 312369 313417


Tuesday, February 16, 2016

z13s - a new generation of z Systems built for cloud and mobile


General Introduction



A couple of weeks ago IBM announced its new version of the LinuxONE, in particular the smaller version, known as the Rockhopper. As a matter of fact, anyone who follows the mainframe market a little bit could guess it was based on the new little brother of the z13. And it was, of course. So today IBM announced the new z13s. Although LinuxONE was announced via press channels, it is now coupled with the z13s in the official announcement: "Expanding the IBM Systems' portfolio with additions to IBM z Systems and IBM LinuxONE (ZG16-0002)". I'm going to give you an introduction to the z13s, but I will also refer from time to time to the LinuxONE Rockhopper of course.

You know I largely concentrate on the technical aspects of the announcement rather than focusing on the strategic importance of the platform. Lots of other sources will give you plenty of information about this. Take e.g. a look at this IBM page covering the z13s and its relation to Cloud, Security, Analytics and DevOps. And you can find quite a few videos on YouTube as well.


I must immediately say, I don't like the naming of this new system: z13s. That was my plural for two z13s... oops, I guess that will now be two z13's and two z13s's. Anyone have a better suggestion?

But let's get to the content of the announcement. I'll give you the usual survey of the new system, starting with some technical specifications. Of course the software pricing is, as always, an interesting part too. I'll conclude with some key dates and a couple of references to already available documentation. Here's an overview of the z13s.

Click on image for larger version

As usual, along with the announcement of the new 'Business Class' model, there's also the GA2 for the 'Enterprise Class' model, the z13. No separate announcement as before, though; it's all in the z13s announcement.

Technical specifications

The z13s has machine type 2965. Just like its predecessors, it's a one-frame, air-cooled machine. Just like the z114 and the zBC12, it has top-exit I/O cabling and top-exit power cabling. Unlike the z13, whose clock speed went down compared to its predecessor, the z13s processor is, at 4.3 GHz, just a bit faster than the zBC12 (4.2 GHz). Of course it also has the new hardware features that were introduced with the z13, like Simultaneous Multi-Threading (SMT) and Single Instruction Multiple Data (SIMD); look at my post on the z13 announcement for more details about these. SMT is available for IFLs and zIIPs, just like on the z13.

Again, we have two models: the N10 and the N20. The N10 has one drawer (13 PUs) and the N20 can have one or two drawers for its 26 PUs. As you can guess, the N10 has 10 configurable engines and the N20 has, indeed, 20 of them. Next to those, the N10 has 2 SAPs and one IFP (Integrated Firmware Processor); the N20 has an IFP, 3 SAPs and 2 spare engines. But what's different from the zBC12 is that the z13s N20 can be a one-drawer or a two-drawer model. The second drawer is added when more than 2TB of memory is needed. The two-drawer model has 52 PUs, but only 20 of them, as with the one-drawer model, are available to the customer; the second drawer is only there for the extra memory, from 2TB up to 4TB. Later on I'll tell you more about the memory.


If you go through the announcement, you will notice there are two other models as well: the L10 and the L20. Well, those are the designations for the Rockhopper, and it's the same story for the drawers as well as for the configurable engines. Except that here, of course, you can only have IFLs.

As I already indicated, both models (N10, N20) can have up to 6 traditional CPs, ranging from A01 to Z06, giving us 156 capacity settings. The smallest model, the A01, once again has a higher capacity than its predecessor: on the z114 it was 25 mips, on the zBC12 it was 50 mips, and now it's become 80 mips. Same story as with the zBC12: although it now has 10 MSUs, it still has zELC pricing, and the software price for the A01 remains the same as for the zBC12 A01 with its 6 MSUs.

Here's an overview of how the z13s compares to its predecessors.


You know by now what PCI stands for, of course. So a uniprocessor goes up from 1064 PCI on a zBC12 to 1430 PCI on a z13s.

Memory - the first 64GB are free

I remember when I wrote about the announcement of the z13, one of the subtitles was Memory Memory Memory. And I can say you're in for the same treat on the z13s. The maximum memory is 8 times larger than what we had on the zBC12, resulting in 4TB; model N10 will be limited to around 1TB. But what's perhaps even more interesting is the minimum capacity you get: the first 64GB are free, and you can't start any lower. And as bizarre as it may sound, tripling your memory can cost you less than e.g. just adding some capacity. Let me give you an example. Say you have 48GB on a zBC12. If you order no additional memory, you'll still start at 64GB on the z13s. If you want to double your capacity to 96GB, you're actually better off with 152GB than with 120GB. Here's an overview for the one-drawer N20 configurations.


If you're a bit confused about the strange numbers, it's because the capacity needed for HSA and for RAIM is not taken into account in the Client GB. Upgrades within one line are non-disruptive; upgrades to a following line are disruptive.
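The "bigger can be cheaper" effect is easy to reason about if you think of the orderable configurations as a price list and simply pick the cheapest one at or above your target. Here's a minimal sketch in Python; the sizes and prices below are made up purely for illustration, not actual IBM figures:

```python
# Hypothetical price list of orderable client-memory sizes (GB -> charge).
# The first 64GB come at no charge; the other numbers are invented purely
# to illustrate that a larger configuration can carry a lower price tag.
OFFERINGS = {64: 0, 120: 1000, 152: 800, 200: 1500}

def cheapest_at_least(target_gb, offerings=OFFERINGS):
    """Return (size, price) of the cheapest offering >= target_gb."""
    candidates = [(price, size) for size, price in offerings.items()
                  if size >= target_gb]
    price, size = min(candidates)  # cheapest first, ties broken by size
    return size, price

# Wanting 96GB: with this made-up list, 152GB (at 800) beats 120GB (at 1000).
print(cheapest_at_least(96))   # -> (152, 800)
```

The same logic explains the blog's 48GB-to-96GB example: you never price the exact capacity you want, only the orderable step at or above it, and those steps are not priced monotonically.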

More memory really makes a difference nowadays: on the one hand with 'traditional' workloads like e.g. DB2, but on the other hand definitely also with Linux workloads. There's a specific redbook covering this matter: 'Benefits of Configuring More Memory in the IBM z/OS Software Stack'.

Under the covers

Here you see an 'under the covers' illustration of a z13s one drawer N20 model.


As you can see, this resembles the zBC12. With new z13s systems you can only have PCIe I/O drawers, but you can still carry forward one traditional I/O drawer, though only for FICON Express8 cards. As with the zBC12, empty slots in this I/O drawer cannot be filled during or after the upgrade to the z13s.
As with the z13, you can now have a rack-mounted 1U HMC, which is installed in a customer-supplied rack. Also similar is the placement of the Support Elements at the top of the frame.

Upgrades are possible from any z114 or zBC12. Upgrades from an N10 to an N20 and from a one-drawer N20 to a two-drawer N20 are disruptive. Only the N20 allows an upgrade to the larger z13 N30.

Connectivity

Here's an overview of the current connectivity features.


As you can see, this is all pretty straightforward. There are some improvements in the FICON and OSA-Express functions, but do have a look at the announcement for all the details.

New features and z13 GA2

Now we'll have a look at a couple of new things, and I can immediately say that these also become available on the z13 (GA2) now. So an overview of z13 GA2 might be in order here.


I'll pick out some of them. I will come back to other features in future posts, like LPAR Group Absolute Capping, SMC-D . . .

zACI or z Appliance Container Infrastructure

zACI is a new framework to support new types of virtual software appliances. The goal of this new framework is to simplify the way IBM installs software 'appliances'. Here's the entire definition:
"z Appliance Container Infrastructure (zACI) is a new partition type which, along with an appliance installer, enables the secure deployment of software and firmware appliances. zACI will shorten the deployment and implementation of firmware solutions or software solutions delivered as virtual software appliances. The zACI framework enforces a common set of standards and behaviors, and a new zACI partition mode for a virtual appliance -- requiring a new zACI LPAR type."
If that doesn't make sense to you: it kind of reminds me of the concept of containers as used with Docker. You can read a good introduction to Docker by one of my colleagues over here.
His definition of Docker is pretty similar to what IBM seems to be trying to accomplish with zACI. Well, at least, that's what I think of it for the moment.
"Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in."
I think an example will immediately make this a lot clearer. One of those virtual software appliances is IBM zAware, a well-defined virtual appliance that can be installed in a separate LPAR. The zAware partition mode is now replaced by the zACI LPAR mode.

One of the planned appliances is the z/VSE Network Appliance, which will also be installed via zACI. It avoids using a TCP/IP stack in z/VSE and offers a way to communicate directly from a z/VSE LPAR to an external network, just using this appliance. Or, as the announcement states it:
"The z/VSE Network Appliance builds on the z/VSE Linux Fast Path (LFP) function and provides TCP/IP network access without requiring a TCP/IP stack in z/VSE. The appliance utilizes the new z Appliance Container Infrastructure (zACI) introduced on z13 and z13s servers. Compared to a TCP/IP stack in z/VSE, this can support higher TCP/IP traffic throughput while reducing the processing resource consumption in z/VSE."
The first examples of zACI are announced today and more will follow later on.

Dynamic Partition Manager (DPM) for Linux and KVM

This is a new administrative mode for creating partitions, intended for customers that are running Linux and KVM only. It's a way of helping out people who are not familiar with the traditional ways of creating, managing and monitoring LPARs; in other words, this mainly targets new LinuxONE customers without any mainframe background. Instead of IMLing a CPC in traditional PR/SM mode, these customers will be using Dynamic Partition Manager (DPM) mode. It therefore only supports Linux partitions; z/OS, z/VM, z/VSE and the likes are not supported. And it supports FCP storage only. Or, as the announcement puts it:
"The new mode, DPM, provides simplified, consumable, and enhanced partition lifecycle and dynamic I/O management capabilities via the Hardware Management Console (HMC) to:
  • Create and provision an environment -- Creation of new partitions, assignment of processors and memory, and configuration of I/O adapters (network, FCP storage, crypto, and accelerators)
  • Manage the environment -- Modification of system resources without disrupting running workloads
  • Monitor and troubleshoot the environment -- Source identification of system failures, conditions, states, or events that may lead to workload degradation"

Software pricing

My story is pretty straightforward here. Similar to what happened with the zBC12, we don't see a new pricing mechanism but, again, a reduction to the AEWLC pricing. This means a reduction for standalone MLC pricing, as indicated in the accompanying announcement 'Technology Transition Offerings for the IBM z13s offer price-performance advantages (ZP16-0012)'. As the title of the announcement states, there are also transition offerings for migrations in a sysplex environment.

Mind you, however, when looking at those reductions: they amount to 13% up to 30 MSUs, 10% up to 45 MSUs and 9% for higher MSUs. Keep in mind that this is the reduction relative to the original AEWLC pricing on the z114. Compared to the zBC12, you will see a reduction of approximately 5%.
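To make the banded percentages concrete, here's a hedged Python sketch of how such reductions could be applied band by band to a per-MSU price. The flat base price and the exact way the percentages combine are my own assumptions for illustration only; the real AEWLC price points and rules come from announcement ZP16-0012 and IBM's pricing tables:

```python
# Illustrative banded reduction: 13% on the first 30 MSUs, 10% on MSUs
# 31-45, 9% above that. The base price per MSU is a made-up flat number;
# real AEWLC prices decline per band and come from IBM's tables.
BANDS = [(30, 0.13), (45, 0.10), (float("inf"), 0.09)]

def reduced_charge(msus, price_per_msu):
    """Sum each band's MSUs at the base price minus that band's reduction."""
    total, lower = 0.0, 0
    for upper, cut in BANDS:
        if msus <= lower:
            break
        in_band = min(msus, upper) - lower   # MSUs falling in this band
        total += in_band * price_per_msu * (1 - cut)
        lower = upper
    return total

# 50 MSUs at a made-up 100 per MSU:
# 30*87 + 15*90 + 5*91 = 2610 + 1350 + 455 = 4415
print(reduced_charge(50, 100))
```

The takeaway is simply that the headline percentages apply per MSU band, not to the whole bill, so the blended discount on a larger machine sits below the largest advertised figure.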

This reduction does not apply to IPLA (OTC) software.

As I already mentioned, the A01 has 4 extra MSUs but remains on zELC pricing. This means the price remains the same as on previous A01 models.

Operating Systems

z/OS
  • z/OS V2.2 with PTFs
  • z/OS V2.1 with PTFs (Exploitation)
  • z/OS V1.13 with PTFs (Limited Exploitation) 
  • z/OS V1.12 with PTFs : be aware that TSS Service Extension will be required
More info can be found in the z/OS support for the z13 and z13s chapter of the announcement.

z/VM
  • z/VM V6.4 (preview, available 4Q2016)
  • z/VM V6.3 with PTFs  - Exploitation support
  • z/VM V6.2 with PTFs – Compatibility plus Crypto Express5S support
  • z/VM V5.4 - Although still supported, NOT compatible with the z13s
z/VSE
  • z/VSE 6.1 with PTFs
  • z/VSE V5.2 with PTFs – Compatibility + Crypto Express5S
  • z/VSE V5.1 with PTFs – Compatibility (EOS 30/6/2016)
Linux
  • SLES 11 SP3+ and SLES 12 SP1+
  • Red Hat RHEL 6.6+ and RHEL 7.1+
  • Canonical Ubuntu 16.04 LTS – GA 2Q2016
KVM
  • KVM for IBM z V1.1.1 – GA 18/3/2016
  • KVM for IBM z V1.1.0

Physical planning

Nothing much to discuss here: the z13s has a similar footprint and weight compared to the zBC12. The differences:
  • it's ASHRAE Class A3 (40°C and 85% relative humidity operating limits)
  • like the z13, it has door locks: both doors (front/rear) come with the lock installed
  • as already said, you can have a separately rack-installed 1U HMC; the rack is supplied by the customer

Documentation

Apparently it's still a bit early for full documentation, but this is what is already available for the moment. I would recommend you have a look at the Publications chapter in the announcement.

Redbooks (drafts):
Web:
  • IBM z13s Web Page
  • Datasheet
  • FAQ
  • z13s Virtual Tour (Link from z13s web page but at the moment of publication of this post not yet active)
  • z13s Designed to Outcompete video on Youtube gives a good summary of the announcement

Some key dates for the z13s and z13 GA2

February 16, 2016
  • Day of announcement
  • First day for GA orders
  • Resource Link support available
March 10, 2016
  • Features and functions for the IBM z13s (Type number: 2965)
  • IBM z13s Models N10, N20
  • z114 and zBC12 upgrades to z13s Models N10 and N20
  • Features and functions for the IBM LinuxONE Emperor and Rockhopper
  • IBM LinuxONE Emperor Models L30, L63, L96, LC9, and LE1 (5 models)
  • IBM LinuxONE Rockhopper Models L10 and L20 (2 models)
June 30, 2016
  • MES features for Models N10, N20
  • z/VSE Network Appliance using the z Appliance Container Infrastructure (zACI)
  • MES features for LinuxONE Emperor Models L30, L63, L96, LC9, and LE1
  • MES features for LinuxONE Rockhopper Models L10 and L20
  • MES upgrades for IBM LinuxONE Emperor Lxx models to Lxx models
  • MES upgrades for IBM LinuxONE Rockhopper Lxx models to Lxx models
September 26, 2016
  • MES upgrades for IBM LinuxONE Emperor Lxx models to IBM z13 Nxx models
  • MES upgrades for IBM LinuxONE Rockhopper Lxx models to IBM z13s Nxx models

As usual, I could only scratch the surface and touch upon the highlights of this new system. There's much more to discover about it. And along with this announcement, there were other announcements focusing on compilers, a preview of z/VM 6.4 and End of Support dates for the zEC12 and zBC12.

So I can only say: Stay Tuned!