Tuesday, May 27, 2014

IBM Benelux System z Study Tour USA 2014 Edition

Hans Deketele from IBM Belgium is planning another IBM Benelux System z Study Tour USA. The tour usually takes you to Poughkeepsie, and the agenda includes lab visits, reference companies, premium speakers, cutting-edge technology, System z trends & directions ...

Here's his invitation to the Benelux customers:
"Dear customer,

In the year of the Mainframe 50 event the focus is heavily on the IBM System z and of course we are again planning a System z Study Tour to the US, probably in early October.
Maybe you already joined us in one of the previous tours or maybe you always wanted to but never did: this is an early message that we have started planning for the next edition of this event.

Please reply to this email before May 30 if you are interested to join us. Of course this is by no means a commitment that you will actually be able to join.

We plan the tour to be all about System z and the relevant software that you need to generate success in the areas of Cloud, Analytics, Mobile, Social, Security, Linux, Storage..."
So, just send an email to Hans if you're interested or you can always contact me too of course.

Friday, May 23, 2014

Reminder : Hardware End of Marketing for z196 and z114

I wrote about this in a previous post along with the announcement of the zBC12, but I think a quick reminder may be in order. Last year IBM announced the two-phase end of marketing dates for the z196 and the z114.

A little recap
  • June 30, 2014
    Past this date, any upgrades to a z196 or a z114 will no longer be possible, nor will any model conversions or hardware MES features. Roughly speaking, this means that any upgrade involving hardware changes will no longer be possible. The practical consequences mainly involve connectivity cards and memory. Up to June 30, 2015 you will still be able to activate a zIIP or an IFL, or do a microcode upgrade (as long as you don't need an extra book) or even a downgrade. But if that involves adding memory, FICON- or OSA-cards, which is not unlikely, then you must add them before June 30, 2014.
  • June 30, 2015
    "Field install features and conversions that are delivered solely through a modification to the machine's Licensed Internal Code (LIC) will continue to be available until June 30, 2015" meaning that everything which is already in the machine will be able to be activated during the next year like, as I already said, zIIPs, IFLs and Plan ahead Memory.
    Capacity on Demand and CBU offerings will be usable until their expiration date. Something to keep in mind when you're planning to use a z196 or z114 after June 2015.
So, if you are not immediately planning an upgrade and you might have some extra workload(s) in the future, do your planning carefully in order to avoid any unpleasant surprises.

Monday, May 19, 2014

DS8870 announcement - Flash optimization II

This is an announcement from last week : 'IBM DS8870 next-generation flash systems deliver high availability and better performance for critical environments (ZG14-0119)'. However, last week I was at the IBM Technical University for System z in Budapest, and I wanted to have a closer look at the announcement before writing about it. And, by sheer coincidence, this is the first time in my life that I actually saw the real machine before I even read the announcement, as we visited the DS8000 plant while in Hungary.

IBM fulfills an earlier statement of direction about the use of a "new high-density flash storage module for selected IBM disk systems, including the IBM System Storage DS8000". Now you might say : didn't they already announce an all-flash box last year ? Well, yes and no. They announced an all-SSD box. Now you may argue again : isn't SSD the same as flash ? Well, yes and no. It's more or less the same type of disks, or let's say, flash cards. Let's make a little detour to get a better understanding of this.
IBM has its FlashSystem 840 for open systems, which comes from its recent acquisition of Texas Memory Systems. It aims purely at performance and promises extremely high throughput and extremely low latency. How does it achieve that ? Well, by concentrating solely on getting to the data as fast as possible, without any software functionality or storage controller in between doing e.g. compression, deduplication ... And that's, to me at least, the main distinction between what IBM calls Flash and SSD. As a result the announcement says "it will help to increase IOPS by up to 4 times as compared to SSDs and up to 30 times as compared to spinning drives".

This is realized with new High Performance Flash Enclosures (HPFEs). This "high-performance flash enclosure is directly attached to the PCIe fabric, enabling increased bandwidth and transaction processing capability. The 1U enclosure contains a pair of powerful redundant RAID controllers".

Here's a configuration with just HPFEs in the box.

On the left side, you can see that 4 HPFEs fill the empty slot that was intentionally there from the beginning of the DS8870 ("intentionally left blank", you might say). One enclosure contains 30 1.8'' flash drives of 400 GB, giving you a raw capacity of 12 TB. If you only use the upper left slot for HPFEs, they can be combined with other types of disks in the regular disk slots.

The box you see here is an "all-flash, single rack system configured with only flash and up to 96 TB of capacity (73.6 TB of usable capacity) in a 8U of space". This all-flash box also "provides twice as many I/O enclosures and up to twice as many host adapters as the standard DS8870 single frame configuration". See the extra host adapters in the green square.
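A quick back-of-the-envelope check of those capacity figures, using only the drive counts and sizes quoted above (a sketch; the enclosure count is derived, not taken from the announcement):

```python
# Capacity arithmetic for the DS8870 High Performance Flash Enclosures (HPFEs),
# using the figures quoted above: 30 x 1.8" 400 GB flash drives per enclosure.
DRIVES_PER_HPFE = 30
DRIVE_SIZE_GB = 400

# Raw capacity of one 1U enclosure, in TB.
raw_per_hpfe_tb = DRIVES_PER_HPFE * DRIVE_SIZE_GB / 1000  # 12 TB raw

# The all-flash single-rack configuration offers up to 96 TB raw in 8U,
# which works out to eight fully populated 1U enclosures.
hpfes_in_all_flash = 96 / raw_per_hpfe_tb

print(raw_per_hpfe_tb)     # 12.0
print(hpfes_in_all_flash)  # 8.0
```

This also makes the "8U of space" figure in the quote plausible: eight 1U enclosures.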

Let me also give you a brief summary of the other functionalities that were announced :
General availability : June 6, 2014. Some field availability is only in September.

Tuesday, May 6, 2014

IBM Mobile Workload Pricing for z/OS

Today IBM officially announced the pricing mechanism it already revealed during the April 8 z Anniversary event : 'IBM Mobile Workload Pricing for z/OS can reduce the cost of growth for mobile transactions (ZP14-0280)'. Before giving you the details I'd like to share this video about the First National Bank of South Africa because it illustrates so clearly what mobile is all about.

Combining mobile and mainframe addresses some real concerns and requirements of companies and of the people using the applications. Mobile is a rapidly growing market generating lots of transactions on lots of data. And as you could see in the video, the data must always be up to date. We can no longer afford to offer copies of data, so what better than to integrate mobile with the company's primary data ? Data that's residing on the mainframe . . . where it always has been.

Now, this new pricing mechanism makes sure you're not penalized for following just that strategy. "This enhancement to sub-capacity reporting can mitigate the impact of mobile workloads on sub-capacity license charges, specifically in the cases where higher mobile transaction volumes may cause a spike in machine utilization. This can normalize the rate of transaction growth and reduce the reported peak capacity values used for sub-capacity charges".

There are some prerequisites of course : it's limited to AWLC and AEWLC pricing, which means to zEC12 and zBC12, or environments that have at least one zEC12 or zBC12. You also need to install a new reporting tool that will, in this case, replace SCRT : the Mobile Workload Reporting Tool (MWRT). Its use, data collection and timing of reporting are very similar to SCRT. What's the difference ?
"MWRT will calculate the 4-hour rolling average of the reported mobile transaction general purpose processor time consumed by the Mobile Workload Pricing Defining Programs and subtract 60% of those values from the traditional sub-capacity MSUs for all sub-capacity eligible programs running in the same LPAR(s) as the mobile workloads, on an hour-by-hour basis, per LPAR. The program values for the same hour are summed across all of the LPARs (and any z/OS guest systems running under z/VM®) in which the program runs to create an adjusted sub-capacity value for the program, for the given machine, for each hour. MWRT will determine the billable MSU peak for a given program on a machine using the adjusted MSU values".
You can find all additional details in the announcement itself. And . . . you have some time to figure out how things work as MWRT becomes available on June 30, 2014 and the first report can be submitted as of July 2, 2014.

Monday, May 5, 2014

Openstack and System z

OpenStack and System z ? ? ?

I wrote a piece on Openstack in our System z Newsletter a couple of weeks ago. I thought I might share it over here as well.

People who read my blog know that I’ve mentioned OpenStack already a couple of times. By the end of last year it was becoming quite clear that OpenStack is going to play an important role in the overall IBM strategy on Cloud and also in the System z world. So I started wondering : what is it ? What makes it so special ? And quite specifically : what does it mean for System z ? So I thought, let me try and understand this for myself and then make an attempt to explain it to my readers as well. And I can start by telling you that at the beginning I was quite sceptical about it. But first, let me take you a couple of years back.

It started out when I saw this chart for the first time at one or other mainframe presentation about System z Unified Resource Manager or zManager. It must’ve been somewhere around mid-2012, around the announcement of the zEC12. The zManager part was clear to me, but why drag in the Flex Systems, System x, Power and even VMware, for heaven’s sake ? Oh sure, yet another layer on top of the rest ! And why this tight interlink with the distributed environments ? No, this will never happen.

But then, as I mentioned already, in 2013 the name OpenStack popped up time and again. With the announcement of the zBC12 the graphic had undergone some changes : the OpenStack layer was added, and z/VM was no longer under the umbrella of zManager but was put directly under OpenStack. And IBM told us that z/VM, as of z/VM V6.3, would be “the first System z operating environment that can be managed with these open cloud architecture-based interfaces”. Hey, where’s this going ? Did I miss something ?

Let me tell you what we were missing : we’re looking at it from the wrong angle. We’re taking the bottom-up view. I have my own mainframe and I’m managing it. I’m managing its storage as well. And yes, perhaps zManager was a step forward : I could now manage several aspects from one, let’s call it, dashboard. But another layer on top of that ? That’s surely overkill. But you know what, take a step back and look at this from a business perspective instead. That’s just the opposite : you’re now looking from the top to the bottom. And frankly, at that point, you don’t even care whether you see the bottom. It’s like swimming in the sea : you just know there’s water all the way down; you won’t fall into some empty space. And that’s exactly what OpenStack is going to do for you. You must be thinking in terms of business : resolving problems, analyzing data, getting ahead of the competition, and all of this with more or less reliability, performance or security depending on what kind of workload you want to run. And let the technical guys take care of all the rest.

Switching roles again, as a technical guy, you can also make that step upwards. The constructors will take care of pretty much everything that’s underneath the OpenStack layer. You can move up to get close to the business people. You won’t be talking about LPARs or SSDs or Hypervisors. You’ll be talking their language about solutions and you’ll be implementing them on a totally different level. But it’ll work !

How on earth will that be possible ? Well, I think it’s time to tell a little bit more about OpenStack and IBM’s (and lots of other companies’) commitment to it.


But first let me explain where OpenStack actually plays within the entire Cloud spectrum, where anything can be delivered as a service (AAAS - Anything As A Service).

For those who are completely unfamiliar with this, we usually see these four levels presented : at the bottom we have the hardware, next up are the Infrastructure Services (IAAS – Infrastructure As A Service), then come the Platform Services (PAAS – Platform As A Service), and at the top level we have the Business Applications as Components (SAAS – Software As A Service).

Below you see how these levels can be filled in :

I deliberately chose illustrations with no specific vendor products as we’re talking open source here. Each of those layers has significant open source elements driving out a coherent way of approaching cloud computing today : private, public or hybrid. The objective is to help build out this open cloud architecture from the hardware all the way up to how people access it on any device interacting with an application. This leads the way to build out an architecture in such a way that it is open, that allows for innovation but also allows us to move the workloads where appropriate and to have a choice of application infrastructure as we start to build these things out.

Coming to OpenStack : OpenStack plays at the IAAS level. “OpenStack is a global collaboration of developers and cloud computing technologists that seek to produce a ubiquitous Infrastructure as a Service (IaaS) open source cloud computing platform for public and private clouds.” The idea is to have portability of a workload, a VM image … across different types of infrastructures.

OpenStack was founded jointly by Rackspace Hosting and NASA in July 2010. IBM joined OpenStack in February 2011. By now IBM is a Platinum member, meaning that it is part of the body responsible for OpenStack governance. But why such dedication to open cloud software ?

Just as operating systems and virtualization technology come in both proprietary and open source versions, so does cloud platform software. The main reasons open source operating systems and virtualization technology have taken hold in the data center are usually cited as avoiding any vendor lock-in while at the same time optimizing on cost and performance. This trend has continued with cloud software technology solutions, where several proprietary and open source solutions are available on the market. However, without an open-standards approach, organizations will be locked in to a proprietary or point solution that doesn’t interoperate well or that is too costly over the long term. That’s why IBM is investing significantly in sponsoring and supporting open source solutions like OpenStack.

The lifeblood of any open source project is the community that contributes to it. This is important in terms of the basic usefulness of the project (and hence, product!) and the rate at which the project adds new functionality.

The OpenStack community has close to 300 companies working together to develop an open source platform that is rich with cloud services. As an example, the latest release—Havana—had some 400 new features added by over 900 individuals from 145 different companies. These features include the core infrastructure-as-a-service layers (compute, network, and storage) and other key capabilities, which include automation, security, and a portal, just to name a few. (* - For this part of my text, I borrowed heavily from the ESG White Paper ‘IBM Storage with OpenStack Brings Simplicity and Robustness to Cloud’ from Mark Peters, Senior Analyst and Wayne Pauley, Senior Analyst)

This whole idea is reflected in the following illustration of OpenStack with the three core infrastructure-as-a-service layers (compute, network, and storage).

So, where does that all come together ? And how does it fit in with System z and its related storage ?

OpenStack and System z . . . and more ! ! !

So we go back to where we started : OpenStack and System z. The starting point for me was z/VM 6.3 managed through OpenStack. In an article by Daniel Robinson, System z Director Kelly Ryan commented on this : "Whatever cloud computing layer the client is running, whatever tools are pushing down on OpenStack, they can now push down on to z/VM and do the provisioning through it. You can envision a picture where you have your System z pieces, your PowerVM pieces, some VMware pieces, anything that ties up to OpenStack, available in a consistent manner".

The illustration below shows how that complete picture, including System z, will then look. As you can see, it’s not only System z, it’s not only IBM distributed platforms, but also 3rd party hypervisors and hardware. You can now service your business, or let your business service itself, from the grey layer. As a matter of fact, all these companies contributing to the OpenStack project make sure that OpenStack is the one communicating with the lowest layer, taking care of the provisioning of compute, networking and storage resources . . . as I promised in the beginning.

We’ve seen how this already works for z/VM, but let me give you another example with the DS8870. You can find all the details in the Redbooks Solutions Guide ‘Using the IBM DS8870 in an OpenStack Cloud Environment’ that you can find over here. It also contains some more technical details on OpenStack.

IBM wrote an IBM Storage Driver for OpenStack enabling OpenStack clouds to access the DS8870 storage system. This OpenStack driver provides an infrastructure for managing volumes and is the interface to the DS8870.
The dashboard provides a web-based user interface for managing OpenStack services for both users and administrators. The OpenStack cloud connects to the DS8870 storage system over an iSCSI or Fibre Channel connection. Remote cloud users can issue requests for storage volumes from the OpenStack cloud. These requests are transparently handled by the IBM Storage Driver, which communicates with the DS8870 storage system and controls the storage volumes on it. Functionality includes provisioning, attaching, snapshotting, basic backup, encryption, and quality of service (QoS). In the future, it will grow to include replication management and the changing of service level agreements (SLAs).
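As a rough illustration of how such a backend is wired into OpenStack’s block storage service (Cinder), a configuration along these lines is typical. This is a sketch only : the section name, credentials and especially the driver class path are assumptions on my part, so consult the IBM Storage Driver documentation and the Redbooks guide for the real values.

```ini
# Hypothetical cinder.conf fragment enabling a DS8870 backend.
# Section name, driver path and credential values are illustrative only.
[DEFAULT]
enabled_backends = ds8870

[ds8870]
volume_backend_name = ds8870
# Driver class path is an assumption; see the IBM Storage Driver docs.
volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver
san_ip = ds8870.example.com
san_login = openstack
san_password = secret
```

Once a backend like this is registered, volume requests from the dashboard or the Cinder API are routed to the driver, which does the actual work on the DS8870.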
I describe this for the DS8870, but it already applies to the XIV and the Storwize family as well.

Is OpenStack visionary, or is it already reality ? As you can see, for the moment it’s a bit of both, but I’m nevertheless convinced that it’s becoming a reality that’s here to stay.