Wednesday, May 26, 2010

BMC Utilities using the zIIP too

Lately, I haven't been following very closely which ISVs are exploiting the zIIP engine. But if I remember correctly, BMC was one of the companies that did not start using the zIIP right from the beginning. They have been catching up for some time now with the BMC Mainview products. Last Monday there was a new press release announcing : 'BMC Software Significantly Reduces Mainframe Costs, Helps Position Customers for Economic Growth'.

This time the focus is on the DB2 utilities. Here's some more information from a BMC document :
"BMC Software has enhanced several BMC products for DB2 on z/OS to exploit zIIP engines. Internal tests have shown up to 18% offload of MIPS to the zIIP from the use of these products. Depending on your environment, you could see even more savings. For products that you run often, the savings can be substantial. A set of PTFs released on April 30, 2010 enable zIIP support using enclave SRBs for the following BMC products and solutions:
  • BMC Log Master for DB2
  • BMC PACLOG for DB2
  • BMC Database Administration for DB2
  • BMC Database Performance for DB2
  • BMC Recovery Management for DB2"

Monday, May 17, 2010

TS7740 and TS7720 Virtualization Engine enhancements

Last week, there were also some announcements on the Virtualization Engine with the release of Version 1.7 : enhancements to the TS7740 (ZG10-0183), enhancements to the TS7720 (ZG10-0178) and the withdrawal of the replaced generation of cache controllers and cache drawers on the TS7740 and TS7720 (ZG10-0182).

The TS7740 Virtualization Engine has been around for some time now and was first announced with a capacity of 6TB (one cache controller (CC6) and three cache drawers (CX6), each with a capacity of 1.5TB with 16x146GB drives). By the end of 2008 these were replaced by one cache controller (CC7) and a maximum of 3 cache drawers (CX7), each with a usable capacity of 3.5TB (16x300GB drives), giving a total capacity of 14TB.
Now these features are also withdrawn from marketing and they're replaced by one new cache controller (CC8) and an optional 1 or 3 cache drawers (CX8). As the new 600GB drives are used, each can have a usable capacity of approximately 7TB, giving a maximum capacity of 28TB.

The TS7720 Virtualization Engine (the driveless one) started out in 2008 with one cache controller (CS7) and 3 or 6 additional cache drawers (XS7). They used 1TB drives, each unit having a usable capacity of 10TB, for a maximum capacity of 70TB. They could of course also be put into a grid, adding up to 210TB.
Now we see a new cache controller (CS8) and an upgrade of the cache drawers (XS7). We are now using 2TB drives, doubling the raw capacity and bringing the usable capacity to just under 24TB for the drawers and about 19TB for the controller. You can add the drawers one by one, with a maximum of 6. This gives you a maximum capacity of 163TB.
But there's more. You can now also add an additional Expansion Frame to the TS7720. This frame consists of a minimum of 2 cache controllers and an optional 10 additional cache drawers. This gives the frame a maximum capacity of 278TB. Combined with the base frame, a single TS7720 may now have a maximum of 441TB.
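If you like to check these figures, the arithmetic behind them is simple. Here's a little sketch using the rounded usable-TB numbers quoted above (about 19TB per CS8 controller, just under 24TB per drawer, rounded to 24 here):

```python
# Quick check of the TS7720 V1.7 capacity figures quoted above.
# The usable-TB values are rounded numbers taken from the announcement text.
CONTROLLER_TB = 19   # CS8 cache controller, about 19 TB usable
DRAWER_TB = 24       # XS7 cache drawer with 2 TB drives, just under 24 TB usable

# Base frame: one controller plus up to 6 drawers
base_frame = CONTROLLER_TB + 6 * DRAWER_TB

# Expansion frame: minimum 2 controllers plus an optional 10 additional drawers
expansion_frame = 2 * CONTROLLER_TB + 10 * DRAWER_TB

print(base_frame)                    # 163
print(expansion_frame)               # 278
print(base_frame + expansion_frame)  # 441
```

So the 163TB, 278TB and 441TB maximums all line up with the rounded per-unit capacities.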

Check out the announcements for more details and possible intermixing with existing configurations.
One more thing : the new TS7700 Virtualization Engines now ship with 16GB of physical memory instead of 8GB.

Planned availability : June 4, 2010

Wednesday, May 12, 2010

End of Marketing of Disk Modules on DS6000 and DS8000

Every time we see a new DDM appearing, some time later the oldest generation is withdrawn from marketing. Some time ago we saw the announcement of the 600GB DDMs. So, now we see the end of marketing announced of the 146GB DDMs on the DS6000 (ZG10-0193) as well as on the DS8000 (ZG10-0198).

For the DS6000 it's a pretty straightforward message : you can order them until September 30, 2010 and afterwards you have the choice between 300GB or 450GB disks.

For the DS8000 it's a bit more complicated. The announcement is only made, for the moment, for the DS8100 (Model 931) and the DS8300 (Model 932), but not for the DS8700 (Model 941). And there's more than just the 146GB disks. You might say, wasn't there an announcement that the DS8700 would replace the DS8100 and the DS8300 ? Yes, there was, and I discussed it over here. Still, as I indicated then, this only prohibited customers from buying new DS8100/DS8300 systems. You could and can still make upgrades to existing systems. However, this announcement limits these upgrades.

Here are the three parts of the announcement I want to mention :
  • If I read this correctly, new DS8100s (Model 931) and new DS8300s (Model 932) can still be ordered until September 30, 2010 instead of the earlier announced June 11, 2010.
  • The 146GB disks will also be End of Marketing as of September 30, 2010.
  • On top of this, also the 1TB disks are End of Marketing. These even have an earlier date : last chance to order is on July 2, 2010.
For further details, do check out the announcement itself.

IBM Lifecycle Extension for z/OS 1.9

OK, I guess everybody knows this by now. You can get up to 24 months extra support on z/OS 1.9 which is going out of support by the end of September, 2010. So, you get an extra two years to migrate to z/OS 1.11. The conditions are exactly the same as with the offering for z/OS 1.8.
The announcement : 'IBM Lifecycle Extension for z/OS V1.9 offers fee-based corrective service beyond the z/OS V1.9 service withdrawal date (ZP10-0028)'.

I'd just like to repeat a couple of the remarks I also made last year. First of all : it's a fee-based offering ! Second, even if you do not have plans to install z/OS 1.11 immediately, be sure to order it anyway before the end of September. That way you'll still have it for free. If you have to order it afterwards, you might have to pay for it.

There's a FAQ and it's been updated for this z/OS 1.9 offering. You can find it over here.

Tuesday, May 11, 2010

Linux on IBM System z: A silent clip on the past and the present

I found this one via the z/VSE Twitter account. OK, you might remember I wasn't too enthusiastic about Twitter at first. Well, I'm still no prolific user of it. Here's my account if you're interested. Most of my tweets, however, are automatically generated when I have a new post over here, plus some retweets of tweets (by other people) that I really like or that I think might have some value to other people too. But I think the real value is in picking who you follow and trying to get good, first-hand information as soon as possible.

So I regularly pick up some information on webcasts or new publications or even 'fun' items like this one. So I would like to share this little movie with you. Hope the z/VSE people don't mind my redistributing it.

By the way : did you recognize anyone ? You should have !

Monday, May 10, 2010

Red Alert for z/OS 1.11 users of GDG's with GDS_RECLAIM set to YES

Is back-to-back the right term for this : two Red Alert posts in a row ?
Here's another Red Alert in about a week's time :

Possible data loss for z/OS 1.11 users of GDG's with GDS_RECLAIM option set to YES (default) in the SMS Parmlib member, IGDSMSxx

Only GDG users that have GDSs in DEFERRED ROLL IN STATUS and receive message IGD17356I GDG RECLAIM REQUEST WAS SUCCESSFULLY PROCESSED FOR DATASET in the joblog during job execution are affected. During RECLAIM processing, output intended to be directed to the data set is not written, but the job still receives a condition code of 0 (CC0).

Please see APAR OA32968 for additional information.

Recommended Action:
Apply ++APAR as soon as possible.

Alternatively, set the GDS_RECLAIM option to NO until the ++APAR can be applied. The affected jobs will then fail with a JCL error, preventing any loss of data.
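For illustration, the circumvention boils down to one keyword in the IGDSMSxx parmlib member. A minimal sketch (the ACDS/COMMDS dataset names are placeholders for your own existing IGDSMSxx content, not part of the Red Alert) :

```
SMS  ACDS(SYS1.ACDS)       /* placeholders for your existing parameters */
     COMMDS(SYS1.COMMDS)
     GDS_RECLAIM(NO)       /* circumvention until the ++APAR is applied */
```

Remember to put it back to YES (or remove it, as YES is the default) once the ++APAR is on.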

If you want to have an overview of all past Red Alerts, then take a look over here. You can also subscribe on the same page so you'll be notified by mail of any future Red Alert.

Tuesday, May 4, 2010

Red Alert for z/OS with PTFs for OA30513

Here's a Red Alert for z/OS with PTFs for OA30513 :

z/OS with PTFs for OA30513 can experience data loss due to deletion of logstream datasets that are not eligible for deletion

With OA30513 applied, Logger dataset delete processing does not properly maintain the lowest valid point in the logstream, leading to data loss. The following describes the circumstances in which the problem can occur:
  1. PTFs for OA30513 are applied
  2. A logstream has allocated many offload datasets and is using more than one dataset directory or dataset directory extent
  3. This logstream is defined with AUTODELETE(YES) and a non-zero retention period (RETPD), or it is defined with AUTODELETE(NO) and its data is being trimmed
When these conditions are met, there is a code defect where Logger can delete more datasets than it should. Applications attempting to browse the logstream can get back a gap condition or an error indicating the requested data cannot be found.
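For context, condition 3 refers to attributes set when the logstream was defined. With the IXCMIAPU administrative data utility, an AUTODELETE(YES)/RETPD definition looks roughly like this (all names here are made up for illustration; your own definitions will carry more parameters) :

```
//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR)
  DEFINE LOGSTREAM NAME(MY.APPL.LOGSTREAM)
         STRUCTNAME(LOGGER_STR01)
         AUTODELETE(YES)
         RETPD(30)
/*
```

You can check the AUTODELETE and RETPD values of your existing logstreams with a LIST LOGSTREAM report from the same utility.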

There are no Logger messages or displays that will externalize the loss of data. Please see APAR OA32737 for further details.

Recommended Action:
Option 1: Restore the PTFs for OA30513. Going back to the environment pre-OA30513 will remove the exposure to data loss.
Option 2: Contact the support center for a ++APAR.

If you want to have an overview of all past Red Alerts, then take a look over here. You can also subscribe on the same page so you'll be notified by mail of any future Red Alert.