Monday, February 13, 2017

Choosing the Right Configuration

Large server setups are quite typical for KVM on z. One common pitfall is that system and application defaults that work well for a small number of servers may not work well for a large number of virtual servers, or in combination with huge host resources. The following sections present common snags, along with suggestions on how to resolve them.

Tuesday, January 24, 2017

QEMU v2.8 & libvirt v3.0 released

QEMU v2.8 and libvirt v3.0 (available for download here) are out! The highlight from a KVM on z perspective is the introduction of CPU models.
CPU models are of primary use in live migration scenarios. For example, in a setup with different z Systems machine generations, it is now possible to check up front whether the target host supports all facilities required by a guest for a successful migration. If any required facility is missing, the migration is aborted, and the guest continues to run on the current host.
Furthermore, a guest can be defined with a backlevel CPU model compatible with the target machine (of a previous z Systems generation), so that migrations become possible.

To see what CPU models QEMU (and hence the host) supports, use virsh domcapabilities:

  $ virsh domcapabilities
    [...]

    <cpu>
      <mode name='custom' supported='yes'>
        <model usable='unknown'>z10EC-base</model>
        <model usable='unknown'>z9EC-base</model>
        <model usable='unknown'>z196.2-base</model>
        <model usable='unknown'>z900-base</model>
        <model usable='unknown'>z990</model>
        <model usable='unknown'>z900.2-base</model>
        <model usable='unknown'>host</model>
        <model usable='unknown'>z900.3</model>
        <model usable='unknown'>z114</model>
        <model usable='unknown'>z890-base</model>
        <model usable='unknown'>z13.2-base</model>
        <model usable='unknown'>zEC12.2</model>
        [...]
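
A guest can then be pinned to one of the listed models in its domain XML. A minimal sketch, using a model name from the output above (attribute details should be checked against the libvirt domain XML documentation for your version):

```xml
<cpu mode='custom'>
  <model>z10EC-base</model>
</cpu>
```

With such a definition, libvirt can refuse to start or migrate the guest on a host that does not provide all facilities of that model, which is what enables the up-front compatibility check described above.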



Monday, December 12, 2016

libvirt v2.5.0 released

libvirt v2.5.0 is now available for download here.
A highlight for the z Systems platform support in this release is scsi_host hostdev passthrough, improving throughput by providing a libvirt frontend for scsi_host devices.
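
In the domain XML, such a passthrough device is expressed as a scsi_host hostdev. A sketch based on libvirt's domain format documentation (the wwpn value is a placeholder, not a real device):

```xml
<hostdev mode='subsystem' type='scsi_host'>
  <source protocol='vhost' wwpn='naa.5030000000000000'/>
</hostdev>
```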

Wednesday, November 16, 2016

New White Paper available: KVM Network Performance

A new white paper has been published: KVM Network Performance - Best Practices and Tuning Recommendations. It is available here, and explores different system configurations (running KVM guests), different networking configuration choices, as well as tuning recommendations for the KVM host and KVM guest environments to achieve greater network performance on the IBM z Systems platforms.

Tuesday, November 15, 2016

New Books available

Two new books are available as follows:
  • The new book KVM Virtual Server Management Tools is available here, covering the open source package virt-manager along with its supporting tools virt-install and virt-clone in a generic manner. That is, it applies both to KVM on z in general and to the KVM for IBM z Systems product.
  • The Device Drivers, Features, and Commands for Linux as a KVM Guest book is now also available in a version specific to Ubuntu 16.04 here.
For a publications overview see here.

Friday, October 28, 2016

KVM for IBM z Systems v1.1.2 released

KVM for IBM z Systems v1.1.2 is out today! See here for the respective blog post on Mainframe Insights, and the following pages for the formal announcements: US, Canada, Asia-Pacific, Japan, Latin America and Europe.

It ships with QEMU v2.6 and libvirt v1.3.3. Here is a list of highlights from a KVM on z perspective:
  • Enhanced SCA support
    Allows KVM guests to exploit up to 248 CPUs each
  • SIE capability
    Exposes SIE availability in /proc/cpuinfo as follows:

       $ cat /proc/cpuinfo | grep features
       features        : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh
                         highgprs te vx sie
  • STP host toleration
    Previously, Server Time Protocol (STP) had to be turned off in the host; it can now remain active. In case of time differences, the TOD clock is adjusted in a smooth manner, avoiding jumps.
  • Improved removable media support
    Allows booting KVM guests seamlessly from ISO9660 media.
  • OASIS VIRTIO v1.0 support
    Supports the OASIS standard for virtio devices.
  • CPU hotplugging
    Allows CPUs to be added dynamically to a running guest
  • STHYI support: Added the z/VM-defined Store Hypervisor Information instruction (STHYI) for KVM guests. See also here.
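
For the CPU hotplugging item above, the usual libvirt pattern is to define both a maximum and a current vCPU count in the domain XML, then grow the guest at runtime. A sketch:

```xml
<vcpu placement='static' current='2'>4</vcpu>
```

With such a definition, the remaining CPUs can be added while the guest runs, e.g. via virsh setvcpus <guest> 4 --live.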

Thursday, October 6, 2016

Linux kernel 4.8 released

Linux kernel 4.8 (available here) has been released, here are the highlights in support of KVM on z:

Nested Virtualization
This feature allows starting further KVM hosts within KVM guests, also called second level virtualization. As a prerequisite, it requires a recent, post-2.7 version of QEMU including the s390 CPU models (assumed to be included in the forthcoming QEMU 2.8 release).
Nested virtualization is currently turned off by default, and has to be enabled when loading the kvm module:

   [root ~]# modprobe kvm nested=1

or by appending kvm.nested=1 to the kernel command line.
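
To make the module setting persistent across reboots, a modprobe options file is one option; a sketch (the file name is an assumption, any file under /etc/modprobe.d/ works):

```
# /etc/modprobe.d/kvm.conf (hypothetical file name)
options kvm nested=1
```

Whether nesting is active can then be checked via the module parameter, e.g. cat /sys/module/kvm/parameters/nested.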
When starting QEMU, make sure to choose the right machine type ("s390-ccw-virtio-2.8" or higher) in the domain XML's os element for the guest:

   <os>
      <type arch='s390x' machine='s390-ccw-virtio-2.8'>hvm</type>
      <boot dev='hd'/>
   </os>


Finally, verify in a KVM guest that hosting further KVM guests is possible, as indicated by the "sie" flag in /proc/cpuinfo:

   [root ~]# cat /proc/cpuinfo | grep features
   features        : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh
                     highgprs te vx sie

STHYI Instruction available for KVM Guests
Linux kernel 4.8 saw the inclusion of an implementation of the STHYI (Store Hypervisor Information) instruction. Using Linux kernel 4.8 in the host makes the STHYI instruction available to all KVM guests of that host.
Previously available on z/VM only, this instruction provides detailed information on CPU resources on various levels (machine, LPAR, hypervisor, guest, etc.).

Use qclib to access the information. Also see this blog post for further details regarding qclib and KVM on z.
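
A minimal qclib consumer might look as follows. The function and attribute names are taken from qclib's query_capacity.h and should be verified against the installed version; treat this as a sketch under those assumptions, not a verified build:

```c
#include <stdio.h>
#include <query_capacity.h>   /* qclib header; requires qclib installed */

int main(void)
{
        int rc, layers, cpus;

        /* Open a handle to the capacity information */
        void *hdl = qc_open(&rc);
        if (!hdl || rc != 0)
                return 1;

        /* Number of layers (machine, LPAR, hypervisor, guest, ...) */
        layers = qc_get_num_layers(hdl, &rc);

        /* Total CPUs at the topmost layer; attribute id is an
           assumption based on qclib's documented attribute set */
        if (rc == 0 && layers > 0 &&
            qc_get_attribute_int(hdl, qc_num_cpu_total,
                                 layers - 1, &cpus) > 0)
                printf("CPUs at top layer: %d\n", cpus);

        qc_close(hdl);
        return 0;
}
```

Since the data behind these calls comes from STHYI, this only returns meaningful results on a host (or guest) where the instruction is available.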