Monday 31 December 2012

Happy New Year from everyone at Metron

Here’s hoping that everyone has had an enjoyable holiday season and is looking forward to the start of a new year.  Happy New Year to everyone from all the team at Metron.

I’m not a great one for New Year resolutions, so I won’t offer any of those.  There are things that I know I should be getting around to – my conscience regularly tells me I need to update my LinkedIn profile, but this is just part of an on-going ‘to do’ list, whatever the time of year.

I’ll leave others to make the big predictions about what the big themes will be in IT.  We’ve probably all seen plenty of information about this anyway.  Cloud is getting bigger and bigger, to the point where it isn’t something new or special; it’s just how we do things.  Software as a Service is more and more part of our day-to-day life – many things that used to be internal to Metron, such as email and CRM, are now externally hosted.  The same is happening elsewhere, with what are considered critical applications now being put in the Cloud by some organizations.
Big data is going to be, well, BIG, so storage will continue to spread faster than anything.  Note to self: must learn what comes after ‘petabyte’, as this already seems to be a common capacity term.  Are exabytes, zettabytes, yottabytes and so on real terms, or is someone having fun at my expense?  Time will tell, all too quickly.  For those of us involved in capacity management, we need to find out sooner rather than later.

I guess our area at Metron is capacity management, and so I should be most concerned with that.  Here are three things I would like to see in the New Year.  They are randomly picked personal items – there are many others I could have selected.  I’d be interested to hear your alternatives – at least for the capacity management items:

- As we still battle to come out of the recession, I’d like to see every cent valued by organizations.  Too often I hear that ‘we don’t need capacity management because servers/virtual servers/Cloud resources are cheap’ or ‘we don’t need capacity management because we’ve bought enough resource to see us through the next 3 years’ – the latter being a genuine quote from a Deputy CIO to me.  Nothing is cheap if you buy more than you need.  The real cost is not the unit price of the item; it’s the value that money could have delivered if used elsewhere in your business.


- I’d like to see the Computer Measurement Group (www.cmg.org) resurgent.  Capacity management should be seen as vital to cost-effective IT service delivery; too often it is not.  CMG offers a superb forum for the exchange of ideas, and free education to help capacity management achieve its potential in your organization.  Restrictions on travel and an increasingly internal focus by management in response to the recession have seen attendance at conferences such as CMG’s diminish because of their ‘cost’ rather than their ‘value’.  Developments in the IT world such as virtualization and Cloud will make it ever easier to spend money ineffectively on IT infrastructure – capacity management offers a route to avoid that.  Participating in CMG can help any organization realize that benefit.


- I’d like to see the New York Jets and Sheffield Wednesday Football Club conquer all before them... Oh well, you can’t have everything.

By all means send me your capacity management wishes for the New Year and I will pass them on through our blog.  In the meantime, have a happy, healthy and successful 2013.

Andrew Smith
Chief Sales & Marketing Officer

Thursday 27 December 2012

vSphere 5 versus Hyper-V SP1 Performance Showdown

It's time to stop guessing and start testing.

vSphere 5 is the most popular x86-virtualization platform, and exciting enhancements keep coming with each new release.

Hyper-V from Microsoft is also a popular solution for server virtualization on the x86 platform, and it has become even more so with the addition of advanced features in SP1. The fact that Hyper-V is included with Windows also makes it attractive from a cost perspective.

Understanding the performance aspects of these virtual environments is important to ensuring that you get maximum benefit from your virtualization investments.

The usual way to test performance between platforms is through benchmarking. The ideal benchmark would incorporate real production workloads, which in most cases is not feasible. An alternative is to utilize generic benchmarks that approximate production workloads.
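To make the idea concrete, here is a minimal Python sketch of the kind of generic CPU micro-benchmark this approach relies on. It is purely illustrative – it is not the benchmark suite used in the study – and the iteration count is an arbitrary choice; the point is simply that an identical, repeatable synthetic load is run in a guest on each platform and the rates compared:

import time

def cpu_benchmark(iterations: int = 5_000_000) -> float:
    # Tight integer-arithmetic loop; returns iterations per second.
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return iterations / (time.perf_counter() - start)

if __name__ == "__main__":
    # Run the identical script in a guest on each hypervisor; best-of-three
    # reduces the effect of transient noise from other guests.
    rates = [cpu_benchmark() for _ in range(3)]
    print(f"best of 3: {max(rates):,.0f} iterations/sec")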

I recently carried out a general performance comparison between vSphere and Hyper-V, across all major components, using identical generic benchmarking tools.  I also examined virtualization-specific performance metrics that are available in both environments.

On January 3, 2013 (8am PT, 9am MT, 10am CT, 11am ET, 4pm UK, 5pm CET) I'll run through the results with you in a webinar.

Since a basic understanding of hypervisor architecture is important when evaluating performance data, I'll also look at the two architectures, compare them, relate them to the available metrics, and discuss important differences in architecture and terminology.

I'll share my benchmark results from both environments along with conclusions that I've formed from my findings.

Join me as I compare two of the most popular x86 virtualization platforms in use. We'll cover:

  • Architecture review
  • Metrics available
  • Challenges of benchmarking virtual environments
  • Testing environment and benchmarks
  • Methods and objectives
  • Results and conclusions

Register for this Webinar

I look forward to speaking to you then.

Dale Feiste
Consultant

Monday 24 December 2012

Merry Christmas from all at Metron


Happy holidays to everyone who has had contact with Metron throughout the year: clients, prospects, partners, analysts and suppliers.  It has been another fun, enjoyable and, as ever, challenging year.  More than ever before, Metron’s business encompasses a wider geographical sphere, covering a more varied range of cultures and religions.  Whether you celebrate Christmas or not, may I wish everyone a happy and healthy holiday season.

I guess Metron’s wider geographical coverage is symptomatic of how large-scale IT infrastructure environments are developing.  More and more of the businesses we work with are treating capacity management as a centralized global function spanning data centers in many countries.  It is common now to have capacity management carried out on one continent and managed from another, for applications used on a third.  The need for rapid deployment, standard processes and a common reporting format becomes ever more important in such circumstances.

Our response during the year has been to formalize the capacity management beliefs and practices we have built up over the last 25 years into our 360 Capacity Management strategy.  In such a disparate global environment, those delivering capacity management to their organization need to bring all views of capacity together, to reduce the resources required for capacity reporting and to help their users understand capacity issues better by presenting varied information in an easy to understand, common format.

360 Capacity Management offers just this.  Over those 25 years Metron has developed comprehensive support for capacity management of all key server environments as a platform for enterprise capacity management.  For many years the CustomDB element of athene® has offered the capability to extend athene®’s core facilities to other areas.  Now, with Integrator, the successor to CustomDB, this has moved further forward.

Metron and clients can quickly and easily create Capture Packs: connectors to any capacity data you have available.  Metron provides and supports an ever-growing library of Capture Packs covering disk, network, application, end-to-end and alternative server data sources.  Combined with client-provided data unique to their own business, all this data within the athene® CMIS is then available for capacity reporting and prediction using athene®’s application functionality: comprehensive enterprise-wide capacity reporting for physical, virtual and Cloud environments from component, service and business perspectives.

It’s been fun formalizing this 360 Capacity Management strategy over the last year and working closely with individual organizations to develop the existing Capture Pack library.  I look forward to seeing this collaborative approach continue and extend in the year ahead.  Once again many thanks to those who have been involved and best wishes for the holiday season to you all.

 
Andrew Smith
Chief Sales & Marketing Officer

Wednesday 19 December 2012

Performance counters for Network and Storage - Top performance and capacity tips for Hyper-V (6 of 7)


As I mentioned on Monday, today I'll be taking a look at Hyper-V performance counters for Network and Storage.
 
From the network perspective, the following objects provide all the required data about the physical interfaces and their performance when virtualized:

• Network Interface
• Hyper-V Virtual Switch
• Hyper-V Legacy Network Adapter
• Hyper-V Virtual Network Adapter

 

As stated previously, the network side of things becomes more and more important as we start to run larger numbers of, and larger-sized, virtual machines.

The Network Interface counter is still valid and is a very useful way of understanding how the network ports are being used.
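As a hedged illustration of how these counters can be sampled from the root partition, the Python sketch below shells out to Windows’ typeperf utility. The counter paths and the output file name are examples only, and instance names vary by host, so list what your host actually exposes with "typeperf -q" first:

import subprocess

# Counter paths are examples; run "typeperf -q" to see what your host exposes.
COUNTERS = [
    r"\Network Interface(*)\Bytes Total/sec",
    r"\Hyper-V Virtual Switch(*)\Bytes/sec",
]

# Ten samples at five-second intervals, written to CSV ("-y" overwrites).
subprocess.run(
    ["typeperf", *COUNTERS, "-si", "5", "-sc", "10", "-o", "net_sample.csv", "-y"],
    check=True,
)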

Finally, the storage counters provide a useful level of information relating to the disks, from both the physical and virtual perspectives:

 
• Physical Disk
• Hyper-V Virtual IDE Controller
• Hyper-V Virtual Storage Device


As with any OS-level performance monitoring, the usual rules apply when dealing with SAN-based storage.  The technologies employed by SANs to manage the disks and improve performance, e.g. caching and buffering, remain transparent to the OS, so be wary of how the data is interpreted.  Response times are usually a good indicator; disk utilization less so.
 
I'll conclude on Friday with my top tips for managing Hyper-V capacity.
 
Rob Ford
Principal Consultant

Tuesday 18 December 2012

Key Metrics for Effective Storage Performance and Capacity Reporting

Capacity management for storage can be difficult, given the many complex and varied technologies in use.

Given all of the options available for data storage strategy, a clear understanding of the architecture is important in identifying performance and capacity concerns. A technician looking at metrics on a server is often seeing only the tip of a storage iceberg. However, the host view is important when looking at measured I/O response.

If response times are severely impacted on a busy server, then end users of the hosted applications will also be impacted. High response times at the OS level typically originate somewhere in the backend storage infrastructure.

Tracking and reporting on key metrics at the host and backend storage controllers can prevent these incidents from occurring, and having the right tools in place can be the difference between prevention and firefighting.
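As a simple sketch of the kind of host-level check that turns firefighting into prevention, the Python below flags high average disk response times. The thresholds are illustrative rules of thumb only, not vendor or Metron recommendations, and the sample values are made up; in practice they would come from perfmon/WMI or your capacity database:

# Illustrative thresholds only - tune to your own storage tier and workload.
WARN_SEC = 0.015   # 15 ms average latency: worth investigating
CRIT_SEC = 0.030   # 30 ms average latency: users are probably feeling it

# Example values for the PhysicalDisk counters (made up for illustration).
samples = {
    "Avg. Disk sec/Read": 0.008,
    "Avg. Disk sec/Write": 0.021,
}

for metric, value in samples.items():
    if value >= CRIT_SEC:
        level = "CRITICAL"
    elif value >= WARN_SEC:
        level = "WARNING"
    else:
        level = "ok"
    print(f"{metric}: {value * 1000:.1f} ms [{level}]")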

I'll be taking a closer look at key metrics for storage performance in my webinar on Wednesday, covering:

  • Storage architecture
  • Virtualization
  • Key metrics for the host and backend storage environment
  • Reporting on what is most important

Register and come along: http://www.metron-athene.com/services/training/webinars/webinar-summaries.html

Dale Feiste
Consultant

Monday 17 December 2012

Capturing performance data - Hyper-V performance counters for CPU and Memory (5 of 7)

Moving on to capturing performance data: the main sources of information are the Hyper-V performance counters as seen from the root partition. There are 21 functioning counter objects providing around 600 metrics in total, and vendor products should interrogate these remotely via WMI.

Sadly, Perfmon metrics captured within each guest partition may not be reliable for CPU and similar resources, due to processor skew, so be careful how this data is used. However, certain other metrics can be used, and these can be seen via SCVMM.

“In guest” monitoring is very light (as it is in VMware), so process-level metrics aren’t captured and an additional in-guest agent will be required.

Now for the performance counters.

For the CPU, the following objects can provide a useful source of information:

• Hyper-V Hypervisor Logical Processor
• Hyper-V Hypervisor Root Virtual Processor
• Hyper-V Hypervisor Virtual Processor
• Processor

     
One important fact to be aware of: if you are monitoring the root partition, the Processor counter isn’t a Hyper-V counter and will give you the wrong numbers. To accurately monitor how the physical hardware, CPUs and guests are being utilized you will need to use the Hyper-V Hypervisor counters. The Processor counter isn’t virtualization-aware.
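To make the distinction concrete, here is a minimal sketch using the third-party Python "wmi" package on the root partition. The HvStats class and property names are my understanding of what the Hyper-V perf provider exposes, so verify them on your own host before relying on this:

import wmi  # third-party package: pip install wmi (Windows only)

c = wmi.WMI()

# Hypervisor's view: per logical processor, including all guest work.
# Class/property names assumed from the HvStats perf provider - verify locally.
for lp in c.Win32_PerfFormattedData_HvStats_HyperVHypervisorLogicalProcessor():
    print("LP  ", lp.Name, lp.PercentTotalRunTime)

# The root partition's own, non-virtualization-aware Processor counter,
# shown only to illustrate how the two views differ.
for p in c.Win32_PerfFormattedData_PerfOS_Processor():
    print("Proc", p.Name, p.PercentProcessorTime)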

For memory, the following objects are recommended:

• Hyper-V Hypervisor Partition
• Hyper-V Hypervisor Root Partition
• Hyper-V Dynamic Memory Balancer
• Hyper-V Dynamic Memory VM
• Memory

     
These provide a good overview of how memory is being consumed at the partition level and, via the Dynamic Memory counters, at the hypervisor level.  The “Memory” performance object also provides some guidance on host-level consumption.
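As a hedged sketch of watching dynamic memory from the root partition, the Python below samples the Dynamic Memory counters via typeperf. The counter names ("Current Pressure", "Available Memory") are my assumptions based on the objects above; confirm them on your host with "typeperf -q":

import subprocess

# Pressure near or above 100 suggests a guest wants more memory than the
# balancer is currently granting it. Counter names assumed - verify locally.
subprocess.run(
    ["typeperf",
     r"\Hyper-V Dynamic Memory VM(*)\Current Pressure",
     r"\Hyper-V Dynamic Memory Balancer(*)\Available Memory",
     "-si", "10", "-sc", "6"],
    check=True,
)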

On Wednesday I'll look at the performance counters for Network and Storage.

Rob Ford
Principal Consultant

Wednesday 12 December 2012

Metrics and Monitoring - Top performance and capacity tips for Hyper-V (4 of 7)


The main focus today is what metrics you should be looking at and the options for capturing the data.

Let’s look at the options for capturing the data first.  Hyper-V Manager is an ‘out of the box’ tool which is more of a management GUI, similar to vCenter in some respects but not as polished or developed. It has no real performance or capacity features, providing only a little information and not to the same level as vCenter.

System Center Operations Manager (SCOM) is Microsoft’s management and monitoring tool and provides a central source of monitoring for Hyper-V. It is driven by Hyper-V management packs and can provide a useful source of information. The main issue with SCOM is that the metrics are minimal and don’t always capture the right level of data for capacity management. Although there is some basic trending, there are no modeling capabilities and no real control over the built-in aggregation: data tends to be captured by the operational end of the tool and then automatically aggregated as it is put into the data warehouse.

It does provide some monitoring-level support, so you can see the host, guest, potentially cluster-level and some of the application metrics.

System Center Virtual Machine Manager (SCVMM) allows for multiple-host and multiple-hypervisor management. Its template and library management allows automatic deployment from templates.  It has integrated P2V conversions, a modicum of virtual machine performance monitoring (from SCOM), and lets you drive live migration events.  Interestingly, it also allows you to manage your VMware estate, via vCenter.

Whilst it does provide some metrics, they tend to be fairly high level; it is more of a monitoring and alerting tool than a capacity management tool.

Moving on to capturing performance data: the main sources of information are the Hyper-V performance counters as seen from the root partition. There are 21 functioning counter objects providing around 600 metrics in total, and vendor products should interrogate these remotely via WMI.
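A quick way to see those counter objects for yourself is to enumerate the Hyper-V counter paths on the root partition. The Python sketch below does this locally via typeperf (remote collection via WMI works against the same counter data); the "first 20" cut-off is just to keep the output readable:

import subprocess

# List every counter path the root partition exposes, then keep the Hyper-V ones.
result = subprocess.run(["typeperf", "-q"], capture_output=True, text=True, check=True)
hyperv = [line for line in result.stdout.splitlines() if line.startswith(r"\Hyper-V")]

print(f"{len(hyperv)} Hyper-V counter paths found")
for path in hyperv[:20]:   # first 20 as a taster
    print(path)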

On Friday I'll be looking at these performance counters.

Rob Ford
Principal Consultant

Monday 10 December 2012

Comparison between Hyper-V and VMware - Top performance and capacity tips for Hyper-V (3 of 7)


Today we’ll take a look at how Hyper-V 2012 compares with VMware. The table below shows the differences.


Raw Device Mapping (RDM) effectively allows you to connect a physical disk directly to a virtual machine.  Within vSphere 5.1 this can be done up to 64TB, whilst Microsoft suggests that Hyper-V 2012 can handle 256TB plus, limited only by the size of the physical disk – so there is scope to grow.

vSphere 5.1 and Hyper-V 2012 offer similar support for guests, although interestingly VMware only matched this with the recent upgrade to 5.1 – the first time VMware has had to upgrade its software to ‘catch up’ with Hyper-V.

Currently Hyper-V supports larger cluster sizes and is potentially cheaper, although this appears to depend on which website you’re looking at.

If your organization is buying datacenter licenses then Microsoft can certainly work out more cost effective: a datacenter license allows you to enable the Hyper-V role, and all of the guests running on that server are then automatically licensed.  The same cannot be said for VMware, which involves buying the VMware software and then your Microsoft software.

Hyper-V comes with enterprise functionality available as standard, whereas with VMware you tend to need the ‘higher level’ versions to unlock things like vMotion.  It’s a similar story with the management layer: Hyper-V can be managed with the out-of-the-box tool, whereas VMware requires the purchase of vCenter.

With regard to performance, there are very few independent sources that have made an objective comparison, and the likelihood is that each will perform better in different circumstances.

That said, and whilst Hyper-V 2012 is still quite new, early test results suggest that there is not much between the two platforms with regard to CPU and memory.

Early reports suggest:

• Improved I/O throughput with Hyper-V
• Comparable CPU loading
• Improved memory utilization with Hyper-V

It’s still too early to tell whether there are significant benefits to be had by choosing one over the other.

On Wednesday I'll be sharing my thoughts on what metrics you should be monitoring and the options for capturing data. In the meantime, join our community and get access to our white papers, podcasts and free downloads.

Rob Ford
Principal Consultant

Friday 7 December 2012

Updates to functionality - Top performance and capacity tips for Hyper-V (2 of 7)


There are numerous updates to functionality.

The last reasonable contender was Hyper-V 2008.  The following tables summarize the differences and what they mean.

Hyper-V 2012 has increased guest and cluster support, bringing these up to serious production levels.

As you can see from the table, Windows 2008 R2 provided live migration but relied on the servers being built as Windows cluster boxes, so it didn’t really provide the functionality and flexibility that VMware gave you, and being single instance, you could only do one live migration event at a time.  These changes mean that we’re now looking at far closer parity with VMware.

We now have the option to migrate child partitions between Windows servers that aren’t clustered and, combined with live storage migration, to migrate between servers that aren’t running on shared storage. This provides a good deal of flexibility.

One of the key differences in 2012 is the addition of SR-IOV support, which allows a guest full access to the physical network adapters. Given the size of some of the virtual machines that could, in theory, be created, the next bottlenecks will undoubtedly be in shared networking. SR-IOV support is a key facilitator for virtual machines of this size: it allows complete access to the network adapter, so the required network bandwidth will be available to cope with the volume of work the guests will have to do.

Dynamic memory now has improved management. Memory reclamation is included, which allows you to balloon guests when required and also to let a guest start up even if its minimum memory is not available. In a lot of respects this is a step forward on what VMware has available, as Hyper-V will ‘dig in’ to what the box is doing and balloon in an intelligent way, freeing up memory and making other resources available.
     

Guest NUMA support extends the hardware-based functionality into the realms of the guest; again, key given the potential guest resource allocation.

Smart paging is intelligent memory management that allows you to bridge the gap between minimum and startup memory if physical resource is low. This is more dynamic in terms of how memory is managed and is an improvement over VMware.

Runtime memory configuration allows you to change the dynamic memory allocation while the virtual machine is running, which is a big operational step forward when managing heavily utilized environments.

Resource metering allows you to track how key performance metrics change over time. It’s not quite as good as it sounds: it is predominantly oriented to the network side of things, is really tied to chargeback more than anything else, and persists through live migration.
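For the curious, resource metering is driven by the Hyper-V PowerShell cmdlets. A minimal sketch, called from Python here for consistency with the other examples; the VM name 'Web01' is made up:

import subprocess

def ps(command: str) -> None:
    # Run a PowerShell command from Python; requires the Hyper-V module.
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

ps("Enable-VMResourceMetering -VMName 'Web01'")  # start collecting usage data
ps("Measure-VM -VMName 'Web01' | Format-List")   # report CPU, memory, disk, network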
     
On Monday I'll be making some comparisons between Hyper-V and VMware.

Rob Ford
Principal Consultant

Wednesday 5 December 2012

Top performance and capacity tips for Hyper-V (1 of 7)

My blog series looks at the changes in Windows/Hyper-V 2012 and what they mean from the perspective of the business and of managing capacity. It concentrates on the following areas:

     
• Technology recap
• Updates to the functionality
• Comparison between VMware and Hyper-V
• Metrics and Monitoring
• Top tips for managing Hyper-V capacity


Technology recap

What is Hyper-V?

Even though Hyper-V has been out there for some time, it’s still not widely adopted.  It’s similar in design to Xen: still classed as a type 1 hypervisor, but with a managing partition, unlike VMware, which runs multiple guests directly on the hypervisor.

It was first released in June 2008 and the latest version is available by enabling a role within Windows 2012 or via the core version.  The key elements are:

     
• The hypervisor (around 100k in size)
• Parent or root partition (the first and controlling guest)
• Child partitions
• Two versions: Full and Server Core

     
The main difference between the two versions is the available functionality.  The core version purely allows for Hyper-V, whereas full Windows 2012 allows all of the usual Windows roles to be enabled, e.g. IIS, AD, FTP etc.

The architecture diagram from MSDN gives you an idea of how the architecture hangs together: on the left is the root partition, which effectively manages the child partitions and allows you to create them.

The key differentiator here is between enlightened and unenlightened child partitions: when a partition is enlightened it enables you to use ‘VMware type’ tools and provides better all-round performance.

You will need to bear in mind that the child partitions communicate with the hypervisor via the root partition for I/O, so if the root partition is very busy, performance and capacity may be impacted.
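Given that dependence on the root partition, it is worth keeping an eye on how busy it is. A minimal sketch, again via typeperf; the counter path is my assumption of the Root Virtual Processor object’s naming, so check it with "typeperf -q" first:

import subprocess

# One minute of five-second samples of the root partition's virtual processors.
# Counter path assumed - verify with "typeperf -q" on your own host.
subprocess.run(
    ["typeperf",
     r"\Hyper-V Hypervisor Root Virtual Processor(*)\% Total Run Time",
     "-si", "5", "-sc", "12"],
    check=True,
)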
     
On Friday I'll be looking at the updates to functionality.
In the meantime why not join our community and get free access to our papers, podcasts and downloads: http://www.metron-athene.com/_downloads/index.html

Rob Ford
Principal Consultant