How To Make Server Virtualisation More Effective

Server virtualisation is a common activity, but that doesn't mean you can't improve your existing process. Gartner analyst Thomas Bittman shares tips on how to improve your virtualisation approach as day 1 of our World Of Servers visit to the Gartner Infrastructure, Operations & Data Center Summit in Sydney wraps up.

Bittman's advice breaks down to five core steps: choose the right platform; consolidate and standardise before you virtualise; make sure you consider the software ramifications; focus on the operational impact of what you roll out; and create an ongoing strategic plan. Despite the apparent ease of deploying a virtual machine (VM), none of those are trivial tasks.

Bittman argues that this ease can actually cause more problems than it solves. "It's very easy with virtualisation to take a problem and wrap it up in a big bubble and hide it," he said. "Once you virtualise something it makes it a lot easier for it to last, and that brings more admin costs. Before you virtualise something, take a long look at whether it would be better to let it die. It's harder to kill things once they're virtualised."

Whatever runs inside those VMs will also need its own management infrastructure. "Attack what's inside the virtual machine first, because VM management isn't hard. What's inside the VM has to be managed too."

Almost any element can be virtualised, he said. "This has really changed a lot in the past couple of years. IO-intensive apps, databases and mail servers used to be something you would not want to virtualise. But frankly, there are no serious red lights any more; there might be yellow lights. I've seen some very large VMs being deployed."

The product space remains competitive, Bittman suggested, especially between VMware and Microsoft. "With Windows Server 2012 we have a product that, with Hyper-V and System Center 2012, matches up pretty well with VMware. They're very, very close. The problem is that's not necessarily enough to displace a solution that's already deployed. But what is happening is a trend we call second sourcing. It started early last year. Large organisations have always had multiple hypervisors by accident. What started happening a year ago is we saw this happening strategically. Second sourcing is a major factor in Microsoft gaining some share, and we're going to see more pressure in second sourcing."

For its part, Microsoft seems happy with that positioning. "It's a core part of the operating system that you have to have some sort of virtualisation proposition. We build it into the OS as a role. That's fundamentally how we look at it," Microsoft Australia Windows Server product marketing manager Mike Heald told Lifehacker.

Being able to track usage is also vital (a point we've already made once today). "Funding and chargeback is still one of the biggest questions we get," Bittman said. "Chargeback is politically charged and a business problem, but showback and metering is happening." That's increasingly built into the platform; Server 2012 has built-in metering capabilities and can expand those through System Center 2012 and PowerShell add-ons, for instance.
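To make the showback idea concrete, here's a minimal sketch of turning metered per-VM usage into a cost report. The VM names, metric fields and rates are hypothetical examples, not figures from any specific platform; Hyper-V's built-in resource metering exposes similar per-VM averages and totals.

```python
# Hypothetical rates per unit of metered resource (showback, not billing).
RATES = {"cpu_hours": 0.05, "ram_gb_hours": 0.02, "disk_gb": 0.001}

def showback(vm_metrics):
    """Return a per-VM cost figure from metered usage numbers."""
    report = {}
    for vm, usage in vm_metrics.items():
        # Multiply each metered figure by its rate and sum the result.
        cost = sum(usage[key] * RATES[key] for key in RATES)
        report[vm] = round(cost, 2)
    return report

# Example metering output for one month (hypothetical VMs and values).
metrics = {
    "mail-01": {"cpu_hours": 720, "ram_gb_hours": 5760, "disk_gb": 200},
    "db-01":   {"cpu_hours": 1440, "ram_gb_hours": 11520, "disk_gb": 500},
}
print(showback(metrics))
```

Even this toy version shows why showback is easier politically than chargeback: it produces a visible number per VM without requiring anyone to actually pay it.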

Lifehacker's World Of Servers sees me travelling to conferences around Australia and around the globe in search of fresh insights into how server and infrastructure deployment is changing in the cloud era. This week, I'm in Sydney for the Gartner Infrastructure, Operations & Data Center Summit, looking for practical guidance on developing and managing your IT infrastructure and using virtualisation effectively.


Comments

    It's harder to kill things once they're virtualized? I don't really see his point. It doesn't really make any difference, except you've invested a bit of time in virtualizing it.

      I think he means it depends on the business function - is it better to kill the 20-year-old accounting app that ran on obsolete big iron that needs to be babysat through the end-of-quarter run, yet has so many legacy hook-ins from other apps that the business would be in serious trouble if it died, or finally migrate to something new and capable and easier to manage on a virtualised platform?

        Sure, and I completely agree with that. Hell, it's what I do for a job, primarily. I still don't understand how it's relevant to virtualization.

          To elaborate on what zen said, and having experienced this pain first hand: it means that once the legacy application has been virtualized, the business might see no need to actually upgrade/replace the system, even though its software layer is just as obsolete as it may have been on a physical system.

          So a lot of times without the traditional 'timebomb' of hardware warranty expiry/physical server failure a system may not be upgraded/replaced as often as it should be.
