Virtualisation should mean a more efficient deployment of your server resources, but elementary mistakes can result in wasted resources. Here are ten to keep on your to-don’t list.
At last week’s Gartner Symposium event, analyst Cameron Haight talked through some of the more common virtualisation mistakes companies make. Keep these in mind when you’re planning any future virtualisation activities.
Mistake #1: Assuming virtualisation is the norm
Virtualisation is common, but it’s still far from universal. “As an industry, we’re only at about 50 per cent total virtualisation,” Haight said. “There’s still a larger market for virtualisation than the somewhat hype-filled cloud market.”
Mistake #2: Not checking rival options
The virtualisation market is still dominated by VMware, but competition is much fiercer than it was five years ago. “There’s more heterogeneity in the market today than in the past,” Haight said. “We’re seeing more and more interest in Hyper-V, not just in smaller deployments but also in test and development.” Red Hat remains an option for “Linux/open source purists”, Haight added, and Citrix is a popular choice in environments with a heavy VDI infrastructure.
Some choices are dictated by other technology decisions. “Oracle has interesting pricing when you run it on VMware; another way of saying interesting is frustrating. That drives some Oracle adoptions.”
“‘Nobody gets fired for buying VMware’ has become the new ‘Nobody gets fired for buying IBM’,” Haight said. “They may say ‘you paid too much, gosh that’s a lot of money’ but for most of your needs VMware works very well. The question that comes up is: are the others good enough? Do you need a 1TB virtual machine? Do you need a million IOPS? If you don’t, some of those cheaper or free options might fit the bill. VMware is still growing, but it can only go one way. Be of the mind that the platform you chose five years ago may not be the only platform you have going forward.”
Mistake #3: Standardising on images, not processes
A typical virtualisation strategy is to develop a standard image that can be quickly deployed, but that represents short-term thinking. “Don’t just think about standardising images; often the better thing to do is standardise the process. Through a standardised process you can lay down and deploy very different stacks, and meet the needs of clients without imposing an image they might continue to rebel against.”
Mistake #4: Not checking software compatibility
It’s still dangerous to assume that an application which can run on a regular server will run easily on a virtual machine. “Software ramifications are still one of the biggest issues to deal with,” Haight said. “You still find vendors who won’t support on virtualised infrastructure, or who only support one platform, but they’re becoming less of an issue overall.”
Mistake #5: Inadequate performance testing
Performance can be a particular challenge when virtualisation is used as a test environment for private or public cloud deployments. “Applications that people are putting up on public cloud work well on a high speed LAN, but performance issues can loom when you expand,” Haight said.
Mistake #6: Not asking if virtualisation is appropriate
“The big question is: what’s the value of virtualisation? Maybe you don’t need to virtualise as much as you think you do. Ask yourself: what is the simplest environment to manage? One app per server. With virtualisation, you have to work out which element caused the problem. Similarly, while there are benefits to reducing capex, it introduces cost and complexity.”
Mistake #7: Poor financial planning
The cost justifications for virtualisation are often hazy. “Virtualisation often starts as a strategy to save money, but becomes part of a migration to cloud,” Haight said. “That doesn’t mean it’s a smooth process or an absolute one.”
The process is further complicated by complexities in application pricing, which doesn’t always reflect virtualisation. “A lot of the pricing models we see still have some physical underpinning, and yet we’re in a virtualised environment. That poses a challenge, especially for future cloud deployments.”
Those models are likely to change again in the future. “Nobody much wants to pay per virtual machine. Those who do are usually matching their physical procurement process. We should expect more variability in terms of pricing.”
Mistake #8: No ongoing management plans
Deploying a virtualised environment is only half the battle; you also have to implement a system to manage changes. “Configuration management is still a little bit of a problem in terms of performance,” Haight said. “We make changes to the server estate in virtual infrastructure and unbeknownst to ourselves we can create a performance problem.”
Mistake #9: Not controlling deployments
“Many lines of business think that virtual equals free,” Haight said. “We saw this in a big way a few years ago. It’s tapered off because we’ve implemented processes to fix it, but virtual machine sprawl is still an issue. You need to think about capacity management of clusters, not machines.”
Mistake #10: Making IT irrelevant
Only a handful of clients Haight has consulted with have service level agreements in place for virtual machine provisioning, and the time periods can vary enormously. One site Haight talked to quoted a six-week rollout period, which seems ridiculous. When he asked why the process was so slow, Haight was told: “Well, we don’t want to create undue expectations.” Haight was scathing about that approach: “The only expectation you’ll create is that they’ll go round you and won’t need you anymore.”