Why Don't More Workplaces Virtualise Exchange?

Microsoft Exchange is often a server hog. Server hogs are increasingly being butchered for the juicy virtualised bacon within. So why aren't more Exchange deployments jumping on the virtualisation gravy train?

Picture by Linda N

It's hardly news that virtualisation is a major trend in enterprise IT deployments. Market leader VMware estimates that it will have its software installed on more than 50% of servers shipped worldwide by the end of the year.

The benefits of virtualisation are well understood: lower hardware costs, more efficient system utilisation, and much easier experimentation with dev and test environments are amongst the most obvious. However, when it comes to actual deployments, certain application environments are clearly being favoured.

According to VMware's own customer research, 67% of its customers use virtualisation for rolling out SharePoint deployments. However, that figure drops to 47% for SQL Server databases, and just 42% for Exchange systems. Those numbers are striking; many companies run SharePoint and Exchange in tandem, and running a single-purpose Exchange server seems like a waste of resources.

Australian businesses are keener on virtualisation than the global figures would suggest. "We see it earlier in Australia than anywhere in the world," VMware's ANZ senior manager products and solutions Michael Warrilow told Lifehacker.

Warrilow rejects criticisms that companies resist virtualising Exchange because the technology is too fiddly or Microsoft's pricing model discourages it. "I don't think that's fair anymore. They've done a pretty good job of making the licensing better."

One possible explanation is that tinkering with a working Exchange server produces less obvious benefits. Database servers often host large numbers of discrete workloads, prime candidates for virtualisation, whereas a fully-loaded, single-purpose Exchange server may be a lower priority for alteration, Warrilow noted.

The relatively slow pace of Exchange migration could also be a factor. "Anybody looking at Exchange 2010 is thinking about virtualisation," Warrilow said. Exchange 2010 has been on sale since 2009, while its predecessor Exchange 2007 appeared in 2006, meaning anyone still running the older release will need to look seriously at migrating before the platform falls too far out of date.

Had a good or bad experience with virtualising Exchange? Tell us about it in the comments.

Evolve is a weekly column at Lifehacker looking at trends and technologies IT workers need to know about to stay employed and improve their careers.


    Exchange 2010 (and 2007) has four different server roles: Mailbox (MB), Hub Transport (HT), Client Access (CA) and Unified Messaging (UM). Until Exchange 2010 SP1 (August-ish 2010), virtualising the UM role wasn't supported, so if you wanted to take up Microsoft's Unified Messaging capabilities, you needed at least one physical box.

    Roles such as CA and HT, which don't necessarily have much disk IO, can quite reliably be virtualised. Services like the MB role, however, have high disk and memory IO and are, for that reason, often not considered for virtualisation. Exchange can be a monolithic beast at times. My organisation runs 100% physical Exchange boxes, but I'm looking to roll out virtual CA and HT servers as we scale outwards over time, in order to save on cost. CA and HT servers can also be provisioned and deprovisioned rapidly, which makes them exciting candidates for virtualisation. Tools like SC Orch (Opalis), which can reactively provision and deprovision VMs, have a very exciting future in this space.

      I don't think that high IO is a reason not to virtualise any more; there are a lot of studies reporting very little difference in performance between current virtualisation technologies and Windows on bare metal.

      As I've mentioned in a post after yours, we have been running Exchange 2007 virtualised for a few years now (cached-mode clients) and have had a great experience. Sure, Microsoft may say that it's "not supported", but it will work, and if you call their helpdesk and they won't help you because you're virtualised, ask if they support it on Hyper-V ;-)

    Our Exchange environment has been virtualised on VMware products for the last six years; during that time we've gone from Exchange 2003 to 2007 and from 300 users to over 1000.

    I'll always recommend virtual over physical deployments unless there is a very compelling reason not to.

      Likewise. I'm a massive fan of virtualisation. That doesn't change the fact that for *SOME* high IO scenarios, VMs may not be the best option (depending on host hardware etc).

    lol, 50% of all servers. VMware just shot itself in the head with its new licensing costs.

    Our company runs almost all servers on VMware, but we just saw the new costs and our CIO and tech team are looking to see if we can get off it.

    Sort of a shame, since no one could touch them, not even MS, but they got greedy.

      I did agree with your comment a few days ago, but after some time with our VMware sales rep, we've found that the licensing model isn't quite as horrible as some people are making it out to be.

      I definitely sympathise with companies running single-CPU servers with memory in excess of 48GB (at least if they are at the Enterprise Plus level). VMware has hurt a lot of companies with this licensing model, but the cost of licensing the extra memory is likely to be far less than purchasing additional hardware.

      I think the license model brings VMware's pricing closer to what it's worth as well as encouraging larger companies to start looking at Enterprise Licensing Agreements instead of adhering to their listed pricing models.

    I wonder if there is still some uncertainty towards virtualisation. I mean, it can't be the best, most flawless piece of technology ever, can it? So why trust your entire organisation's email system to it?

    You could argue the same thing for SQL Server, but email is more of a workplace core than, say, your finance app, which not every employee runs all the time.

    Think about it this way: if you could virtualise your car experience (e.g. let your car drive FOR you), would you?

      Um, your analogy isn't quite how virtualisation tech works.

      A better example would be virtualising your car into 'n' independent cars, all driven by the same engine, but each with its own transmission.

      Sort of like how an old four-wheel drive had two transmissions: one for the rear wheels, one for the front.

    One of the main issues with virtualising some servers, e.g. SQL, is the fact that you need to buy high-end disks to handle the high IO of SQL databases, especially if you've got multiple VMs running on the same LUNs.
    Standard 7.2K SATA drives just don't cut it; 15K SAS drives are needed, and they aren't the cheapest.
    Planning is what catches people out with virtualisation: it's a massive outlay for hardware that will let you grow in the future and not slow down as you add more VMs.
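    To make that planning problem concrete, here's a rough back-of-envelope sketch of spindle-count sizing. The per-disk IOPS figures (roughly 80 for a 7.2K SATA spindle, 180 for 15K SAS) and the RAID write penalty are assumed ballpark values, not vendor specs; real sizing should come from your storage vendor's planning tools.

```python
import math

def spindles_needed(workload_iops, read_fraction, raid_write_penalty, per_disk_iops):
    """Back-of-envelope disk count for a RAID group serving a VM workload.

    Reads hit one spindle; each front-end write costs raid_write_penalty
    back-end operations (e.g. 2 for RAID-10, 4 for RAID-5).
    """
    reads = workload_iops * read_fraction
    writes = workload_iops * (1 - read_fraction)
    backend_iops = reads + writes * raid_write_penalty
    return math.ceil(backend_iops / per_disk_iops)

# Example: a 2000 IOPS workload, 60% reads, on RAID-10 (write penalty 2).
sata_disks = spindles_needed(2000, 0.6, 2, 80)   # assumed ~80 IOPS per 7.2K SATA disk
sas_disks = spindles_needed(2000, 0.6, 2, 180)   # assumed ~180 IOPS per 15K SAS disk
```

    Even with rough numbers like these, the gap between the SATA and SAS spindle counts shows why the hardware outlay surprises people who don't plan ahead.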

      100% agree with you JohnnyBoy. Virtualisation has created a whole new challenge in the industry when it comes to storage, and a lot of the storage vendors have risen to the challenge.

      We use NetApp storage with the majority of our disks being 7.2K RPM SATA, plus a small amount of 450GB 15K RPM SAS for extreme cases. However, the 7.2K drives' performance is supplemented by a massive read cache: http://www.netapp.com/us/products/storage-systems/flash-cache/

      NetApp is already fast on writes, and Flash Cache gives a welcome boost to its already solid read times.
