Guest blogger David Klemke has put his monocle aside to focus on a vexing question at TechEd Australia 2012: what does private cloud mean these days?
Microsoft has made it clear that it’s not content with playing second fiddle in any market it’s a part of. It has now been almost 20 years since it first entered the client/server market, and that time in the field has given the company a huge competitive advantage, with over 75 per cent of the Australian x86 market running on the Windows platform. It might not be the king of the virtualisation market yet (VMware still retains that crown with 65 per cent), but there’s no denying that the Microsoft freight train is picking up speed, as it now holds an impressive 27 per cent share of the market. With the improvements and new features coming in Windows Server 2012 I can’t see that slowing down anytime soon, and that should be ringing alarm bells among competitors.
The first step Microsoft has taken in the right direction is bringing Hyper-V’s feature set up to the point where I finally feel comfortable saying it has reached feature parity with VMware. Sure, at a base hypervisor level it has been there for a while, but there was no denying that it rarely had an answer for many of the other value-add features. Hyper-V 3.0 and Server 2012 now bring in features such as:
- Huge virtual machines with 64 cores, 1TB RAM and access to multiple storage volumes that can each be up to 64TB in size.
- Direct mapping from guests to storage and fibre channel adapters.
- Live snapshots with online merging.
- A new VHDX format that boasts a 25 per cent performance increase over its predecessor.
- Storage offload for things like eagerzeroed disks and snapshot merging.
The list really does go on; rather than replicate it here I’d recommend either catching up on the VIR312 and VIR314 TechEd sessions or having a read over the updated list direct from Microsoft. Whilst feature parity is all well and good, what’s really impressive is how much of this functionality is included free with your purchase of Windows Server 2012.
No matter what version you buy, whether it be Standard, Datacenter or even the free Hyper-V hypervisor edition, you get all of these features at no extra cost. Microsoft’s competitors could easily ignore something like this when it was just the basic hypervisor, but when Microsoft starts giving away things like Hyper-V Replica, an out-of-the-box disaster recovery solution, you either have to do the same or make a really convincing argument as to why your customers need to keep paying for that feature.
There’s also another stark difference that bears mentioning: the target audience for Microsoft’s virtualisation platform. For a good chunk of its life virtualisation was reserved for the big halls of data centre IT, where there was much to be saved by consolidating workloads onto fewer servers. Because of this, many of the features and optimisations present in all virtualisation platforms were directed at that level, which often made the technology inefficient for small-scale deployments. Hyper-V 3.0, on the other hand, brings in a lot of features that don’t make a whole lot of sense at the big end of town (it does not support SAN-based replication, for instance) but are absolutely fantastic for the small to medium enterprise. This, coupled with the swath of features being offered for free, really has the potential to shake up the server virtualisation market.
That’s not to say that private clouds aren’t a platform unto themselves; far from it. It’s just that the traditional public clouds are total packages, and many of them provide services that simply aren’t available in their private cloud counterparts. Applications built on the private cloud stack are, in fact, just traditional Microsoft applications; whilst you can make them cloud-like in terms of functionality, the onus is completely on you to do so.
Azure services, on the other hand, abstract all of that away from you, leaving you with dead-simple ways to scale your application. Realistically, the only thing truly shared between the public and private clouds is the nomenclature, as the feature sets really aren’t comparable.
I guess my point here is that you need to recognise the differences between a private cloud, a traditional public cloud and the other kind of public cloud, which might better be named a “hosted private cloud”. The differences in capabilities, costs and feature sets between the three are stark, and each has a particular use case in mind. One day Microsoft might commoditise its Azure technology, enabling us to run fully abstracted clouds that can move seamlessly between providers, but until then just remember the above definitions and ensure that you use the one most appropriate for your application.
Yesterday was a really interesting experience, as I now have a clear picture of where Microsoft sees itself in the virtualisation and cloud space. In true Microsoft style it’s most definitely not the first to the party, but the amount of progress made in the years since the release of Windows Server 2008 R2 is really impressive. There’s no denying that Microsoft can achieve some incredible things when it puts its mind to it, and my first day of sessions here at TechEd Australia 2012 is a testament to that. Today I’m hoping to spend my time deep in the application development and automation space because, whilst this cloud stuff is all well and good, it’s all done for one thing: running applications.
Visit Lifehacker’s TechEd 2012 Newsroom for all the news from the show.
David Klemke is covering Windows Server 2012 for Lifehacker using his ASUS Zenbook WX32VD.