Most IT organizations today maintain lab environments because they play an important role in application development, testing, and quality assurance. Many advances have occurred within these labs over the past decade, the most obvious being the widespread adoption of x86 virtualization, as pioneered by VMware. In fact, these labs were a strategic beachhead for VMware, and they drove the initial adoption of virtualization in most organizations.

Most of these labs run on VMware, but they now face multi-hypervisor environments, as well as the need to recreate Software-as-a-Service (SaaS) application integrations. Why are multiple hypervisors and cloud integration with AWS or Azure being considered at all?

The bottom line is cost. Now that IT has matured and administrators have better visibility into what applications actually require from the infrastructure to operate normally, they are finding that not every workload needs the performance capabilities VMware provides. Combine this with the fact that the majority of virtualized workloads are Microsoft-based, and it is no wonder that Microsoft Hyper-V has emerged as the second hypervisor of choice for the hybrid data center.

Multi-Hypervisor Deployment Catching On

The primary multi-hypervisor deployment scenario within labs is hypervisor tiering, whereby high-end workloads are provisioned to VMware and the rest are provisioned to Hyper-V. A growing number of users expect that, over time, Hyper-V will dominate their labs while VMware is restricted to the production data center.

Looking further into the future, Hyper-V will start to penetrate production, and VMware will retain only the high ground (the production core) for mission-critical and compute-intensive workloads. We have, of course, all seen this movie play out in data centers before. In the database market, Microsoft SQL Server crept into IT organizations through the back door and eventually grew like a weed, leaving Oracle to hold the high ground.

Now that lab environments can contain multiple hypervisors, developers are increasingly confident that they can test integrations such as Salesforce.com connected to an internal accounting system, which in turn feeds data into a database that gives sales managers around the globe mobile access. These discussions are now commonplace because we have the capacity to test, simulate, and consume these test grounds with little impact on production systems.

Evolutionary Stages of Lab Management

The evolution of the lab can be understood as a progression from crawling to walking to running. Lab automation can be viewed as the crawling phase within software labs. This phase helped developers move from traditional waterfall methodologies to agile, customer-focused methodologies by enabling them to rapidly set up and tear down on-demand test environments, without intervention from internal IT and without the load normally placed on the physical environment.
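To make that pattern concrete, here is a minimal sketch of on-demand environment setup and tear-down. The LabAPI class is a hypothetical stand-in for whichever lab-management product is in use; the provisioning-and-guaranteed-cleanup pattern is the point, not the specific API.

```python
# A minimal sketch of on-demand test environments. LabAPI and its methods
# are hypothetical stand-ins for a real lab-management SDK.
from contextlib import contextmanager

class LabAPI:
    """Hypothetical lab-management client (stub for illustration)."""
    def deploy(self, template, isolated=True):
        print(f"deploying isolated copy of {template}")
        return {"template": template, "isolated": isolated}

    def destroy(self, env):
        print(f"tearing down {env['template']}")

@contextmanager
def test_environment(lab, template):
    """Clone a production-like environment, yield it, always tear it down."""
    env = lab.deploy(template=template, isolated=True)  # fenced off from prod
    try:
        yield env
    finally:
        lab.destroy(env)  # release capacity even if a test fails

# Each developer gets an independent, isolated copy of production:
with test_environment(LabAPI(), "prod-snapshot") as env:
    pass  # run integration tests against env here
```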

Lab automation enabled the development, testing, and QA cycles to move in parallel across the organization by giving each developer his or her own independent, isolated test environment that was an actual copy of the production systems. In 2011, VMware said it had more than 4,000 deployments of its lab management product, so this is a relatively mature market.

The evolution to multi-hypervisor deployments can be viewed as the walking phase, one that will see increased adoption in 2013 and 2014, focused on extracting economic value from legacy applications and their integrations.

Application developers need an agile environment that lets them accelerate development times without placing a heavy load on infrastructure, driving up costs, or causing VM sprawl.

The need to support deployment of applications on VMware, Microsoft, and Amazon platforms, as well as through web services and mobile-device-specific integrations, is driving the need for more diverse environments. Given the speed and volatility of the current market, the ability to automate across hypervisors and across platforms, without committing to a single vendor, is critical. You can see developments in this space in the discussion of software-defined networking (SDN), where the separation of the data plane and control plane lets the software tell the infrastructure what resources it needs.
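As one illustration of vendor-neutral automation, the sketch below uses Apache Libcloud, an open-source Python library that puts a single API in front of many cloud providers. The credentials, regions, and image/size selection shown are illustrative assumptions, not a recommendation for any one vendor.

```python
# Sketch of provider-agnostic provisioning with Apache Libcloud.
# Credentials and region values below are placeholders (assumptions).
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def provision_test_vm(provider, key, secret, name, **driver_kwargs):
    """Create one small test VM on whichever provider is passed in."""
    driver_cls = get_driver(provider)
    driver = driver_cls(key, secret, **driver_kwargs)

    # Naive selection for brevity: first image, smallest size by RAM.
    # A real lab would filter images and sizes deliberately.
    image = driver.list_images()[0]
    size = sorted(driver.list_sizes(), key=lambda s: s.ram)[0]
    return driver.create_node(name=name, image=image, size=size)

# The same call works against EC2 or Rackspace; only the driver changes.
node_aws = provision_test_vm(Provider.EC2, "ACCESS_KEY", "SECRET_KEY",
                             "lab-test-01", region="us-east-1")
node_rax = provision_test_vm(Provider.RACKSPACE, "USERNAME", "API_KEY",
                             "lab-test-02", region="ord")
```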

As for the running phase, it is quickly approaching, arriving over the next 12 to 24 months. As the capability to consume and replicate environments becomes common, the pace at which applications are integrated to feed Big Data systems will increase dramatically. Most of the systems businesses use today cannot interact directly with new technologies, so developers are working frantically to create middleware that speeds the integration of legacy applications with those technologies.
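A minimal sketch of that middleware pattern might look like the following, assuming a legacy relational database on one side and a hypothetical HTTP ingest endpoint on the other. The table, columns, and URL are invented for illustration.

```python
# Sketch of a thin middleware layer: poll a legacy database and forward
# new records to a modern REST API. The table, columns, and endpoint URL
# are illustrative assumptions.
import json
import sqlite3           # stand-in for the legacy database driver
import urllib.request

LEGACY_DB = "legacy_orders.db"
FEED_URL = "https://bigdata.example.com/ingest"   # hypothetical endpoint

def forward_new_orders(last_seen_id):
    """Push rows newer than last_seen_id to the ingest endpoint."""
    conn = sqlite3.connect(LEGACY_DB)
    rows = conn.execute(
        "SELECT id, customer, amount FROM orders WHERE id > ?",
        (last_seen_id,),
    ).fetchall()
    for row_id, customer, amount in rows:
        payload = json.dumps(
            {"id": row_id, "customer": customer, "amount": amount}
        ).encode("utf-8")
        req = urllib.request.Request(
            FEED_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)   # hand the record to the new system
        last_seen_id = row_id
    conn.close()
    return last_seen_id
```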

Legacy systems are now virtualized, but the new technologies being consumed come from vendors such as Amazon, Azure, Google, and soon Pivotal. Developers also have Infrastructure-as-a-Service (IaaS) offerings from Terremark, AT&T, Rackspace, and others, which must be consumed to replicate the environments in which their applications will ultimately run. This, in the end, is what enables the business value we all expect from the cloud.

Clearly, developers need to be able to consume the same environments as their customers. This will be a combination of internal physical and virtual systems integrated with externally hosted IaaS or Platform-as-a-Service (PaaS), and tied into a cloud application such as Salesforce.com or Workday.com. Software labs will evolve from being 100% on-premises toward hybrid cloud architectures, just as their customers are. Some workloads will remain in a private cloud while others are scattered across one or more public clouds, with integrations back to the critical systems that remain on premises and under the organization's control.

By focusing on the crawl, walk, run journey, users can understand where they stand on the path to multi-hypervisor deployment, the cloud, and the consumption of as-a-service architectures. Understanding where you are allows you not only to evaluate your needs today, but also to see the overall journey ahead, based on the evolution and pace of your SDLC methodology. This will ultimately help organizations focus on the customer by leveraging data more efficiently and by turning IT into a profit center.

John Ross