latest updates from easySERVICE™
Virtualization has enabled organizations to dramatically reduce the cost of delivering compute services by consolidating hardware and data center facilities; however, in most virtualized data centers today there are still significant opportunities to avoid unnecessary costs as organizations scale out their virtual IT estates.
Given that most organizations continue to scale out, with double-digit growth in the number of application workloads hosted on their virtual data center infrastructure, the annual savings in capital and operational expenses can translate into tens or hundreds of thousands, or even millions, of dollars per annum.
The big questions you should be asking are: “How efficient are your virtualized data centers?” “What actions can be taken on an ongoing basis to ensure you and your customers are getting the best return on investment possible?” and “How does this translate into hard dollar year-on-year cost savings?”
Managing the Economics of your Virtualized Datacenter
Virtualization has turned IT infrastructure into a utility platform: infrastructure resources can be shared amongst multiple application workloads, and those workloads can be provisioned and decommissioned through software.
Ensuring there is enough infrastructure capacity available to meet the resource demands of application workloads is critical to maintaining consistent performance. At the same time, it is important to ensure that the infrastructure is not over-provisioned, because this impacts the overall economics of service delivery, driving up the cost of hosting each application workload. The “Desired State” is where you do not compromise application performance whilst eliminating inefficiencies from your infrastructure.
Today, native hypervisor scheduling and monitoring tools do not provide the “Decision Analytics” needed to maintain virtualized IT in the “Desired State”, which is why such a high percentage of the organizations benchmarked through business impact assessments have a significant opportunity to streamline the overall cost of service delivery.
Key Strategies to Manage Virtualized IT in the “Desired State” to Optimize the Cost of Delivering Virtualized Compute
Intelligent Workload Placement Decisions Within Clusters
Application workloads typically consume a broad set of resources including CPU, memory, network I/O, storage I/O, disk space and IOPS capacity on storage. The demand they place on the environment also fluctuates over time. They may also be constrained by limitations in the architecture of the servers, network and storage, as well as by business policies such as software licensing, application resilience and business continuity requirements.
Hypervisor vendors have developed simple scheduling mechanisms to equalize utilization of host resources within a cluster. These are based on threshold mechanisms that detect deviations in the utilization of host resources within a cluster, such as memory and CPU, and respond by moving the biggest consumer(s) of the resource to less-utilized hosts.
Whilst this approach attempts to correct deviations in the utilization of memory or CPU across hosts in a cluster, these mechanisms do not look holistically at how application workloads could best be placed in the environment to preserve quality of service and prevent workload interference whilst maximizing the efficiency of the infrastructure.
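The threshold mechanism described above can be sketched in a few lines. This is an illustrative simplification, not any vendor's actual scheduler: the host and VM records, the 15% deviation threshold, and the single-resource (CPU-only) view are all assumptions made for the sketch.

```python
def rebalance(hosts, threshold=0.15):
    """Naive threshold-based rebalancer (illustrative only): if a host's
    CPU utilization exceeds the cluster mean by more than `threshold`,
    move its biggest VM to the least-utilized host."""
    mean = sum(h["cpu"] for h in hosts) / len(hosts)
    moves = []
    for host in hosts:
        if host["cpu"] - mean > threshold and host["vms"]:
            # Pick the biggest consumer on the hot host...
            vm = max(host["vms"], key=lambda v: v["cpu"])
            # ...and the coolest host as the migration target.
            target = min(hosts, key=lambda h: h["cpu"])
            if target is not host:
                host["vms"].remove(vm)
                host["cpu"] -= vm["cpu"]
                target["vms"].append(vm)
                target["cpu"] += vm["cpu"]
                moves.append((vm["name"], target["name"]))
    return moves
```

Note what this sketch does not do, which is exactly the gap described above: it considers one resource dimension in isolation and reacts to a threshold, rather than searching for a placement that balances CPU, memory, I/O and policy constraints together.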
Intelligent Workload Placement – Across Clusters
Today, Microsoft and VMware hypervisor solutions provide the ability to live migrate/vMotion application workloads between clusters; however, what is missing is the capability to analyze cross-cluster workload placement decisions on an ongoing basis, to drive a virtualized IT estate toward the “Desired State” and keep it there.
Intelligent cross-cluster workload placement opens up a whole new opportunity to exploit underutilized islands of compute resource. Not only does this enable more efficient use of resources by eliminating the need to provision for peaks in cluster utilization, it also provides a safety net to accommodate unplanned peaks in application workload resource utilization.
Many organizations have built up their infrastructure around specific projects or they dedicate specific hardware to run specific application services.
The reality in most cases is that this does not make the most economic use of resources and in many environments results in smaller clusters of resources, which are significantly underutilized.
Having the ability to “simulate” the potential impact of consolidation scenarios, where larger pools of resources are built from existing hardware or more optimal hardware configurations are deployed, is highly valuable and can result in significant ongoing cost savings by raising the utilization of underused resources.
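One way to make such a "what-if" simulation concrete is to merge the demand of several small clusters into one pool and compare host counts before and after. The host capacity, headroom figure and single demand number per cluster below are illustrative assumptions; real capacity planning must also account for memory, I/O, failover policy and licensing.

```python
import math

def simulate_consolidation(clusters, host_capacity, headroom=0.25):
    """Estimate hosts needed if separate clusters were merged into one
    resource pool. `clusters` maps cluster name -> total demand, in the
    same units as `host_capacity`; `headroom` reserves capacity for
    unplanned peaks. Illustrative sketch only."""
    usable = host_capacity * (1 - headroom)
    # Today: each cluster rounds its demand up to whole hosts separately.
    current_hosts = sum(math.ceil(d / usable) for d in clusters.values())
    # Merged: one pool rounds up only once, absorbing the fragmentation.
    merged_hosts = math.ceil(sum(clusters.values()) / usable)
    return current_hosts, merged_hosts
```

For example, three project clusters each needing only a fraction of a host still consume three physical hosts today, whereas a single merged pool might need just one; the gap is the "island" waste described above.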
Recovering Unused Reserved Server Capacity
Both VMware and Hyper-V provide mechanisms to reserve resources in the underlying infrastructure for individual application workloads. VMware refers to these as reservations, and Microsoft Hyper-V can be configured with static memory allocation.
These mechanisms are often employed to mitigate the risk of performance issues by guaranteeing that resources are available to an application workload at all times, even when they are not used. The downside is that this often results in significant inefficiency, because in many environments customers end up reserving far more capacity from the underlying infrastructure than is actually required.
In this context, having the ability to set reservations based on the actual amount of resources required by individual application workloads is very important. This capability is often referred to as virtual machine rightsizing, and it can result in the recovery of significant memory and CPU resources, increasing hardware utilization and deferring hardware spend.
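As a hedged sketch of the rightsizing idea, a suggested reservation can be derived from observed utilization rather than a worst-case guess. The 95th-percentile target and 20% safety buffer below are illustrative assumptions, not vendor guidance.

```python
def rightsize(samples, percentile=95, buffer=1.2):
    """Suggest a reservation from observed utilization samples: take the
    given percentile of demand and add a safety buffer (illustrative)."""
    ordered = sorted(samples)
    # Index of the requested percentile, clamped to the last sample.
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    return ordered[idx] * buffer
```

For a VM reserved at, say, 16 GB whose observed 95th-percentile demand is 6 GB, this approach would suggest roughly 7.2 GB, returning the remainder to the pool for other workloads.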
IT organizations that manage high levels of complexity in their IT infrastructures require a sophisticated set of capabilities to diagnose and prevent application slowdowns caused by the infrastructure, especially during consolidation or migration. Complexity is driven by the heterogeneity of storage subsystems, host bus adapters (HBAs), operating systems, fabric switches, virtualization platforms, multi-site replication, storage virtualization, and the continued 50% annual growth of data and bandwidth utilization.
The larger the server and SAN (Storage Area Network) infrastructure, the greater the risk of problems. Data center consolidation, accelerated use of server and storage virtualization, flat budgets, and the migration to a cloud computing environment further complicate the infrastructure manager’s ability to track and optimize performance and availability.
We focus on designing and building the most appropriate infrastructure to meet the unique needs and characteristics of your individual business. Your data is too precious not to be protected by the best, most affordable and highly efficient data storage solution in the industry. Our solution delivers modern data protection, built for virtualization and private cloud, without a big price tag.
If you’d like to discuss any of the above best practices or lessons learned with us or to learn more about how we are partnering with companies just like yours to ensure the availability of mission-critical applications, please contact us at (855) US STELLAR.