latest updates from easySERVICE™
The fast-moving IT market can make choosing, designing and deploying a storage system challenging. This has been compounded in recent years by the rapid rise of virtual server technology, which blurs legacy notions of server, network and storage architectures. Now more than ever, efficient storage design is critical, as storage underpins every element of the IT infrastructure. Many IT departments will soon encounter performance issues within their virtual infrastructures, if they have not already. This is often because storage was treated as a minor consideration within the infrastructure, with little understanding of the importance of IOPS.
As IT staff deploy virtual machines at an exponential rate, they inevitably reach the performance limit of the spinning disks, exceeding the storage IOPS limits before the capacity limits. As a result, a significant amount of capacity is left stranded and cannot be utilized. Consider the four main components within virtualization: CPU, memory, network and disk storage. Memory sizes are increasing rapidly thanks to low-cost RAM modules, alongside faster connectivity such as 10Gbit Ethernet and 40Gbit InfiniBand. All of this allows more VMs to be deployed at increased levels of performance.
However, when we look at disk technology, we observe that classic spinning disks remain limited by the same mechanical components, which constrains the number of IOPs and thus the number of VMs they can support, despite improvements in interface speed. Although SSD technology will alter the relationship between storage and VMs, it is still in the early phases of adoption and may not become mainstream for several years due to cost and capacity constraints as well as reliability concerns.
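The spindle ceiling described above can be put into rough numbers. The sketch below estimates how many disks a VM workload needs; the per-disk IOPS figures are common rules of thumb for mechanical drives, not vendor specifications, and the VM counts and per-VM demand are illustrative assumptions.

```python
# Rough spindle-count estimate for a virtualized workload.
# Per-disk IOPS figures are rules of thumb, not vendor specs.
DISK_IOPS = {"7.2K SATA": 80, "10K SAS": 130, "15K SAS": 180}

def spindles_needed(vm_count, iops_per_vm, disk_type):
    """Return the minimum number of disks to satisfy the IOPS demand."""
    total_iops = vm_count * iops_per_vm
    per_disk = DISK_IOPS[disk_type]
    return -(-total_iops // per_disk)  # ceiling division

# Example: 100 VMs at 30 IOPS each on 15K SAS disks.
print(spindles_needed(100, 30, "15K SAS"))  # 3000 IOPS / 180 -> 17 disks
```

Note that 17 spindles of 15K SAS may hold far more capacity than 100 typical VMs require, which is exactly the stranded-capacity effect described above: the IOPS limit is hit long before the capacity limit.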
To sum up the problem simply: the IT industry's thirst for ever greater numbers of virtual machines puts significant pressure on the storage infrastructure, and that pressure can only be addressed by more efficient design.
When did storage become so critical?
Looking at a modern virtual infrastructure, the hypervisor does an excellent job of making server hardware a commodity component, regardless of the technology used – VMware, Xen or Hyper-V. By making servers ‘virtual’, an IT department can move them between hardware swiftly and without downtime, leaving the physical machine to be viewed as little more than a CPU with a bit of memory.
Conversely, in a VM environment, the storage system grows in importance as it becomes the underpinning of the entire infrastructure. In a traditional environment, most (if not all) physical servers have their own internal system disks and only rely on the SAN for application storage. In a virtualized environment the traditional system disks are provisioned from the central storage which not only adds load but also randomizes the data pattern as many virtual servers all contend for the same disk resource.
Designing the Storage System
The key to designing an efficient storage solution is understanding the applications and the environment's requirements. The knowledge needed to design the right architecture can come from technical meetings and discussions, remote analysis, on-site professional services and studying application best-practice guides on IOPS requirements for Exchange, SQL, VMware View or other applications specific to the environment.
In every case, the basic goal is to determine whether the environment/application is sequential or random in nature. Next, discover the requirements for capacity, throughput (MB/s) and/or IOPS. There may also be requirements for storage functionality such as snapshots and replication.
If the full range of data is unavailable, simply knowing the operating system and applications will give you a direction for the design. The most demanding servers in the customer environment can be monitored using iostat (Unix) or Perfmon (Windows). When used correctly, these built-in tools can provide all the data needed (http://www.performancewiki.com/diskio-monitoring.html). Another option is to use a third-party monitoring application such as VMware Capacity Planner, which gathers detailed performance information and produces storage reports. Finally, you may gather performance statistics from the existing storage system.
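Whichever tool collects the samples, the raw numbers still need summarizing into the figures a design is built on: average IOPS, peak IOPS and read/write mix. The sketch below assumes you have already captured per-interval read and write rates (for example, the r/s and w/s columns of `iostat -x <interval>`) over a busy period; the sample values are invented for illustration.

```python
# Summarize per-interval read/write IOPS samples, e.g. the r/s and
# w/s columns collected with `iostat -x <interval>` during peak hours.
samples = [
    {"r_s": 210.0, "w_s": 95.0},
    {"r_s": 340.0, "w_s": 120.0},
    {"r_s": 180.0, "w_s": 300.0},
]

totals = [s["r_s"] + s["w_s"] for s in samples]
avg_iops = sum(totals) / len(totals)
peak_iops = max(totals)
read_pct = sum(s["r_s"] for s in samples) / sum(totals) * 100

print(f"avg {avg_iops:.0f} IOPS, peak {peak_iops:.0f} IOPS, {read_pct:.0f}% reads")
```

Sizing to the peak rather than the average is the safer choice for random workloads, since a storage system that only meets the average will queue I/O during every busy period.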
The pace at which server and storage technology is advancing, particularly in the area of virtualization, has left large gaps in knowledge among general IT departments. This has led to poorly designed architectures and, in turn, performance challenges.
Well-educated, experienced and technically competent storage resellers have a great opportunity to help these IT departments through professional analysis, systems design, installation services and training. A quality reseller can help future-proof the customer against growing data volumes, performance-hungry databases and server virtualization technology.