Thursday, June 11, 2015

Designing Server Virtualization

Designing your server virtualization infrastructure requires a lot of planning before it's built, as well as a plan for when something goes wrong. Virtualization helps insulate workloads from hardware issues inside a data center and allows virtual machines to be moved easily. In terms of speed, virtualization can stand up new capacity in a matter of moments.
It's important to take into consideration the amount of resources you'll need, especially in terms of capacity and power consumption. Just because your environment seems secure doesn't mean it's immune to disaster. Disasters come in many forms and are nearly impossible to avoid entirely, so having a disaster recovery plan in place is a key part of designing a server virtualization infrastructure.
Finally, private cloud always seems to creep into these plans. There's a difference between a private cloud and plain virtualization, and it's important to understand those differences in order to make an informed decision.
 
Resource provisioning and capacity planning: Provisioning resources and planning capacity may seem like a simple task, but it's one that can't be overlooked. Virtual machines that end up without the resources they need will suffer performance problems. On the other hand, overprovisioning resources to a VM is a waste. It's important to have a proper capacity plan in place to ensure your resources can handle any workload and keep your environment running smoothly.
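As a rough illustration of where a capacity plan starts, here is a minimal Python sketch that adds up each VM's peak demand and pads it with headroom. The VM names, figures and 25% headroom are made-up assumptions for illustration, not recommendations.

# Minimal capacity-planning sketch (hypothetical figures, not vendor-specific).
# Sum each VM's peak demand, then add headroom so bursts don't starve neighbors.

vms = [
    {"name": "web01", "peak_vcpu": 2, "peak_mem_gb": 4},
    {"name": "db01",  "peak_vcpu": 8, "peak_mem_gb": 32},
    {"name": "app01", "peak_vcpu": 4, "peak_mem_gb": 16},
]

HEADROOM = 0.25  # keep 25% spare for growth and bursts (assumed policy)

total_vcpu = sum(vm["peak_vcpu"] for vm in vms) * (1 + HEADROOM)
total_mem = sum(vm["peak_mem_gb"] for vm in vms) * (1 + HEADROOM)

print(f"Plan for roughly {total_vcpu:.0f} vCPUs and {total_mem:.0f} GB of RAM")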
Building a successful virtual server farm: When it comes to designing a virtual server farm, there is no one-size-fits-all answer. That doesn't mean there aren't simple guidelines to follow to create a reliable environment. Understanding your applications and knowing how many hosts you'll need are two basic steps toward building a scalable server virtualization infrastructure.
 
Sizing hosts for a virtual server farm: Your job isn't done once you determine how many servers you'll need for your environment. Next, you have to figure out the size of each server, including how much memory and how many CPU resources each host should contain.
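Continuing the sketch above, this hypothetical calculation turns planned demand into a host count for a candidate host size, assuming a 4:1 vCPU-to-core overcommit ratio and one spare host for failover. Every number here is illustrative and should be replaced with your own measurements and policies.

import math

# Host-sizing sketch: how many hosts of a given size cover the planned demand?
# The host spec and the 4:1 vCPU overcommit ratio are illustrative assumptions.

required_vcpu = 18      # from the capacity-plan sketch above
required_mem_gb = 65

host_cores = 16
host_mem_gb = 128
vcpu_per_core = 4       # assumed overcommit ratio; tune for your workloads

hosts_for_cpu = math.ceil(required_vcpu / (host_cores * vcpu_per_core))
hosts_for_mem = math.ceil(required_mem_gb / host_mem_gb)

hosts_needed = max(hosts_for_cpu, hosts_for_mem) + 1  # +1 host for failover
print(f"Provision {hosts_needed} hosts of {host_cores} cores / {host_mem_gb} GB")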
 
The problem with overprovisioning VMs: This is an important aspect, so let's discuss it in detail. It might seem like more is better when allocating resources to a virtual machine, but assigning too much creates problems of its own. Overprovisioning a VM can prevent slow performance in the short term, but it can have a negative long-term effect. Appropriately sizing virtual machines can be a difficult process with many unknowns. Allocating too few resources can starve a VM and lead to poor performance, so administrators wary of that risk often take the safer approach and allocate more resources than a VM needs. However, this overprovisioning wastes resources that other VMs could use.
 
Capacity planning tools can help organizations identify consolidation opportunities, allowing them to scale back overprovisioned VMs and save money.
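The kind of check such tools perform can be sketched in a few lines: compare what each VM was allocated with what it actually peaked at, and flag the ones that never come close. The monitoring data and the 50% threshold below are invented for illustration; real tools pull this from historical performance data.

# Right-sizing sketch: flag VMs whose allocation is far above observed peak use.

observed = [
    # name, allocated vCPU, peak vCPU used, allocated GB, peak GB used
    ("web01", 8, 1.5, 32, 6),
    ("db01",  8, 6.8, 64, 55),
]

WASTE_RATIO = 0.5  # flag if peak use is under 50% of allocation (assumed policy)

for name, vcpu_alloc, vcpu_peak, mem_alloc, mem_peak in observed:
    if vcpu_peak < vcpu_alloc * WASTE_RATIO and mem_peak < mem_alloc * WASTE_RATIO:
        print(f"{name}: candidate for downsizing "
              f"(CPU {vcpu_peak}/{vcpu_alloc}, RAM {mem_peak}/{mem_alloc} GB)")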
 
Overprovisioning is a huge and very pervasive problem, and I think it's because it is one of the only ways people have to manage risk in their IT environment. If you have an unknown -- you don't know what your application is going to do or you don't know exactly what you'll need -- overprovisioning is the traditional way to go about it. In virtual and cloud environments, it just keeps on propagating. In virtual environments, if you have a performance problem, you can just throw more hardware at it, and that's the default workaround rather than digging deeper. In clouds, people buy cloud instances because they don't know what they need. Sometimes it's the most prudent way to go for someone, but we're getting to the point where this isn't something we should tolerate. There are ways to fix it that don't cost a whole lot of money. In the past, maybe it was necessary, but now it's not.

We like to use an analogy to a game of Tetris. Workloads come in different shapes and sizes, and when you add them together, it starts to jumble up to the point where servers look like they're full. But when you play Tetris more cleverly and move those blocks around, you can defrag capacity and get a lot more out of it. Sometimes people are doing all the right things with the tools they have at their disposal, but they can't fight this because they don't have anything that can help them play Tetris better. I wouldn't characterize overprovisioning as people doing anything wrong; it's just that they don't have the analytics at their disposal to fix it. So, we see a lot of people buying more hardware before they really need to. If you analyze things more carefully, you can go further with what you have and not increase risk, just by sorting things out so they don't collide.
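For a rough idea of what "playing Tetris better" means in practice, here is a first-fit-decreasing packing sketch that places memory-sized workloads onto hosts. It's a toy model under made-up numbers; real placement engines also weigh CPU, affinity rules, licensing and failure domains.

# "Playing Tetris" with workloads: a first-fit-decreasing packing sketch.

host_capacity = 64  # GB of RAM per host (illustrative)
workloads = [24, 8, 16, 32, 4, 12, 20]  # GB each (illustrative)

hosts = []  # each entry is the free space left on that host
for wl in sorted(workloads, reverse=True):
    for i, free in enumerate(hosts):
        if wl <= free:
            hosts[i] -= wl  # place the workload on the first host it fits
            break
    else:
        hosts.append(host_capacity - wl)  # no fit found, open a new host

print(f"{len(hosts)} hosts used, free space per host: {hosts}")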
 
If I'm running a critical production environment, I might want two servers totally empty for failover purposes. I might want a bunch of capacity sitting idle for disaster recovery purposes. I might not want my servers going above half capacity for safety reasons. The way you approach that is to define your operational parameters, including safety margins and dependencies, and that defines when capacity is full -- not whether CPU use is at 100%. It really comes down to properly capturing operational policies, which means defining what spare capacity you want to have. Then, everything beyond that is a waste.
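That policy-driven definition of "full" can be expressed very simply: subtract the hosts you keep empty and the safety margin you insist on, and compare what's left against what you've already committed. The host counts and margins below are assumed examples of such a policy, not defaults.

# Sketch of "full" as an operational policy, not 100% CPU.

total_hosts = 10
host_mem_gb = 256
failover_hosts = 2        # kept empty for HA/DR (assumed policy)
safety_margin = 0.30      # never plan past 70% of a host (assumed policy)

usable_gb = (total_hosts - failover_hosts) * host_mem_gb * (1 - safety_margin)
committed_gb = 1100       # memory already promised to VMs (example figure)

print(f"Usable capacity: {usable_gb:.0f} GB, committed: {committed_gb} GB")
print("Capacity is full" if committed_gb >= usable_gb else
      f"Headroom left: {usable_gb - committed_gb:.0f} GB")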
 
If you have a line of business that is running applications on central IT infrastructure and they aren't paying with some type of chargeback model, they might be hard pressed to give up some of those resources because they're not paying for them. If IT is footing the bill, they care about the density. If you're a cloud or chargeback customer, you care about what you're paying. So, it's a discussion that would go differently depending on who's footing the bill.
 
We see organizations where IT is footing the bill and still getting lines of business to tighten things up a bit. The way they do that is to address new deployments. If I'm IT, when you ask for new capacity, I'm not going to give it to you if you're wasting the capacity you have. Of course, it's not always quite that simple, but that's the type of leverage IT has.
 
Optimizing performance and power: In contrast to overprovisioning, proper resource utilization can optimize performance. Not only will you get strong performance from provisioning the right amount of resources, you can maximize efficiency and savings as well.
Reclaim swap file space, reduce storage costs: Although swap files enable features such as memory overcommit, companies are finding that large swap files waste expensive storage space. Solid-state drives are mostly measured in gigabytes instead of terabytes, which makes it critical to use that space efficiently.
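As a back-of-the-envelope illustration: on VMware ESXi, for example, a VM's swap file is typically sized at its configured memory minus its memory reservation, so raising reservations shrinks the swap files sitting on the datastore. The VMs and figures below are hypothetical.

# Swap-space estimate sketch (assumes swap file = configured memory - reservation,
# as on ESXi). All figures are illustrative.

vms = [
    # name, configured memory (GB), memory reservation (GB)
    ("web01", 16, 0),
    ("db01",  64, 32),
    ("app01", 32, 8),
]

swap_gb = sum(mem - res for _, mem, res in vms)
print(f"Swap files consume roughly {swap_gb} GB of datastore space")
print(f"Reserving all memory would reclaim that {swap_gb} GB "
      "(at the cost of giving up overcommit)")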