Saturday, December 12, 2015

Cyber Security: Next Steps



Cyber security matters. Products are hacked in order to misuse, abuse, and confuse. Unlike other technologies, which are mastered by specific teams and functions, security is a base technology that belongs to the body of knowledge of every software developer. In this blog I ponder some best practices for security engineering, viewed as a continuously improving and evolving process for a given enterprise.

Sophisticated functionality and the ever-increasing perfection of embedded and distributed IT systems have been made possible by a growing number of interconnected components. Open interfaces, standardized platforms, and a variety of heterogeneous networks drive complexity and security risks. For any given system, it is only a question of time before the resulting security vulnerabilities are systematically identified and exploited, at the harm and expense of users and manufacturers.

Security is a quality attribute which interacts heavily with other such attributes, including availability, safety, and robustness. It is the sum of all attributes of an information system or product that contribute towards ensuring that the processing, storing, and communicating of information sufficiently protects confidentiality, integrity, and authenticity. Cyber security implies that it is not possible to do anything with the processed or managed information that is not explicitly intended by the specification of the embedded system.

Currently used security engineering concepts, such as proprietary subsystems, the protection of components, firewalls between components, and the validation of specific features, are necessary basics but insufficient to ensure end-to-end security at the systems level. Intelligent attack scenarios evolve from different directions, such as attacks on unprotected networks, the introduction of dangerous code segments through open interfaces, and changes to configurations, and they prove that security has to become a topic throughout the entire organization, with high management attention.

Cyber security needs evolve fast with the advent of the Internet of Things (IoT). Let us look at modern automotive systems as an example of connectivity and IoT. Distributed networks, such as those inside cars and from car to roadside, are an essential part of today's modern infrastructures, with their needs for safety and comfort. Besides the further development of innovative sensors like radar and camera systems and the analysis of their signals in highly complex systems, connected cars will be a driving factor for tomorrow's innovation. Internet connections will not only provide information to the passenger: functions like eCall, communication between cars, and car-to-infrastructure communication (vehicle2x) show high potential for revolutionizing individual traffic. The advantages are obvious, from improvement of traffic flow controlled by intelligent traffic lights, warnings from roadside stations, or brake indications from adjacent cars, to enhanced driver assistance systems and automated driving. But the connection to the outer world also bears the risk of attacks on the car.

Based on our experiences with clients worldwide, we show which security engineering activities are required to create secure systems and how these activities can be performed efficiently in the automotive domain. Key points in the development of protected systems are the proper identification of security requirements, the systematic realization of security functions, and a security validation to demonstrate that security requirements have been met. Here are some obvious items from the cyber security checklist:

·         Standardized process models for a systematic approach which is anchored in the complete development process. This starts in the requirements analysis phase, and continues through the design and development to the test and integration of components and the network.

·         Quick software updates to close vulnerabilities in the deployed and operational software.

·         Reliable protocols that are state-of-the-art and meet long-term security demands. In security, this typically involves cryptographic keys, so key management must be maintained over the lifecycle of the vehicle.

·         In-vehicle networks and a system architecture that provide flexibility and scalability and are designed with consideration of security aspects.
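As an illustration of the update and key-management bullets above, here is a minimal Python sketch of verifying that a software update carries a valid integrity tag before it is installed. The key name and firmware bytes are invented for illustration; a real vehicle would obtain the key from its key-management system and would typically use asymmetric signatures rather than a shared HMAC key.

```python
import hashlib
import hmac

def sign_firmware(firmware: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_firmware(firmware: bytes, tag: bytes, key: bytes) -> bool:
    """Accept the update only if the tag matches, using a constant-time compare."""
    return hmac.compare_digest(sign_firmware(firmware, key), tag)

# Hypothetical key delivered by the vehicle's key-management system.
key = b"per-vehicle-secret-key"
image = b"firmware image bytes"
tag = sign_firmware(image, key)

print(verify_firmware(image, tag, key))                # True
print(verify_firmware(image + b"tampered", tag, key))  # False
```

The constant-time compare matters: a naive `==` comparison can leak timing information that helps an attacker forge tags byte by byte.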

Dependability requirements are a good starting point to identify relevant security requirements and to guide elicitation of further functional requirements that will mitigate security risks. The same technique as outlined here can be applied to other scenarios, always starting with attacker motivation or functional risks arising from the system architecture. Our guidance: do not limit yourself to known incidents and defects, as some textbooks suggest. Security analysis is not a checklist approach. It has to consider the attack motivations of people who think differently than the usual engineer. Utilizing an engineering approach, however, we can more easily identify vulnerabilities in our architectures.

The results of security risk and hazard analysis, starting with asset identification through misuse, abuse, and confuse cases, and the entire security protection scheme should be well documented. It is of utmost interest to understand the approach, specifically when modifications are made at a later point. From a legal perspective, complete and maintained documentation is necessary for governance and compliance reasons. Security threats and resulting damages impact the safety of products and the integrity of private data, and thus directly endanger the financial health of a company. Our guidance: document the security case similarly to the safety case by means of an ALM/PLM environment. Maintain the related documentation and enhance it with regression test scenarios for future updates.

Security requires an end-to-end perspective. Security engineering must start with a clear focus on security requirements and related critical quality requirements, such as safety, footprint, or performance, and how they map to functional requirements. Software component suppliers and integrators first define the key functional requirements. These requirements are then analyzed for security risks and impacts. Security requirements are expanded into further functional requirements or additional security guidelines and validation steps. Security concepts are subsequently and consistently (i.e., traceably) implemented throughout the development process. Finally, security is validated on the basis of previously defined security requirements and test cases.
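To make the traceability idea concrete, here is a small Python sketch, with invented requirement IDs and function names, of an automated check that every security requirement maps to at least one implementing function and one validation test case:

```python
# Hypothetical traceability matrix: requirement -> implementing functions and tests.
requirements = {
    "SEC-001": {"impl": ["verify_update_signature"],
                "tests": ["test_reject_tampered_image"]},
    "SEC-002": {"impl": ["open_diagnostic_session"],
                "tests": []},  # gap: implemented but not yet validated
}

def traceability_gaps(reqs):
    """Return the IDs of requirements lacking an implementation or a test."""
    return sorted(rid for rid, r in reqs.items()
                  if not r["impl"] or not r["tests"])

print(traceability_gaps(requirements))   # ['SEC-002']
```

Run as part of the build, such a check turns "consistently implemented" from a review-time aspiration into a gate that fails whenever a requirement loses its link to code or tests.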

Today, cyber security by design is in the foreground due to safety, legislative, and intellectual property concerns. We recommend a life-cycle view which takes a systems engineering perspective and drives security starting with security requirements and the related test cases, while stepwise and comprehensively building the security case in line with the impacted functional requirements and quality requirements. After all, it does not help much if transactions are encrypted piecemeal, slowing down performance.

Many security attacks are the result of poorly managed software updates and uncontrolled complexity growth. Architectures, systems, and protocols must be developed with security in mind (i.e., design for security). Competences have to be developed around security engineering, and employees have to be trained how to design, verify, and sustain security throughout the product's life-cycle. Only with continuous measurement of their effectiveness does the value of security measures improve.

Contact me at ravindrapande@gmail.com for more information or to discuss these trends.

Monday, December 7, 2015

Office 2016 Review



At IndiaTrainigServices.in we got a good look at the US Developer edition of Office 2016.
Office 2016 is a major upgrade, but not in the way you’d first suppose. Just as Windows 10 ties notebooks, desktops, phones and tablets together, and adds a layer of intelligence, Office 2016 wants to connect you and your coworkers together, using some baked-in smarts to help you along.
We have tested the client-facing portion of Office 2016. Microsoft released the trial version of Office 2016 in March as a developer preview with a focus on administrative features (data loss protection, multi-factor authentication and more) that we didn’t test.

Office 2013 users can rest easy about one thing: Office 2016’s applications are almost indistinguishable from their previous versions in look and feature set. To the basic Office apps, Microsoft has added its Sway app for light content creation, and the enterprise information aggregator, Delve. 

Collaboration in the cloud is the real difference with Office 2016. Office now encourages you to share documents online, in a collaborative workspace. Printing out a document and marking it up with a pen? Medieval. Even emailing copies back and forth is now tacitly discouraged.
Microsoft says its new collaborative workflow reflects how people do things now, from study groups to community centers on up to enterprise sales forces. But Microsoft’s brave new world runs best on Office 365, Microsoft’s subscription service, where everybody has the latest software that automatically updates over time. And to use all of the advanced features of Office, you must own some sort of Windows PC.

You could still buy Office 2016 as a standalone product: It costs Rs. 6,000 for Office 2016 Home & Student (Word, Excel, PowerPoint and OneNote ) and Rs. 18,500 for Office Home & Business, which adds Outlook 2016. Office 365 is Rs. 330 per month for a Personal plan (with one device installation) and Rs. 450 per month for a Home Plan, where Office can be installed on five devices and five phones.

If you subscribe to Office 365, it’s a moot point; those bits will stream down to your PC shortly. Windows 10 users already have access to Microsoft’s own baked-in, totally free version of Office, the Office Mobile apps. It’s those people who fall somewhere in the middle—unwilling to commit to Office 365, but still wavering whether or not to buy Office—who must decide.

Our advice to an individual, family, or small business owner: Wait. If you’ve never owned Office, the free Office Mobile apps that can be downloaded from the Windows Store onto iOS, Android, and Windows Phones are very good—and include some of the intelligence and sharing capabilities built into Office 2016. Microsoft’s Office Web apps do the same.

There’s no question that Office 2016 tops Google Apps, and I haven’t seen anything from the free, alternative office suites that should compel you to look elsewhere. But Microsoft still struggles to answer the most basic question: Why should I upgrade? That’s a question that I think Microsoft could answer easily—and I’ll tell you how it can, at the end.

Before that, here’s what works, and what doesn’t, in Office 2016. With PowerPoint, however, most of the collaboration story goes out the window. You can ask coworkers to collaborate, and you can still send them links through which they can edit your shared presentations. You can still comment, and coworkers can still make changes to the text as they wish. But you can’t really manage their changes, or restrict what they can or can’t do. (You can, however, compare and reconcile versions of the same document that a coworker has worked on separately, which is vaguely similar.)

But—and this is a big but—any revisions to a document show up only if you click a teeny-tiny Save icon, way down at the bottom of the screen, that serves as a sort of CB-radio-style ‘Over’ command. It’s almost impossible to find unless you know what you’re looking for. Click it, and changes made by others show up. When your colleague makes another change, you have to click it again. It’s a pain.
Granted, collaborative editing wasn’t in the Office 2016 preview Microsoft released earlier this year. And, given that there’s an enormous blank space in the ribbon header to the right half of the screen, you have to imagine that more managed sharing is heading to PowerPoint.



Wednesday, November 11, 2015

The Internet of Things

The Internet of Things (IoT) is the network of physical objects or "things" embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data. The Internet of Things allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for more direct integration between the physical world and computer-based systems, and resulting in improved efficiency, accuracy, and economic benefit. Each thing is uniquely identifiable through its embedded computing system and is able to interoperate within the existing Internet infrastructure.

Let's understand this further. A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low -- or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network. So far, the Internet of Things has been most closely associated with machine-to-machine (M2M) communication in manufacturing and in the power, oil, and gas utilities. Products built with M2M communication capabilities are often referred to as smart. (See: smart label, smart meter, smart grid sensor.)

Business Angle - Today computers -- and, therefore, the Internet -- are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code.

The problem is, people have limited time, attention and accuracy -- all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things -- using data they gathered without any help from us -- we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.

Although the concept wasn't named until 1999, the Internet of Things has been in development for decades. The first Internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. The programmers could connect to the machine over the Internet, check the status of the machine and determine whether or not there would be a cold drink awaiting them, should they decide to make the trip down to the machine.

Integration with the Internet implies that devices will use an IP address as a unique identifier. However, due to the limited address space of IPv4 (which allows for 4.3 billion unique addresses), objects in the IoT will have to use IPv6 to accommodate the extremely large address space required. Objects in the IoT will not only be devices with sensory capabilities, but will also provide actuation capabilities (e.g., bulbs or locks controlled over the Internet). To a large extent, the future of the Internet of Things will not be possible without the support of IPv6; consequently, the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT.
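The gap between the two address spaces is easy to check with Python's standard `ipaddress` module:

```python
import ipaddress

# num_addresses on the all-zero network gives the size of each address space.
ipv4 = ipaddress.ip_network("0.0.0.0/0").num_addresses  # whole IPv4 space
ipv6 = ipaddress.ip_network("::/0").num_addresses       # whole IPv6 space

print(ipv4)          # 4294967296 -- the ~4.3 billion addresses mentioned above
print(ipv6 == 2 ** 128)  # True: IPv6 uses 128-bit addresses
print(ipv6 // ipv4 == 2 ** 96)  # True: 2**96 IPv6 addresses per IPv4 address
```

In other words, every single IPv4 address corresponds to 2^96 IPv6 addresses, which is why address exhaustion is not a practical concern for IoT deployments on IPv6.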

The ability to network embedded devices with limited CPU, memory and power resources means that IoT finds applications in nearly every field. Such systems could be in charge of collecting information in settings ranging from natural ecosystems to buildings and factories, thereby finding applications in fields of environmental sensing and urban planning.

On the other hand, IoT systems could also be responsible for performing actions, not just sensing things. Intelligent shopping systems, for example, could monitor specific users' purchasing habits in a store by tracking their mobile phones. These users could then be provided with special offers on their favorite products, or even the location of items that they need, which their fridge has automatically conveyed to the phone. Additional examples of sensing and actuating are reflected in applications that deal with heat, electricity, and energy management, as well as cruise-assisting transportation systems. Another excellent application that the Internet of Things brings to the picture is home security solutions. Home automation is also a major step forward when it comes to applying IoT. All these advances add to the long list of IoT applications. Now with IoT, you can control the electrical devices installed in your house while you are sorting out your files in the office. Your water will be warm as soon as you get up in the morning for the shower. All credit goes to the smart devices which make up the smart home. Everything is connected with the help of the Internet.
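A toy Python sketch of the smart-home idea above (the device names and the command-routing function are invented for illustration): a remote command, e.g. sent from the office, is looked up in a device registry and routed to the appliance it addresses.

```python
class SmartDevice:
    """A minimal stand-in for a network-connected appliance."""
    def __init__(self, name: str):
        self.name = name
        self.on = False

    def set_state(self, on: bool) -> str:
        self.on = on
        return f"{self.name} -> {'ON' if on else 'OFF'}"

# The "smart home": a registry of addressable devices.
home = {d.name: d for d in (SmartDevice("water_heater"), SmartDevice("hall_light"))}

def handle_remote_command(device: str, on: bool) -> str:
    """Route a command arriving over the Internet to the named device."""
    return home[device].set_state(on)

print(handle_remote_command("water_heater", True))   # water_heater -> ON
```

A real deployment would add what a toy cannot: authentication of the sender, an encrypted transport, and a publish/subscribe protocol so devices behind home routers can receive commands.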

However, the application of the IoT is not only restricted to these areas. Other specialized use cases of the IoT may also exist. An overview of some of the most prominent application areas is provided here. Based on the application domain, IoT products can be classified broadly into five different categories: smart wearable, smart home, smart city, smart environment, and smart enterprise. The IoT products and solutions in each of these markets have different characteristics.

To understand how the Internet of Things (IoT), the media, and Big Data are interconnected, it is first necessary to provide some context on the mechanisms the media use. The media approach Big Data as many actionable points of information about millions of individuals. The industry appears to be moving away from the traditional approach of using specific media environments such as newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is, of course, to serve or convey a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and content (articles) to appeal to consumers, based on insights gleaned through various data-mining activities.

The media industries process Big Data in a dual, interconnected manner:
Marketing to consumers (advertising) and data capture: The Internet of Things creates an opportunity to measure, collect, and analyze an ever-increasing variety of behavioral statistics. Cross-correlation of this data could revolutionize the targeted marketing of products and services.[55] For example, the combination of analytics for conversion tracking with behavioral targeting has unlocked a new level of precision that enables display advertising to be focused on the devices of people with relevant interests.[56] Big Data and the IoT work in conjunction: from a media perspective, data is the key derivative of device interconnectivity, and is pivotal in allowing clearer accuracy in targeting. The Internet of Things therefore transforms the media industry, companies, and even governments, opening up a new era of economic growth and competitiveness. The wealth of data generated by this industry (i.e., Big Data) will allow practitioners in advertising and media to add an elaborate layer to the present targeting mechanisms used by the industry.
Environmental monitoring: Applications of the IoT typically use sensors to assist in environmental protection by monitoring air or water quality, atmospheric or soil conditions and can even include areas like monitoring the movements of wildlife and their habitats. Development of resource constrained devices connected to the Internet also means that other applications like earthquake or tsunami early-warning systems can also be used by emergency services to provide more effective aid. IoT devices in this application typically span a large geographic area and can also be mobile.
Infrastructure management: Monitoring and controlling operations of urban and rural infrastructures like bridges, railway tracks, and on- and offshore wind farms is a key application of the IoT. The IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk. It can also be used for scheduling repair and maintenance activities in an efficient manner, by coordinating tasks between different service providers and users of these facilities. IoT devices can also be used to control critical infrastructure, like bridges, to provide access to ships. Usage of IoT devices for monitoring and operating infrastructure is likely to improve incident management, emergency response coordination, and quality of service, to increase up-times, and to reduce costs of operation in all infrastructure-related areas.[62] Even areas such as waste management stand to benefit from the automation and optimization that could be brought in by the IoT.
Manufacturing: Network control and management of manufacturing equipment, asset and situation management, and manufacturing process control bring the IoT within the realm of industrial applications and smart manufacturing as well.[64] IoT intelligent systems enable rapid manufacturing of new products, dynamic response to product demands, and real-time optimization of manufacturing production and supply chain networks, by networking machinery, sensors, and control systems together.
Digital control systems to automate process controls, operator tools and service information systems to optimize plant safety and security are within the purview of the IoT.[61] But it also extends itself to asset management via predictive maintenance, statistical evaluation, and measurements to maximize reliability.[65] Smart industrial management systems can also be integrated with the Smart Grid, thereby enabling real-time energy optimization. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by a large number of networked sensors.
Energy management: Integration of sensing and actuation systems, connected to the Internet, is likely to optimize energy consumption as a whole. It is expected that IoT devices will be integrated into all forms of energy-consuming devices (switches, power outlets, bulbs, televisions, etc.) and will be able to communicate with the utility supply company in order to effectively balance power generation and energy usage. Such devices would also offer the opportunity for users to remotely control their devices, or centrally manage them via a cloud-based interface, and enable advanced functions like scheduling (e.g., remotely powering heating systems on or off, controlling ovens, changing lighting conditions, etc.). In fact, a few systems that allow remote control of electric outlets are already available on the market.
The same applies to various other data-intensive systems.
Your feedback can help improve our products and services. Feel free to send us suggestions, or contact us via:
LinkedIn profile http://www.linkedin.com/in/ravindrarpande

Monday, August 17, 2015

Cloud Computing Maturity




Due to its exponential growth in recent years, cloud computing is still considered an emerging technology; it cannot yet be considered a mature and stable platform, and it comes with both the benefits and the drawbacks of innovation. To better understand the complexity of cloud computing, let's discuss it along four pillars:
1.      Cloud use and satisfaction level,
2.      Expected growth,
3.      Cloud-adoption drivers,
4.      Limitations to cloud adoption.
Various studies determined that the increased rate of cloud adoption is the result of perceived market maturity and the number of available services to implement, integrate and manage cloud services. Cloud adoption is no longer thought of as just an IT decision; it’s a business decision. Cloud has become a critical part of a company’s landscape and a cost effective way to create more agile IT resources and support the growth of a company’s core business.
Cloud Computing Maturity Stage
Cloud computing is still in a growing phase. This growth stage is characterized by the significant adoption, rapid growth and innovation of products offered and used, clear definitions of cloud computing, the integration of cloud into core business activities, a clear ROI and examples of successful usage. With roles and responsibilities still somewhat unclear, especially in the areas of data ownership and security and compliance requirements, cloud computing has yet to reach its market growth peak.
Cloud Adoption and Growth
How does cloud computing continue to mature? Security and privacy continue to be the main inhibitors of cloud adoption because of insufficient transparency into cloud-provider security. Cloud providers do not supply cloud users with information about the security that is implemented to protect cloud-user assets. Cloud users need to trust the operations and understand any risk. Providing transparency into the system of internal controls gives users this much needed trust.
Companies are experimenting with cloud computing and trying to determine how cloud fits into their business strategy. For some, it is clear that cloud can provide new process models that can transform the business and add to their competitive advantage. By adopting cloud-based applications to support the business, Software as a Service (SaaS) adoption is enabling organizations to channel resources into the development of their core competencies.
Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) adoptions enable businesses to experiment with new technologies and new services that require resources that would be expensive if they were completed through in-house implementation. IaaS and PaaS also allow companies to adapt to the rapid changes in market demand, because they create a completely new, faster and cheaper offering.
User Satisfaction
According to respondents, the level of satisfaction with cloud services is on the rise. Cloud services are now commonly being used to meet business as usual (BAU) and strategic goals, with the expectation that they will be more important for BAU than strategic plans in the future.
It’s not perfect yet, but the level of satisfaction with cloud services and deployment models is expected to increase as the market matures and vendors define standards to minimize the complexity around cloud adoption and management. The increase in cloud service brokers and integrators is helping businesses integrate applications, data, and shared storage more efficiently, making ongoing maintenance much easier.
Moving Past the Challenges
Studies have found that the most significant cloud concerns involve security and international data privacy requirements, data custodianship, legal and contractual issues, provider control over information, and regulatory compliance. Both cloud providers and cloud users have a role in moving past these concerns. Cloud providers need to demonstrate their capability to deliver services in a secure and reliable manner. Companies must understand their own accountability for security and compliance, and their responsibility for implementing the necessary controls to protect their assets.
Gaining Maturity
The decision to invest in cloud products and services needs to be a strategic decision. Top management and business leaders need to be involved throughout a cloud product’s life cycle. Any cloud-specific risk should be treated as a business risk, requiring management to understand cloud benefits and challenges to be able to address cloud-specific risk. The need remains for better explanations of the benefits that cloud can bring to an organization and how cloud computing can fit into the overall core strategy of a business.
Effective Access Control
As the threat landscape has evolved to include adversaries with deep pockets, immense resources and plenty of time to compromise their intended target, security professionals have been struggling to stave off data breaches. This is not a matter of if your network will be compromised, but when.
Since many companies have built up their perimeter defenses to massive levels, attackers have doubled down on social engineering. Phishing and malware-laden spam are designed to fool company employees into divulging login information or compromising their machines. Since threat actors have become so good at circumventing traditional defenses, we cannot afford to have only a single point of failure. Without proper internal security, attackers are given free rein of the network as soon as they gain access to it.

Instead, attackers should encounter significant obstacles between the point of compromise and the sensitive data they are after. One way to accomplish this is with network segmentation.
Keep your hands to yourself: In an open network without segmentation, everyone can touch everything. There is nothing separating Sales from Legal, or Marketing from Engineering. Even third-party vendors may get in on the action.
The problem with this scenario is that it leaves the data door wide open for anyone with access credentials. In a few hours, a malicious insider could survey the network, collect everything of value and make off with the goods before security personnel get wind of anything out of the ordinary.
What makes this problem even more frustrating is that there is no reason everyone on the network should be able to touch every resource. Engineers don’t need financial records to perform their job, and accountants don’t need proprietary product specifications to do theirs.
By simply cordoning off user groups and only allowing access to necessary resources, you can drastically reduce the potential damage an attacker could inflict on the organization. Instead of nabbing the crown jewels, the thief will have to settle for something from the souvenir shop. Additionally, the more time the attacker spends trying to navigate and survey your network, the more time you have to find them and throw them out, preventing even the slightest loss of data in the process.
How it works: It is best to think of a segmented network as a collection of zones. Groups of users and groups of resources are defined and categorized, and users are only able to “see” the zones appropriate to their role. In practice, this is usually accomplished by crafting access policies and using switches, virtual local area networks (VLANs) and access control lists to enforce them.
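The zone idea can be sketched in a few lines of Python; the group and zone names below are hypothetical. The key property is default-deny: a group only "sees" the zones its policy explicitly lists, and unknown groups see nothing.

```python
# Hypothetical zone policy: user group -> set of network zones it may reach.
ZONE_POLICY = {
    "engineering": {"code_repos", "build_servers"},
    "accounting":  {"financial_records"},
    "sales":       {"crm"},
}

def may_access(group: str, zone: str) -> bool:
    """Default-deny: unlisted zones and unknown groups are refused."""
    return zone in ZONE_POLICY.get(group, set())

print(may_access("engineering", "build_servers"))     # True
print(may_access("engineering", "financial_records")) # False
print(may_access("contractor", "crm"))                # False: unknown group
```

In practice, the same logic lives in switch ACLs and VLAN assignments rather than application code, but the mental model is identical: a table of allowed (group, zone) pairs, with everything else denied.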
While this is all well and good, segmentation can quickly become a headache in large corporate environments. Network expansion, users numbering in the thousands and the introduction of the cloud can disrupt existing segmentation policies and make it difficult to maintain efficacy. Each point of enforcement could contain hundreds of individual policies. As the network grows in users and assets, segmentation policies can quickly become outdated and ineffective.
Retaining segmentation integrity is an important security function in today’s world of advanced threats and high-profile data breaches. To properly protect themselves, organizations need to constantly maintain segmentation, adding new policies and adjusting existing ones as network needs change.
One way to tackle the challenges of traditional access control is with software-defined segmentation, which abstracts policies away from IP addresses and instead bases them on user identity or role. This allows for much more effective and manageable segmentation that can easily adapt to changes in the network topology.
Active segmentation for effective access control: When you couple software-defined segmentation with an intelligent planning and implementation methodology, you get active segmentation. This approach to segmentation allows network operators to effectively cordon off critical network assets and limit access appropriately with minimal disruption to normal business functions.
When implemented correctly, active segmentation is a cyclical process of:
1. Identifying and classifying all network assets based on role or function
2. Understanding user behavior and interactions on the network
3. Logically designing access policies
4. Enforcing those policies
5. Continuously evaluating policy effectiveness
6. Adjusting policies where necessary
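Step 5 in particular lends itself to automation. As a rough illustration, assuming hypothetical flow records and a policy table, evaluating policy effectiveness can be as simple as diffing what actually happened on the network against what the policy allows:

```python
# Illustrative policy: which (role, zone) pairs are permitted.
policy = {("hr", "payroll"), ("eng", "git")}

# Made-up observed traffic, e.g. summarized from flow logs.
observed_flows = [
    ("hr", "payroll"),
    ("eng", "payroll"),   # not in policy: either block it or update the policy
]

# Any observed flow the policy doesn't cover is a gap to investigate (step 6).
violations = [f for f in observed_flows if f not in policy]
for role, zone in violations:
    print(f"policy gap: role {role!r} reached zone {zone!r}")
```

Each violation feeds step 6: either the flow is legitimate and the policy should be extended, or it isn't and enforcement needs tightening.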

Thursday, June 11, 2015

Designing Server Virtualization

Designing your server virtualization infrastructure requires careful planning before anything is built, as well as contingency plans for when something goes wrong. Virtualization decouples workloads from individual physical servers in the data center and allows virtual machines to be moved easily. It is also fast: new virtual capacity can be carved out in a matter of moments, rather than waiting on hardware procurement.
It's important to take into account the resources you'll need, especially in terms of capacity and power consumption. Just because your environment seems secure doesn't mean it's immune to disaster. Disasters come in many forms and are nearly impossible to avoid entirely, so a disaster recovery plan is a key part of any server virtualization design.
Finally, private cloud always seems to creep into plans. There's a difference between private cloud and regular virtualization, and it's important to distinguish the disparities in order to make a logical decision.
 
Resource provisioning and capacity planning: Provisioning resources and planning capacity may seem like simple tasks, but they can't be overlooked. Virtual machines that end up without the necessary resources will suffer performance problems; on the other hand, overprovisioning resources to a VM wastes them. A proper capacity plan ensures your resources can handle any workload and keeps your environment running smoothly.
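As a back-of-the-envelope sketch of that capacity check (all figures invented for illustration): total what the VMs need, add a growth buffer, and compare against what the hosts supply.

```python
# Made-up demand and supply figures for illustration only.
vm_demands_gb = [8, 16, 4, 32, 8]   # memory each VM needs
host_supply_gb = 2 * 64             # two hosts with 64 GB each
headroom = 1.25                     # keep a 25% buffer for growth and spikes

required_gb = sum(vm_demands_gb) * headroom
print(f"required: {required_gb:.0f} GB, supplied: {host_supply_gb} GB")
print("capacity OK" if required_gb <= host_supply_gb else "add capacity")
```

The same arithmetic applies to CPU, storage, and network; the headroom factor is the knob that encodes how much unplanned growth you want to absorb without an emergency purchase.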
Building a successful virtual server farm: When it comes to designing a virtual server farm, there is no "one size fits all." Still, there are simple guidelines for creating a reliable environment. Understanding your applications and knowing how many hosts you need are two first steps toward a scalable server virtualization infrastructure.
 
Sizing hosts for a virtual server farm: Your job isn't done once you determine how many servers you'll need for your environment. Next, you have to size each server, including how much memory and how many CPU resources each host should contain.
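A minimal sizing sketch, using assumed demand figures and a typical (but assumed) vCPU-to-core consolidation ratio: divide total VM demand by per-host capacity for each resource, take the worst case, and add a spare host for N+1 failover.

```python
import math

# Assumed aggregate VM demand and candidate host spec.
total_vm_vcpus, total_vm_mem_gb = 120, 480
host_cores, host_mem_gb = 32, 256
vcpu_per_core = 4                      # assumed consolidation ratio

hosts_for_cpu = math.ceil(total_vm_vcpus / (host_cores * vcpu_per_core))
hosts_for_mem = math.ceil(total_vm_mem_gb / host_mem_gb)

# Whichever resource is tighter dictates the count; +1 host for N+1 failover.
hosts_needed = max(hosts_for_cpu, hosts_for_mem) + 1
print(f"hosts needed (N+1): {hosts_needed}")
```

Note how memory, not CPU, is the constraint in this example; that is common in practice, which is why memory overcommit gets so much attention.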
 
The problem with overprovisioning VMs: This is an important aspect, so let's discuss it in detail. It might seem like more is always better when assigning resources to a virtual machine, but sizing VMs appropriately is a difficult process with many unknowns. Allocating too few resources can starve a VM and lead to poor performance. Administrators wary of this problem often take the apparently safer approach and allocate more resources than a VM needs. However, that overprovisioning may mask performance problems in the short term while wasting resources other VMs could use, a negative long-term effect.
 
Capacity planning tools can help organizations identify consolidation opportunities, allowing them to scale back overprovisioned VMs and save money.
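The core check such tools run is simple to sketch. Assuming invented utilization numbers, a VM is a shrink candidate when its allocation far exceeds its observed peak usage:

```python
# Made-up monitoring data: allocated memory vs. peak observed usage per VM.
vms = {
    "web-01": {"alloc_gb": 32, "peak_gb": 6},
    "db-01":  {"alloc_gb": 64, "peak_gb": 58},
}

def overprovisioned(stats, slack=2.0):
    """Flag a VM whose allocation exceeds peak usage times a slack factor."""
    return stats["alloc_gb"] > stats["peak_gb"] * slack

for name, stats in vms.items():
    if overprovisioned(stats):
        print(f"{name}: consider shrinking toward {stats['peak_gb'] * 2} GB")
```

Real tools look at sustained percentiles rather than a single peak, but the principle is the same: measured behavior, not the original request, should drive the allocation.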
 
Overprovisioning is a huge and pervasive problem, and I think that's because it is one of the few ways people have to manage risk in their IT environment. If you have an unknown -- you don't know what your application is going to do, or you don't know exactly what you'll need -- overprovisioning is the traditional answer. In virtual and cloud environments, it just keeps propagating: in virtual environments, if you have a performance problem, throwing more hardware at it is the default workaround rather than digging deeper; in clouds, people buy instances because they don't know what they need. Sometimes that's the most prudent course, but we're getting to the point where this isn't something we should tolerate. There are ways to fix it that don't cost a lot of money. In the past, maybe it was necessary; now it's not.

We like to use an analogy to a game of Tetris. Workloads come in different shapes and sizes, and when you add them together, servers start to look full. But when you play Tetris more cleverly and move those blocks around, you can defrag capacity and get a lot more out of it. Sometimes people are doing all the right things with the tools at their disposal, but they can't fight this because they don't have anything that helps them play Tetris better. I wouldn't characterize overprovisioning as people doing anything wrong; it's just that they don't have the analytics at their disposal to fix it. So we see a lot of people buying more hardware before they really need to. If you analyze things more carefully, you can go further with what you have without increasing risk, just by sorting things out so they don't collide.
 
If I'm running a critical production environment, I might want two servers totally empty for failover purposes. I might want a chunk of capacity sitting idle for disaster recovery. I might not want my servers running above half capacity, for safety reasons. The way you approach that is to define your operational parameters, including safety margins and dependencies, and that defines when capacity is full -- not whether CPU use is at 100%. It really comes down to properly capturing operational policies, which means defining what spare capacity you want to have. Everything beyond that is waste.
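One way to encode that kind of operational policy, with illustrative numbers: subtract the deliberate reserves first, and call only what remains usable capacity.

```python
# Illustrative operational policy, not a real environment.
total_hosts = 10
failover_hosts = 2        # kept empty for failover / disaster recovery
safety_margin = 0.5       # don't run the remaining hosts past 50%

# Usable capacity is what's left after the policy's reserves are honored.
usable_hosts = (total_hosts - failover_hosts) * safety_margin
print(f"effective capacity: {usable_hosts} host-equivalents of {total_hosts}")
```

By this policy, the environment is "full" at four host-equivalents of load, even though raw CPU graphs would show plenty of apparent headroom; anything idle beyond the declared reserves is the real waste.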
 
If you have a line of business that is running applications on central IT infrastructure and they aren't paying with some type of chargeback model, they might be hard pressed to give up some of those resources because they're not paying for them. If IT is footing the bill, they care about the density. If you're a cloud or chargeback customer, you care about what you're paying. So, it's a discussion that would go differently depending on who's footing the bill.
 
We see organizations where IT is footing the bill and still getting lines of business to tighten things up a bit. The way they do that is to address new deployments. If I'm IT, when you ask for new capacity, I'm not going to give it to you if you're wasting the capacity you have. Of course, it's not always quite that simple, but that's the type of leverage IT has.
 
Optimizing performance and power: In contrast to overprovisioning, proper resource utilization optimizes performance. Provisioning the right amount of resources not only delivers strong performance, it also maximizes efficiency and savings.
Reclaim swap file space, reduce storage costs: Although swap files enable features such as memory overcommit, companies are finding that large swap files waste expensive storage space. Solid-state drives are still mostly measured in gigabytes rather than terabytes, which makes it critical to use that space efficiently.