Monday, December 7, 2015

Office 2016 Review



At IndiaTrainigServices.in we got a good look at the US release of the Office 2016 Developer edition.
Office 2016 is a major upgrade, but not in the way you’d first suppose. Just as Windows 10 ties notebooks, desktops, phones and tablets together, and adds a layer of intelligence, Office 2016 wants to connect you and your coworkers together, using some baked-in smarts to help you along.
We have tested the client-facing portion of Office 2016. Microsoft released the trial version of Office 2016 in March as a developer preview with a focus on administrative features (data loss protection, multi-factor authentication and more) that we didn’t test.

Office 2013 users can rest easy about one thing: Office 2016’s applications are almost indistinguishable from their previous versions in look and feature set. To the basic Office apps, Microsoft has added its Sway app for light content creation, and the enterprise information aggregator, Delve. 

Collaboration in the cloud is the real difference with Office 2016. Office now encourages you to share documents online, in a collaborative workspace. Printing out a document and marking it up with a pen? Medieval. Even emailing copies back and forth is now tacitly discouraged.
Microsoft says its new collaborative workflow reflects how people do things now, from study groups to community centers on up to enterprise sales forces. But Microsoft’s brave new world runs best on Office 365, Microsoft’s subscription service, where everybody has the latest software that automatically updates over time. And to use all of the advanced features of Office, you must own some sort of Windows PC.

You can still buy Office 2016 as a standalone product: it costs Rs. 6,000 for Office 2016 Home & Student (Word, Excel, PowerPoint and OneNote) and Rs. 18,500 for Office Home & Business, which adds Outlook 2016. Office 365 is Rs. 330 per month for a Personal plan (with one device installation) and Rs. 450 per month for a Home plan, where Office can be installed on five devices and five phones.

If you subscribe to Office 365, it’s a moot point; those bits will stream down to your PC shortly. Windows 10 users already have access to Microsoft’s own baked-in, totally free version of Office, the Office Mobile apps. It’s those people who fall somewhere in the middle—unwilling to commit to Office 365, but still wavering whether or not to buy Office—who must decide.

Our advice to an individual, family, or small business owner: Wait. If you’ve never owned Office, the free Office Mobile apps, available for iOS, Android, and Windows phones, are very good, and include some of the intelligence and sharing capabilities built into Office 2016. Microsoft’s Office Web apps do the same.

There’s no question that Office 2016 tops Google Apps, and I haven’t seen anything from the free, alternative office suites that should compel you to look elsewhere. But Microsoft still struggles to answer the most basic question: Why should I upgrade? That’s a question that I think Microsoft could answer easily—and I’ll tell you how it can, at the end.

Before that, here’s what works, and what doesn’t, in Office 2016. With PowerPoint, however, most of that goes out the window. You can ask coworkers to collaborate, and you can still send them links by which they can edit your shared presentations. You can still comment, and coworkers can still make changes to the text as they wish. But you can’t really manage their changes, or restrict what they can or can’t do. (You can compare and reconcile versions of the same document that a coworker has worked on separately, however, which is vaguely similar.)

But—and this is a big but—any revisions to a document show up only if you click a teeny-tiny Save icon, way down at the bottom of the screen, that serves as a sort of CB-radio-style ‘Over’ command. It’s almost impossible to find unless you know what you’re looking for. Click it, and changes made by others show up. When your colleague makes another change, you have to click it again. It’s a pain.
Granted, collaborative editing wasn’t in the Office 2016 preview Microsoft released earlier this year. And, given that there’s an enormous blank space in the ribbon header to the right half of the screen, you have to imagine that more managed sharing is heading to PowerPoint.



Wednesday, November 11, 2015

The Internet of Things

The Internet of Things (IoT) is the network of physical objects or "things" embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data. The Internet of Things allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for more direct integration between the physical world and computer-based systems, and resulting in improved efficiency, accuracy and economic benefit. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.

Let's understand this further. A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low -- or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network. So far, the Internet of Things has been most closely associated with machine-to-machine (M2M) communication in manufacturing and in power, oil and gas utilities. Products built with M2M communication capabilities are often referred to as being smart. (See: smart label, smart meter, smart grid sensor)

Business Angle - Today computers -- and, therefore, the Internet -- are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code.

The problem is, people have limited time, attention and accuracy -- all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things -- using data they gathered without any help from us -- we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.

Although the concept wasn't named until 1999, the Internet of Things has been in development for decades. The first Internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. The programmers could connect to the machine over the Internet, check the status of the machine and determine whether or not there would be a cold drink awaiting them, should they decide to make the trip down to the machine.

Integration with the Internet implies that devices will use an IP address as a unique identifier. However, due to the limited address space of IPv4 (which allows for 4.3 billion unique addresses), objects in the IoT will have to use IPv6 to accommodate the extremely large address space required. Objects in the IoT will not only be devices with sensory capabilities, but also provide actuation capabilities (e.g., bulbs or locks controlled over the Internet). To a large extent, the future of the Internet of Things will not be possible without the support of IPv6 and consequently the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT in the future.
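The gap between the two address spaces mentioned above is easy to underestimate; a quick back-of-the-envelope sketch in Python (stdlib only) makes it concrete:

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_addresses = 2 ** 32          # about 4.3 billion
ipv6_addresses = 2 ** 128         # about 3.4 x 10^38

print(f"IPv4 space: {ipv4_addresses:,}")
print(f"IPv6 space is 2^96 (= {2 ** 96:,}) times larger")
```

With billions of sensors and actuators each needing a unique identifier, that 2^96 multiplier is exactly why global IPv6 adoption is treated as a prerequisite for the IoT.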

The ability to network embedded devices with limited CPU, memory and power resources means that IoT finds applications in nearly every field. Such systems could be in charge of collecting information in settings ranging from natural ecosystems to buildings and factories, thereby finding applications in fields of environmental sensing and urban planning.

On the other hand, IoT systems could also be responsible for performing actions, not just sensing things. Intelligent shopping systems, for example, could monitor specific users' purchasing habits in a store by tracking their mobile phones. These users could then be provided with special offers on their favorite products, or even the location of items they need, which their fridge has automatically conveyed to the phone. Additional examples of sensing and actuating are reflected in applications that deal with heat, electricity and energy management, as well as cruise-assisting transportation systems. Another excellent application the Internet of Things brings to the picture is home security, and home automation is another major step forward in applying IoT. All these advances add to the growing list of IoT applications. With IoT, you can control the electrical devices installed in your house while you are sorting out your files at the office. Your water will be warm as soon as you get up in the morning for a shower. All credit goes to the smart devices that make up the smart home, everything connected with the help of the Internet.

However, the application of the IoT is not only restricted to these areas. Other specialized use cases of the IoT may also exist. An overview of some of the most prominent application areas is provided here. Based on the application domain, IoT products can be classified broadly into five different categories: smart wearable, smart home, smart city, smart environment, and smart enterprise. The IoT products and solutions in each of these markets have different characteristics.

To understand how the Internet of Things (IoT), the media and Big Data are interconnected, it is first necessary to provide some context on how the media industry processes data. Media companies approach Big Data as many actionable points of information about millions of individuals. The industry appears to be moving away from the traditional approach of using specific media environments such as newspapers, magazines or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is, of course, to serve or convey a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments increasingly tailor messages (advertisements) and content (articles) to appeal to consumers based on profiles gleaned through various data-mining activities.

The media industries process Big Data in a dual, interconnected manner:
Targeted marketing of consumers (for advertising by marketers) and data capture: Thus, the Internet of Things creates an opportunity to measure, collect and analyze an ever-increasing variety of behavioral statistics. Cross-correlation of this data could revolutionize the targeted marketing of products and services.[55] For example, the combination of analytics for conversion tracking with behavioral targeting has unlocked a new level of precision that enables display advertising to be focused on the devices of people with relevant interests.[56] Big Data and the IoT work in conjunction. From a media perspective, data is the key derivative of device interconnectivity, and is pivotal in allowing clearer accuracy in targeting. The Internet of Things therefore transforms the media industry, companies and even governments, opening up a new era of economic growth and competitiveness. The wealth of data generated by this industry (i.e. Big Data) will allow practitioners in advertising and media to build an elaborate layer on top of the targeting mechanisms the industry uses today.
Environmental monitoring: Applications of the IoT typically use sensors to assist in environmental protection by monitoring air or water quality, atmospheric or soil conditions and can even include areas like monitoring the movements of wildlife and their habitats. Development of resource constrained devices connected to the Internet also means that other applications like earthquake or tsunami early-warning systems can also be used by emergency services to provide more effective aid. IoT devices in this application typically span a large geographic area and can also be mobile.
Infrastructure management: Monitoring and controlling operations of urban and rural infrastructures like bridges, railway tracks, and on- and offshore wind farms is a key application of the IoT. The IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk. It can also be used for scheduling repair and maintenance activities in an efficient manner, by coordinating tasks between different service providers and users of these facilities. IoT devices can also be used to control critical infrastructure like bridges to provide access to ships. Usage of IoT devices for monitoring and operating infrastructure is likely to improve incident management and emergency response coordination, and quality of service, up-times and reduce costs of operation in all infrastructure related areas.[62] Even areas such as waste management stand to benefit from automation and optimization that could be brought in by the IoT.
Manufacturing: Network control and management of manufacturing equipment, asset and situation management, or manufacturing process control bring the IoT within the realm of industrial applications and smart manufacturing as well.[64] The IoT intelligent systems enable rapid manufacturing of new products, dynamic response to product demands, and real-time optimization of manufacturing production and supply chain networks, by networking machinery, sensors and control systems together.
Digital control systems to automate process controls, operator tools and service information systems to optimize plant safety and security are within the purview of the IoT.[61] But it also extends itself to asset management via predictive maintenance, statistical evaluation, and measurements to maximize reliability.[65] Smart industrial management systems can also be integrated with the Smart Grid, thereby enabling real-time energy optimization. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by a large number of networked sensors.
Energy management: Integration of sensing and actuation systems, connected to the Internet, is likely to optimize energy consumption as a whole. It is expected that IoT devices will be integrated into all forms of energy consuming devices (switches, power outlets, bulbs, televisions, etc.) and be able to communicate with the utility supply company in order to effectively balance power generation and energy usage. Such devices would also offer the opportunity for users to remotely control their devices, or centrally manage them via a cloud-based interface, and enable advanced functions like scheduling (e.g., remotely powering on or off heating systems, controlling ovens, changing lighting conditions etc.). In fact, a few systems that allow remote control of electric outlets are already available in the market.
And various other such data-intensive systems.
Your feedback can help improve our products and services. Feel free to give us suggestions; you can contact us through the following:
LinkedIn profile http://www.linkedin.com/in/ravindrarpande

Monday, August 17, 2015

Cloud Computing Maturity




Due to its exponential growth in recent years, cloud computing is still considered an emerging technology: it cannot yet be considered a mature and stable platform, and it comes with both the benefits and the drawbacks of innovation. To better understand the complexity of cloud computing, let’s discuss it along four pillars:
1.      Cloud use and satisfaction level,
2.      Expected growth,
3.      Cloud-adoption drivers,
4.      Limitations to cloud adoption.
Various studies determined that the increased rate of cloud adoption is the result of perceived market maturity and the number of available services to implement, integrate and manage cloud services. Cloud adoption is no longer thought of as just an IT decision; it’s a business decision. Cloud has become a critical part of a company’s landscape and a cost effective way to create more agile IT resources and support the growth of a company’s core business.
Cloud Computing Maturity Stage
Cloud computing is still in a growing phase. This growth stage is characterized by the significant adoption, rapid growth and innovation of products offered and used, clear definitions of cloud computing, the integration of cloud into core business activities, a clear ROI and examples of successful usage. With roles and responsibilities still somewhat unclear, especially in the areas of data ownership and security and compliance requirements, cloud computing has yet to reach its market growth peak.
Cloud Adoption and Growth
How does cloud computing continue to mature? Security and privacy continue to be the main inhibitors of cloud adoption because of insufficient transparency into cloud-provider security. Cloud providers do not supply cloud users with information about the security that is implemented to protect cloud-user assets. Cloud users need to trust the operations and understand any risk. Providing transparency into the system of internal controls gives users this much needed trust.
Companies are experimenting with cloud computing and trying to determine how cloud fits into their business strategy. For some, it is clear that cloud can provide new process models that can transform the business and add to their competitive advantage. By adopting cloud-based applications to support the business, Software as a Service (SaaS) adoption is enabling organizations to channel resources into the development of their core competencies.
Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) adoptions enable businesses to experiment with new technologies and new services that require resources that would be expensive if they were completed through in-house implementation. IaaS and PaaS also allow companies to adapt to the rapid changes in market demand, because they create a completely new, faster and cheaper offering.
User Satisfaction
According to respondents, the level of satisfaction with cloud services is on the rise. Cloud services are now commonly being used to meet business as usual (BAU) and strategic goals, with the expectation that they will be more important for BAU than strategic plans in the future.
It’s not perfect yet, but the level of satisfaction with cloud services and deployment models is expected to increase as the market matures and vendors define standards to minimize the complexity around cloud adoption and management. The rise of cloud service brokers and integrators is helping businesses to integrate applications, data and shared storage in a more efficient way, making ongoing maintenance much easier.
Moving Past the Challenges
Studies have found that the most significant cloud concerns involve security and international data privacy requirements, data custodianship, legal and contractual issues, provider control over information, and regulatory compliance. Both cloud providers and cloud users have a role in moving past these concerns. Cloud providers need to demonstrate their capability to deliver services in a secure and reliable manner. Companies must understand their own accountability for security and compliance and their responsibility for implementing the necessary controls to protect their assets.
Gaining Maturity
The decision to invest in cloud products and services needs to be a strategic decision. Top management and business leaders need to be involved throughout a cloud product’s life cycle. Any cloud-specific risk should be treated as a business risk, requiring management to understand cloud benefits and challenges to be able to address cloud-specific risk. The need remains for better explanations of the benefits that cloud can bring to an organization and how cloud computing can fit into the overall core strategy of a business.
Effective Access Control
As the threat landscape has evolved to include adversaries with deep pockets, immense resources and plenty of time to compromise their intended target, security professionals have been struggling to stave off data breaches. This is not a matter of if your network will be compromised, but when.
Since many companies have built up their perimeter defenses to massive levels, attackers have doubled down on social engineering. Phishing and malware-laden spam are designed to fool company employees into divulging login information or compromising their machines. Since threat actors have become so good at circumventing traditional defenses, we cannot afford to have only a single point of failure. Without proper internal security, attackers are given free rein of the network as soon as they gain access to it.

Instead, attackers should encounter significant obstacles between the point of compromise and the sensitive data they are after. One way to accomplish this is with network segmentation.
Keep your hands to yourself: In an open network without segmentation, everyone can touch everything. There is nothing separating Sales from Legal, or Marketing from Engineering. Even third-party vendors may get in on the action.
The problem with this scenario is that it leaves the data door wide open for anyone with access credentials. In a few hours, a malicious insider could survey the network, collect everything of value and make off with the goods before security personnel get wind of anything out of the ordinary.
What makes this problem even more frustrating is that there is no reason everyone on the network should be able to touch every resource. Engineers don’t need financial records to perform their job, and accountants don’t need proprietary product specifications to do theirs.
By simply cordoning off user groups and only allowing access to necessary resources, you can drastically reduce the potential damage an attacker could inflict on the organization. Instead of nabbing the crown jewels, the thief will have to settle for something from the souvenir shop. Additionally, the more time the attacker spends trying to navigate and survey your network, the more time you have to find them and throw them out, preventing even the slightest loss of data in the process.
How it works: It is best to think of a segmented network as a collection of zones. Groups of users and groups of resources are defined and categorized, and users are only able to “see” the zones appropriate to their role. In practice, this is usually accomplished by crafting access policies and using switches, virtual local area networks (VLANs) and access control lists to enforce them.
While this is all well and good, segmentation can quickly become a headache in large corporate environments. Network expansion, users numbering in the thousands and the introduction of the cloud can disrupt existing segmentation policies and make it difficult to maintain efficacy. Each point of enforcement could contain hundreds of individual policies. As the network grows in users and assets, segmentation policies can quickly become outdated and ineffective.
Retaining segmentation integrity is an important security function in today’s world of advanced threats and high-profile data breaches. To properly protect themselves, organizations need to constantly maintain segmentation, adding new policies and adjusting existing ones as network needs change.
One way to tackle the challenges of traditional access control is with software-defined segmentation, which abstracts policies away from IP addresses and instead bases them on user identity or role. This allows for much more effective and manageable segmentation that can easily adapt to changes in the network topology.
Active segmentation for effective access control: When you couple software-defined segmentation with an intelligent planning and implementation methodology, you get active segmentation. This approach to segmentation allows network operators to effectively cordon off critical network assets and limit access appropriately with minimal disruption to normal business functions.
When implemented correctly, active segmentation is a cyclical process of:
1.      Identifying and classifying all network assets based on role or function
2.      Understanding user behavior and interactions on the network
3.      Logically designing access policies
4.      Enforcing those policies
5.      Continuously evaluating policy effectiveness
6.      Adjusting policies where necessary
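The policy-enforcement core of role-based segmentation described above can be sketched in a few lines. The roles, zone names and policy table here are hypothetical stand-ins for illustration, not any vendor's API:

```python
# Software-defined segmentation sketch: policies keyed on user role, not
# on IP address, so they survive changes in network topology.
POLICIES = {
    "engineering": {"code-repos", "build-servers"},
    "finance":     {"financial-records"},
    "sales":       {"crm"},
}

def can_access(role: str, zone: str) -> bool:
    """Deny by default; allow only zones the role's policy names."""
    return zone in POLICIES.get(role, set())

# Engineers don't need financial records to do their job, and
# accountants don't need the code repositories to do theirs.
print(can_access("engineering", "code-repos"))         # allowed
print(can_access("engineering", "financial-records"))  # denied
```

In a real deployment the `can_access` decision would be pushed down into switches, VLANs and access control lists, but the deny-by-default lookup is the essential idea.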

Thursday, June 11, 2015

Designing Server Virtualization

Designing your server virtualization infrastructure requires a lot of planning before it's built, as well as contingency plans for when something goes wrong. Virtualization helps eliminate hardware dependencies inside a data center and allows virtual machines to be moved easily. In terms of speed, virtualization can provision new capacity in a matter of moments.
It's important to take into consideration the amount of resources you'll need, especially in terms of capacity and power consumption. Just because your environment seems secure doesn't mean it's bulletproof to a disaster. Disasters can come in multiple forms and are nearly impossible to avoid. However, having a disaster recovery plan in place is key in designing a server virtualization infrastructure.
Finally, private cloud always seems to creep into plans. There's a difference between private cloud and regular virtualization, and it's important to distinguish the disparities in order to make a logical decision.
 
Resource provisioning and capacity planning: Provisioning resources and planning capacity seems like a simple task, but it's certainly one that can't be overlooked. Virtual machines that end up without the necessary resources will suffer performance issues. On the other hand, overprovisioning resources to a VM could be a waste. It's important to have a proper capacity plan in place to ensure your resources will be ready to handle any and all workloads and keep your environment running smoothly.
Building a successful virtual server farm: When it comes to designing a virtual server farm, there is no "one size fits all." Although that's the case, it doesn't mean that there aren't any simple guidelines to follow to create a reliable environment. Understanding your applications and knowing the quantity of hosts you're looking for are two small ways of building a scalable server virtualization infrastructure.
 
Sizing hosts for a virtual server farm: Your job isn't done once you determine how many servers you'll need for your environment. Next up, you have to figure out the size of each server, including how much memory and CPU capacity each host should contain.
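The host-sizing arithmetic can be sketched roughly as below. All the numbers (VM counts, host specs, headroom percentage) are made-up examples, not recommendations:

```python
import math

# Hypothetical farm: 60 VMs, each needing 4 GB RAM and 2 vCPUs, on hosts
# with 128 GB RAM and 32 logical cores, keeping ~20% headroom per host.
vm_count, vm_ram_gb, vm_vcpus = 60, 4, 2
host_ram_gb, host_cores, headroom = 128, 32, 0.20

# How many VMs fit per host, constrained by RAM and by CPU separately
# (this sketch assumes no CPU overcommit).
vms_per_host_ram = math.floor(host_ram_gb * (1 - headroom) / vm_ram_gb)
vms_per_host_cpu = math.floor(host_cores * (1 - headroom) / vm_vcpus)
vms_per_host = min(vms_per_host_ram, vms_per_host_cpu)

hosts_needed = math.ceil(vm_count / vms_per_host)
print(vms_per_host, hosts_needed)  # → 12 5
```

Note that CPU, not RAM, is the binding constraint in this example; real sizing also has to account for overcommit ratios, failover reserve and peak (not average) demand.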
 
The problem with overprovisioning VMs: This is a very important aspect, so let's discuss it in detail. It might seem like more is better when figuring out resources for a virtual machine, but too many resources cause problems of their own. Overprovisioning VMs may prevent slow performance in the short term, but it can have a negative long-term effect. Appropriately sizing virtual machines can be a difficult process with many unknowns. Allocating too few resources can starve a VM and lead to poor performance. Administrators wary of this potential problem may take the safer approach and allocate more resources than a VM needs. However, this overprovisioning wastes resources that other VMs could use.
 
Capacity planning tools can help organizations identify consolidation opportunities, allowing them to scale back overprovisioned VMs and save money.
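The core of what such a capacity planning tool does can be sketched as a simple utilization scan; the VM names and the 25% threshold below are invented for illustration:

```python
# Flag VMs whose observed peak CPU utilization stayed far below their
# allocation: these are candidates for rightsizing or consolidation.
vms = {
    "web-01": {"vcpus": 8,  "peak_cpu_pct": 12},
    "db-01":  {"vcpus": 8,  "peak_cpu_pct": 85},
    "app-03": {"vcpus": 16, "peak_cpu_pct": 9},
}

def rightsize_candidates(vms, threshold_pct=25):
    """VMs whose peak never exceeded threshold_pct of their allocation."""
    return sorted(name for name, s in vms.items()
                  if s["peak_cpu_pct"] < threshold_pct)

print(rightsize_candidates(vms))  # → ['app-03', 'web-01']
```

Real tools look at longer histories and at memory, storage and I/O as well, but the principle is the same: compare what was allocated against what was actually used.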
 
Overprovisioning is a huge and very pervasive problem, and I think it's because it is one of the only ways people have to manage risk in their IT environment. If you have an unknown -- you don't know what your application is going to do or you don't know exactly what you'll need -- overprovisioning is the traditional way to go about it. In virtual and cloud environments, it just keeps on propagating. In virtual environments if you have a performance problem, you can just throw more hardware at it and that's the default way around rather than digging deeper. In clouds, people buy cloud instances because they don't know what they need. Sometimes it's the most prudent way to go for someone, but we're getting to the point that this isn't something we should tolerate. There are ways to fix it that don't cost a whole lot of money. In the past, maybe it was necessary but now it's not.

We like to use an analogy to a game of Tetris. Workloads come in different shapes and sizes, and when you add them together, it starts to jumble up to the point where servers look like they're full. But when you play Tetris more cleverly and move those blocks around, you can defrag capacity and get a lot more out of it. Sometimes people are doing all the right things with the tools they have at their disposal, but they can't fight this because they don't have anything that can help them play Tetris better. I wouldn't characterize overprovisioning as people doing anything wrong; it's just that they don't have the analytics at their disposal to fix it. So we see a lot of people buying more hardware before they really need to. If you analyze things more carefully, you can go farther with what you have and not increase risk, just by sorting things out so they don't collide.
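The Tetris analogy maps directly onto a classic bin-packing heuristic. Here is a minimal first-fit-decreasing sketch, with workload sizes (GB of RAM) and host capacity picked purely as an example:

```python
def pack(workloads, host_capacity):
    """First-fit-decreasing: place the biggest blocks first, each into
    the first host that still has room, opening a new host only if none fits."""
    hosts = []  # each host is a list of the workload sizes placed on it
    for w in sorted(workloads, reverse=True):
        for h in hosts:
            if sum(h) + w <= host_capacity:
                h.append(w)
                break
        else:
            hosts.append([w])
    return hosts

# Six workloads that look like they need three hosts when placed naively
# fit on two 64 GB hosts once the blocks are arranged cleverly.
print(len(pack([30, 30, 20, 20, 10, 10], 64)))  # → 2
```

Production placement engines add constraints (affinity, failover headroom, I/O) on top, but "defragging capacity" is essentially this.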
 
 If I'm running a critical production environment, I might want two servers totally empty for failover purposes. I might want a bunch of capacity sitting idle for disaster recovery purposes. I might not want my servers going above half capacity for safety reasons. The way you approach that is to define your operational parameters, including safety margins and dependencies, and that defines when capacity is full -- not whether CPU use is at 100%. It really comes down to properly capturing operational policies, which means defining what spare capacity you want to have. Then, everything beyond that is a waste.
 
If you have a line of business that is running applications on central IT infrastructure and they aren't paying with some type of chargeback model, they might be hard pressed to give up some of those resources because they're not paying for them. If IT is footing the bill, they care about the density. If you're a cloud or chargeback customer, you care about what you're paying. So, it's a discussion that would go differently depending on who's footing the bill.
 
We see organizations where IT is footing the bill and still getting lines of business to tighten things up a bit. The way they do that is to address new deployments. If I'm IT, when you ask for new capacity, I'm not going to give it to you if you're wasting the capacity you have. Of course, it's not always quite that simple, but that's the type of leverage IT has.
 
Optimizing performance and power: Contrary to overprovisioning, proper resource utilization can optimize performance. Not only will you get strong performance from provisioning the right amount of resources, you could maximize efficiency and savings as well.
Reclaim swap file space, reduce storage costs: Although swap files can enable features such as memory overcommit, companies are finding out that large swap files are wasting expensive storage space. Solid-state drives are mostly measured in gigabytes instead of terabytes, which makes it critical to use that space efficiently.





Monday, May 25, 2015

Cloud deployment IaaS

Let me share my experience and views on the design and implementation of a system for automatically deploying distributed applications on infrastructure clouds. I am a big fan of open systems, so the effort is driven in that direction. The system interfaces with several different cloud resource providers to provision virtual machines, coordinates the configuration and initiation of services to support distributed applications, and monitors applications over time.

Infrastructure as a Service (IaaS) clouds are becoming an important platform for distributed applications. These clouds allow users to provision computational, storage and networking resources from commercial and academic resource providers. Unlike other distributed resource sharing solutions, such as grids, users of infrastructure clouds are given full control of the entire software environment in which their applications run. The benefits of this approach include support for legacy applications and the ability to customize the environment to suit the application. The drawbacks include increased complexity and the additional effort required to set up and deploy the application.
Current infrastructure clouds provide interfaces for allocating individual virtual machines (VMs) with a desired configuration of CPU, memory, disk space, etc. However, these interfaces typically do not provide any features to help users deploy and configure their application once resources have been provisioned. In order to make use of infrastructure clouds, developers need software tools that can be used to configure dynamic execution environments in the cloud.
The execution environments required by distributed scientific applications, such as workflows and parallel programs, typically require a distributed storage system for sharing data between application tasks running on different nodes, and a resource manager for scheduling tasks onto nodes. Fortunately, many such services have been developed for use in traditional HPC environments, such as clusters and grids. The challenge is how to deploy these services in the cloud given the dynamic nature of cloud environments. Unlike clouds, clusters and grids are static environments. A system administrator can set up the required services on a cluster and, with some maintenance, the cluster will be ready to run applications at any time. Clouds, on the other hand, are highly dynamic. Virtual machines provisioned from the cloud may be used to run applications for only a few hours at a time. In order to make efficient use of such an environment, tools are needed to automatically install, configure, and run distributed services in a repeatable way.
Deploying such applications is not a trivial task. It is usually not sufficient to simply develop a virtual machine (VM) image that runs the appropriate services when the virtual machine starts up, and then just deploy the image on several VMs in the cloud. Often the configuration of distributed services requires information about the nodes in the deployment that is not available until after nodes are
provisioned (such as IP addresses, host names, etc.) as well as parameters specified by the user. In addition, nodes often form a complex hierarchy of interdependent services that must be configured in the correct order. Although users can manually configure such complex deployments, doing so is time consuming and error prone, especially for deployments with a large number of nodes. Instead, we advocate an approach where the user is able to specify the layout of their application declaratively, and use a service to automatically provision, configure, and monitor the application deployment. The service should allow for the dynamic configuration of the deployment, so that a variety of services can be deployed based on the needs of the user. It should also be resilient to failures
that occur during the provisioning process and allow for the dynamic addition and removal of nodes.
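The "configure services in the correct order" requirement is, at its core, a topological sort over the dependency graph. A minimal sketch using Python's standard library (the service names are invented for illustration):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical deployment: workers depend on a master, which depends on shared storage.
# Each key maps a service to the services it depends on.
dependencies = {
    "nfs_server": [],
    "condor_master": ["nfs_server"],
    "condor_worker": ["condor_master", "nfs_server"],
}

# static_order() yields services with all dependencies configured first.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['nfs_server', 'condor_master', 'condor_worker']
```

A deployment service can walk this order to configure each node only after everything it depends on is up, which is exactly the discipline that is tedious and error prone to enforce by hand.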
For this blog we have considered a system called Wrangler that implements this functionality. Wrangler allows users to send a simple XML description of the desired deployment to a web service that manages the provisioning of virtual machines and the installation and configuration of software and services. It is capable of interfacing with many different resource providers in order to deploy applications across clouds, supports plugins that enable users to define custom behaviors for their application, and allows dependencies to be specified between nodes. Complex deployments can be created by composing several plugins that set up services, install and configure application software, download data, and monitor services on several interdependent nodes.
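To show the declarative style, here is an invented approximation of such an XML description (Wrangler's real schema may differ -- element and attribute names are illustrative), parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Invented XML in the spirit of a declarative deployment description;
# the actual Wrangler schema is not reproduced here.
doc = """
<deployment>
  <node name="master" provider="amazon" instance-type="m1.large">
    <plugin script="condor_master.sh"/>
  </node>
  <node name="worker" provider="amazon" instance-type="m1.small" count="4">
    <depends node="master"/>
    <plugin script="condor_worker.sh"/>
  </node>
</deployment>
"""

root = ET.fromstring(doc)
for node in root.findall("node"):
    deps = [d.get("node") for d in node.findall("depends")]
    print(node.get("name"), "x" + node.get("count", "1"), "depends on:", deps or "nothing")
```

The key point is that the user states *what* the deployment looks like -- node types, counts, plugins, dependencies -- and the service derives *how* to provision and configure it.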

We have been using Wrangler since mid 2010 to provision virtual clusters for scientific workflow applications on Amazon EC2, the Magellan cloud at NERSC, the Sierra and India clouds on the FutureGrid, and the Skynet cloud at ISI. We have used these virtual clusters to run several hundred
workflows for applications in astronomy, bioinformatics and earth science.
So far we have found that Wrangler makes deploying complex, distributed applications in the cloud easy, but we have encountered some issues in using it that we plan to address in the future. Currently, Wrangler assumes that users can respond to failures manually. In practice this has been a
problem because users often leave virtual clusters running unattended for long periods. In the future we plan to investigate solutions for automatically handling failures by re-provisioning failed nodes, and by implementing mechanisms to fail gracefully or provide degraded service when re-provisioning is not possible. We also plan to develop techniques for re-configuring deployments, and for dynamically scaling deployments in response to application demand.
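The planned automatic failure handling could take the shape of a simple monitor-and-replace loop. Everything below is a hypothetical sketch, not Wrangler's API: `provision()` and `is_healthy()` stand in for real provider and monitoring calls:

```python
# Hypothetical sketch of automatic re-provisioning for failed nodes.

def provision(name):
    """Stand-in for a real cloud provisioning call."""
    return {"name": name, "healthy": True}

def is_healthy(node):
    """Stand-in for a real health check."""
    return node["healthy"]

def monitor_and_repair(cluster, max_retries=3):
    """Replace failed nodes; degrade gracefully after max_retries."""
    for name, node in list(cluster.items()):
        retries = 0
        while not is_healthy(node) and retries < max_retries:
            node = provision(name)   # re-provision a replacement node
            cluster[name] = node
            retries += 1
        if not is_healthy(node):
            del cluster[name]        # degraded service: drop the node
    return cluster

cluster = {"worker-1": {"name": "worker-1", "healthy": False},
           "worker-2": {"name": "worker-2", "healthy": True}}
print(monitor_and_repair(cluster))  # worker-1 is replaced by a healthy node
```

A loop like this, run periodically by the service rather than by an attentive user, is what would let long-running virtual clusters survive unattended.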

These are just the initial steps; I will write up the complete scenario in the next blog. Do write to me at ravindrapande@gmail.com

Wednesday, May 13, 2015

New Modern Performance Management Software

If any of these emotions describe the process your company uses to administer performance reviews, there’s a good chance those same emotions are found in both the reviewer and the reviewee. Recent research found that 53 percent of employees say performance reviews don’t motivate them to work harder, and 63 percent of employees felt their reviews weren’t true indicators of their performance. So we also need to ask: is this process helping us or hurting us? That is an exercise we need to perform in real time.

And employees aren’t the only ones who question the validity of these standard appraisals. Only 8 percent of companies report their performance management process drives high levels of value, while 58 percent say it is not an effective use of time.

That’s right: nearly 60 percent of organizations don’t see reviewing the performance of their employees as a worthwhile use of resources.  Employees don’t trust them. Managers don’t respect them. In other words: performance reviews are broken.

How to Fix the Performance Review Process: Most companies are guilty of treating performance management as a yearly event, despite research showing that organizations that use continual performance management processes have better business results. Companies where employees revise or review their goals at least quarterly are:
  45 percent more likely to have above-average financial performances
  64 percent more likely to be effective at holding costs at or below the levels of their competitors.

Performance management shouldn’t end once a performance appraisal is over. It should be an ongoing process that helps create developmental plans to support an employee’s goals, career interests, and potential, as well as your organization’s business and talent needs.

Modern performance management should be dynamic, agile, and transparent.

Companies must update their performance management processes before they can leverage technology to make smarter decisions about their workforces. Draping new software on top of a flawed process won’t fix the problem.

New Processes, New Technology: Employees want to learn. They want to be good at their jobs, and they want immediate feedback on how to improve. Performance management has evolved into a series of continuous events that include goal-setting and -revising, mentoring and coaching, and development planning.

Thankfully, human resources technology has evolved, too.

But not all systems are created equal. When you compare HR software, there are significant differences between systems. Many still support the traditional performance appraisal process, but more and more are disrupting the industry to bring the benefits of modern performance management into the workplace.

Let’s understand benefits:

1. Agility in Feedback and Learning: Facilitating a culture of continuous feedback means everyone knows where they stand on a regular basis. If ongoing feedback seems like overkill, consider how the alternative affects poor Sally:

Sally’s annual review rolls around, during which she discovers her manager was disappointed by something she did several months ago. Rather than discussing the situation and coaching her at the time, her manager let the behavior go unchecked and relegated it to the annual performance meeting. Not only is Sally blindsided, but she was unable to make adjustments as time went by.

Annual or semiannual feedback is not frequent enough, and it provides very little in the way of transparency or actual direction. Underperformers will assume their performance is fine and not try to improve, while great performers who desire frequent feedback will become uncertain and disengaged. Additionally, employees who see reviews as inaccurate are twice as likely to seek new jobs.

Outdated modes of providing feedback hurt businesses. Performance management software provides transparency into employee performance. Since managers can see the status of goals in real time, it’s easy to address issues as they arise and eliminate those unwelcome end-of-year surprises.

2. Dynamic Goal Setting : Business moves fast, and annual goal-setting can’t keep pace with the modern world. A goal that was created in January may not be relevant six or even three months later, so individual goals must change to stay in sync with larger strategic goals. If employees hit personal targets that aren’t aligned with the overarching aspirations of the company, then at least some of the moving parts of your business are moving in separate directions.

The right software can simplify the goal-setting process and keep everyone working on the right objectives. Software allows companies to set overarching goals and then attach manager, team, and individual employee expectations to them. Cascading goals align everyone across the organization and provide greater transparency between and inside departments.

As business priorities evolve, performance management software can create a domino effect that keeps everybody on the same page.

3. Relevant Career Development: After salary, career growth is the No. 1 reason candidates accept job offers. This is good news for businesses, because employees content with stagnation aren’t ideal teammates. Top talent is hungry for career development, and companies should do everything they can to provide it. Performance meetings are a time to discuss employee growth, development, and long-term career aspirations, then make training plans that will bridge employees’ skill gaps and help them reach new levels.

Companies that provide detailed development planning and coaching to their employees have one-third less voluntary turnover and generate twice the revenue per employee of their peers. Performance management software helps employees gain insight into their career opportunities. Not only can such software match career goals with corresponding e-learning material or classes, but it can also alert employees to internal hiring opportunities in other departments or offices. Employees who want to learn and improve are valuable assets, so invest in them.

Companies must look closely at how they manage and measure employee performance, as well as the technology they use to do so. Modern performance management methods and tools help identify competencies, aspirations, and skill gaps, and then create strong, effective employee performance programs.

If employees feel that they can get the tools they need to succeed from another company, then that’s exactly what they’ll do: they’ll leave, taking their knowledge and value to competitors. Don’t let old performance management methods and technology fuel the turnover fire.