Thursday, December 23, 2010

Knowledge Management - my thoughts

Knowledge is intangible, dynamic, and difficult to measure, yet without it no organization can survive. Unarticulated knowledge is more personal, experiential, and context specific; it is hard to formalize, difficult to communicate or share with others, and generally lives in the heads of individuals and teams. So to start we need to understand what is important to us.

So why do we need KM?

  • Competitive success will be based on how strategically intellectual capital is managed.
  • Capturing the knowledge residing in the minds of employees lets it be easily shared across the enterprise.
  • Leveraging organizational knowledge is emerging as the answer to an increasingly fragmented and globally dispersed workplace.
  • Reuse of knowledge saves work, reduces communication costs, and allows a company to take on more projects.

According to Wikipedia "Knowledge Management (KM) comprises a range of strategies and practices used in an organization to identify, create, represent, distribute, and enable adoption of insights and experiences. Such insights and experiences comprise knowledge, either embodied in individuals or embedded in organizational processes or practice."

The activities we need to undertake are

  • Generating knowledge, capturing knowledge
  • Making knowledge accessible
  • Representing and embedding knowledge
  • Facilitating knowledge
  • Transferring knowledge to an archive for day-to-day use

In my view, it is the art of preserving useful knowledge and archiving it for future use, for the purpose of increased efficiency and continuous improvement. In my own adaptation we have built and used knowledge management systems comprising a range of practices used to identify, create, represent, distribute, and enable adoption of insights and experiences. These insights and experiences comprise knowledge, either embodied in teams and members or embedded in organizational processes and practice. This acquired knowledge helps us continuously improve from the lessons learned and re-align strategies to get maximum efficiency and quality in the output.

Let's take a sample to understand this correctly. If someone asks what sales are apt to be next quarter, we would have to say, "It depends!" We would have to say this because although we have data and information, we have not built knowledge yet. This is a trap many fall into, because they don't understand that data doesn't predict trends of data; what predicts trends of data is the activity that is responsible for the data. To estimate the sales for next quarter, we might need information about the competition, market size, extent of market saturation, current backlog, customer satisfaction levels associated with current product delivery, current production capacity, the extent of capacity utilization, and a whole host of other things. Once we amass sufficient data and information to form a complete pattern that we understand, we have knowledge and can be somewhat comfortable estimating the sales for next quarter. Documenting such complex knowledge is what we are after.
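Just to make the point concrete, here is a minimal, hedged Python sketch (not any real forecasting system; every field name and weight is invented) in which a forecast is only produced when the contextual inputs listed above are available:

```python
# Hypothetical sketch: a forecast needs context (knowledge), not just last quarter's number.
REQUIRED_CONTEXT = {
    "market_size", "market_saturation", "current_backlog",
    "customer_satisfaction", "capacity_utilization",
}

def estimate_next_quarter(last_quarter_sales, context):
    """Return a rough estimate only when enough context is available."""
    missing = REQUIRED_CONTEXT - context.keys()
    if missing:
        return f"It depends! Missing context: {sorted(missing)}"
    # Toy adjustment: grow sales with backlog and satisfaction, capped by spare capacity.
    growth = (0.5 * context["current_backlog"] / last_quarter_sales
              + 0.3 * (context["customer_satisfaction"] - 0.5))
    spare_capacity = 1.0 - context["capacity_utilization"]
    return last_quarter_sales * (1 + min(growth, spare_capacity))

print(estimate_next_quarter(100.0, {"current_backlog": 20}))  # -> "It depends! ..."
```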

In an organizational context, data represents facts or values of results, and relations between data and other relations have the capacity to represent information. Patterns of relations of data and information and other patterns have the capacity to represent knowledge. For the representation to be of any utility it must be understood, and when understood the representation is information or knowledge to the one that understands.

Recently, with the advent of Web 2.0, the concept of KM has evolved towards a vision based more on people participation and emergence. This line of evolution is still continuing. However, there is ongoing debate about which areas take priority over others, so these areas and priorities can be revised as per the needs.

The areas to be addressed for a typical IT services organization are

  • Technological data
  • Organizational data
  • Process documentation
  • Trainings & feedback
  • Dashboards/ Measuring and reporting

As you can see, KM is an umbrella activity for continuous improvement: proactive learning from all previous best practices (implementations) and from errors not to be repeated (error elimination). It is not limited to just that learning; it also includes learning from improved tools and third-party products studied on the Internet and intranets.

The data elements to be archived can be as follows (a minimal sketch of one such archive record follows the list):

  • Previous project Documents & Data
  • Learning from previous projects
  • Code snippets, qualitative review logs
  • Pareto charts & results of actions taken
  • Updated framework for better productivity
  • Organizational asset updates: improvements in communication with the PMO & corporate functions
  • Environmental factor updates: choosing better tools & environments for better productivity
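Here is a minimal sketch of how one such archive entry might be structured; the fields are hypothetical and simply mirror the data elements above:

```python
from dataclasses import dataclass, field

# Hypothetical knowledge-asset record for a KM repository; fields mirror the list above.
@dataclass
class KnowledgeAsset:
    project: str
    category: str                 # e.g. "lesson learned", "code snippet", "Pareto analysis"
    summary: str
    tags: list[str] = field(default_factory=list)
    artifact_path: str = ""       # link to the archived document, review log, chart, etc.

asset = KnowledgeAsset(
    project="CRM migration",
    category="lesson learned",
    summary="Early data profiling cut UAT defect count roughly in half.",
    tags=["estimation", "data quality"],
)
print(asset.category, "-", asset.summary)
```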

These are not all; we have to keep learning from our own and our neighbors' mistakes and build best practices to follow, to improve our efficiency as organizations.

Motivational factors / Process results

  • Continuous improvements in terms of deviation analysis & process fine tuning thru SEPG
  • Effort & cost estimation alignment with the LOC & Function point variable factors
  • Improved metrics in terms of proactively predicting alarms & triggering the right element in time.
  • Improved collaboration with client & third parties in terms of getting support proactively than reactively
  • Tool for Global / Virtual teams to access / share all project knowledge status & training details
  • Making available increased knowledge content in the development and provision of products and services
  • Achieving shorter new product development cycles (Efficiencies improved)
  • Facilitating and managing innovation and organizational learning & thus sharing best practices across the department, work unit
  • Leveraging the expertise of people across the organization & documenting the knowledge for future reference
  • Increasing network connectivity between internal and external individuals
  • Managing business environments and allowing employees to obtain relevant insights and ideas appropriate to their work
  • Solving intractable or wicked problems
  • Managing intellectual capital and intellectual assets in the workforce (such as the expertise and know-how possessed by key individuals)

The value of Knowledge Management relates directly to its effectiveness, that is, the return on investment with which the managed knowledge enables the members of the organization to deal with today's situations and effectively envision and create their future. Without on-demand access to managed knowledge, every situation is addressed based on what the individual or group brings to it with them. With on-demand access to managed knowledge, every situation is addressed with the sum total of everything anyone in the organization has ever learned about a situation of a similar nature. There is no one-size-fits-all way to effectively tap a firm's intellectual capital. To create value, companies must focus on how knowledge is used to build critical capabilities.

There are always positive & negative aspects of systems, depending on the usage scenarios. Let's consider an example. A firm that had invested millions of dollars in a state-of-the-art intranet intended to improve knowledge sharing got some bad news: employees were using it most often to retrieve the daily menu from the company cafeteria. The system had barely begun to make an appearance among users or in day-to-day business activities.

Few executives would argue with the premise that knowledge management is critical, but few know precisely what to do about it. There are numerous examples of knowledge-management programs intended to improve innovation, responsiveness, and adaptability that fall short of expectations. Organizations need to spend effort on research at various points, explore all such instances including the roots of the problem, and develop a method to help executives make effective knowledge management a reality in their organizations.

Much of the problem with knowledge management today lies in the way the subject has been approached by vendors and the press. Knowledge management is still a relatively young field, with new concepts emerging constantly. Often, it is portrayed simplistically; discussions typically revolve around blanket principles that are intended to work across the organization. Most knowledge-management initiatives have focused almost entirely on changes in tools and technologies, such as intranets.


Do share your thoughts at Ravindrapande@gmail.com

Thursday, December 16, 2010

Metrics implementation and estimation start

Any discussion of metrics has to start with a foundation. Over the years, a consensus has arisen to describe at least a core set of four metrics: size, time, effort, and defects, all of which we covered in the last discussions. Now let's have a look at a guiding process body, the Software Engineering Institute (SEI). The SEI has issued a useful publication that discusses the background to the core measures and offers recommendations for their use. There are several additional SEI documents available that go into further depth on the measures individually. And lastly, prior to the SEI, some of the first writings on the core set can be found on various web sites.

As discussed in the last blog, this minimum set of data nodes / control points links to management's bottom line in a cohesive relationship. As project teams, we spend a certain amount of time (days, weeks, months), expending a certain amount of work effort (person-days, person-months). At the end of our hard work, the system is ready to be deployed; it represents a certain amount of functionality at a certain level of quality / user satisfaction. Anyone embarking on a measurement program should start with at least these four core measures as a foundation.
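As a minimal sketch of what a record built on those four core measures could look like (the class, field names, and figures are illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass

# Hypothetical record for the four core measures; field names are illustrative only.
@dataclass
class ProjectRecord:
    name: str
    size_kloc: float        # size: what was built (here in KLOC; could be FP, classes, etc.)
    duration_months: float  # time: calendar duration
    effort_pm: float        # effort: person-months expended
    defects: int            # defects: found during test and after release

    @property
    def productivity(self) -> float:
        """KLOC delivered per person-month."""
        return self.size_kloc / self.effort_pm

    @property
    def defect_density(self) -> float:
        """Defects per KLOC."""
        return self.defects / self.size_kloc

billing = ProjectRecord("billing-rewrite", size_kloc=42.0,
                        duration_months=9.0, effort_pm=68.0, defects=126)
print(round(billing.productivity, 2), round(billing.defect_density, 2))
```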

A good manager should be keeping these types of records. For projects that have been completed, size represents what has been built, as countable entities. Knowing what's been accomplished, at what speed, at what cost, and at what level of quality can tell us how well we did. This is what we call "benchmarking," or knowing what we achieved. An extension of this, "what we can do," guides how far our efficiency can be stretched.

For new projects the sizing issue becomes one of estimation. The best way of approximating what needs to be built is to have records about units you've built before, in order to help you scope the new job. Size estimation is a critical discipline: it represents a team's commitment as to what it will build. Studies by the SEI indicate that the most common failing of ad hoc software organizations is an inability to make size estimates accurately. If you underestimate the size of your next project, common sense says that it doesn't matter which methodology you use, what tools you buy, or even what programmers you assign to the job.
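A minimal sketch of estimation by analogy against such past records (the historical figures below are made up purely for illustration):

```python
# Hypothetical history of completed projects: (name, size in KLOC, effort in person-months).
HISTORY = [("portal", 30.0, 45.0), ("billing", 42.0, 68.0), ("reports", 18.0, 24.0)]

def estimate_effort(new_size_kloc: float) -> float:
    """Scale a new project's effort from the average historical productivity."""
    avg_productivity = sum(size / effort for _, size, effort in HISTORY) / len(HISTORY)
    return new_size_kloc / avg_productivity  # person-months

print(round(estimate_effort(25.0), 1))
```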

Now let's shed some light on "new" trends in measuring the size of OO projects.

Whenever software developers begin a project that involves a new technology (OO, SOA, etc.), there is great confusion as to how and what they should measure and what the appropriate "size" unit might be. The software development community has pondered questions like these since what seems to be the beginning of time. You name the technology, the language, the era. These questions often come down to size. Time we understand. Effort we understand. Defects we understand (yet very few track them or keep good records!). That last measure, size, is often where all these questions lead.

For object-oriented development, useful measures of size have been shown to be units such as number of methods, objects, or classes. Common ranges seem to be about 175 to 250 lines of code (C++, Smalltalk, etc.) per object. Lines of code, function points, classes, objects, methods, processes, programs, Java scripts, and frames all represent various abstractions of system size.
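A hedged back-of-the-envelope sketch using the range quoted above (the class count and per-object figures are illustrative, not a calibrated model):

```python
# Rough size range for an OO design, using ~175-250 LOC per object as quoted above.
LOC_PER_OBJECT_LOW, LOC_PER_OBJECT_HIGH = 175, 250

def estimated_loc_range(num_objects: int) -> tuple[int, int]:
    """Translate a count of objects/classes into a rough LOC range."""
    return num_objects * LOC_PER_OBJECT_LOW, num_objects * LOC_PER_OBJECT_HIGH

low, high = estimated_loc_range(120)   # e.g. a design with ~120 classes
print(f"~{low:,} to {high:,} LOC")     # -> ~21,000 to 30,000 LOC
```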

Leveraging these size units means taking an inventory of past projects in order to better understand the building blocks of past systems. This also establishes the vocabulary of the organization in terms of the functionality that has been, and needs to be, built. It is the basis of negotiation when trying to decide what a team agrees to take on, within a given deadline.

A new engineering way for estimation: Function Points

Of all the followers of different size metrics, OO or otherwise, the "priests" of function points have been the most insistent in promoting that measure as the most important one on offer.

There are still issues in the FP thought process. In many organizations, function points seem to have served a useful purpose: organizations are finally getting people to think about project size. It used to be that you'd ask the question, "How big is this application?" and someone might answer, "150 man-months," answering your question about size with a number for the effort (related to cost) spent building it.

That's like someone asking you, "How big is your house?" and you answer, "$250,000." You should have said something like, "It's a four-bedroom with 2 1/2 baths, for a total size of 2,400 square feet." High abstraction unit: rooms; low abstraction unit: square feet.

Size is a metric describing the bigness or smallness of the system. It can be broken into chunks of various descriptions. Function points can do this in certain applications. As previously mentioned, other units include programs, objects, classes, frames, modules, processes, computer software configuration items, subsystems, and others. All represent the building blocks of the projects / products from different levels of abstraction, or perspectives. They all ultimately translate to the amount of code that becomes compiled and built to run on a computer or embedded processor.

They translate down to a common unit just as the volume of fluid in a vessel might be described in terms of liters, gallons, quarts, pints, and ultimately, down to a common unit of either fluid ounces or milliliters. The key point to understand, though, is that all these units relate to each other in a proportional, scaling relationship.
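A minimal sketch of that proportional relationship, with made-up gearing ratios (real ratios would have to be calibrated from your own inventory of past projects):

```python
# Illustrative conversion ratios to a common unit (LOC); calibrate from your own history.
LOC_PER_UNIT = {"loc": 1, "function_point": 55, "class": 210, "module": 900}

def to_loc(quantity: float, unit: str) -> float:
    """Convert any size abstraction to the common unit, like ounces or milliliters for fluids."""
    return quantity * LOC_PER_UNIT[unit]

print(to_loc(400, "function_point"))  # 400 FP -> ~22,000 LOC with these assumed ratios
```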

So when questions like "What new metrics should we use for . . . ?" arise, direct them to what would make sense in the organization's vocabulary, if it comes down to project size. Remember that the objective is to communicate information about the system and maximize the flow of information by making sure everyone speaks a well-understood and familiar language. Have the language fit the organization, not the other way around.

Challenges in the Function Point World

For many organizations, the underlying concept of function points is a decent fit. That is, the metamodel of a system comprising two parts, a database structure and functions that access those structures, correctly describes what they build.

However, as we rightly observe, you have a problem if your system does anything other than the approved functionality. That's not necessarily saying anything bad about the function point, other than that it simply is not a size metric for all systems. Also, organizations have found metrics programs to be very valuable in tracking and controlling projects already under way. Unfortunately, function points can be difficult entities to track midstream: in ongoing projects, function point users have reported difficulty in counting "what function points have been built so far." (On the other hand, it's relatively easy for a configuration management system to report how much code has been added as new and what code has been changed.) To fill that void, many organizations respond by using alternate size measures for tracking, such as modules built, number of objects or programs under configuration management, number of integration builds, and yes, amount of code built and tested to date. The latter can be measured by counting lines of code (LOC, KLOC).
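As a hedged sketch of that configuration-management angle (assuming, purely for illustration, that the code base is kept in git), lines added since a given date can be summed straight from the version-control log:

```python
import subprocess

def loc_added_since(repo_path: str, since: str) -> int:
    """Sum the lines added across all commits since a date, using `git log --numstat`."""
    out = subprocess.run(
        ["git", "log", "--numstat", "--pretty=tformat:", f"--since={since}"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    added = 0
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():  # binary files report "-" instead of a count
            added += int(parts[0])
    return added

print(loc_added_since(".", "2010-12-01"))
```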

So whether function points have met their early promise of consistency in counting is up for debate. They may not have proved immune to counting controversy after all. Nevertheless, function points serve a purpose as one type of size metric. And if your organization builds applications, it might pay to consider function points as a size metric.

Remember that any one measure has its limitations. Therefore, utilize multiple approaches where possible and note the sizing relationships (the proportionality) between them.

Additionally, we can use defect metrics to check how much re-building / re-work occurs

No metrics discussion would be complete without addressing the subject of software defects. It is the least-measured entity, yet the one that receives the worst press when software failures occur in the real world. That should tell us something in and of itself. What we are seeing with software defects is the direct correlation of schedule pressure and bad planning. This squarely points to the potential for management to influence software quality, both positively and negatively. The causality between software project deadlines and defect levels thus places the opportunity for good-quality software squarely in management's lap.

We can build two categories for defects: (1) defects found from system testing through delivery (including severity), over time, and (2) defects found after roll-out / implementation, i.e. user-reported defects. The two categories are closely related; the latter will enable you to determine software reliability. There is much to interpret from this raw data. At the heart of it is causal analysis; that is, finding out the drivers behind defect rates and defect densities and then doing something about them.
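A minimal sketch of the two categories and the simple ratios they support (all counts below are invented for illustration):

```python
# Hypothetical defect counts for one release.
found_in_test = 118        # category 1: system test through delivery
found_after_release = 14   # category 2: user-reported after roll-out
size_kloc = 42.0

defect_density = (found_in_test + found_after_release) / size_kloc
# Share of defects caught before users saw them (often called defect removal efficiency).
removal_efficiency = found_in_test / (found_in_test + found_after_release)

print(f"{defect_density:.1f} defects/KLOC, {removal_efficiency:.0%} caught before release")
```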

Tuesday, December 7, 2010

Simple metrics to start with for a typical IT project

Let's begin with the real world, complete with all its incredible deadline pressure. Let's talk about the implementation now; we can start with a dashboard or charts. Charts are prepared for the standard metrics. All charts require titles, legends, and labels for all axes. They should clearly and succinctly show the metrics of interest, with no excessive detail to distract the eye. Do not overuse different line types, patterns, colors, or added dimensionality unless used specifically to differentiate items. Overlaid data is preferable to multiple charts when the different data are related to each other and can be meaningfully depicted without obscuring other details.

The most common type of chart is the tracking chart. This chart is used extensively for the Progress indicator, and is used in similar forms for many of the other indicators. For task progress, it depicts the cumulative number of planned and actual task completions (or milestones) against time. For other indicators, it may show actual versus planned staffing profiles, actual versus planned software size, actual versus planned resource utilization or other measures compared over time.

There are many ways to modify the tracking chart. A horizontal planned line representing the cumulative goal can be drawn at the top, multiple types of tasks can be overlaid on a single tracking chart (such as design, code, and integration), or the chart can be overlaid with other types of data.

It is recommended that tracked quantities be shown as a line chart, and that replanned task progress be shown as a separate planning line. The original planned baseline is kept on the chart, as well as all replanning data if there is more than a single replan.
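A minimal matplotlib sketch of such a tracking chart (the weekly numbers are invented; a replanned line would simply be plotted the same way alongside the original baseline):

```python
import matplotlib.pyplot as plt

weeks = list(range(1, 11))
planned_cum = [4, 9, 15, 22, 30, 38, 46, 54, 61, 68]   # cumulative planned task completions
actual_cum = [3, 7, 12, 18, 24, 31, 39, 46]            # actuals reported so far (8 weeks)

plt.plot(weeks, planned_cum, marker="o", label="Planned (baseline)")
plt.plot(weeks[:len(actual_cum)], actual_cum, marker="s", label="Actual")
plt.title("Task Progress")
plt.xlabel("Week")
plt.ylabel("Cumulative task completions")
plt.legend()
plt.grid(True)
plt.show()
```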

The following sections provide brief descriptions of the different metrics categories with samples of the required charts. Individual projects may enhance the charts for their situations or have additional charts for the categories. The sample charts are designed for overhead presentations and are available as templates from the professor.

The indicator categories, the management insight each provides, and the corresponding indicators are:

Progress: provides information on how well the project is performing with respect to its schedule.
  • Actual vs. planned task completions
  • Actual vs. planned durations

Effort: provides visibility into the contribution of staffing to project costs, schedule adherence, and product quality.
  • Actual vs. planned staffing profiles

Cost: provides tracking of actual costs against estimated costs and predicts future costs.
  • Actual vs. planned costs
  • Cost and schedule variances

Review Results: provides status of action items from life-cycle reviews.
  • Status of action items

Trouble Reports: provides insight into product and process quality and the effectiveness of the testing.
  • Status of trouble reports
  • Number of trouble reports opened, closed, etc. during the reporting period

Requirements Stability: provides visibility into the magnitude and impact of requirements changes.
  • Number of requirements changes/clarifications
  • Distribution of requirements over releases

Size Stability: provides insight into the completeness and stability of the requirements and into the ability of the staff to complete the project within the current budget and schedule.
  • Size growth
  • Distribution of size over releases

Computer Resource Utilization: provides information on how well the project is meeting its computer resource utilization goals/requirements.
  • Actual vs. planned profiles of computer resource utilization

Training: provides information on the training program and staff skills.
  • Actual vs. planned number of personnel attending classes
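As one example of turning these indicators into numbers, here is a minimal sketch of the cost and schedule variance calculations from the Cost category, in the usual earned-value style (all figures invented):

```python
# Invented period-to-date figures for one project.
planned_value = 120_000.0   # budgeted cost of work scheduled to date
earned_value = 105_000.0    # budgeted cost of work actually completed
actual_cost = 131_000.0     # actual cost of work completed

schedule_variance = earned_value - planned_value   # negative -> behind schedule
cost_variance = earned_value - actual_cost         # negative -> over budget

print(f"SV = {schedule_variance:+,.0f}, CV = {cost_variance:+,.0f}")
```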

So these are nine indicator categories to start with; you can use whichever combination you feel the need for. Feel free to share comments / feedback.