Thursday, December 16, 2010

Metrics implementation and estimation start

Any discussion of metrics has to start with a foundation. Over the years, a consensus has arisen around at least a core set of four metrics: size, time, effort, and defects, all of which we covered in the last few discussions. Now let's look at a guiding process body, the Software Engineering Institute (SEI). The SEI has issued a useful publication that discusses the background to the core measures and offers recommendations for their use, and several additional SEI documents go into further depth on each measure individually. Finally, prior to the SEI, some of the earliest writings on the core set can be found scattered across various web sites.

As discussed in the last blog, this minimum set of data points / control points links to management's bottom line in a cohesive relationship. As project teams, we spend a certain amount of time (days, weeks, months) and expend a certain amount of work effort (person-days, person-months). At the end of our hard work, the system is ready to be deployed; it represents a certain amount of functionality at a certain level of quality / user satisfaction. Anyone embarking on a measurement program should start with at least these four core measures as a foundation.

A good manager should be keeping these types of records. For projects that have been completed, size represents what has been built, expressed as countable entities. Knowing what has been accomplished, at what speed, at what cost, and at what level of quality tells us how well we did. This is what we call "benchmarking," or knowing what we have achieved. An extension of this can guide us toward "what we can do" as we stretch for greater efficiency.
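To make the idea concrete, here is a minimal sketch of what such a per-project benchmarking record might look like, built around the four core measures. The field names, units, and sample numbers are illustrative assumptions, not a standard schema.

```python
# A minimal per-project benchmarking record around the four core measures:
# size, time, effort, and defects. Field names and units are illustrative.
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    name: str
    size_loc: int             # delivered size, in LOC or another agreed unit
    duration_months: float    # calendar time from start to delivery
    effort_person_months: float
    defects_found: int        # defects recorded through system test and early production

    @property
    def productivity(self) -> float:
        """Delivered size per person-month -- a basic benchmarking ratio."""
        return self.size_loc / self.effort_person_months

    @property
    def defect_density(self) -> float:
        """Defects per thousand lines of code (KLOC)."""
        return self.defects_found / (self.size_loc / 1000)

# Example: record a finished project and look at how well we did.
past = ProjectRecord("Billing rewrite", size_loc=48_000,
                     duration_months=9, effort_person_months=60,
                     defects_found=120)
print(past.productivity, past.defect_density)
```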

For new projects, the sizing issue becomes one of estimation. The best way of approximating what needs to be built is to have records about the units you have built before, so they can help you scope the new job. Size estimation is a critical discipline: it represents a team's commitment as to what it will build. Studies by the SEI indicate that the most common failing of ad hoc software organizations is an inability to make size estimates accurately. If you underestimate the size of your next project, common sense says it doesn't matter which methodology you use, what tools you buy, or even what programmers you assign to the job.
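One simple way those past records feed a new estimate is estimation by analogy: derive a productivity figure from history and scale it to the new size. The sketch below assumes hypothetical history data and a made-up helper name; your own calibration numbers would replace them.

```python
# Estimation by analogy: scope a new job from the productivity of past
# projects. Sample numbers and the function name are illustrative only.
from statistics import median

def estimate_effort(new_size_loc: int, history: list[tuple[int, float]]) -> float:
    """history holds (delivered_loc, effort_person_months) pairs from past projects.
    Returns an effort estimate in person-months for the new size."""
    productivities = [loc / effort for loc, effort in history]
    return new_size_loc / median(productivities)

# Three past projects: their delivered sizes and the effort they actually took.
history = [(48_000, 60.0), (22_000, 30.0), (75_000, 85.0)]
print(round(estimate_effort(40_000, history), 1))  # rough person-month estimate
```

The result is a starting point for negotiation, not a commitment in itself.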

Now let's shed some light on "new" trends in measuring size for OO projects

Whenever software developers begin a project that involves a new technology (OO, SOA, etc.), there is great confusion as to how and what they should measure and what the appropriate "size" unit might be. The software development community has pondered questions like these since what seems to be the beginning of time, whatever the technology, the language, or the era. These questions often come down to size. Time we understand. Effort we understand. Defects we understand (yet very few track them or keep good records!). Size, the remaining measure, is where all these questions lead.

For object-oriented development, useful measures of size have been shown to be units such as the number of methods, objects, or classes. Common ranges seem to be about 175 to 250 lines of code (C++, Smalltalk, etc.) per object. Lines of code, function points, classes, objects, methods, processes, programs, scripts, and frames all represent various abstractions of system size.
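As a quick illustration, an OO size estimate expressed in objects or classes can be translated into a lines-of-code range using the rough 175 to 250 LOC-per-object figure mentioned above. The constants here are that ballpark figure, not calibrated values; your own past projects should supply the real gearing.

```python
# Turn an OO size estimate (number of objects/classes) into a LOC range,
# using the rough 175-250 LOC-per-object figure as illustrative constants.
LOC_PER_OBJECT_LOW = 175
LOC_PER_OBJECT_HIGH = 250

def loc_range_from_objects(object_count: int) -> tuple[int, int]:
    """Return a (low, high) LOC estimate for a design with object_count objects."""
    return (object_count * LOC_PER_OBJECT_LOW, object_count * LOC_PER_OBJECT_HIGH)

# A design sketch with ~120 classes/objects maps to roughly 21-30 KLOC.
print(loc_range_from_objects(120))
```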

Leveraging these size units means taking an inventory of past projects in order to better understand the building blocks of past systems. This also establishes the vocabulary of the organization in terms of the functionality that has been, and needs to be, built. It is the basis of negotiation when trying to decide what a team agrees to take on, within a given deadline.

A new engineering approach to estimation: Function Points

Of all the followers of the different size metrics, OO or otherwise, the "priests" of function points have been the most insistent in promoting their measure as the most important one for all benchmarking purposes.

There are still issues with the function point thought process. Even so, in many organizations function points have served a useful purpose: they have finally gotten people to think about project size. It used to be that you would ask, "How big is this application?" and someone might answer, "150 man-months," answering a question about size with a number for the effort (and hence cost) spent building it.

That's like someone asking you, "How big is your house?" and your answering, "$250,000." You should have said something like, "It's a four-bedroom with 2 1/2 baths, for a total size of 2,400 square feet." High abstraction unit: rooms; low abstraction unit: square feet.

Size is a metric describing the bigness or smallness of the system. It can be broken into chunks of various descriptions. Function points can do this in certain applications. As previously mentioned, other units include programs, objects, classes, frames, modules, processes, computer software configuration items, subsystems, and others. All represent the building blocks of the projects / products from different levels of abstraction, or perspectives. They all ultimately translate to the amount of code that becomes compiled and built to run on a computer or embedded processor.

They translate down to a common unit just as the volume of fluid in a vessel might be described in terms of liters, gallons, quarts, pints, and ultimately, down to a common unit of either fluid ounces or milliliters. The key point to understand, though, is that all these units relate to each other in a proportional, scaling relationship.
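That scaling relationship can be shown with a simple conversion sketch: function points translated to approximate LOC using per-language gearing factors. The factors below are rough, commonly quoted ballpark numbers and are assumptions here; calibrate against your own history before using anything like them.

```python
# Illustrative "proportional, scaling relationship" between size units:
# convert function points to approximate LOC with per-language gearing factors.
GEARING_LOC_PER_FP = {
    "C": 128,
    "C++": 55,
    "Java": 53,
    "Smalltalk": 21,
}

def fp_to_loc(function_points: float, language: str) -> float:
    """Scale a function point count into an approximate LOC figure."""
    return function_points * GEARING_LOC_PER_FP[language]

# The same 300 FP system translates to very different LOC totals per language,
# but within one language the relationship is a simple proportion.
for lang in GEARING_LOC_PER_FP:
    print(lang, fp_to_loc(300, lang))
```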

So when questions like "What new metrics should we use for . . . ?" arise, direct them toward what would make sense in the organization's vocabulary, if they come down to project size. Remember that the objective is to communicate information about the system and to maximize the flow of information by making sure everyone speaks a well-understood, familiar language. Have the language fit the organization, not the other way around.

Challenges in the Function Point World

For many organizations, the underlying concept of function points is a decent fit. That is, the metamodel of a system comprising two parts, a database structure and functions that access those structures, correctly describes what they build.

However, as we rightly observe, you have a problem if your system does much of anything beyond that kind of functionality. That's not necessarily saying anything bad about the function point, other than that it simply is not a size metric for all systems. Also, organizations have found metrics programs to be very valuable in tracking and controlling projects already under way. Unfortunately, function points can be difficult entities to track midstream: in ongoing projects, function point users have reported difficulty in counting "what function points have been built so far." (On the other hand, it is relatively easy for a configuration management system to report how much code has been added as new and how much has been changed.) To fill that void, many organizations use alternate size measures for tracking, such as modules built, number of objects or programs under configuration management, number of integration builds, and yes, the amount of code built and tested to date. That last size can be measured simply by counting lines of code (LOC, KLOC).
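As a sketch of that "code built so far" alternative, the configuration management system itself can report added and changed lines since a baseline. The example below uses git's --numstat output; the baseline tag name is an assumption for illustration.

```python
# Ask version control how many lines were added/changed since a baseline ref.
import subprocess

def loc_added_deleted(baseline: str = "project-start") -> tuple[int, int]:
    """Sum lines added and deleted between a baseline ref and the current HEAD."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{baseline}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in out.splitlines():
        a, d, _path = line.split("\t", 2)
        if a != "-":          # binary files report "-" instead of counts
            added += int(a)
            deleted += int(d)
    return added, deleted

added, deleted = loc_added_deleted()
print(f"{added} lines added, {deleted} lines changed/removed since baseline")
```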

So whether function points have met their early promise of consistency in counting is up for debate. They may not have proved immune to counting controversy after all. Nevertheless, function points serve a purpose as one type of size metric. And if your organization builds applications, it might pay to consider function points as a size metric.

Remember that any one measure has its limitations. Therefore, utilize multiple approaches where possible and note the sizing relationships (the proportionality) between them.

Additionally, we can use DEFECT METRICS to see how much rebuilding / rework occurs

No metrics discussion would be complete without addressing the subject of software defects. Defects are the least-measured entity, yet the one that receives the worst press when software failures occur in the real world. That should tell us something in and of itself. What we see with software defects is a direct correlation with schedule pressure and bad planning, which points squarely to the potential for management to influence software quality, both positively and negatively. The causal link between software project deadlines and defect levels thus places the opportunity for good quality software squarely in management's lap.

We can build two categories of defects: (1) defects found from system testing through delivery (including severity), tracked over time, and (2) defects after roll-out / implementation, i.e., user-reported defects. The two categories are inextricably related. The latter will enable you to determine software reliability. There is much to interpret from this raw data. At the heart of it is causal analysis; that is, finding the drivers behind defect rates and defect densities and then doing something about them.
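Here is a minimal sketch of those two defect categories and one ratio that can be derived from them, the fraction of defects caught before release. The record fields, severity scale, and ratio name are illustrative assumptions.

```python
# Two defect categories: pre-release (system test through delivery) and
# post-release (user-reported). Fields and the ratio below are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Defect:
    found_on: date
    severity: int            # e.g. 1 = critical .. 4 = cosmetic
    post_release: bool       # True if reported by users after roll-out

def removal_efficiency(defects: list[Defect]) -> float:
    """Fraction of all known defects caught before release -- one rough
    indicator of how much rework escapes to the field."""
    pre = sum(1 for d in defects if not d.post_release)
    return pre / len(defects) if defects else 1.0

defects = [
    Defect(date(2010, 10, 2), severity=2, post_release=False),
    Defect(date(2010, 10, 20), severity=1, post_release=False),
    Defect(date(2010, 12, 1), severity=3, post_release=True),
]
print(f"pre-release removal efficiency: {removal_efficiency(defects):.0%}")
```

Plotting these counts over time, by severity, is where the causal analysis starts.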
