Thursday, March 31, 2011

Business Analytics and Business Intelligence: Ground Reality

Business intelligence and business analytics aren’t new concepts. The idea of understanding the relationships between bits and bytes of data extends back to the late 1950s, and BI has been around in earnest since the late 1980s. However, today, the ability to aggregate, store, mine and analyze data can make or break an enterprise. As a result, BI and BA have emerged as core tools guiding decisions and strategies for areas as diverse as marketing, credit, research and development, customer care and inventory management.

As CIO.com reports, BI and BA are evolving rapidly and meshing to meet business challenges and create new opportunities. Although nearly all Global 5000 organizations already use these tools, 35 percent of them fail to make insightful decisions about significant changes in business and market conditions, according to IT consulting firm Gartner. What’s more, the task isn’t getting any easier as data streams become more intertwined and Web 2.0 environments pull data from multiple sources in a single instance.

I believe business intelligence and business analytics are on the cusp of a major change. There is a shift toward providing deeper insight into business information, and a growing emphasis on putting more powerful, better-designed software in the hands of business decision makers.

Business intelligence and business analytics are quite disconnected in the real world, at least in my experience over the last several years. BI has evolved as a platform, a collection of tools with an architecture that, albeit enterprise-wide in approach, lacks deep analytical and predictive capabilities. Traditionally, this is where the work of IT ends and business analytics starts, with statistical, quantitative and predictive work conducted outside of the framework.

This unfortunate reality has contributed to the myth that BA is something totally different from BI. The vision of BI has always included analytics, and BA is merely a subset of BI focused on the analytical parts of business intelligence. Because the traditional BI architecture doesn't lend itself to advanced analytics capabilities, such as statistical modeling and data mining, it's not surprising that business users collect data and reports from BI systems and then run their own analytics in spreadsheets they control. This approach is not a viable solution, however, because uncontrolled processes and questionable data will seriously hamper a BA effort. Research studies estimate that roughly 94 percent of spreadsheets deployed in the field contain errors, and 5+ percent of cells in unaudited spreadsheets contain errors.

What we need is an analytics-oriented BI architecture that incorporates advanced analytics and analytic modeling capabilities into the current BI framework. Traditional BI vendors need to build more advanced analytical functionality into their BI offerings. Many major BI tools don't support advanced statistical and quantitative modeling; some support limited analytics but require highly technical skills (such as SQL) that most business users don't possess. BI vendors need to provide more user-friendly analytics tools with much broader capabilities that statisticians and business analysts can use without much IT support. These new capabilities should include predictive analytics, data mining, text analytics, simulation, decision analysis and advanced modeling.

Second, traditional analytics software vendors need to embed powerful analytical capabilities into the BI platform and make integration much easier for customers. Most BI applications and BA applications operate on very different platforms. Every company needs to reckon with integration and ROI before investment. BI and BA vendors should work together to make the integration much less painful and help customers unleash the best of both worlds.

An integrated solution combines advanced analytics with powerful data visualization and advanced reporting capabilities to support fact-based and data-driven decision-making. Under this new architecture, advanced analytics will be an integral part of BI. Analytics process and technology could be managed under one unified BI framework and strategy that ultimately should align with a company's business strategy. Initiatives such as data management and governance could benefit both BI and BA programs.

Companies that have high quality information that is well-defined and understood across the enterprise already have a solid foundation for BA. In terms of implementation, there could be different deployment approaches based on the conceptual architecture. For instance, analytic models might be built into a database or data warehouse to leverage its processing power.

In-database analytics has many advantages: analyzing data where it resides avoids data movement and duplication. However, in-database analytics can be costly when analytics processes, which are volatile and adaptive in nature (old models need to be updated or rebuilt with the latest data input), hinder other mission-critical OLTP or OLAP operations. This may call for a separate environment for the development and deployment of analytic models. Meanwhile, advanced analytics capabilities are better built within existing BI tools for better compatibility and integration with existing BI features. Analytics could also be built into operational systems when less data integration is needed, analyzing data while capturing it. Organizations should choose the deployment model that best fits their business analytical needs.

Lastly, BA needs to be integrated and embedded in business processes to be effective and efficient. One example is creating a closed-loop, repeatable process in the normal workflow of business operations that feeds the results back into the operational system from which the data for analytics is sourced. This kind of decision automation is used where decisions tend to be high volume. For instance, an online retailer can use an analytical model that predicts a high probability of a customer buying a certain new product to attempt cross-selling by dynamically displaying ad banners when the customer visits the online store. An online bank can approve or reject loan applications automatically based on the criteria defined by an application-processing rules engine using predictive analytics; only the exceptions (rejected applications) are sent to loan officers for review and follow-up. The model significantly reduces cost and decision time for the bank and its customers, a win/win for both.
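The automated loan-decision loop can be sketched in a few lines of plain Java. The score values and the approval threshold below are illustrative assumptions, not any particular vendor's rules-engine API:

```java
import java.util.ArrayList;
import java.util.List;

public class LoanDecisionSketch {
    // Assumed cutoff for the illustrative predictive model's score.
    static final double APPROVAL_THRESHOLD = 0.75;

    /** Returns IDs to auto-approve; everything below the threshold goes to a loan officer. */
    static List<String> autoApprove(List<String> ids, List<Double> scores) {
        List<String> approved = new ArrayList<>();
        for (int i = 0; i < ids.size(); i++) {
            if (scores.get(i) >= APPROVAL_THRESHOLD) {
                approved.add(ids.get(i)); // closed loop: decision fed straight back to the system
            }
        }
        return approved;
    }

    public static void main(String[] args) {
        System.out.println(autoApprove(
                List.of("A-1", "A-2", "A-3"),
                List.of(0.91, 0.40, 0.80))); // prints [A-1, A-3]
    }
}
```

The point of the sketch is the shape of the loop: the predictive score drives the high-volume decisions automatically, and only the exceptions reach a human.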

In my observation, the key characteristics of an analytics-oriented BI architecture are:

  • Integrated (data, reporting, analytics)
  • Robust and flexible (for rapid changes)
  • Evolving and adaptive
  • Consistent (standards in process and data)
  • Controlled
  • Transparent (versus black-box approach) and
  • Embedded (analytics as part of business process)

With the burgeoning demand for advanced analytics and emerging analytical technologies, we will see the convergence of BI and BA in the marketplace. BI megavendors will likely acquire smaller BA players and integrate advanced analytical tools and capabilities into their BI portfolios. At the same time, traditional analytics software vendors will likely push further into BI platform territory. The reciprocal penetration will accelerate the consolidation, standardization and adoption of analytics while moving toward an analytics-oriented BI architecture.

Historically, this market has been served by vendors such as Business Objects and Cognos. But the competitive landscape is changing. Microsoft has now shrewdly entered the market by driving the placement of SQL Server into the space in order to broadly deploy and deliver its BI suite and reporting services in volume. Oracle has seen the effect of companies moving data out of the database to stage it for analysis. The resulting data warehouses have provided a degree of utility in housing, manipulating and delivering “strategic” information across the organization.

Also, every top-level boss wants an effective dashboard. To the extent that all of us are the CIOs/CTOs/CEOs of our own business discipline, we want a simple display of how we are doing and an alert mechanism for when something goes wrong. Additionally, dashboards address the growing urgency around Sarbanes-Oxley: monitoring planning assumptions and key performance metrics has become mission critical from a regulatory and compliance standpoint.

BI reporting ends with the dashboard, which is sufficient only for some business planning; BA picks up the rest for the go-to people. Simply put, this group must interact with data in a much different way from what traditional BI allows. The traditional requirement of a BI system has been to monitor data against pre-configured questions, needing only a thin client environment to inform the user. In the operating world, users need to engage with the information, which requires a richer client that supports interactivity and lets them ask and answer their own questions without going back to IT. Let's make one thing clear: you don't get business analytics when you buy business intelligence. The requirements are different and the benefits are different. The return on information and expertise achieved by arming your operating managers with analytics will supercharge your existing BI investment.

Do let me know your views and suggestions. These thoughts were collected from CIO.com, LinkedIn discussions, and various conversations with co-workers and PMI Mumbai members. Thanks to all for sharing inputs so freely and on time. I am available at ravindrapande at gmail.com

Thursday, March 10, 2011

Estimation: Basic Need and Options, Part 1

As someone correctly pointed out, you can't control what you can't measure. In our day-to-day software work we need to measure our efforts in order to track, monitor and control them. These thoughts led me to create this write-up for everyone in the IT age who wants more control over their day-to-day professional life.

A software project is typically controlled by four major variables: time, requirements, resources (people, infrastructure/materials, and money), and risks. Unexpected changes in any of these variables will have an impact on execution. Hence, making good estimates of the time and resources required for a project is crucial. Underestimating project needs can cause major problems because there may not be enough time, money, infrastructure/materials, or people to complete the project. Overestimating needs can be very expensive for the organization: a decision may be made to defer the project because it is too expensive, or the project is approved but other projects are "starved" because there is less to go around.

In my experience, making estimates of time and resources required for a project is usually a challenge for most project teams and project managers. It could be because they do not have experience doing estimates, they are unfamiliar with the technology being used or the business domain, requirements are unclear, there are dependencies on work being done by others, and so on. These can result in a situation akin to analysis paralysis as the team delays providing any estimates while they try to get a good handle on the requirements, dependencies, and issues. Alternatively, we will produce estimates that are usually highly optimistic as we have ignored items that need to be dealt with. How does one handle situations such as these?

Useful Estimation Techniques

Before we begin, we need to understand & categorize what types of estimates we can provide. Estimates can be roughly divided into these types:

Initial, ballpark, or order-of-magnitude estimates: Here the estimate is probably an order of magnitude from the final figure; it can be within two or three times the actual value.

Rough estimates: Here the estimate is closer to the actual value. Ideally it will be about 50% to 100% off the actual value.

Fair estimates: This is a very good estimate. Ideally it will be about 25% to 50% off the actual value.

Deciding which of these three different estimates you can provide is crucial. Fair estimates are possible when you are very familiar with what needs to be done and you have done it many times before. This sort of estimate is possible when doing maintenance-type work where the fixes are known, or when adding well-understood functionality that has been done before. Rough estimates are possible when working with well-understood needs and one is familiar with the domain and technology issues. In all other cases, the best we can hope for before we begin is an order of magnitude estimate. Some may quibble that order of magnitude estimates are close to no estimate at all! However, they are very valuable because they give the organization and project team some idea of what the project is going to need in terms of time, resources, and money. It is better to know that something is going to take between two and six months than to have no idea how much time it will take. In many cases, we may be able to give more detailed estimates for some items than for others. For example, we may be able to provide a rough estimate of the infrastructure we need but only an order of magnitude estimate of the people and time needed.
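One way to make these three bands concrete is to treat each estimate type as a multiplicative error factor around a point estimate. The factors below are a rough convention of my own for illustration, not a standard:

```java
public class EstimateRanges {
    /**
     * Returns {lower, upper} bounds in the same unit as the point estimate.
     * Illustrative factors: ~3.0 for ballpark, ~1.5-2.0 for rough, ~1.25-1.5 for fair.
     */
    static double[] range(double pointEstimate, double errorFactor) {
        return new double[] { pointEstimate / errorFactor, pointEstimate * errorFactor };
    }

    public static void main(String[] args) {
        // A 4-month ballpark guess (factor 3) spans roughly 1.3 to 12 months.
        double[] ballpark = range(4.0, 3.0);
        System.out.printf("%.1f to %.1f months%n", ballpark[0], ballpark[1]);
    }
}
```

Quoting the range rather than the midpoint is what keeps an order-of-magnitude estimate honest: "two to six months" communicates the uncertainty that "four months" hides.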

Whether you are a developer, a project manager planning the smooth implementation of a plan, or a project sponsor on whose decisions a project depends, you cannot escape the fact that project estimation is essential to success. In the first place, there are three basic requirements that a project must satisfy: schedule, budget, and quality. The need to work within these essential project boundaries poses a huge challenge to everyone on the central management team.

There are various aspects that affect project estimates, such as team skills and experience levels, available technology, use of full-time or part-time resources, project quality management, risks, iteration, development environment, requirements, and most of all, the level of commitment of all project members.

Moreover, project estimations do not need to be too complicated. There are tools, methodologies, and best practices that can help project management teams, from sponsors to project managers, agree on estimates and push development efforts forward. Some of these include the following:

Project estimates must be based on the application’s solution, scope and architecture. Making estimates based on an application’s architecture should give you a clear idea of the length of the entire development phase. Moreover, an architecture-based estimation provides a macro-level view of the resources needed to complete the project.

Project estimations should also come from the ground up. All estimates must add up, and estimating the collective efforts of the production teams that work on the application’s modules helps identify the number of in-house and outsourced consultants you need to hire for the entire project, and gives you a clear idea of the collective man-hours required to code modules or finish all features of the application. Ground-up estimates are provided by project team members and do not necessarily match top-level estimates exactly. In that case, it is best to add a percentage to the architecture-based estimates to allow for possible rework, risks, and other events that may or may not be within the control of the project staff.
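A minimal sketch of a ground-up estimate with a contingency buffer; the module figures and the 20 percent buffer are hypothetical numbers, not a recommended value:

```java
import java.util.List;

public class GroundUpEstimate {
    /**
     * Sums per-module estimates and adds a contingency buffer
     * (e.g. 0.20 = 20%) for rework, risk, and other surprises.
     */
    static double totalWithContingency(List<Double> moduleHours, double contingency) {
        double total = 0;
        for (double h : moduleHours) {
            total += h; // ground-up: each figure comes from the team owning that module
        }
        return total * (1 + contingency);
    }

    public static void main(String[] args) {
        // Three hypothetical modules totalling 400 hours, plus a 20% buffer.
        System.out.println(totalWithContingency(List.of(120.0, 80.0, 200.0), 0.20));
    }
}
```

The buffer is exactly the "percentage of architecture-based estimates" mentioned above: it absorbs the gap between what the teams report and what actually happens.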

Do not forget modular estimates. Once you have a clear idea of the architecture, it becomes easier to identify the modules that make up the entirety of the application. Knowing the nature of these modules should help you identify which can be done in-house or onshore, and which by an offshore development team. Moreover, given the location and team composition of each development team that works on a module, it becomes easier to identify the technical and financial resources needed to work on the code.

Development language matters. Whether the development language is Java, .NET, C++ or any other popular language, the team hired for the project must be knowledgeable in it. Some development efforts require advanced skills in these languages, while others need only basic functional knowledge, and each level of specialization commands a corresponding rate. Most of the time, the chosen development language depends on the chosen platform, and certain platforms run on specialized hardware.

You cannot promise upper management dramatic cost savings from offshoring. While there are real savings from having development work done by offshore teams whose rates are significantly lower than those of onshore staff, you must factor communication, knowledge transfer, technical set-up, and software installation costs into your financial estimates. Estimating costs is often more about managing expectations, but as the project matures it should become clearer whether the money spent on it was money well spent.

Project estimation software and tools help identify “what-if” scenarios. Over the years, project managers have devised ways to automate project schedule, framework, cost, and staffing estimates. Some estimation applications also have sample historical data or models based on real-world examples. If your business has a lot in common with the samples in the estimation tool, it can help you identify what-if scenarios and in turn include risks, buffers, and iteration estimates.
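A what-if comparison can be as simple as recomputing staff cost under different assumptions. The headcounts, rates, and the one-off transition cost below are made-up numbers for illustration; they also echo the offshoring caveat above, where the lower rate is partly offset by set-up costs:

```java
public class WhatIfCost {
    /** Staff cost for one scenario: headcount x hourly rate x hours per person. */
    static double cost(int headcount, double hourlyRate, double hoursPerPerson) {
        return headcount * hourlyRate * hoursPerPerson;
    }

    public static void main(String[] args) {
        // Two hypothetical staffing scenarios for the same body of work.
        double onshore  = cost(5, 80.0, 200.0);            // 5 people at an assumed $80/h
        double offshore = cost(5, 30.0, 200.0) + 15000.0;  // lower rate + assumed one-off transition cost
        System.out.println(onshore + " vs " + offshore);
    }
}
```

Even a toy model like this makes the trade-off discussable in numbers rather than opinions, which is the whole point of a what-if scenario.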

Price break-down helps in prioritization. Breaking down the total cost of the project helps management decide which parts of a system should be prioritized, delayed, or even cancelled. Estimating costs for a new project may not be easy, but project sponsors and managers must be able to know and agree on the breakdown of costs for development, technical requirements, and overhead.

These guidelines are meant to convey the need for estimation and to encourage making it part of daily life, to gain more control over the overall software process.

Feel free to reach me at ravindrapande@gmail.com to share thoughts & suggestions.

Tuesday, January 18, 2011

Learn Android: A Starter for Java Developers

What is Android?

Android is a software stack (an end-to-end solution) for mobile devices that includes an operating system, middleware and key applications. The Android SDK (Software Development Kit) provides the tools and APIs (application programming interfaces) necessary to begin developing applications on the Android platform using the Java programming language.

The stack consists of:

· Application framework enabling reuse and replacement of components

· A virtual machine optimized for mobile devices

· Integrated browser based on the open source WebKit engine

· Optimized graphics powered by a custom 2D graphics library; 3D graphics based on the OpenGL ES 1.0 specification (hardware acceleration optional)

· SQLite for structured data storage

· Media support for common audio, video, and still image formats (MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, GIF)

· GSM Telephony (hardware dependent)

· Bluetooth, EDGE, 3G, and WiFi (hardware dependent)

· Camera, GPS, compass, and accelerometer (hardware dependent)

· Rich development environment including a device emulator, tools for debugging, memory and performance profiling, and a plugin for the Eclipse IDE

Now let's get our hands dirty (start the actual work of study, including trial and error). I appreciate the Google teams and also the various pages available across the net.

Here is a set of links that we recommend you follow to get up to speed with Android development. Most of these are from the official documentation. The documentation isn't arranged all that well; not everything is where you'd expect, which makes some things hard to find. In fact, while compiling this list, we came upon some very useful pages that were only mentioned as further reading. The order of these links differs slightly from the official documentation, and we have also skipped some areas, so make sure you look at the rest of the documentation based on your needs.

You can start with installing the Android SDK and related tools.

http://code.google.com/android/intro/installing.html

After that, follow these step-by-step instructions for a simple Hello World app. This will confirm you have your SDK and development environment set up correctly, and also give you a good introduction to basic Android concepts. http://code.google.com/android/intro/hello-android.html

Now get your hands even more dirty, and complete the Notepad tutorial. I strongly recommend this, even if it starts out boring! You'll learn more about UI creation, creating menus, using SQLite, creating apps with multiple screens, and dealing with life-cycle events. This tutorial assumes you're comfortable with Java, but you should be fine if you have experience with any OO language.

Now would be a good time to gain a better understanding of some of the underlying concepts. Some of this is material we have already discussed, but you should still read these pages. The Activity reference page is probably the most important and will apply to almost all applications. These give you the basic details about the architecture, layers, events, and so on that you need before starting actual coding:

http://code.google.com/android/intro/anatomy.html

http://code.google.com/android/reference/android/app/Activity.html

http://code.google.com/android/reference/android/view/View.html

Now you may feel the need for a richer UI (user interface). Let's go back and understand a bit about the "view".

In any Android application, the user interface is built using "View" and "ViewGroup" objects. There are many types of views and view groups, each of which is a descendant of the View class. View objects are the basic units of user interface expression on the Android platform. The "View" class serves as the base for subclasses called "widgets," which offer fully implemented UI objects, like text fields and buttons. The "ViewGroup" class serves as the base for subclasses called "layouts," which offer different kinds of layout architecture, like linear, tabular and relative. This is similar to what we used to do in Java, if you recall the good old days.

In more detail, a typical "View" object is a data structure whose properties store the layout parameters and content for a specific rectangular area of the screen. A "View" object handles its own measurement, layout, drawing, focus change, scrolling, and key/gesture interactions for the rectangular area of the screen in which it resides. As an object in the user interface, a View is also a point of interaction for the user and the receiver of the interaction events. To know more, please visit http://developer.android.com/guide/topics/ui/index.html
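A minimal layout resource file shows the two roles in practice: a LinearLayout (a ViewGroup subclass, i.e. a "layout") containing a TextView and a Button (two View subclasses, i.e. "widgets"). The id and text values are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- A "layout" (ViewGroup) holding two "widgets" (Views), stacked vertically. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:id="@+id/label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello, Android" />

    <Button
        android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Click me" />
</LinearLayout>
```

Declaring the view tree in XML like this keeps the UI structure separate from the Java code that drives it.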

Now it's time to shed some light on storage options. Android provides several options for saving persistent application data. The solution you choose depends on your specific needs, such as whether the data should be private to your application or accessible to other applications (and the user), and how much space your data requires. Your data storage options are:

· Shared Preferences: Store private primitive data in key-value pairs.

· Internal Storage: Store private data on the device memory.

· External Storage: Store public data on the shared external storage.

· SQLite Databases: Store structured data in a private database.

· Network Connection: Store data on the web with your own network server.

Android provides a way for you to expose even your private data to other applications — with a content provider. A content provider is an optional component that exposes read/write access to your application data, subject to whatever restrictions you want to impose. For more information, have a look at http://developer.android.com/guide/topics/data/data-storage.html
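The first option, SharedPreferences, persists small key-value pairs. Outside the Android framework, the same pattern can be sketched with plain Java's java.util.Properties; this is an analogy for illustration, not the Android API, and the file and key names are made up:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class KeyValueStoreSketch {
    /** Persist a small set of key-value settings to a file (SharedPreferences-style). */
    static void save(File file, Properties props) throws IOException {
        try (FileOutputStream out = new FileOutputStream(file)) {
            props.store(out, "app settings");
        }
    }

    /** Load the settings back from the file. */
    static Properties load(File file) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(file)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("prefs", ".properties");
        Properties p = new Properties();
        p.setProperty("username", "ravi"); // hypothetical setting
        save(f, p);
        System.out.println(load(f).getProperty("username")); // prints ravi
    }
}
```

On Android itself you would call getSharedPreferences() and edit() instead, but the round-trip idea — write primitives under keys, read them back later — is the same.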

As we used to do with the Java JDK, use the samples and API demos that come with the SDK; these are pre-installed in the emulator. We recommend going through all the items in the demos to get a good understanding of which component does what, which events trigger, and so on. If you find functionality similar to what you need in your app, you can locate the source code for that particular functionality and copy it into your project. This code reuse makes life a lot easier for the next level of learning. You can also use the famous Eclipse editor, which adds a great deal of environment help. Create a new project in Eclipse using the existing API Demos project folder to view and fool around with the source code.

Let's ponder more on developing in Eclipse. The Android Development Tools (ADT) plugin for Eclipse adds powerful extensions to the Eclipse integrated development environment, allowing you to create and debug Android applications more easily and quickly. If you use Eclipse, the ADT plugin gives you an incredible boost in developing Android applications:

· It gives you access to other Android development tools from inside the Eclipse IDE. For example, ADT lets you take screenshots, manage port-forwarding, set breakpoints, and view thread and process information directly from Eclipse.

· It provides a New Project Wizard, which helps you quickly create and set up all of the basic files you'll need for a new Android application.

· It automates and simplifies the process of building your Android application.

· It provides an Android code editor that helps you write valid XML for your Android manifest and resource files.

· It will even export your project into a signed APK, which can be distributed to users.

To begin developing Android applications in the Eclipse IDE with ADT, you first need to download the Eclipse IDE and then download and install the ADT plugin. Have a look at this link for further details: http://developer.android.com/sdk/eclipse-adt.html#installing

For more, please keep visiting, and I will also share my experiences step by step. Feel free to connect with me at ravindrapande@gmail.com. I am also on Twitter & Facebook.

Thursday, December 23, 2010

Knowledge Management - my thoughts

Knowledge is intangible, dynamic, and difficult to measure, but without it no organization can survive. Unarticulated knowledge is more personal, experiential, context-specific, and hard to formalize; it is difficult to communicate or share with others, and is generally in the heads of individuals and teams. So to start, we need to understand what is important to us.

Why do we need KM?

  • Competitive success will be based on how strategically intellectual capital is managed.
  • Capturing the knowledge residing in the minds of employees lets it be easily shared across the enterprise.
  • Leveraging organizational knowledge is emerging as the solution to an increasingly fragmented and globally dispersed workplace.
  • Reuse of knowledge saves work, reduces communication costs, and allows a company to take on more projects.

According to Wikipedia "Knowledge Management (KM) comprises a range of strategies and practices used in an organization to identify, create, represent, distribute, and enable adoption of insights and experiences. Such insights and experiences comprise knowledge, either embodied in individuals or embedded in organizational processes or practice."

The activities we need to perform are:

  • Generating knowledge, capturing knowledge
  • Making knowledge accessible
  • Representing and embedding knowledge
  • Facilitating knowledge
  • Transferring knowledge to an archive for day-to-day use

In my view, let's call it the art of preserving useful knowledge and archiving it for future use, for the purpose of increasing efficiency and continuous improvement. In my experience, we have built and used knowledge management systems comprising a range of practices used to identify, create, represent, distribute, and enable the adoption of insights and experiences. These insights and experiences comprise knowledge, either embodied in teams and members or embedded in organizational processes or practice. This acquired knowledge helps us to continuously improve from the lessons learned and to re-align strategies for maximum efficiency and quality in the output.

Let's take a sample to understand this correctly: if someone asks what sales are apt to be next quarter, we would have to say, "It depends!" We would have to say this because although we have data and information, we have not built knowledge yet. This is a trap that many fall into, because they don't understand that data doesn't predict trends of data; what predicts trends of data is the activity that is responsible for the data. To be able to estimate the sales for next quarter, we might need information about the competition, market size, extent of market saturation, current backlog, customer satisfaction levels associated with current product delivery, current production capacity, the extent of capacity utilization, and a whole host of other things. When we are able to amass sufficient data and information to form a complete pattern that we understand, we have knowledge, and are then somewhat comfortable estimating the sales for next quarter. It is such complex knowledge that we are looking to document.

In an organizational context, data represents facts or values of results, and relations between data and other relations have the capacity to represent information. Patterns of relations of data and information and other patterns have the capacity to represent knowledge. For the representation to be of any utility it must be understood, and when understood the representation is information or knowledge to the one that understands.

Recently, with the advent of Web 2.0, the concept of KM has evolved toward a vision based more on people participation and emergence. This evolution is still continuing. However, there is ongoing debate about which areas have priority over others, so these areas and priorities can be revised as needed.

The areas to be addressed for a typical IT services organization are:

  • Technological data
  • Organizational data
  • Process documentation
  • Trainings & feedback
  • Dashboards/ Measuring and reporting

As you can see, KM is an umbrella activity for continuous improvement: proactively learning from all previous best practices (implementations) and from errors not to be repeated (error elimination). It is not limited to just that learning; it also includes learning about improved tools and third-party products from the Internet and intranets.

The data elements to be archived can be as follows:

  • Previous project documents & data
  • Learning from previous projects
  • Code snippets, qualitative review logs
  • Pareto charts & the results of actions taken
  • Updated frameworks for better productivity
  • Organizational asset updates: improvements in communication with the PMO & corporate functions
  • Environmental factor updates: choosing better tools & environments for better productivity

These are not all; we must keep learning from our own and our neighbors' mistakes and build best practices to follow, to improve overall efficiency in organizations.

Motivational factors / Process results

  • Continuous improvements in terms of deviation analysis & process fine-tuning through the SEPG
  • Effort & cost estimation alignment with LOC & function point variable factors
  • Improved metrics in terms of proactively predicting alarms & triggering the right element in time
  • Improved collaboration with client & third parties in terms of getting support proactively than reactively
  • Tool for Global / Virtual teams to access / share all project knowledge status & training details
  • Making available increased knowledge content in the development and provision of products and services
  • Achieving shorter new product development cycles (Efficiencies improved)
  • Facilitating and managing innovation and organizational learning & thus sharing best practices across the department, work unit
  • Leveraging the expertise of people across the organization & documenting the knowledge for future reference
  • Increasing network connectivity between internal and external individuals
  • Managing business environments and allowing employees to obtain relevant insights and ideas appropriate to their work
  • Solving intractable or wicked problems
  • Managing intellectual capital and intellectual assets in the workforce (such as the expertise and know-how possessed by key individuals)

The value of knowledge management relates directly to effectiveness (that is, return on investment): how well the managed knowledge enables the members of the organization to deal with today's situations and effectively envision and create their future. Without on-demand access to managed knowledge, every situation is addressed based on what the individual or group brings to it. With on-demand access to managed knowledge, every situation is addressed with the sum total of everything anyone in the organization has ever learned about a situation of a similar nature. There is no one-size-fits-all way to effectively tap a firm's intellectual capital. To create value, companies must focus on how knowledge is used to build critical capabilities.

There are always positive and negative aspects of systems, depending on the usage scenario. Let's consider an example. A firm that had invested millions of dollars in a state-of-the-art intranet intended to improve knowledge sharing got some bad news: employees were using it most often to retrieve the daily menu from the company cafeteria. The system had barely begun to appear in users' day-to-day business activities.

Few executives would argue with the premise that knowledge management is critical, but few know precisely what to do about it. There are numerous examples of knowledge-management programs intended to improve innovation, responsiveness and adaptability that fall short of expectations. Organizations need to invest effort in researching such instances, including the roots of the problem, and develop a method to help executives make effective knowledge management a reality in their organizations.

Much of the problem with knowledge management today lies in the way the subject has been approached by vendors and the press. Knowledge management is still a relatively young field, with new concepts emerging constantly. Often, it is portrayed simplistically; discussions typically revolve around blanket principles that are intended to work across the organization. Most knowledge-management initiatives have focused almost entirely on changes in tools and technologies, such as intranets.


Do share your thoughts at Ravindrapande@gmail.com