Friday, September 8, 2017

Deep Thinking

This is a non-technical post that simply explores the order, or level, of thinking and the complexity built into it. Let's start with the present. I personally believe we spend too much time thinking about the past. Life is simply a series of present moments.

Here we are informed that the past is simply all the present moments that have gone by. The only important time is the present, which is the time we think about the least. Furthermore, the future is simply present moments waiting to go by.
Stages of Deep Thinking
Before we look at strategies you can use to become a deep thinker, let’s briefly look at the stages of deep thinking known as the Three Levels of Thought.
Level 1: Lower Order Thinking. The individual is not reflective, has a low to mixed skill level, and relies solely on gut intuition.
Level 2: Higher Order Thinking. The individual is selective on what to reflect on, has a high skill level, yet lacks critical thinking vocabulary.
Level 3: Highest Order Thinking. The individual is explicitly reflective, has the highest skill level, and routinely uses critical thinking tools.

To enter into the Highest Order Thinking, try the following strategies.
Strategies to Become a Deep Thinker
Increase Self-Awareness by Thinking About Thinking
Imagine you could become aware of how you learn. We know that we must have a baseline of prior knowledge about something in order to use metacognition. Think of your intelligence as what you think and metacognition as how you think. Let's look at a series of questions you can ask yourself using the Elements of Thought.

    Purpose: What am I trying to accomplish?
    Questions: What question am I raising or addressing? Am I considering the complexities in the question?
    Information: What information am I using to reach my conclusion?
    Inferences: How did I reach this conclusion? Is there another way to interpret the information?
    Concepts: What is the main idea? Can I explain this idea?
    Assumptions: What am I taking for granted?
    Implications: If someone accepted my position, what would the implications be?
    Points of View: From what point of view am I looking at this issue? Is there another point of view I should consider?

Challenge Current Learning Methods Through Meta-Questions

Meta-questions are higher-order questions we can use to explore ideas and problems. Here are some examples.

    Why did it happen?
    Why was it true?
    How does X relate to Y?
    Why is reasoning based on X instead of Y?
    Are there other possibilities?

Let’s look at a practical example.

    When you say: “I can’t do this.” Change this to: “What specifically can I not do?”
    You say: “I can’t exercise.” Then ask: “What is stopping me?”
    You say: “I don’t have time.” Now ask yourself: “What needs to happen for me to start exercising?”
    You discover: “What time wasters can I eliminate in order to create more time to exercise?”
    Then imagine how you could start exercising: “If I could exercise, how would I do it?”
This is just a start toward better thinking in my approach. I would love to hear your views, as we can do better together. Please share your inputs at ravindrapande@gmail.com.

Sunday, July 16, 2017

Best Coding Practices

This is a long-pending post that I have worked on for quite a few months. It is close to my heart, as I believe I am a programmer at heart, so whether it was fit for publishing was a great debate within myself. At last I am publishing it, after spending some six hours on a Sunday on it. I hope you find it a good guidance point.

The complexity of a program can be particularly confounding, because there isn’t anything to put your hands on. When it breaks, you can’t pick up something solid and look around inside it. It’s all abstract, and that can be really hard to deal with. In fact, the average computer program is so complex that no person could comprehend how all the code works in its entirety. The bigger programs get, the more this is the case. Thus, programming has to become the act of reducing complexity to simplicity. Otherwise, nobody could keep working on a program after it reached a certain level of complexity.

Superior coding techniques and programming practices are hallmarks of a professional programmer. The bulk of programming consists of making a large number of small choices while attempting to solve a larger set of problems. How wisely those choices are made depends largely upon the programmer's skill and expertise.

This document addresses some fundamental coding techniques and provides a collection of coding practices from which to learn. The coding techniques are primarily those that improve the readability and maintainability of code, whereas the programming practices are mostly performance enhancements.

The readability of source code has a direct impact on how well a developer comprehends a software system. Code maintainability refers to how easily that software system can be changed to add new features, modify existing features, fix bugs, or improve performance. Although readability and maintainability are the result of many factors, one particular facet of software development upon which all developers have an influence is coding technique. The easiest method to ensure that a team of developers will yield quality code is to establish a coding standard, which is then enforced at routine code reviews.

The complex pieces of a program have to be organized in some simple way so that a programmer can work on it without having God-like mental abilities. That is the art and talent involved in programming—reducing complexity to simplicity.

A “bad programmer” is just somebody who fails to reduce the complexity. Many times this happens because people believe that they are reducing the complexity of writing in the programming language (which is definitely a complexity all in itself) by writing code that “just works,” without thinking about reducing the complexity for other programmers.

It’s sort of like this. Imagine an engineer who, in need of something to pound a nail into the ground with, invents a device involving pulleys, strings, and a large magnet. You’d probably think that was pretty ridiculous.

Now imagine that somebody tells you, “I need some code that I can use in any program, anywhere, that will communicate between any two computers, using any medium imaginable.” That’s definitely harder to reduce to something simple. So, some programmers (perhaps most programmers) in that situation will come up with a solution that involves the equivalent of strings and pulleys and a large magnet, that is only barely comprehensible to other people. They’re not irrational, and there’s nothing wrong with them. When faced with a really difficult task, they will do what they can in the short time they have. What they make will work, as far as they’re concerned. It will do what it’s supposed to do. That’s what their boss wants, and that’s what their customers seem to want, as well. But one way or another, they will have failed to reduce the complexity to simplicity. Then they will pass this device off to another programmer, and that programmer will add to the complexity by using it as part of her device. The more people who don’t act to reduce the complexity, the more incomprehensible the program becomes.

As a program approaches infinite complexity, it becomes impossible to find all the problems with it. Jet planes cost millions or billions of dollars because they are close to this complex and were “debugged.” But most software only costs the customer about $50–$100. At that price, nobody’s going to have the time or resources necessary to shake out all of the problems from an infinitely complex system. So, a “good programmer” should do everything in his power to make what he writes as simple as possible to other programmers. A good programmer creates things that are easy to understand, so that it’s really easy to shake out all the bugs.

Now, sometimes this idea of simplicity is misunderstood to mean that programs should not have a lot of code, or shouldn’t use advanced technologies. But that’s not true. Sometimes a lot of code actually leads to simplicity; it just means more writing and more reading, which is fine. You have to make sure that you have some short document explaining the big mass of code, but that’s all part of reducing complexity. Also, usually more advanced technologies lead to more simplicity, even though you have to learn about them first, which can be troublesome.

Some people believe that writing in a simple way takes more time than quickly writing something that “does the job.” Actually, spending a little more time writing simple code turns out to be faster than writing lots of code quickly at the beginning and then spending a lot of time trying to understand it later. That’s a pretty big simplification of the issue, but programming-industry history has shown it to be the case. Many great programs have stagnated in their development over the years just because it took so long to add features to the complex beasts they had become. And that is why computers fail so often—because in most major programs out there, many of the programmers on the team failed to reduce the complexity of the parts they were writing. Yes, it’s difficult. But it’s nothing compared to the endless difficulty that users experience when they have to use complex, broken systems designed by programmers who failed to simplify.

Commenting & Documentation: IDEs (Integrated Development Environments) have come a long way in the past few years, which has made commenting your code more useful than ever. Following certain standards in your comments allows IDEs and other tools to utilize them in different ways.

Consistent Indentation: I assume you already know that you should indent your code. However, it's also worth noting that it is a good idea to keep your indentation style consistent. There is more than one way to indent code.

Avoid Obvious Comments: Commenting your code is fantastic; however, it can be overdone or just plain redundant. When the code is that obvious, it's really not productive to repeat it within comments. If you must comment on such code, combine the comment into a single line; a few words will suffice.

Code Grouping: More often than not, certain tasks require a few lines of code. It is a good idea to keep these tasks within separate blocks of code, with some spaces between them.

Consistent Naming Scheme: Follow a consistent naming convention so that everyone on the team, and anyone using a similar technology, will be able to understand your creations.

File and Folder Organization: Technically, you could write an entire application's code within a single file. But that would prove to be a nightmare to read and maintain.

During my initial learning of programming during my bachelor's degree, I learned about the idea of creating "include files" (in C/C++). However, I was not yet even remotely organized. I created an "inc" folder with two files in it: db.php and functions.php. As the applications grew, the functions file became huge and unmaintainable. One of the best approaches is to either use a framework or imitate its folder structure.

Using solid coding techniques and good programming practices to create high quality code plays an important role in software quality and performance. In addition, by consistently applying a well-defined coding standard and proper coding techniques, and holding routine code reviews, a team of programmers working on a software project is more likely to yield a software system that is easier to comprehend and maintain.

Coding Standards and Code Reviews: A comprehensive coding standard encompasses all aspects of code construction and, while developers should exercise prudence in its implementation, it should be closely followed. Completed source code should reflect a harmonized style, as if a single developer wrote the code in one session. At the inception of a software project, establish a coding standard to ensure that all developers on the project are working in concert. When the software project will incorporate existing source code, or when performing maintenance upon an existing software system, the coding standard should state how to deal with the existing code base.
Although the primary purpose for conducting code reviews throughout the development life cycle is to identify defects in the code, the reviews can also be used to enforce coding standards in a uniform manner. Adherence to a coding standard can only be feasible when followed throughout the software project from inception to completion. It is not practical, nor is it prudent, to impose a coding standard after the fact.

Coding Techniques: Coding techniques incorporate many facets of software development and, although they usually have no impact on the functionality of the application, they contribute to an improved comprehension of source code. For the purpose of this document, all forms of source code are considered, including programming, scripting, markup, and query languages.
The coding techniques defined here are not proposed to form an inflexible set of coding standards. Rather, they are meant to serve as a guide for developing a coding standard for a specific software project.
The coding techniques are divided into three sections:
Names
Comments
Format
Coding Techniques - Names: Perhaps one of the most influential aids to understanding the logical flow of an application is how the various elements of the application are named. A name should tell "what" rather than "how." By avoiding names that expose the underlying implementation, which can change, you preserve a layer of abstraction that simplifies the complexity. For example, you could use GetNextStudent() instead of GetNextArrayElement().
A tenet of naming is that difficulty in selecting a proper name may indicate that you need to further analyze or define the purpose of an item. Make names long enough to be meaningful but short enough to avoid being wordy. Programmatically, a unique name serves only to differentiate one item from another. Expressive names function as an aid to the human reader; therefore, it makes sense to provide a name that the human reader can comprehend. However, be certain that the names chosen are in compliance with the applicable language's rules and standards.
Suggested naming techniques: Routines - Avoid elusive names that are open to subjective interpretation, such as Analyze() for a routine, or xxK8 for a variable. Such names contribute to ambiguity more than abstraction. In object-oriented languages, it is redundant to include class names in the name of class properties, such as Book.BookTitle. Instead, use Book.Title.
Use the verb-noun method for naming routines that perform some operation on a given object, such as CalculateInvoiceTotal().
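A minimal Python sketch of the "what, not how" and verb-noun advice above (adapting the document's PascalCase examples to Python's snake_case convention; the student list and invoice structure are hypothetical):

```python
# Hypothetical data for illustration.
students = ["Asha", "Ravi", "Meera"]

# The name says "what" (next student), not "how" (next array element).
def get_next_student(index):
    """Return the student at the given position, or None when exhausted."""
    if 0 <= index < len(students):
        return students[index]
    return None

# Verb-noun naming: the routine performs an operation on a given object.
def calculate_invoice_total(line_items):
    """Sum the (quantity, unit_price) pairs of an invoice."""
    return sum(qty * price for qty, price in line_items)
```

If the backing store later changes from a list to a database query, the caller-facing name get_next_student still tells the truth, which is exactly the abstraction the guideline is after.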
In languages that permit function overloading, all overloads should perform a similar function. For those languages that do not permit function overloading, establish a naming standard that relates similar functions. Variables - Append computation qualifiers (Avg, Sum, Min, Max, Index) to the end of a variable name where appropriate.
Use customary opposite pairs in variable names, such as min/max, begin/end, and open/close.
Since most names are constructed by concatenating several words together, use mixed-case formatting to simplify reading them. In addition, to help distinguish between variables and routines, use Pascal casing (CalculateInvoiceTotal) for routine names where the first letter of each word is capitalized. For variable names, use camel casing (documentFormatType) where the first letter of each word except the first is capitalized.
Boolean variable names should contain Is, which implies Yes/No or True/False values, such as fileIsFound. Avoid using terms such as Flag when naming status variables, which differ from Boolean variables in that they may have more than two possible values; besides, Is is just two letters rather than four. Instead of documentFlag, use a more descriptive name such as docFormatType.
Even for a short-lived variable that may appear in only a few lines of code, still use a meaningful name. Use single-letter variable names, such as i, or j, for short-loop indexes only.
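The Boolean, status-variable, and loop-index rules above might look like this in Python (the format codes are hypothetical):

```python
# Boolean names read as yes/no questions.
file_is_found = True        # clearly a True/False value

# A status variable can take more than two values, so it gets a
# descriptive name rather than a vague "flag".
doc_format_type = "pdf"     # e.g. "pdf", "docx", "txt" (illustrative codes)

# Single-letter names are acceptable only as short-loop indexes.
squares = []
for i in range(3):
    squares.append(i * i)
```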
If using Charles Simonyi's Hungarian Naming Convention, or some derivative thereof, develop a list of standard prefixes for the project to help developers consistently name variables. For more information, see "Hungarian Notation."
For variable names, it is sometimes useful to include notation that indicates the scope of the variable, such as prefixing a g_ for global variables and m_ for module-level variables.
Constants should be all uppercase with underscores between words, such as NUM_DAYS_IN_WEEK. Also, begin groups of enumerated types with a common prefix, such as FONT_ARIAL and FONT_ROMAN.
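A short Python sketch of the constant-naming rule; note that Python's enum module can express the "common prefix" grouping directly (the font values here are arbitrary):

```python
from enum import Enum

# Constants: all uppercase with underscores between words.
NUM_DAYS_IN_WEEK = 7

# Related enumerated values grouped under one type; the FONT_ prefix
# mirrors the document's convention, though in Python the class name
# alone (Font.ARIAL) would already provide the grouping.
class Font(Enum):
    FONT_ARIAL = 1
    FONT_ROMAN = 2

def days_in_weeks(weeks):
    return weeks * NUM_DAYS_IN_WEEK
```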
Tables: When naming tables, express the name in the singular form. For example, use Employee instead of Employees. When naming columns of tables, do not repeat the table name; for example, avoid having a field called EmployeeLastName in a table called Employee.
Do not incorporate the data type in the name of a column. This will reduce the amount of work needed should it become necessary to change the data type later.
Do not prefix stored procedures with sp_, because this prefix is reserved for identifying system-stored procedures. In Transact-SQL, do not prefix variables with @@, which should be reserved for truly global variables such as @@IDENTITY.
Minimize the use of abbreviations. If abbreviations are used, be consistent in their use. An abbreviation should have only one meaning and likewise, each abbreviated word should have only one abbreviation. For example, if using min to abbreviate minimum, do so everywhere and do not later use it to abbreviate minute.
When naming functions, include a description of the value being returned, such as GetCurrentWindowName().
File and folder names, like procedure names, should accurately describe what purpose they serve.
Avoid reusing names for different elements, such as a routine called ProcessSales() and a variable called iProcessSales. Avoid homonyms when naming elements to prevent confusion during code reviews, such as write and right. When naming elements, avoid using commonly misspelled words. Also, be aware of differences that exist between American and British English, such as color/colour and check/cheque. Avoid using typographical marks to identify data types, such as $ for strings or % for integers.
Comments: Software documentation exists in two forms, external and internal. External documentation is maintained outside of the source code, such as specifications, help files, and design documents. Internal documentation is composed of comments that developers write within the source code at development time.

One of the challenges of software documentation is ensuring that the comments are maintained and updated in parallel with the source code. Although properly commenting source code serves no purpose at run time, it is invaluable to a developer who must maintain a particularly intricate or cumbersome piece of software.
Following are recommended commenting techniques:
When modifying code, always keep the commenting around it up to date.
At the beginning of every routine, it is helpful to provide standard, boilerplate comments, indicating the routine's purpose, assumptions, and limitations. A boilerplate comment should be a brief introduction to understand why the routine exists and what it can do.
Avoid adding comments at the end of a line of code; end-line comments make code more difficult to read. However, end-line comments are appropriate when annotating variable declarations. In this case, align all end-line comments at a common tab stop. Avoid using clutter comments, such as an entire line of asterisks. Instead, use white space to separate comments from code. Avoid surrounding a block comment with a typographical frame. It may look attractive, but it is difficult to maintain.
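As a sketch, here is how the boilerplate-comment and end-line-comment advice above might look in Python, using a docstring as the routine's standard header (the discount rule itself is hypothetical):

```python
def apply_discount(price, rate):
    """Return the price reduced by the given rate.

    Purpose:     compute a discounted price for display.
    Assumptions: rate is intended to be a fraction between 0 and 1.
    Limitations: does not round to currency precision.
    """
    max_rate = 0.9    # end-line comments are reserved for declarations,
    min_rate = 0.0    # aligned at a common tab stop.
    clamped = min(max(rate, min_rate), max_rate)
    return price * (1 - clamped)
```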
Prior to deployment, remove all temporary or extraneous comments to avoid confusion during future maintenance work. If you need comments to explain a complex section of code, examine the code to determine if you should rewrite it. If at all possible, do not document bad code—rewrite it. Although performance should not typically be sacrificed to make the code simpler for human consumption, a balance must be maintained between performance and maintainability. Use complete sentences when writing comments. Comments should clarify the code, not add ambiguity.
Comment as you code, because most likely there won't be time to do it later. Also, should you get a chance to revisit code you've written, that which is obvious today probably won't be obvious six weeks from now. Avoid the use of superfluous or inappropriate comments, such as humorous sidebar remarks.
Use comments to explain the intent of the code. They should not serve as inline translations of the code. Comment anything that is not readily obvious in the code. To prevent recurring problems, always use comments on bug fixes and work-around code, especially in a team environment.
Use comments on code that consists of loops and logic branches. These are key areas that will assist the reader when reading source code. Separate comments from comment delimiters with white space. Doing so will make comments stand out and easier to locate when viewed without color clues.
Throughout the application, construct comments using a uniform style, with consistent punctuation and structure.
Notes: Despite the availability of external documentation, source code listings should be able to stand on their own because hard-copy documentation can be misplaced. External documentation should consist of specifications, design documents, change requests, bug history, and the coding standard that was used.
Format: Formatting makes the logical organization of the code stand out. Taking the time to ensure that the source code is formatted in a consistent, logical manner is helpful to yourself and to other developers who must decipher the source code. Establish a standard size for an indent, such as four spaces, and use it consistently. Align sections of code using the prescribed indentation. Use a monospace font when publishing hard-copy versions of the source code. Except for constants, which are best expressed in all uppercase characters with underscores, use mixed case instead of underscores to make names easier to read. Align open and close braces vertically where brace pairs align.
You can also use a slanting style, where open braces appear at the end of the line and close braces appear at the beginning of the line.
Whichever style is chosen, use that style throughout the source code. Indent code along the lines of logical construction. Without indenting, code becomes difficult to follow. Indenting the code yields easier-to-read code.
Establish a maximum line length for comments and code to avoid having to scroll the source code window and to allow for clean hard-copy presentation. Use spaces before and after most operators when doing so does not alter the intent of the code. For example, an exception is the pointer notation used in C++. Put a space after each comma in comma-delimited lists, such as array values and arguments, when doing so does not alter the intent of the code. For example, an exception is an ActiveX® Data Object (ADO) Connection argument.

Use white space to provide organizational clues to source code. Doing so creates "paragraphs" of code, which aid the reader in comprehending the logical segmenting of the software.
When a line is broken across several lines, make it obvious that the line is incomplete without the following line.
Where appropriate, avoid placing more than one statement per line. An exception is a loop in C, C++, Visual J++®, or JScript®, such as for (i = 0; i < 100; i++).
When writing HTML, establish a standard format for tags and attributes, such as using all uppercase for tags and all lowercase for attributes. As an alternative, adhere to the XHTML specification to ensure all HTML documents are valid. Although there are file size trade-offs to consider when creating Web pages, use quoted attribute values and closing tags to ease maintainability. When writing SQL statements, use all uppercase for keywords and mixed case for database elements, such as tables, columns, and views.
Divide source code logically between physical files. In ASP, use script delimiters around blocks of script rather than around each line of script or interspersing small HTML fragments with server-side scripting. Using script delimiters around each line or interspersing HTML fragments with server-side scripting increases the frequency of context switching on the server side, which hampers performance and degrades code readability. Put each major SQL clause on a separate line so statements are easier to read and edit.
Do not use literal numbers or literal strings, such as For i = 1 To 7. Instead, use named constants, such as For i = 1 To NUM_DAYS_IN_WEEK, for ease of maintenance and understanding. Break large, complex sections of code into smaller, comprehensible modules.
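The magic-number rule above, sketched in Python (the sales data is illustrative):

```python
NUM_DAYS_IN_WEEK = 7  # named constant instead of a bare literal 7

def weekly_total(daily_sales):
    """Sum exactly one week of daily sales figures."""
    # Using the constant makes the intent obvious and gives a single
    # place to change, instead of a mysterious 7 scattered through code.
    return sum(daily_sales[:NUM_DAYS_IN_WEEK])
```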
Programming Practices: Experienced developers follow numerous programming practices, or rules of thumb, which are typically derived from hard-learned lessons. The practices listed below are not all-inclusive, and should not be used without due consideration. Veteran programmers deviate from these practices on occasion, but not without careful consideration of the potential repercussions. Using the best programming practice in the wrong context can cause more harm than good.
To conserve resources, be selective in the choice of data type to ensure the size of a variable is not excessively large.
Keep the lifetime of variables as short as possible when the variables represent a finite resource for which there may be contention, such as a database connection.
Keep the scope of variables as small as possible to avoid confusion and to ensure maintainability. Also, when maintaining legacy source code, the potential for inadvertently breaking other parts of the code can be minimized if variable scope is limited.
Use variables and routines for one and only one purpose. In addition, avoid creating multipurpose routines that perform a variety of unrelated functions.
When writing classes, avoid the use of public variables. Instead, use procedures to provide a layer of encapsulation and also to allow an opportunity to validate value changes.
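In Python this encapsulation advice might be sketched as follows; the validation rule is purely illustrative:

```python
class Account:
    """Avoids a public balance field; all changes go through methods,
    giving the class an opportunity to validate value changes."""

    def __init__(self):
        self._balance = 0  # leading underscore: internal by convention

    @property
    def balance(self):
        """Read-only view of the balance."""
        return self._balance

    def deposit(self, amount):
        # Validation that a bare public field could never enforce.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
```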
When using objects pooled by MTS, acquire resources as late as possible and release them as soon as possible. As such, you should create objects as late as possible, and destroy them as early as possible to free resources. When using objects that are not being pooled by MTS, it is necessary to examine the expense of the object creation and the level of contention for resources to determine when resources should be acquired and released. Use only one transaction scheme, such as MTS or SQL Server™, and minimize the scope and duration of transactions.
Be wary of using ASP Session variables in a Web farm environment. At a minimum, do not place objects in ASP Session variables because session state is stored on a single machine. Consider storing session state in a database instead.
Stateless components are preferred when scalability or performance are important. Design the components to accept all the needed values as input parameters instead of relying upon object properties when calling methods. Doing so eliminates the need to preserve object state between method calls. When it is necessary to maintain state, consider using alternative methods, such as maintaining state in a database.
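A minimal contrast between the stateful and stateless styles described above (the tax calculation is a hypothetical example):

```python
# Stateful style: the result depends on a property set beforehand,
# so state must be preserved between calls (and breaks if it isn't).
class TaxCalculatorStateful:
    def __init__(self):
        self.rate = None  # must be assigned before compute() is called

    def compute(self, amount):
        return amount * self.rate

# Stateless style: everything the call needs arrives as input
# parameters, so nothing must be preserved between method calls.
def compute_tax(amount, rate):
    return amount * rate
```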
Do not open data connections using a specific user's credentials. Connections that have been opened using such credentials cannot be pooled and reused, thus losing the benefits of connection pooling.
Avoid the use of forced data conversion, sometimes referred to as variable coercion or casting, which may yield unanticipated results. This occurs when two or more variables of different data types are involved in the same expression. When it is necessary to perform a cast for other than a trivial reason, that reason should be provided in an accompanying comment.
Develop and use error-handling routines. Be specific when declaring objects, such as ADODB.Recordset instead of just Recordset, to avoid the risk of name collisions.
Require the use of Option Explicit in Visual Basic and VBScript to encourage forethought in the use of variables and to minimize errors resulting from typographical mistakes.
Avoid the use of variables with application scope. Use RETURN statements in stored procedures to help the calling program know whether the procedure worked properly. Use early binding techniques whenever possible. Use Select Case or Switch statements in lieu of repetitive checking of a common variable using If…Then statements.
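Python has no Select Case statement, but the same "stop repeatedly checking one variable" advice applies; a dictionary dispatch (or, in Python 3.10+, a match statement) plays the equivalent role. The regions and costs here are made up:

```python
# Repetitive If...Then checks of a single common variable:
def shipping_cost_if(region):
    if region == "domestic":
        return 5
    elif region == "europe":
        return 15
    elif region == "asia":
        return 20
    return 25

# Equivalent of Select Case / switch: one table, one lookup.
SHIPPING_COST = {"domestic": 5, "europe": 15, "asia": 20}

def shipping_cost(region):
    return SHIPPING_COST.get(region, 25)  # 25 is the default case
```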
Explicitly release object references.
Data-Specific: Never use SELECT *. Always be explicit in which columns to retrieve, and retrieve only the columns that are required. Refer to fields by name; do not reference fields by their ordinal placement in a Recordset. Use stored procedures in lieu of SQL statements in source code to leverage the performance gains they provide. Use a stored procedure with output parameters instead of single-record SELECT statements when retrieving one row of data. Verify the row count when performing DELETE operations. Perform data validation at the client during data entry; doing so avoids unnecessary round trips to the database with invalid data.
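Two of these rules, explicit column lists and verifying the DELETE row count, can be sketched with Python's built-in sqlite3 module; the Employee table and its contents are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (Id INTEGER, LastName TEXT, Salary REAL)")
conn.execute("INSERT INTO Employee VALUES (1, 'Rao', 50000.0)")

# Explicit column list instead of SELECT *: the query fetches only
# what it needs and keeps working if unrelated columns are added later.
row = conn.execute("SELECT Id, LastName FROM Employee").fetchone()

# Verify the row count when performing DELETE operations.
cur = conn.execute("DELETE FROM Employee WHERE Id = 1")
deleted = cur.rowcount
```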
Avoid using functions in WHERE clauses. If possible, specify the primary key in the WHERE clause when updating a single row. When using LIKE, do not begin the string with a wildcard character because SQL Server will not be able to use indexes to search for matching values. Use WITH RECOMPILE in CREATE PROC when a wide variety of arguments are passed, because the plan stored for the procedure might not be optimal for a given set of parameters.
Stored procedure execution is faster when you pass parameters by position (the order in which the parameters are declared in the stored procedure) rather than by name. Use triggers only for data integrity enforcement and business rule processing and not to return information. After each data modification statement inside a transaction, check for an error by testing the global variable @@ERROR.
Use forward-only/read-only recordsets. To update data, use SQL INSERT and UPDATE statements.
Never hold locks pending user input. Use uncorrelated subqueries instead of correlated subqueries. Uncorrelated subqueries are those where the inner SELECT statement does not rely on the outer SELECT statement for information. In uncorrelated subqueries, the inner query is run once instead of being run for each row returned by the outer query.
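An uncorrelated subquery of the kind described above, again demonstrated through sqlite3 (the Orders table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Id INTEGER, Amount REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(1, 10.0), (2, 30.0), (3, 20.0)])

# Uncorrelated subquery: the inner SELECT AVG(...) never references the
# outer row, so the engine can evaluate it once rather than per row.
rows = conn.execute(
    "SELECT Id FROM Orders "
    "WHERE Amount > (SELECT AVG(Amount) FROM Orders) "
    "ORDER BY Id"
).fetchall()
```

Here the average amount is 20.0, so only order 2 qualifies.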
ADO-Specific
Tune the RecordSet.CacheSize property to what is needed. Using too small or too large a setting will adversely impact the performance of an application.
Bind columns to field objects when looping through recordsets. For Command objects, describe the parameters manually instead of using Parameters.Refresh to obtain parameter information.
Explicitly close ADO Recordset and Connection objects to ensure that connections are promptly returned to the connection pool for use by other processes. Use adExecuteNoRecords for non-row-returning commands.

Solution Design: Every programmer working on a software project is involved in design. The lead developer is in charge of designing the overall architecture of the entire program. The senior programmers are in charge of designing their own large areas. And the junior programmers are in charge of designing their parts of the program, even if they’re as simple as one part of one file. There is even a certain amount of design involved in writing a single line of code.
Even when you are programming all by yourself, there is still a design process that goes on. Sometimes you make a decision immediately before your fingers hit the keyboard, and that’s the whole process. Sometimes you think about how you’re going to write the program when you’re in bed at night.

There are three broad mistakes that software designers make when attempting to cope
with the Law of Change, listed here in order of how common they are:
1. Writing code that isn’t needed
2. Not making the code easy to change
3. Being too generic

Don’t write code until you actually need it, and remove any code that isn’t being used.

One of the great killers of software projects is what we call “rigid design.” This is when a programmer designs code in a way that is difficult to change. There are two ways to get a rigid design:
1. Make too many assumptions about the future.
2. Write code without enough design.

Code should be designed based on what you know now, not on what you
think will happen in the future.

When faced with the fact that their code will change in the future, some developers attempt to solve the problem by designing a solution so generic that (they believe) it will accommodate every possible future situation. We call this “overengineering.”
The dictionary defines overengineering as a combination of “over” (meaning “too much”) and “engineer” (meaning “design and build”). So, per the dictionary, it means designing or building too much for your situation.

Be only as generic as you know you need to be right now. There is a method of software development that avoids all three mistakes by its very nature, called “incremental development and design.” It involves designing and building a system piece by piece, in order.
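A tiny, hypothetical sketch of the same idea (all names are invented): the generic version anticipates formats nobody has asked for, while the incremental version solves exactly today's problem.

```python
# Overengineered: format options and speculative branches for futures that
# may never arrive. Every unused branch still has to be maintained and tested.
class GenericExporter:
    def __init__(self, fmt="csv", compression=None, hooks=()):
        self.fmt, self.compression, self.hooks = fmt, compression, hooks

    def export(self, rows):
        if self.fmt != "csv":
            raise NotImplementedError("speculative formats, never actually used")
        return "\n".join(",".join(str(v) for v in row) for row in rows)

# Incremental: exactly today's requirement. If a second format is ever truly
# needed, the change will be small and guided by a real requirement.
def export_csv(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

print(export_csv([(1, "a"), (2, "b")]))  # "1,a\n2,b"
```

Both produce the same CSV today; the difference is how much code you must read, test, and carry forward to get it.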

Conclusion: Using solid coding techniques and good programming practices to create high-quality code plays an important role in software quality and performance. In addition, by consistently applying a well-defined coding standard and proper coding techniques, and holding routine code reviews, a team of programmers working on a software project is more likely to yield a software system that is easier to comprehend and maintain. The ease of maintenance of any piece of software is proportional to the simplicity of its individual pieces.

Feel free to contact me at ravindrapande@gmail.com. I would like to know whether I am missing an important angle here, and I welcome your views on my writing as well.

Tuesday, May 30, 2017

Big data analytics



Big data analytics is an emerging topic and a fast-growing market need. In the shortest terms, big data analytics is the process of examining large amounts of data to uncover hidden patterns, correlations, and other insights. With today’s technology, it’s possible to analyze your data and get answers from it almost immediately – an effort that’s slower and less efficient with more traditional business intelligence solutions.

Regardless of how one defines it, the phenomenon of Big Data is ever more present, ever more pervasive, and ever more important. There is enormous value potential in Big Data: innovative insights, improved understanding of problems, and countless opportunities to predict—and even to shape—the future. Data Science is the principal means to discover and tap that potential. Data Science provides ways to deal with and benefit from Big Data: to see patterns, to discover relationships, and to make sense of stunningly varied images and information.

Not everyone has studied statistical analysis at a deep level. People with advanced degrees in applied mathematics are not a commodity. Relatively few organizations have committed resources to large collections of data gathered primarily for the purpose of exploratory analysis. And yet, while applying the practices of Data Science to Big Data is a valuable differentiating strategy at present, it will be a standard core competency in the not-so-distant future. How does an organization operationalize quickly to take advantage of this trend? That is exactly what we should discuss. India Training Services has been listening to the industry and organizations, observing the multi-faceted transformation of the technology landscape, and doing direct research in order to create curriculum and content to help individuals and organizations transform themselves. For the domain of Data Science and Big Data Analytics, our educational strategy balances three things:

  • People, especially in the context of data science teams
  • Processes, such as the analytic lifecycle approach discussed here
  • Tools and technologies, with an emphasis on proven analytic tools

The concept of big data has been around for years; most organizations now understand that if they capture all the data that streams into their businesses, they can apply analytics and get significant value from it. But even in the 1950s, decades before anyone uttered the term “big data,” businesses were using basic analytics (essentially numbers in a spreadsheet that were manually examined) to uncover insights and trends.
The new benefits that big data analytics brings to the table, however, are speed and efficiency. Whereas a few years ago a business would have gathered information, run analytics and unearthed information that could be used for future decisions, today that business can identify insights for immediate decisions. The ability to work faster – and stay agile – gives organizations a competitive edge they didn’t have before.
As analysts, let’s start with a definition of big data: Big Data is high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization.
So we are discussing:

  • Volume: Size of data (how big it is)
  • Velocity: How fast data is being generated
  • Variety: Variation of data types, including source, format, and structure
  • Variability: How rapidly the data itself is changing (I have added this)

There is a lot of data, it is coming into the system rapidly, and it comes from many different sources in many different formats.  The definition may seem vague given that it is describing a technical item, but to accurately capture the scope of Big Data the definition itself must be “big.”
IT companies are investing billions of dollars into research and development for Big Data, Business Intelligence (BI), data mining, and analytic processing technologies. This fact underscores the importance of accessing and making sense of Big Data in a fast, agile manner. Big Data is important; those who can harness Big Data will have the edge in critical decision making. Companies utilizing advanced analytics platforms to gain real value from Big Data will grow faster than their competitors and seize new opportunities.

The scenario is changing: explosive data growth by itself, however, does not accurately describe how data is changing; the format and structure of data are changing too. Rather than being neatly formatted, cleaned, and normalized data in a corporate database, data now arrives as raw, unstructured text via Twitter tweets on smartphones, spatial data from tracking devices, Radio Frequency Identification (RFID) devices, and audio and image files uploaded via smart devices.
A mission-critical example: NASA has reportedly accumulated so much data from space probes, generating such a backlog, that scientists are having difficulty processing and analyzing the data before the storage media it resides on physically degrades.
Traditional BI tools that rely exclusively on well-defined data warehouses are no longer sufficient. A well-established RDBMS does not effectively manage large datasets containing unstructured and semi-structured formats. To support Big Data, modern analytic processing tools must
  • Shift away from traditional, rearward-looking BI tools and platforms to more forward-thinking analytic platforms
  • Support a data environment that is less focused on integrating with only traditional, corporate data warehouses and more focused on easy integration with external sources
  • Support a mix of structured, semi-structured, and unstructured data without complex, time-consuming IT engineering efforts
  • Process data quickly and efficiently to return answers before the business opportunity is lost
  • Present the business user with an interface that doesn’t require extensive IT knowledge to operate

Fortunately, IT vendors and the IT open source community are stepping up to the challenge of Big Data and have created tools that meet these requirements. Popular software tools include:
  • Hadoop: Open-source software from the Apache Software Foundation to store and process large non-relational data sets via a large, scalable distributed model. Commercialized Hadoop distributions are also available.
  • NoSQL: A class of database systems that are optimized to process large unstructured and semi-structured data sets. Commercialized NoSQL distributions are available.
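To make Hadoop's processing model concrete, here is a toy, single-process sketch of MapReduce word counting in Python. It is purely illustrative; a real Hadoop job distributes the map, shuffle/sort, and reduce phases across a cluster and reads its input from HDFS:

```python
from itertools import groupby

def map_phase(line):
    # Emit (word, 1) pairs, like a Hadoop mapper.
    for word in line.lower().split():
        yield word, 1

def reduce_phase(word, counts):
    # Sum all counts for one key, like a Hadoop reducer.
    return word, sum(counts)

lines = ["big data big insights", "data drives insights"]

# The framework's shuffle/sort step groups identical keys together.
pairs = sorted(kv for line in lines for kv in map_phase(line))
result = dict(reduce_phase(word, (c for _, c in grp))
              for word, grp in groupby(pairs, key=lambda kv: kv[0]))

print(result)  # {'big': 2, 'data': 2, 'drives': 1, 'insights': 2}
```

The appeal of the model is that the mapper and reducer see only their own slice of data, so the same two small functions scale from this toy example to terabytes spread over many machines.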
The impact of cloud computing on Big Data is huge. Data sources can be from public, private, or community clouds. For example, customer demographic data can come from a public cloud, but complex scientific collection information or industry-sensitive data would be from community clouds. Any Big Data Analytic platform should be able to access any cloud platform and be able to publish results to any environment.
Unlocking the value in data is the key to providing value to the business. Too often IT infrastructure folks focus on data capacity or throughput speed. Business Intelligence vendors extol the benefits of executive-only dashboards and visually stunning graphical reports. While both perspectives have some merit, they only play a limited role in the overall mission of bringing real value to those in the company who need it.
Value is added by using an approach and platform to bring Big Data into the hands of those who need it in a fast, agile manner to answer the right business questions at the right time. Knowing what data is needed to answer questions and where to find it is critical; having the analytic tools to capitalize on that knowledge is even more critical. It is through those platforms that real value is realized from Big Data.
In the Big Data world, technology alone doesn’t generate real value from Big Data. Data analysts, empowered with the right analytic technology platform, humanize Big Data, which is how companies realize value. Analytic platforms and tools make extracting value from Big Data possible. Important benefits that the analytics platform should provide to businesses:

  • Improving the self-sufficiency of decision makers to run and share analytic applications with other data users.
  • Data analysts who understand the business should develop good analytic applications that are shared for everyone’s benefit.
  • Injecting Big Data into strategic decisions without waiting months for an IT infrastructure and data project. The tool should put the data into the hands of decision makers so that businesses can identify and capitalize on opportunities.
  • Delivering the power of predictive analytics to everyone, not just a few executive decision makers far removed from operations. Ensuring that the right data is readily available to all authorized parties leads to making the best possible decisions.


The nature of Big Data is large data, usually from multiple sources. Some data will come from internal sources, but increasingly, data is coming from outside sources.

Let’s start understanding the tools and techniques available. Share the ones you use and your experiences at ravindrapande@gmail.com so that we can make this blog a living, useful reference for the next chapter. Thanks a lot for writing to me about my last blog; I have appreciated and applied the changes accordingly. Feel free to visit http://www.indiatrainingservices.in/ as well for suitable training.

Monday, May 15, 2017

Machine-to-machine communications



Machine-to-Machine (M2M) communication is the next generation of telemetry, used for the automatic transmission of data gathered from remote sensors to a central unit for analysis, either by human beings or by software agents. Unlike traditional Human-to-Human (H2H) communication, a human is not the typical initiator of the communication process; the human is merely the recipient, and possibly the respondent, for the output. In contrast to conventional telemetry, M2M encompasses a broad spectrum of applications rather than being relegated to highly esoteric ones such as aerospace, water treatment, and natural gas pipeline monitoring. Furthermore, M2M communications systems are composed of a myriad of machines connected to the Internet using public fixed and/or wireless communications infrastructure. The latest commercial forecasts are for fifty billion machines connected to the Internet worldwide by the end of the decade.

A machine-to-machine (M2M) communications ecosystem is a large-scale network with diverse applications and a massive number of interconnected heterogeneous machines (e.g., sensors, vending machines, and vehicles). Cellular wireless technologies are a potential candidate for providing the last-mile M2M connectivity. Thus, the Third-Generation Partnership Project (3GPP) and IEEE 802.16p have both specified an overall cellular M2M reference architecture. The European Telecommunications Standards Institute (ETSI), in contrast, has defined a service-oriented M2M architecture. This article reviews and compares the three architectures. The 3GPP and 802.16p M2M architectures, which are functionally equivalent, complement the ETSI one. Therefore, we propose combining the ETSI and 3GPP architectures, yielding a cellular-centric M2M service architecture. Our proposed architecture advocates the use of M2M relay nodes as data concentrators.

The M2M relay implements a tunnel-based aggregation scheme which coalesces data from several machines destined to the same tunnel exit-point. The aggregation scheme is also employed at the M2M gateway and the cellular base station. Numerical results show a significant reduction in protocol overheads as compared to not using aggregation at the expense of packet delay. However, the delay rapidly decreases with increasing machine density.

Let’s discuss these one by one, starting with the underlying communications.
Machine-to-machine (M2M) communication allows machines and devices to pass along small amounts of information to other machines. This includes communication to and from smoke detectors, door locks, alarms, water meters, agricultural sensors, smart buildings, smart lighting, environmental sensors, and more. Every IoT application has a different set of constraints in terms of wireless range and energy consumption it needs to achieve. Therefore, M2M network architecture is about properly utilizing radio resources. Each network listed below utilizes a different method for handling these resources. Cellular, for instance, is the only type of ubiquitous M2M network that uses its own licensed frequency space. The rest typically coexist using free, unlicensed frequencies. Due to regulatory constraints, companies are not allowed to design their networks to have an unfair advantage over other networks, so the question for these companies when creating network architecture is how to utilize the unlicensed spectrum efficiently.
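Because these radios carry only small amounts of information, M2M payloads are typically packed into a few binary bytes rather than sent as verbose text, which is one concrete way of using radio resources efficiently. A minimal sketch with an invented field layout (the 5-byte format here is purely illustrative, not any standard):

```python
import struct

def pack_reading(device_id, temperature_c, battery_pct):
    # Big-endian: 2-byte device id, 2-byte temperature in tenths of a degree
    # (signed), 1-byte battery percent. 5 bytes total, versus roughly 50 bytes
    # for an equivalent JSON message.
    return struct.pack(">HhB", device_id, round(temperature_c * 10), battery_pct)

def unpack_reading(payload):
    device_id, temp_tenths, battery = struct.unpack(">HhB", payload)
    return device_id, temp_tenths / 10, battery

payload = pack_reading(42, 21.5, 87)
print(len(payload))              # 5
print(unpack_reading(payload))   # (42, 21.5, 87)
```

Shrinking each message by a factor of ten directly reduces airtime, which matters for every unlicensed-band network discussed below.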

Below, we’ll walk through the benefits and considerations of a few M2M network architectures currently in use. As you can see, there are many IoT networks available. Each of them is trying a unique approach to solve a standard engineering problem: how to trade off cost, performance, and complexity. Every engineer knows you can’t have the best of all of those things—but you can create a network that will cater to specific applications. We’re eager to see how these network architectures improve, evolve, and grow in the coming years. 

Cellular communication (communication over a cellular network) has dominated the M2M space for a long time. The primary benefit of cellular is ubiquitous coverage, but its major disadvantages are short battery life, high-cost end points, and high recurring fees. Any battery-powered application will have a hard time using a cell modem. Cellular networks are constantly changing as well. For example, when M2M started, most of the cellular world was using GSM-based technology (which is now being phased out). GSM has mostly been replaced by 3G and LTE, and there is talk that those technologies will eventually be phased out for M2M applications and replaced by LTE-M. So companies who deployed cellular modems should be aware that their hardware may not be supported in coming years.

A great way to understand this is AT&T’s program for IoT enthusiasts with industrial applications, which started to roll out in late 2016: https://starterkit.att.com/
 
WiFi has become a more prevalent M2M option in the last five years. This is due in part to new WiFi chip manufacturers who are now targeting the space by making lower cost, lower power chip sets with a very simple interface. With these new chips, you don’t need a computer and a WiFi driver; you can use a universal asynchronous receiver/transceiver (UART) instead.  But while cellular coverage is ubiquitous, WiFi coverage is not, which is one of WiFi’s main downfalls in the M2M market. For example, if you’re building a keycard door lock for every apartment in a New York high-rise and using WiFi, provisioning is going to be a nightmare.

Bluetooth: the option that has become available in the last four years is Bluetooth Low Energy (BLE), also called Bluetooth 4.0 or Bluetooth Smart. BLE uses considerably less power than traditional Bluetooth, but like its predecessor, users are fairly limited by range and packet sizes. BLE is meant to transmit only very small bits of information online through a phone or computer. That makes BLE ideal for applications like heart rate monitors or fitness trackers, but it’s not ideal for anything that needs a stronger power draw or wider range.

ZigBee is a mesh network protocol that is trying to solve the issue of range. While it offers considerably better range than something like BLE, there are range constraints and downfalls that come with the mesh network. For example, some of the nodes in a mesh network are there just to relay information, which causes a constant (and somewhat unnecessary) power draw. This makes ZigBee a bad candidate for battery-powered devices but good for something like electric grid monitoring, which has an unlimited power source. In short, ZigBee continues to be adopted by some niche markets, but it won’t meet the needs of everyone in the M2M space.

The low power, wide-area network (LPWAN) space has recently become more saturated – and right now the leader in the group is SIGFOX. This M2M network sends small, slow bursts of data, which makes it ideal for things like alarm systems or simple meters. Due to its asymmetric link budget, the network only allows for limited bi-directionality, so it isn’t able to send data back from the gateway to nodes at the fringes of the network. (This is a problem other LPWAN players are looking to solve.)

LoRaWAN is the M2M protocol created by the LoRa Alliance to create an ecosystem of M2M applications all using the LoRa physical layer. Like SIGFOX, LoRaWAN is an uplink-focused network and thus works well for sensor-based devices. This is partially due to regulations in Europe, which hold every device (including the gateway) to a 1% duty cycle. Because of the regulatory differences here in the U.S., a big segment of the market can be addressed by designing a protocol that allows more “command and control”-based applications. And that’s where we at Link Labs have tried to put our focus.
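The 1% duty cycle mentioned above is easy to put in perspective with some back-of-envelope arithmetic. The one-second airtime per uplink is an assumed figure for illustration; real LoRa airtime depends on spreading factor and payload size:

```python
# A 1% duty cycle means a device may occupy the channel for at most 1% of
# any given period, i.e., 36 seconds of transmit time per hour.
DUTY_CYCLE = 0.01
SECONDS_PER_HOUR = 3600
airtime_per_message = 1.0  # seconds on air per uplink (assumed)

budget = DUTY_CYCLE * SECONDS_PER_HOUR            # 36.0 seconds per hour
max_messages_per_hour = int(budget / airtime_per_message)

print(budget, max_messages_per_hour)  # 36.0 36
```

This is why duty-cycle-limited networks suit occasional sensor readings far better than chatty, command-heavy applications.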

Symphony Link is the IoT network we at Link Labs developed in an effort to solve some of the challenges presented by other M2M architectures. For instance, a single Symphony gateway can talk to 10,000 nodes and thus cover an entire building. Symphony also targets battery life; a node on our network that sends a message every 10 minutes could feasibly last between eight and 10 years, depending on the application.

This is just the first part of IoT communication; many more will come as the market evolves. Feel free to contact me at ravindrapande@gmail.com. I would like to know whether I am missing some important angle in this technology, and I welcome your views on my writing as well.