Forrester Consulting Thought Leadership Paper Commissioned By OPTNET Technologies, October 2010
The major challenge to reaching the IT efficiency objective lies in the rapid growth of business services and their increasing complexity. The quest for more efficient IT has led to a considerable amount of application software integration, facilitated by the appearance of middleware and service-oriented architectures. The consequence is an exponential growth in software size over the past few years and a considerable increase in multi-tiered applications that combine several business services located on different platforms. This is fueled by the decreasing cost of hardware, which helps create a better value-cost ratio for a number of business services that would otherwise not make economic sense. The consequence of this evolution was clearly stated by Watts Humphrey, who concluded a few years ago that software size multiplies by 10 every five years. This has a direct impact on application issues: Humphrey's observation is that if the development process does not evolve in parallel with the size of software, the ratio of errors per thousand lines of code will tend to stay constant, meaning that the total number of errors in an application will effectively double every two years.
The major challenge for an IT organization is to effectively manage and control this complexity. An immense amount of data is collected from monitoring infrastructures and applications, and it has now reached a point where it is beyond the correlating capabilities of human beings. Before IT administrators and engineers can use this data effectively, it needs to be normalized and analyzed by tools. A few years ago, performance and quality-of-service issues were simply dealt with by adding capacity, or "throwing hardware at the problem." As technology complexity taxes the limits of what individuals can manage, we can't simply "throw people at the problem."
At the top of the list is the ability to be proactive and receive performance alerts before the end user is affected. These difficulties clearly stem from the lack of an end-to-end management solution. A typical example is the way that performance alerts are received. While a minority of respondents use end user experience monitoring tools, a majority receive alerts from end user calls to the service desk. Since the end users are already affected, the only possible course of action is purely reactive, and this has important business and financial consequences.
The lack of cooperation between technology teams and the lack of an end-to-end management solution are the direct reasons why many IT organizations, when faced with performance issues in critical applications, have difficulties finding a resolution within 24 hours.
IT management software is the primary tool to control performance and availability. Such management tools must compensate for the issues that many enterprises are finding in managing the performance of n-tier composite applications. The solution must:
- Implement an end-to-end management solution. As business services are increasingly complex, any component supporting the service can potentially fail. It is therefore important to understand which IT components support a given application and to have the ability to monitor and manage them all.
- Promote cooperation between teams. IT organizations manage their infrastructure by discipline, such as network, servers, and databases, rather than by business services and applications. Increasing the focus on business services leads to better cross-divisional cooperation, as the tools present all business service information on a single console or dashboard.
- Provide proactive alerts. A key element is to receive alerts before the issue affects end users. This proactive ability would leave the IT teams enough time to find the root cause of the problem and resolve it.
- Prioritize the critical business services. In many instances, IT organizations have implemented tools that monitor and manage a single aspect of the IT infrastructure, such as the network or databases. The focus should now shift to the business service as a whole, supported by more accurate and effective tools, rather than to any particular aspect of IT.
Proactive alerting is a key element to maintain a quality of service that will help IT reach its efficiency objective. A large majority of IT executives surveyed agreed that this is best achieved by monitoring the end user experience and providing a root-cause identification solution. This allows the IT department to act on problems and hopefully resolve them before they affect end users.
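The proactive alerting described above can be reduced to a simple idea: fire a warning when end user response times approach the service-level limit, not when they cross it, so teams have lead time for root-cause analysis. The sketch below is a minimal, hypothetical illustration of that idea; the function name, thresholds, and the 80% warning fraction are assumptions for illustration, not part of any product described in this paper.

```python
def check_response_times(measurements, sla_ms, warn_fraction=0.8):
    """Classify a window of end-user response times (milliseconds).

    Returns "ok", "warning", or "breach". The warning fires once the
    worst measurement reaches warn_fraction of the SLA limit, giving
    IT teams time to find the root cause before users are affected.
    """
    worst = max(measurements)
    if worst >= sla_ms:
        return "breach"             # reactive territory: users already affected
    if worst >= warn_fraction * sla_ms:
        return "warning"            # proactive alert: act before the breach
    return "ok"

# SLA of 2000 ms; the warning threshold sits at 1600 ms (80%):
print(check_response_times([850, 920, 1700], sla_ms=2000))  # warning
```

In practice the warning condition would look at trends over time rather than a single window, but the principle of alerting below the user-visible limit is the same.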
The complete application performance management solution, however, must include a broad set of features and functions, as illustrated in the following figure.
The key elements of the ideal solution are: monitoring of the complete infrastructure, with an emphasis on network performance from an application standpoint; the ability to perform root-cause analysis in a complex environment, which requires the use of analytics; and a complete dashboard that promotes cooperation between the IT organization's different support teams.
All of these features and functions require that data be collected from a number of sources in the infrastructure and the applications. Because this data will be analyzed as a whole, it needs to be "normalized" so that it acquires a consistent meaning, timeline, and frequency. Beyond the simple integration that allows different software tools to share their data, consistency and normalization can only be achieved through deep integration. This obviously favors solutions that either have the capability to cover all domain areas or the ability to integrate and normalize data from multiple sources.
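The normalization step can be made concrete with a small sketch: monitoring sources report at different, irregular intervals, and before their data can be correlated it must be resampled onto a shared timeline. The code below is a minimal illustration of that idea, assuming last-observation-carried-forward resampling; the function name and sample data are hypothetical, not drawn from any tool discussed in this paper.

```python
from bisect import bisect_right

def normalize(samples, start, end, step):
    """Resample (timestamp, value) pairs onto a fixed-interval timeline.

    Each output slot takes the most recent sample at or before that
    instant (last observation carried forward); slots before the first
    sample are None.
    """
    samples = sorted(samples)
    times = [t for t, _ in samples]
    out = []
    t = start
    while t <= end:
        i = bisect_right(times, t)
        out.append((t, samples[i - 1][1] if i else None))
        t += step
    return out

# Two sources reporting on different schedules (seconds, value):
cpu = [(0, 40), (7, 55), (12, 90)]      # polled at irregular intervals
latency = [(3, 120), (9, 180)]          # event-driven measurements

# Once resampled to a shared 5-second timeline, the two series can be
# correlated slot by slot:
print(normalize(cpu, 0, 15, 5))      # [(0, 40), (5, 40), (10, 55), (15, 90)]
print(normalize(latency, 0, 15, 5))  # [(0, None), (5, 120), (10, 180), (15, 180)]
```

Production tools would also reconcile units and metric semantics, not just timestamps, but the shared-timeline step is what makes cross-source analysis possible at all.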