Hype Cycle for Application Development, 2007

Publication Date: 29 June 2007  ID Number: G00147982

© 2007 Gartner, Inc. and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.

Jim Duggan, Daniel B. Stang, Partha Iyengar, Thomas E. Murphy, Allie Young, David Norton, Mark Driver, L. Frank Kenney, Greta A. James, Mark A. Beyer, Roy W. Schulte, Yefim V. Natis, David Gootzit, Frances Karamouzis, Lorrie Scardino, Michael J. Blechar, David Newman, Joseph Feiman, Neil MacDonald, Donald Feinberg, Ray Valdes, Matt Light, David W. Cearley, David W. McCoy, Jess Thompson

A shift to process and service orientation is altering staffing, tools and methods of software development. In parallel, governance, planning, control and quality assurance techniques are being refined and strengthened to drive more predictability and meet the challenges of global sourcing.



TABLE OF CONTENTS

Analysis .......... 4
  What You Need to Know .......... 4
  The Hype Cycle .......... 4
  The Priority Matrix .......... 4
  On the Rise .......... 6
    Data Service Architectures .......... 6
    Metadata Ontology Management .......... 7
    Information-Centric Infrastructure .......... 8
    SDLC Security Methodologies .......... 9
    SOA Testing .......... 10
    Collaborative Tools for the Software Development Life Cycle .......... 10
    Enterprise Information Management .......... 11
    Application Quality Dashboards .......... 12
    Event-Driven Architecture .......... 13
    Metadata Repositories .......... 14
    RIA Platforms .......... 16
  At the Peak .......... 17
    Application Testing Services .......... 17
    SOA Governance Technologies .......... 19
    Globally Sourced Testing .......... 21
    Model-Driven Architectures .......... 22
    Scriptless Testing .......... 23
    Architected, Model-Driven SODA .......... 24
    Enterprise Architecture Tools .......... 25
    Application Security Testing .......... 26
  Sliding Into the Trough .......... 27
    Project and Portfolio Management .......... 27
    Business Application Package Testing .......... 28
    Agile Development Methodology .......... 29
    Unit Testing .......... 30
    ARAD SODA .......... 31
    SOA .......... 32
  Climbing the Slope .......... 33
    Enterprise Software Change and Configuration Management .......... 33
    Enterprise Portals .......... 33
    Microsoft .NET Application Platform .......... 34
    OOA&D Methodologies .......... 36
    Linux as a Mission-Critical DBMS Platform .......... 37
    Performance Testing .......... 38
    Open-Source Development Tools .......... 38
    Business Process Analysis .......... 39
  Entering the Plateau .......... 40
    Automated Testing .......... 40
    Java Platform, Enterprise Edition .......... 41
  Appendices .......... 43
    Hype Cycle Phases, Benefit Ratings and Maturity Levels .......... 45

Recommended Reading .......... 46


LIST OF TABLES

Table 1. Hype Cycle Phases .......... 45
Table 2. Benefit Ratings .......... 45
Table 3. Maturity Levels .......... 46

LIST OF FIGURES

Figure 1. Hype Cycle for Application Development, 2007 .......... 4
Figure 2. Matrix for Application Development, 2007 .......... 5
Figure 3. Hype Cycle for Application Development, 2006 .......... 43


ANALYSIS

What You Need to Know

Technology and governance advances are improving the speed and quality of software delivery and the business utility of the resulting products. Service orientation is becoming the most common architectural approach. Techniques and tools to improve the planning, measurement, control and reporting of application development and delivery activities are advancing quickly.

The Hype Cycle

Application development activities are changing in two ways: 1) process and service orientation is altering the staffing, tooling and methods being used to carry software from business need to production code, and 2) governance, planning, control and quality assurance (QA) techniques are being refined and strengthened to drive more predictability and to meet the challenges of global sourcing.

Figure 1. Hype Cycle for Application Development, 2007

[Figure: Hype Cycle graphic plotting visibility (y-axis) against time (x-axis) through the phases Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity, positioning the 35 technology profiles in this report along the curve as of June 2007. Key: years to mainstream adoption: less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; obsolete before plateau.]

Source: Gartner (June 2007)

The Priority Matrix

Financial effectiveness has become a major issue for many application groups. They are seeking more-formal processes to help achieve the goal of running IT as a business, with budget


discipline and effective planning techniques that lead to predictable results. Although individually incremental, these changes in governance, planning and control techniques are transformative when taken together across all topic areas.

Service-oriented architecture (SOA) leads to service-oriented development and requires substantial changes in staffing, tooling and practice throughout development organizations. Business process management (BPM) techniques move companies in the same direction. Ultimately, the distinctions between service-oriented development of applications (SODA) and BPM will narrow, as both marginalize the distinctions between development time and runtime systems and processes.

Figure 2. Matrix for Application Development, 2007

[Figure: Priority matrix plotting benefit (transformational, high, moderate, low) against years to mainstream adoption (less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years) for the technologies profiled in this report, as of June 2007.]

Source: Gartner (June 2007)

Page 6: Hype Cycle for Application Development, 2007

Publication Date: 29 June 2007/ID Number: G00147982 Page 6 of 47

© 2007 Gartner, Inc. and/or its Affiliates. All Rights Reserved.

On the Rise

Data Service Architectures

Analysis By: Mark Beyer

Definition: Data services consist of processing routines that provide direct data manipulation pertaining to the delivery, transformation, and logical and semantic reconciliation of data. Unlike point-to-point data integration solutions, data services decouple data storage, security and mode of delivery from one another, as well as from individual applications, delivering these capabilities as independently designed and deployed functionality that can be connected via a registry or composite processing framework. Data services can be used in a networked fashion that is orchestrated through a composite processing model, or designed separately and then reused in various, larger-grained processes.
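The decoupling described above can be illustrated with a minimal sketch: consumers resolve a data service through a registry by name rather than binding to a storage implementation. The names here (`ServiceRegistry`, `customer_lookup`) and the in-memory store are invented for the example.

```python
# Minimal sketch of registry-based decoupling: consumers resolve a data
# service by name, so storage, security and delivery concerns stay behind
# the service boundary instead of being wired into each application.

class ServiceRegistry:
    """Maps service names to independently deployed routines."""

    def __init__(self):
        self._services = {}

    def register(self, name, service):
        self._services[name] = service

    def lookup(self, name):
        return self._services[name]


# A hypothetical data service; its in-memory "store" stands in for whatever
# repository platform the service uses, which could be swapped without
# touching any consumer.
CUSTOMER_STORE = {"c1": {"name": "Acme Corp"}}

def customer_lookup(customer_id):
    return CUSTOMER_STORE.get(customer_id)


registry = ServiceRegistry()
registry.register("customers.lookup", customer_lookup)

# A consumer binds to the service name, not to a storage implementation.
service = registry.lookup("customers.lookup")
print(service("c1"))  # {'name': 'Acme Corp'}
```

In a real composite processing framework the registry would be a shared runtime component, but the dependency direction is the same: applications depend on the registry contract, never on the repository platform.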

Position and Adoption Speed Justification: Data services are, by their nature, a new style of data access strategy that replaces the data management, access and storage duties currently deployed in an application-specific manner. Data services architecture is merely a sub-class or category of SOA that does not form a new architecture, but brings emphasis to the varying services that exist within SOA. Most of the large vendors have announced road maps and plans to pursue some variant of the data service approach, but this is an evolutionary architectural style that does not warrant "rip and replace" at this time and will coexist with current application design techniques. Disillusionment will occur as organizations realize the granularity required to deploy this type of architecture, especially relative to the differences between handling data via a business operational process vs. data handling via industry delivery concepts.

User Advice: Users should focus on delivering a semantic layer that portrays the use of data and information in the organization and, at the same time, begin developing a logical business model. The logical and semantic model should be interpreted to the physical repositories throughout the organization — creating a physical-to-logical-model reconciliation. In 2006, this technology class was focused specifically on information in the former "structured" data class only. In 2007, initial advances in using model-to-model (M2M) language communication via metadata operators are blended into this technology. The M2M introduction caused a temporary retrograde in the technology's position but, at the same time, will accelerate its movement along the cycle. Existing data integration vendors (extraction, transformation and loading [ETL], enterprise information integration [EII] and enterprise application integration) have begun to pursue common metadata repositories used as a core library to deploy all data delivery modes, but have not built machine intelligence into optimization strategies. Organizations should eschew vendor development platforms that deny or refute the requirement for interoperability.

Business Impact: Data services are not an excuse for each organization to write its own, unique database management system (DBMS); most DBMSs both store data and provide ready access. Data services can sever the tight links between application interface development and the more infrastructure-style decisions of database platforms, operating systems (OSs) and hardware. Specifically, the metadata interpretation between business process models, semantic usage models and logical/physical data models will enhance the overall adaptiveness of IT solutions. This will make applications portable to lower-cost repository environments when appropriate, and will create a direct link between the cost of information management and the value of the information delivered, by supplying semantically consistent data and information to any available presentation format. This contrasts with the current scenario, in which monolithic application designs can drive infrastructure costs up because of their dependence on specific platform or DBMS capabilities.

Benefit Rating: Transformational


Market Penetration: Five percent to 20% of target audience

Maturity: Emerging

Sample Vendors: Ab Initio; Business Objects; IBM; Informatica; Oracle

Recommended Reading: "The Emerging Vision for Data Services: Becoming Information-Centric in an SOA World"

"Data Integration Is Key to Successful Service-Oriented Architecture Implementations"

"Service-Oriented Business Applications Require EIM Strategy"

Metadata Ontology Management

Analysis By: Mark Beyer; Michael Blechar

Definition: Metadata ontology management addresses the problem that information assets are created by different processes, defined by different business terms and interpreted through disparate semantics, producing competing taxonomies. Ontology management recognizes that simultaneous metadata descriptions can exist for each information asset and proceeds to reconcile them. The various metadata sources include business process modeling, EII, ETL, metadata repository technologies and others. Ontology management allows business analysts to better leverage the value of these assets, while promoting improved understanding across business units and IT management personnel.
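The reconciliation idea (multiple simultaneous metadata descriptions of one asset, kept rather than collapsed into a single taxonomy) can be sketched as follows; the asset, sources and terms are invented for the example.

```python
# Sketch: three metadata sources describe the same information asset with
# competing terms. Reconciliation keeps every description and exposes a
# merged view keyed by source, rather than forcing one taxonomy to win.

ASSET_METADATA = {
    "orders_table": [
        {"source": "ETL", "term": "SalesOrder"},
        {"source": "BPM", "term": "Customer Order"},
        {"source": "repository", "term": "ORDER_HDR"},
    ]
}

def reconcile(asset):
    """Return the competing terms for an asset, keyed by metadata source."""
    return {d["source"]: d["term"] for d in ASSET_METADATA[asset]}

print(reconcile("orders_table"))
# {'ETL': 'SalesOrder', 'BPM': 'Customer Order', 'repository': 'ORDER_HDR'}
```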

Position and Adoption Speed Justification: Business organizations are just embarking on the use of metadata to determine the value of data points and information delivered through IT systems. One high-business-value use of metadata is the ability to justify and identify how decisions were made, based on the information available at any given time. The new demand for metadata that describes end-user interpretations of "fact" will force the introduction of annotation metadata into daily workflows.

Presently, most metadata management functionality is a feature of existing metadata tools, limited to model extension with end-user-defined columns and metadata versioning, with no workflow or administrative enforcement beyond the development team. With the advent of SOAs and the active use of metadata to control service flows, it will become imperative that the business become involved in linking BPM workflows with information management workflows. This will force the development of new metadata management tools with a radically different business user interface.

User Advice:

1. Identify data management and integration tools that include metadata repository management interfaces, supporting metadata model extensions.

2. Identify data management and integration tools that expose metadata repositories via application programming interfaces (APIs) and service calls, versus metadata import/export functionality only.

3. Acclimate business personnel to their role in creating information assets and the importance of metadata as a precursor to introducing these practices.

4. Initiate a data administration task to capture various business ontologies of integrated information resources with the understanding that ontology evolves continuously.
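Advice points 1 and 2 can be sketched together: a repository whose model can be extended with end-user-defined attributes and whose entries are reachable through a programmatic call rather than only file import/export. The class and attribute names are hypothetical.

```python
# Sketch of advice points 1 and 2: a metadata repository that supports model
# extension (end-user-defined attributes beyond the base model) and exposes
# entries through an API call rather than only batch import/export.

class MetadataRepository:
    def __init__(self):
        self._entries = {}

    def put(self, asset, metadata):
        self._entries[asset] = dict(metadata)

    def extend(self, asset, attribute, value):
        # Model extension: a business user adds an attribute the base
        # metadata model did not define.
        self._entries[asset][attribute] = value

    def get(self, asset):
        # Programmatic access, callable from a service or workflow,
        # versus a file-based import/export cycle.
        return self._entries[asset]


repo = MetadataRepository()
repo.put("orders_table", {"owner": "sales"})
repo.extend("orders_table", "steward", "j.smith")
print(repo.get("orders_table"))  # {'owner': 'sales', 'steward': 'j.smith'}
```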


Business Impact: Tighter integration between business process changes and IT systems changes. Business units and users will be able to better relay their concerns regarding the use of information assets throughout the organization, and business analysts will be able to better assess the risks and benefits that accrue to the business from the maintenance and security of information assets.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Information-Centric Infrastructure

Analysis By: David Newman

Definition: Information-centric infrastructure (ICI) is a technology framework that enables information producers and information consumers to organize, share and exchange any content (structured and unstructured data, for example), anytime, anywhere. It is the technology building block within an organization's enterprise information management (EIM) program. Since different systems use different formats and different standards to share and exchange different types of information, the technologies that make up an ICI ensure that common processes applied to common content will produce similar results.
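The core mechanism (mapping content arriving in different formats into one common model, so the same downstream process handles all of it) can be sketched as below. The formats, field names and sample records are invented for illustration.

```python
# Sketch of the ICI idea: producers emit content in different formats, and a
# normalization layer maps each into one common model, so that common
# processes applied to common content produce similar results.

import xml.etree.ElementTree as ET

def from_csv_row(row):
    # One producer delivers comma-separated records.
    doc_id, title = row.split(",", 1)
    return {"id": doc_id, "title": title}

def from_xml(text):
    # Another producer delivers XML documents.
    root = ET.fromstring(text)
    return {"id": root.get("id"), "title": root.findtext("title")}

# Both sources arrive in the same common model, so any downstream consumer
# can process them identically, regardless of origin format.
records = [
    from_csv_row("d1,Quarterly Forecast"),
    from_xml('<doc id="d2"><title>Supplier Contract</title></doc>'),
]
print([r["id"] for r in records])  # ['d1', 'd2']
```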

Position and Adoption Speed Justification: The vision for an ICI will be adopted by organizations seeking to bring greater balance to their integration activities and to address the cost and complexity issues associated with silo-based, application-centric development. One reason organizations cannot respond as quickly as market conditions dictate is that much of their information has been isolated within applications, each fulfilling its own unique (process-driven) requirements. As demands for access to information sources increase, organizations will use an ICI as their technical foundation to facilitate the convergence of the different types of content required by industry "ecosystems" and trade exchanges. This will help resolve issues around info-glut, and will improve application integration capabilities during migration toward SOAs.

User Advice: Recognize that different project teams use different applications, formats and standards to exchange information. Look for common ways to normalize and extract meaning from all types of content so that it can be exchanged across the organization. Use existing system analysis and designs as starting points to develop common models, which can then be shared by different processing components and system entities. Use existing methods of content-centric processing to identify gaps that need to be filled to support ICI requirements. For instance, determine the usefulness of the Federal Enterprise Architecture Framework Data Reference Model (version 2.0) to your industry — regardless of whether you are a commercial or government organization. Exploit emerging standards (such as XML) for data and metadata interchange, and create a common components library of metadata objects based on corporate standards, thereby promoting wide-scale reuse.

Business Impact: An ICI brings balance to many application-driven environments because it "normalizes" the chaos caused by having different and diverse standards, formats and protocols. It extracts meaning and delivers context so that each content instance can be shared and exchanged to support a variety of business process needs by identifying, abstracting and rationalizing commonalities across content; applying semantics for information exchange and interoperability; and implementing metadata management for discovery, reuse and repurpose. Organizations failing to invest in building out an ICI by 2015 will experience a 30% increase in overhead costs to manage their IT operations. An ICI will make far greater use of emerging


technologies than most companies are used to. It is the inevitable outcome of decoupling application logic from data management requirements (as seen in SOA).

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Recommended Reading: "Key Issues for Information-Centric Infrastructures, 2007"

"Gartner Defines the Information-Centric Infrastructure"

"Information-Centric Infrastructure: Application Integration Via Content"

"Predicts 2007: Information Infrastructure Emerges"

SDLC Security Methodologies

Analysis By: Joseph Feiman

Definition: Software development life cycle (SDLC) security methodologies are based on the principle that security implementation shouldn't be an isolated process, but rather part of a comprehensive software engineering process. The methodologies should make security engineering a measurable, repeatable, predictable and controlled discipline. They also enable the detection, correction and prevention of application vulnerabilities.

Position and Adoption Speed Justification: We expect that the adoption of SDLC security methodologies will follow the standard pattern of methodologies, such as the Capability Maturity Model. Only a few organizations (primarily external service providers — ESPs) will reach the highest level of maturity, while most will remain at the lower levels. Achieving lower levels of maturity will take approximately 18 months or more; in addition, it will take the same amount of time to move up to each of the higher levels.

User Advice: To enable the detection, correction and prevention of security vulnerabilities in applications, ESPs should consider formally adopting SDLC security methodologies to confirm their adherence to practices that embed security into systems and software engineering. Formal adoption may also prove to potential clients that ESPs are competent with secure software engineering. Enterprises' internal IT departments should informally (that is, without certification) consider selecting and adopting the methodologies' best practices that meet their needs and match their means.

Business Impact: The objective of SDLC security methodologies is to reduce security risks by making systems' security engineering a measurable, repeatable, predictable and controlled discipline. The discipline should ensure that security threats and vulnerabilities have been reviewed, their impact recognized, that risks have been assessed, and that appropriate, preventive organizational and management measures have been applied to the software engineering process.

Benefit Rating: Moderate

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: International Organization for Standardization; International Systems Security Engineering Association; Software Engineering Institute of Carnegie Mellon University


Recommended Reading: "Security as Engineering Discipline: The SSE-CMM's Objectives, Principles and Rate of Adoption"

SOA Testing Analysis By: Thomas Murphy

Definition: SOA testing tools are designed to assess service-oriented applications. Tools verify XML, perform load and stress testing of services, and promote the early, continuous testing of services as they are developed. These products have to deal with changing standards and should support the interfaces, formats, protocols and the variety of implementations available. Although similar to traditional functional and load testing tools, these products do not rely on a user interface for definition and should deal with issues such as long-running and parallel processes. As these tools mature, expect integration links that let service governance tools, such as security and registry management tools, leverage the data they produce.
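
As an illustration, the core checks such a tool automates — XML verification and simple load timing — can be sketched in a few lines. The order-service payload, element names and thresholds below are hypothetical and not drawn from any vendor's product:

```python
import time
import xml.etree.ElementTree as ET

def verify_response(xml_text, required_paths):
    """Check that a service's XML response contains the expected elements.
    Returns the list of missing element paths (empty means the check passed)."""
    root = ET.fromstring(xml_text)
    return [p for p in required_paths if root.find(p) is None]

def load_test(call_service, requests=100):
    """Invoke a service stub repeatedly, recording per-call latency so that
    failure behavior under load can be inspected."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        call_service()
        latencies.append(time.perf_counter() - start)
    return {"calls": requests, "max_latency": max(latencies)}

# Hypothetical order-service response used as the unit under test.
SAMPLE = "<order><id>42</id><status>shipped</status></order>"
missing = verify_response(SAMPLE, ["id", "status", "total"])
# 'total' is absent from the response, so the harness flags it.
```

A real SOA testing product layers schema validation, WS-* protocol handling and service virtualization on top of this basic verify-and-measure loop.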

Position and Adoption Speed Justification: SOA testing tools are new in the market and tend to be from relatively new companies, with improving support from the historic testing leaders. Web services definition and standards are evolving, prompting tool manufacturers to catch up.

User Advice: If you have invested in building out Web services, then you should have a solid unit testing approach. Investigate these tools primarily to ensure load capacity for your services, to discover failure behaviors and to speed the development of new services. Testing for services should make use of an existing foundation of tests written for the underlying implementation code. Tests should be factored to enable testing of specifically affected systems when changes are made, rather than testing the entire system. This includes the ability to unit test individual elements, as well as specific orchestrations across services.

Business Impact: Web services must be stable and reliable for applications to be built on top of them. They need a solid testing focus or the services will become liabilities to application stability. Because services offer a way to transform the business, these testing tools will be critical to the strategic success of businesses.

Benefit Rating: Transformational

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: HP; iTKO; IBM; Mindreef; Parasoft; Solstice Software; SOASTA

Collaborative Tools for the Software Development Life Cycle Analysis By: James Duggan

Definition: SDLC collaborative tools enable communication and collaboration across cultural, geographical and professional boundaries throughout the application life cycle. The features, which were developed in stand-alone products (such as wikis and electronic-meeting systems), are now appearing in multiple development tool markets.

The addition of collaboration features can enhance the effectiveness and efficiency of all phases of application development, including analysis, design, construction, testing and deployment, integration, maintenance and enhancement. These features enable customer-to-developer understanding, as well as knowledge capture and transfer. Collaboration features complement and enhance the structured coordination tools that make up most of the application life cycle management suites — for example, workflow, change management and project management solutions.

Position and Adoption Speed Justification: Application delivery globalization — in which applications are built and maintained by teams working all over the world — is growing. However, this growth increases the risk of miscommunication and distortion. As the globalization of application delivery accelerates, it raises the priority and urgency of technology vendors' efforts to address growing demand. Broad adoption will require collaboration features to support multiple sites, enable federated control and remote monitoring, and incorporate intellectual property and asset protection.

User Advice: Coordinate tool evaluation geographically to ensure full consideration of cultural and skill differences across groups. Pilot changes in the process to ensure that distance effects are understood.

Business Impact: Gartner expects significant mitigation of the risks posed by the globalization of application delivery. Collaborative, globally distributed efforts will yield cost savings, and the new applications they deliver will generate revenue.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: BMC Software; CollabNet; Digite; iRise; Sofea; VA Software

Enterprise Information Management Analysis By: David Newman

Definition: EIM is an integrative discipline for structuring, describing and governing information assets, regardless of organizational and technological boundaries, to improve operational efficiency, promote transparency and enable business insight. EIM is operationalized as a program with a defined charter, budget and resource plan.

Position and Adoption Speed Justification: Many organizations have silos of information: inconsistent, inaccurate and conflicting sources with no "single version of the truth." Project-level information management techniques have caused issues in data quality and accessibility. This has led to higher costs, integration complexity, risk and unnecessary duplication of data and process.
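
A minimal sketch of the silo problem: two hypothetical sources hold records for the same customers, and a reconciliation pass surfaces the fields on which they disagree — the absence of a "single version of the truth." The systems, keys and values are illustrative only:

```python
# Two hypothetical silos holding customer records keyed by customer ID.
crm = {"C100": {"name": "Ann Lee", "city": "Boston"},
       "C200": {"name": "Raj Patel", "city": "Austin"}}
billing = {"C100": {"name": "Ann Lee", "city": "Cambridge"},
           "C200": {"name": "Raj Patel", "city": "Austin"}}

def find_conflicts(a, b):
    """Report fields whose values disagree between two sources for the same
    key -- the kind of inconsistency an EIM program must govern."""
    conflicts = []
    for key in a.keys() & b.keys():
        for field in a[key].keys() & b[key].keys():
            if a[key][field] != b[key][field]:
                conflicts.append((key, field, a[key][field], b[key][field]))
    return conflicts
```

Detecting the conflict is the easy part; EIM's stewardship and data quality processes are what decide which value is authoritative and keep the sources aligned thereafter.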

Results from a Gartner study confirm that EIM is in the early-adopter stage. Findings suggest that the EIM trend will need frameworks, case studies and maturity models to help guide organizations through the benefit realization curve. Certain business drivers, such as compliance, will accelerate adoption as organizations look to fulfill transparency and efficiency objectives from upstream systems to downstream applications. Other triggers for adoption include the information management implications of new development models, such as SOA. SOA places greater emphasis on a disciplined approach to information management. Enterprises will use EIM to support the increased demands for governance and accountability of information assets through formalized data quality and stewardship activities. Adoption of EIM will also increase as pressure intensifies to consolidate related technologies within organizations for managing both structured and unstructured information assets. Organizations will look for a common framework or infrastructure to converge overlapping technologies and projects in master data management, business intelligence, metadata management, data integration, information access and content management. Organizations will adopt EIM in stages, looking first at foundational activities such as metadata management, master data management, and data mart consolidation; data quality activities; and data stewardship role definition.

User Advice: End-user clients should resist vendor claims that their products "do" EIM. EIM is not a technology market. Clients should connect certain technologies and projects (such as master data management, metadata management, information life cycle management, content management and data integration) as part of an EIM program.

Secure senior-level commitment for EIM as a way to overcome information barriers, exploit information as a strategic resource, and fuel the drive toward enterprise agility. Use pressures for improving IT flexibility, adaptability, productivity and transparency as part of the EIM business-case justification. Grow the EIM program incrementally. Pursue foundational EIM activities such as master data management and metadata management. Address operational activities, such as defining the EIM strategy, creating a charter and aligning resources to the program. Operationalize EIM with a defined budget and resource plan. Establish an ICI to share and exchange all types of content. Implement governance processes such as stewardship and data quality initiatives. Set performance metrics (such as reducing the number of point-to-point interfaces or conflicting data sources) to demonstrate value.

Business Impact: EIM is foundational to complex business processes and strategic initiatives. By organizing related information management technologies into a common ICI, an EIM program can reduce transaction costs across companies and improve the consistency, quality and governance of information assets. EIM supports transparency objectives in compliance and legal discovery. It breaks down information silos by facilitating the decoupling of data from applications — a key aspect of successful SOAs. It establishes a single version of the truth for master data assets. EIM institutes information governance processes to ensure all information assets adhere to quality, security and accessibility standards. Key components of EIM (for example, master data management, global data synchronization, semantic reconciliation, metadata management, data integration and content management) have been observed across multiple industries (such as banking, investment services, consumer goods, retail and life sciences).

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Recommended Reading: "Business Drivers and Issues in Enterprise Information Management"

"Mastering Master Data Management"

"From IM to EIM: An Adoption Model"

"Data Integration Is Key to Successful Service-Oriented Architecture Implementations"

"Gartner Study on EIM Highlights Early Adopter Trends and Issues"

"Gartner Definition Clarifies the Role of Enterprise Information Management"

"Key Issues for Enterprise Information Management, 2007"

Application Quality Dashboards Analysis By: Thomas Murphy

Definition: These tools give an overall view of code quality and integrate the various forms of testing to enable a more cohesive test strategy. They are being driven from multiple entry points, including integrated application life cycle management suites, quality management dashboards and portfolio management tools. Initial support only includes functional and performance testing integration. It misses integration with other testing tasks, such as static analysis for security, code quality and standards compliance. Project-management-office-centric tools will also include information from operational and application management tools to better understand the risks, benefits and costs of applications in production, enabling improved investment choices.
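
The aggregation such dashboards perform can be sketched as a simple rollup across tool feeds. The tool names and result counts below are hypothetical:

```python
# Results fed in from hypothetical tools: a functional test runner,
# a load-test tool and a static-analysis scanner.
feeds = {
    "functional": {"passed": 180, "failed": 6},
    "performance": {"passed": 11, "failed": 1},
    "static_analysis": {"passed": 950, "failed": 50},
}

def rollup(results):
    """Aggregate per-tool results into one dashboard view: an overall
    pass rate per feed, rounded for display."""
    view = {}
    for tool, counts in results.items():
        total = counts["passed"] + counts["failed"]
        view[tool] = round(counts["passed"] / total, 3)
    return view
```

The hard part a real product solves is not the arithmetic but normalizing feeds from vendors that disagree on what a "test," a "defect" and a "pass" mean.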

Position and Adoption Speed Justification: Acquisition and competition are pushing these tools along rapidly. Eclipse is also shaping the market with the inclusion of several underlying frameworks and metamodels that provide a foundation for integration and reporting. However, expectations will outpace implementation quality. Continued integration with process guidance and the rest of the life cycle will drive maturity. Vendors will still need to expand product coverage to adequately support all phases of the testing life cycle. Operational tools will take an approach oriented toward determining whether the application passed the appropriate gates to be deployed, and portfolio tools will blend this information with operational and help-desk data to help prioritize projects. Overall, improved reporting will help organizations that use reports to locate areas of concern and measure improvements.

User Advice: Expect to require products from multiple software vendors for three to five years, as well as a lack of overall integration of quality metrics reporting. Seek tools that offer extensible repositories and simplify the integration of additional data sources.

Business Impact: Metrics and a better understanding of software quality can lead to better planning and deployment of resources.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Adolescent

Sample Vendors: 6th Sense Analytics; Atlassian Software Systems; Borland; Compuware; Enerjy; IBM; JetBrains; Mercury; Microsoft; Polarion

Event-Driven Architecture Analysis By: Roy Schulte; Yefim Natis

Definition: Event-driven architecture (EDA) is a subset of the more general topic of event processing. EDA is an architectural style in which some of the elements of the application execute in response to the arrival of event objects. An element decides whether to act and how to act based on the incoming event objects. In EDA, the event objects are delivered in messages that do not specify any method name (such messages are called event notifications). The event source does not tell the event receiver what operation to perform. An event is something that happens (or does not happen, but was expected or thought possible). Examples include a stock trade, customer order, address change, and a shipment arriving or failing to arrive (under specified conditions). An event may be documented in software by creating an event object (sometimes called simply an "event," which is then a second meaning for the term). An event object represents or records a happening (an "ordinary" event). Examples of event objects include a message from a financial data feed (a stock tick), an XML document containing an order or a database row. In casual discussion, programmers often call the message that conveys an event object an "event."
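
The delivery model described above — event notifications that carry no method name, with each receiver deciding whether and how to act — can be sketched as a minimal broker. The event types, handlers and payloads are hypothetical:

```python
class EventBroker:
    """Delivers event objects to subscribers. The notification names no
    operation; each receiver decides for itself whether and how to act."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, event):
        for handler in self._subscribers.get(event_type, []):
            handler(event)

log = []
broker = EventBroker()
# Two independent receivers react to the same stock-trade event:
# one records it for audit, the other checks position size.
broker.subscribe("trade", lambda e: log.append(("audit", e["symbol"])))
broker.subscribe("trade", lambda e: log.append(("risk", e["qty"])))
broker.publish("trade", {"symbol": "XYZ", "qty": 500})
```

The publisher knows nothing about either subscriber, which is exactly the decoupling that distinguishes EDA from request/reply interaction.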

Position and Adoption Speed Justification: Computer systems have used event processing in many different ways for decades. Event processing is moving through the Hype Cycle now because its concepts are being applied more broadly and on a higher level. Business events, such as purchase orders, address changes, payments, credit card transactions or Web "clicks," are being used as a focus in application design. This contrasts with past treatments of events, in which business applications addressed events more indirectly, and event modeling was considered to be secondary to data modeling, object modeling and process modeling. Businesses have always been real-time, event-driven systems, but now more aspects of their application systems are also real-time systems. EDA concepts are also used on a technical level to make application servers and other software more-efficient and scalable. The spread of other types of SOA (conventional, request/reply SOA) is also helping to pave the way for EDA because some of the concepts, middleware tools and organizational strategies are the same.

User Advice: In an era of accelerating business processes, pervasive computing and exploding data volumes, companies must master event processing if they are to thrive. Companies should use event processing in two ways: to engineer more-flexible application software through the use of message-driven processing, and to gain better insight into current business conditions through complex-event processing (CEP). Architects can use available methodologies and tools to build good EDA applications, but must consciously impose an explicit focus on events because standard methodologies and tools do not yet make events first-class citizens in the development process. Companies should implement EDA as part of their SOA strategy because many of the same middleware tools and organizational techniques (such as using an SOA center of excellence [COE] for EDA and for other kinds of SOA) apply. Companies should not implement request/reply SOA now and wait for one or two years to implement EDA SOA because a request/reply-only SOA strategy will not be able to support some business requirements well.

Business Impact: EDA is relevant in every industry. Large companies experience literally trillions of ordinary business events every day, although only a minority of these are represented as event objects, and only a tiny minority of those event objects are fully exploited for their maximum information value. The number and size of event streams are growing as the cost of computing and networking continues to drop. Companies now generate data on events that were never reported in the past. The CEP type of business EDA was first used in financial trading, energy trading, supply chain management, fraud detection, homeland security, telecommunications, customer contact centers, logistics and sensor networks, such as those based on radio frequency identification (RFID). Event processing is a key enabler in business activity monitoring (BAM), which makes business operations more visible to end users.
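
A common CEP pattern in the domains above, fraud-style detection, amounts to counting events per key within a sliding time window. This sketch assumes a timestamp-ordered stream; the window, threshold and card IDs are hypothetical:

```python
from collections import deque

def detect_bursts(events, window_seconds=60, threshold=3):
    """Flag any card that produces `threshold` or more transactions within
    a sliding time window -- a classic CEP fraud-detection pattern."""
    recent = {}   # card -> deque of timestamps still inside the window
    alerts = []
    for ts, card in events:  # events assumed ordered by timestamp
        q = recent.setdefault(card, deque())
        q.append(ts)
        while q and ts - q[0] > window_seconds:
            q.popleft()  # evict timestamps that fell out of the window
        if len(q) >= threshold:
            alerts.append((card, ts))
    return alerts

stream = [(0, "A"), (10, "A"), (15, "B"), (20, "A"), (200, "A")]
alerts = detect_bursts(stream)  # card A trips the threshold at t=20
```

Commercial CEP engines generalize this to declarative pattern languages over many concurrent windows, but the window-and-threshold core is the same.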

Benefit Rating: Transformational

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: Actimize; Agent Logic; Agentis Software; Aleri; Avaya; Axeda; BEA Systems; coral8; Cordys; Event Zero; Exegy; firstRain; IBM; jNetX; Kabira; Kx Systems; open cloud; Oracle; Progress Software/Apama; Red Hat (Mobicents); Rhysome; SAP; SeeWhy; StreamBase Systems; Sun; Sybase; Syndera; Synthean; Systar SA; Tibco Software; Truviso; Vayusphere; Vhayu; Vitria Technology; WareLite

Metadata Repositories Analysis By: Michael Blechar; Jess Thompson

Definition: Metadata is an abstracted level of information about the characteristics of an artifact, such as its name, location, perceived importance, quality or value to the organization, and relationship to other artifacts. Technologies called "metadata repositories" are used to document, manage and perform analysis (such as change impact analysis and gap analysis) on metadata in the form of artifacts representing assets that the enterprise wants to manage. Repositories cover a wide spectrum of metadata/artifacts, such as those related to business processes, components, data/information, frameworks, hardware, organizational structure, services and software in support of focus areas like application development, data architecture, data warehousing and enterprise architecture (EA).
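
The change impact analysis mentioned above reduces to walking reverse dependency relationships among artifacts. The artifact names and dependency table below are hypothetical:

```python
# Tiny repository: each artifact records the artifacts it depends on.
DEPENDS_ON = {
    "checkout_app": ["order_service"],
    "order_service": ["customer_table"],
    "reporting_job": ["customer_table"],
}

def impact_of(artifact):
    """Return every artifact that directly or transitively depends on the
    given one -- the set a change impact query must surface."""
    impacted, frontier = set(), {artifact}
    while frontier:
        nxt = {a for a, deps in DEPENDS_ON.items()
               if set(deps) & frontier and a not in impacted}
        impacted |= nxt
        frontier = nxt
    return impacted
```

Changing the customer table thus flags not only the service that reads it but also the application built on that service, which is the value repositories deliver before a schema or service change ships.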

Position and Adoption Speed Justification: Most organizations that have tried to implement a single enterprise metadata repository have failed to meet the expected return on investment. Community-based repositories supporting business process modeling and analysis, SOA and data integration have shown benefits in improved quality and productivity through an improved understanding of the artifacts, impact queries and the reuse of assets such as services and components. For the near future, there will be no proven, viable solution that federates multiple metadata repositories (or federates repositories with other technologies that contain metadata, like service registries holding runtime metadata artifacts) sufficiently to satisfy the needs of organizations.

Mainstream IT organizations will find that the most pragmatic approach to metadata management and reporting is to have multiple, community-based repositories, which have some degree of federation and synchronization. Although it is possible to create federated queries across multiple repositories, many organizations still may want to consolidate and aggregate selected metadata information from disparate sources into a "metadata warehouse" for ease of reporting and for ad hoc query purposes. Leading metadata repository vendors are well-positioned to meet this need, but competitors will emerge, including large, independent software vendors (ISVs), which will look to provide these capabilities in their tool suites. Large vendors, such as IBM, Oracle and SAP, are adding repositories — or are improving their repository support for design-time and runtime platforms — to enhance metadata management support for their development and deployment environment. As a result, Gartner expects to see a broader degree of acceptance by customers, along with a consolidation in this market during the next few years. We position metadata repositories as being two to five years from plateau, because most Global 1000 companies have purchased metadata repositories and are not yet aggressively seeking replacements, and because most new buyers are less-sophisticated IT organizations looking to large ISVs to improve their federation capabilities before committing to the new tools. As a result, most repository purchases will be tactical in nature based on the needs of specific communities, such as data warehousing and SOAs.

User Advice: Owing to the diversification and consolidation of metadata management solutions, the enterprise uber-repository market no longer exists. Consider acquiring or extending a metadata repository as part of moving to SOAs, or as part of implementing BPM, data architecture, data warehousing and EA initiatives. Most organizations will be best-served by living with metadata in multiple tools or by using different repositories based on communities of interest, with some limited bridging or synchronization to promote the reuse and leveraging of knowledge and effort. Organizations that need to approximate the capabilities of an enterprise metadata repository are still best-served by solutions from leading repository vendors.

Business Impact: Metadata repository technology can be applied to aspects of business, enterprise, information and technical architectures, including the portfolio management and cataloging of software services and components; business models; data-warehousing ETL rules; business intelligence transformations and queries; data architecture; electronic data interchange; and outsourcing engagements.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: Allen Systems Group; BEA Systems; LogicLibrary

Recommended Reading: "Are Federated Metadata Approaches to Business Service Repositories Valid?"

"Best Practices for Metadata Management"

"Metadata Management Technology Integration Cautions and Considerations"

"Metadata Repositories Address Disparate Sets of Needs"

"The Evolving Metadata Repository Market"

RIA Platforms Analysis By: Ray Valdes

Definition: Rich Internet Application (RIA) platforms enable organizations and software vendors to build applications that provide a richer, more-responsive user experience compared to older-generation, "plain browser" Web platforms. RIA platforms and technologies span a range of approaches that, from a runtime perspective, fall into three basic categories: browser-only, enhanced-browser and outside-the-browser.

The browser-only approach is known as Ajax, which leverages the capabilities that are already built into every modern browser (for example, Firefox, Internet Explorer, Opera and Safari), such as the JavaScript language engine and the Document Object Model support. The Ajax approach is supported by vendors, such as Backbase, Jackbe and Tibco, and by open-source toolkits, such as Dojo and Kabuki. The enhanced-browser approach begins with a browser and extends it with a plug-in or other browser-specific machine-executable component (unlike the JavaScript-centric Ajax approach, which is mostly browser-independent). Examples of this approach are Adobe Flash (further enhanced by Adobe Flex server-side technology), Google Gears, Microsoft Silverlight and the Curl RIA platform from Curl.

The outside-the-browser approach means adding some large-footprint system software to the client operating environment, such as the Java Virtual Machine (JVM) runtime, the Microsoft .NET language environment or the Adobe Integrated Runtime (AIR) software stack. On top of this stack can be additional layers that add capabilities for client-side data persistence, automatic provisioning and versioning of platforms and applications, and migration of server-side component models. Examples of this approach include Adobe AIR, IBM Lotus Expeditor, Microsoft Windows Presentation Foundation and Sun JavaFX.

Position and Adoption Speed Justification: Major system vendors, such as IBM and Microsoft, have been talking about a "rich client" or "smart client" alternative to plain browser-based user interfaces since the early part of this decade. The concept and road map were driven largely by vendors' agendas for maintaining a system software footprint on users' devices (desktop PCs, laptops and PDAs) that was more than a basic browser, which was perceived to be commodity technology. However, in 2005, the use of Ajax (a "basic browser" technology) appeared on the scene and enjoyed explosive growth, blind-siding vendors' road maps based on heavyweight technologies (for example, Microsoft WinForms with ClickOnce technology). In 2007, there have been high-profile new initiatives — such as Adobe AIR, Microsoft Silverlight, IBM Lotus Expeditor and Sun JavaFX — that indicate a renewed effort on the part of vendors to go beyond the basic browser.

User Advice: To gain real value from RIA technology, invest in an enhanced development process based on empirically proven usability design principles and on continuous improvement before investing in any user interface technology.

Business Impact: A user experience that is perceptibly better than other offerings in a product category can provide sustainable, competitive advantage. Consider the flagship examples of the RIA/Ajax genre, such as Google's Gmail, Maps and Calendar applications, which achieved high visibility and strong adoption despite entering late into a mature and stable product category. However, competitive advantage is not a guaranteed result of RIA technology deployment, and depends on innovations in usability (independent of technology) and on server-side architectures that complement client-side user interface technology. Many organizations do not have the process maturity to deliver a consumer-grade user experience and will need to acquire talent or consulting resources to achieve positive business impact.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Emerging

At the Peak

Application Testing Services Analysis By: Frances Karamouzis; Allie Young; Lorrie Scardino

Definition: Application testing services include all types of validation, verification and testing services for quality control and assurance within the application development life cycle to deliver software that is developed according to defined specifications and will operate in a production environment. Testing services, which have always been an integral part of the application development life cycle, are now increasingly carved out as a separate competency area, often supported by a distinct development methodology. Testing services may be performed manually or with automation tools, and carried out by internal IT resources or by ESPs.

The scope of application testing services includes various functions that go by different names, such as unit testing (which is done by the application developers), integration testing, system testing, functional testing, regression testing, performance/stress testing, usability testing and security testing. Application testing applies to custom application development or packaged applications, as well as single applications or many applications. When externally sourced, application testing services may be purchased as staff augmentation, discrete project work and longer-term outsourcing engagements.

Position and Adoption Speed Justification: In the past three years, more attention and focus have been placed on testing services. Several business factors have accelerated this focus:

• Organizations increasingly recognize the business need to achieve a more predictable and consistent software development process, including all levels of testing and QA.

• The cost of software defects is better understood today than it has been historically because organizations are getting better at baselining costs and performance/service levels as part of a larger sourcing strategy.

• Accelerated release cycles for business applications are a reality, with more applications directly touching the customer and application availability directly tied to revenue performance. The cost of software defects is more visible in many industries.

• When organizations look for additional ways to cut costs, especially after already outsourcing, testing and QA emerge as good candidate services.

• The rise in the use of external providers for application development — especially Indian offshore providers — has raised awareness of the need for improved processes and methodologies.

• More service providers are aggressively marketing application testing services.

These factors have converged to accelerate the hype associated with application testing services. On the demand side, organizations have great expectations when they decide to externally source testing services but often have not considered the implications of doing so, or the way in which they should structure the contract and relationship. IT decision makers generally do not engage the right number and level of developers in the planning process, and keep business users at the periphery. This leads to integration problems and conflicts among resources, made more significant because an external source is assessing the quality of others' work products, which often include the products of other external sources.

On the supply side, the opportunity to leverage low-cost labor by using offshore resources for testing during off-hours (relative to the client's work day) is especially appealing to pure-play offshore providers. They have invested in expanding their testing services to offer them as stand-alone services to an existing client base. Niche providers also emerged in offshore locations as testing specialists. When demand reached critical mass, traditional providers started to see the opportunity and began to make investments to compete with the offshore providers. Thus, there has been a rapid proliferation of providers that claim to have application testing expertise.

Providers will accept work, especially from an existing client, in virtually any way the client wants to scope and pay for it. This opportunistic approach perpetuates an environment that lacks standards for scope of work, service levels, price, contractual terms and other attributes that are consistent with an immature and hyped service offering.

User Advice: If isolating testing functions makes sense as part of your sourcing strategy, then ensure that you have a well-defined scope, clear performance requirements, measurable success criteria and engagement with all the application and user groups that will be integral to the testing process. As a discrete function, the organization must have the resources, methodology and practices in place to provide output to the testing provider, and then receive input when the function is completed. Many organizations can operate in this type of environment, while many others prefer broader accountability, such as what exists at the application level.

When evaluating providers, ensure that you give proper weighting to the level of maturity, automation and process standardization that the provider has achieved in testing services when offered and delivered as stand-alone services. Consider providers with dedicated business units for testing with consistent revenue growth for that business area. If the business unit is relatively new, then require the provider to demonstrate its commitment to this market. Check references carefully and match your specific requirements to similar engagements.

View testing as part of the application development life cycle, even if it is externally sourced as a discrete function. Ensure alignment between the application development methodology and the testing methodology. Build knowledge transfer into the outsourcing action plan: the selected provider will need to learn your methodology, and you will need to learn the provider's. Organizations that want to leverage a provider's intellectual property must pay special attention to knowledge transfer and training during the transition process.

Application testing services may be purchased in various ways, and organizations need to be clear about their objectives and the value proposition of each option. Staff augmentation is used to address resource constraints. Organizations are responsible for directing the resources and ensuring the outcomes. Discrete project work is typically used in two scenarios: for a specific application development effort that requires independent testing, or as a consulting-led project to evaluate the efficacy of changing the way application testing is performed. These consulting-led projects are often described as pilot programs and will often lead to long-term outsourcing contracts. Finally, testing services purchased through an outsourcing contract signal the organization's commitment to leverage the market's expertise and assign delivery responsibility to an external source.

Organizations considering various sourcing options are likely to find an aggressive sales approach to broaden the scope of application services beyond the organization's intent. In many cases, a broader scope of work might provide benefits from leveraging the provider's process maturity to build quality into the software as opposed to simply testing the quality of the software. Although this is a worthwhile aspiration, organizations must ensure that they are prepared to invest in broader quality programs before engaging in relationships of this nature.

Business Impact: The major business impacts of application testing services include:

• Cost savings in the discrete application development life cycle and in the longer-term, ongoing cost of maintaining the application

• Decreased time to implement new applications or functionality

• Increased rigor and productivity by resources throughout the development process

• Improved performance of applications once they're in production

• Better and more-consistent quality control processes

Many organizations do not know how much they are spending on application testing and software QA, nor do they understand the true cost of inadequate testing processes. Furthermore, most do not have discretionary budgets to develop world-class testing services. The lack of testing and QA standards and consistency often leads to business disruption, which can be costly. However, most organizations do not use a process that links testing failures to business disruption on a cost basis. Application testing is a case where the use of an external provider can be effective, but the benefit is sometimes difficult to demonstrate clearly.

Benefit Rating: Moderate

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: AppLabs Technologies; Aztecsoft; Cognizant; EDS; Hexaware; IBM; Infogain; Infosys Technologies; Keane; Satyam Computer Services; Tata Consultancy Services; Thinksoft; Wipro Technologies

SOA Governance Technologies Analysis By: Frank Kenney

Definition: The key to being successful with your SOA projects is to understand and control your SOA artifacts. SOA artifacts can include services, SOA policies (that is, service-level agreements), business processes and profiles of consumers and providers. The key to understanding and controlling these artifacts is SOA governance. Various technologies can help you control how your artifacts are being used, managed, secured and tested, as well as how visible they are. These technologies include:

SOA policy management provides the technology to create, discover, reference and sometimes enforce policies related to SOA artifacts, such as access control, performance and service levels.

SOA registries and repositories help manage metadata related to SOA artifacts (for example, services, policies, processes and profiles) and have recently evolved to include the creation and documentation of the relationships (that is, configurations and dependencies) between various metadata and artifacts.

SOA QA and validation technologies validate the individual SOA artifacts, and determine the relationships to each other within the context of an SOA deployment. For example, these technologies will test and validate a composite service that executes specific processes, while having specific policies enforced on it.

Monitoring is present throughout the individual technical domains and enables companies to study an SOA and its environment, providing deeper, real-time business intelligence and analytics. It also helps them check that the various governance processes are actually followed. Business activity monitoring (BAM; see "MarketScope for Business Activity Monitoring Platforms, 3Q06") plays a key role in the evolution and agility of an SOA and is the foundation for future complex event processing scenarios, as well as for the SOA life cycle (a cycle of developing, testing, deploying, monitoring, analyzing and refining).

Adapters, interfaces, application programming interfaces and interoperability standards enable all the technical domains to communicate and share information, as well as enable the governance suite to be integrated with existing infrastructure, such as business applications, integration middleware or OSs, for optimal policy definition and execution.
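The registry and policy-management roles described above can be illustrated with a small sketch. All class and method names here are invented for illustration; real SOA governance products (registries, policy enforcement points) expose far richer, standards-based models.

```python
# Illustrative sketch: a service registry that stores SOA artifacts
# (services plus their policies) and an enforcement check that applies
# the access-control part of a policy. Names are hypothetical.

class Policy:
    """An access-control and service-level policy attached to a service."""
    def __init__(self, allowed_consumers, max_response_ms):
        self.allowed_consumers = set(allowed_consumers)
        self.max_response_ms = max_response_ms

class ServiceRegistry:
    """Minimal registry: maps service names to metadata and policies."""
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, policy):
        self._services[name] = {"endpoint": endpoint, "policy": policy}

    def lookup(self, name):
        return self._services[name]

def authorize(registry, service_name, consumer):
    """Enforce the access-control portion of the service's policy."""
    policy = registry.lookup(service_name)["policy"]
    return consumer in policy.allowed_consumers

registry = ServiceRegistry()
registry.register(
    "GetCustomer",
    "http://example.com/services/GetCustomer",
    Policy(allowed_consumers={"billing", "crm"}, max_response_ms=500),
)

print(authorize(registry, "GetCustomer", "billing"))    # True
print(authorize(registry, "GetCustomer", "marketing"))  # False
```

The point of the sketch is the separation of concerns the text describes: the registry holds the artifacts and metadata, while enforcement consults policies at the point of use.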

Position and Adoption Speed Justification: SOA governance technologies, specifically the service registry, and SOA policy enforcement (service management and service security) have been hyped by vendors and end users; many end users are deploying these technologies without credible SOA governance organizational processes and strategies. As a result, service registries and policy enforcement tools are often underused today (only for cataloging and XML security). With more vendors entering into OEM agreements and partnerships with best-of-breed vendors, these technologies will reach the Peak of Inflated Expectations within 12 months. However, because most SOA deployments will likely fail without proper governance, companies will eventually move to better leverage SOA governance technologies to provide visibility, manageability, monitoring, security and QA.

User Advice: Regardless of the overhyping of SOA governance, companies deploying SOAs need to first develop a strategy and process for SOA governance that encompass technologies and organizations. Deploying a service registry for reuse and developing some policies around the development of services is a good start, but companies should plan on using that registry for SOA life cycle management and for visibility into various SOA artifacts.

Business Impact: Any company or division deploying an SOA will be impacted by SOA governance. Entities providing software as a service, integration as a service, business-to-business services or hosting applications should take advantage of SOA governance technologies to enhance their offerings, better manage their SOA artifacts and obtain competitive differentiation.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: Actional; AmberPoint; BEA; HP-Mercury; iTKO; IBM; Layer 7 Technologies; LogicLibrary; Oracle; Reactivity; Software AG/webMethods; SOA Software; Tibco Software; Vordel; WebLayers

Recommended Reading: "Criteria for Evaluating a Vendor's SOA Governance Strategy"

"No 'Leader' Exists in SOA Governance … At Least Not Yet"

Globally Sourced Testing Analysis By: Partha Iyengar; Thomas Murphy; Allie Young

Definition: Globally sourced (offshore) testing involves the delivery and support of applications testing services — such as functional, stress, regression and usability testing — using a global delivery model (GDM).

Position and Adoption Speed Justification: Service vendors, primarily from among the leading broad-based offshore providers, are increasingly focusing on application functional, stress, regression and usability testing services using the GDM. These organizations offer a wide variety of services, with a high degree of competence, emanating from the historical focus on process and quality that they have made a differentiating factor of the offshore model. The added benefit of cost-arbitrage-driven lower pricing to clients is also a compelling factor that has made this one of the fastest-growing service lines in global sourcing.

The strong growth of this class of offerings has driven many of the large traditional service providers, as well as some pure-play testing service providers, to increasingly focus and expand on this service line. Some organizations are also building COE-style testing factories to move toward increased levels of automation support for testing, as well as to help bring about the paradigm of "building quality into the software," as opposed to "testing quality into the software."

The ability to effectively outsource testing services using an offshore labor model is challenged by poorly written or understood specifications, as well as by the problems that result from inexperience with the GDM. Challenges in communicating effectively during the development process are common for first-time users; the allure of offshore labor cost savings is driving interest in these services, but not all offshore testing services engagements have been highly successful.

Many engagements will fail to meet expectations until collaborative environments improve and expectations become realistic. Greater focus and emphasis also need to be placed on "equalizing" the widely differing process capabilities and maturity levels of the typical client enterprise and its service providers. However, the path is well-trod by ISVs that provide models for successful use.

Longtime users of offshore vendors for development have found their offshore testing efforts to be extremely successful, because they've worked out the process and communication requirements. First-time users should start small — typically with a pilot — and then work toward larger-scale efforts. A growing number of firms also offer mixed models with on-site, "nearshore" and offshore options to create more-effective communication paths.

User Advice: Explore outsourced testing for applications in maintenance and to assist with performance-testing upgrades to software packages. However, organizations must first have models and/or documents that enable test planning, or use outside expertise to create them. The obvious opportunity is to set the stage to leverage the service providers' testing offerings by using their expertise to improve basic process levels in the enterprise's internal testing environment, models and artifacts. These models should be kept up-to-date during the project, with an effective communication and versioning process. They will provide a richer collaboration and communication medium with the service provider.

Business Impact: Offshore testing may reduce expenditures for testing and provide more-thorough testing practices if the appropriate documentation and version management processes are used. However, the long-term goal should be to move to a paradigm of building quality into the software, as opposed to testing quality in, as well as to move toward automated testing processes and environments.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: AppLabs Technologies; Cognizant Technology Solutions; IBM Global Technology Services; Infogain; ReadyTestGo; Tata Consultancy Services; Wipro Technologies

Model-Driven Architectures Analysis By: David Norton; David Cearley; David McCoy

Definition: The term "Model Driven Architecture" is a registered trademark of the Object Management Group (OMG). It describes OMG's proposed approach to separating business-level functionality from the technical nuances of its implementation (see www.omg.org/mda). The premise behind OMG's Model Driven Architecture and the broader family of model-driven approaches (MDAs) is to enable business-level functionality to be modeled by standards, such as Unified Modeling Language (UML) in OMG's case; allow the models to exist independently of platform-induced constraints and requirements; and then instantiate those models into specific runtime implementations, based on the target platform of choice.

"Model-driven," as in "model-driven software engineering," is a commonly (if sometimes generically) used prefix that denotes concepts in which an initial model creation period precedes and guides subsequent efforts, including model-driven application development, such as SODA; model-driven engineering; and model-driven processes, such as BPM. "Model-driven" has become a "catchall" phrase for an entire genre of approaches.
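The core model-driven idea — a platform-independent model instantiated into a specific implementation as a separate step — can be shown with a deliberately tiny sketch. The model format and generator below are invented for illustration; real MDA toolchains work from UML models and standardized transformation specifications.

```python
# Toy illustration of model-driven development: a platform-independent
# model (here a plain dict) is transformed into platform-specific code
# (here Python source) by a generator, as a separate step.

model = {
    "entity": "Customer",
    "attributes": [("name", "str"), ("credit_limit", "float")],
}

def generate_python(m):
    """Transform the platform-independent model into Python source."""
    lines = [f"class {m['entity']}:"]
    args = ", ".join(f"{attr}: {typ}" for attr, typ in m["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for attr, _ in m["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

source = generate_python(model)
namespace = {}
exec(source, namespace)  # "deploy" the generated code on this platform
customer = namespace["Customer"]("Acme", 1000.0)
print(customer.name)  # Acme
```

A second generator targeting, say, Java could consume the same model unchanged — which is the platform-independence claim at the heart of the approach.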

Position and Adoption Speed Justification: Core supporting standards, such as UML (referenced by OMG's Model Driven Architecture) are well-established; however, comprehensive MDAs as a whole are less mature than their constituent supporting standards in terms of vendor support and actual deployment in the application architecture, construction and deployment cycle. An MDA represents a long-standing goal of software construction that has seen prior incarnations and waves of Hype Cycle positioning (for example, computer-aided software engineering technology). The goal remains the same: Create a model of the new system, and then enable the model to become transformed into the final system as a separate and significantly simplified step. As always, such grand visions take time to catch on, and they face significant hurdles along the way. A new wave of model-driven hype is emerging.

User Advice: Technical and enterprise architects should strongly consider the implications of implementing architectural solutions that are not MDA-compliant. However, all major vendors will provide adherence, to at least some degree, in their tools, coupled with best-practice extensions beyond MDA standards. Organizations implementing SOAs should pay close attention to the MDA standards and consider acquiring tools that automate models and rules. These include architected rapid application development (ARAD) and architected model-driven (AMD) technologies and rule engines supporting code-generating and non-code-generating (late binding) implementations.

AMD is primarily suited to complex projects that require a high degree of reuse of business services, where you can put significant time into business process analysis (BPA) and design. At the same time, no competent organization would want to do AMD-only development, because the additional time and cost of the analysis and design steps would not bring adequate return on investment or agility for time- and/or budget-constrained application development projects. The ideal solution is to mix AMD, ARAD and rapid application development (RAD) methods and tools.

Business Impact: MDAs reinforce the focus on business first and technology second. The concepts focus attention on modeling the business: business rules, business roles, business interactions and so on. The instantiation of these business models in specific software applications or components flows from the business model. By reinforcing the business-level focus and coupling MDAs with SOA concepts, you end up with a system that is inherently more flexible and adaptable. If OMG's Model Driven Architecture or the myriad related MDAs gain widespread acceptance, then the impact on software architecture will be substantial. All vertical domains would benefit from the paradigm.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: BEA Systems; Borland; Compuware; IBM; Kabira; OMG; Pegasystems; Telelogic; Unisys

Scriptless Testing Analysis By: Thomas Murphy

Definition: Scriptless-testing tools are second-generation testing tools that use data-driven approaches to reduce the amount of manual scripting needed to create tests. The goal is to keep the test project from becoming another development project, and to enable business-user testing. These tools have a broad set of predefined objects that can interact with the application being tested, including error handling and data management. As the tools mature, they'll continue to shift toward a more model-driven approach.
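The data-driven idea behind scriptless testing can be sketched simply: test cases live in data, and one generic driver executes them, so extending coverage means adding rows rather than writing new script code. The application function and case table below are invented for illustration.

```python
# Data-driven testing sketch: one generic driver, many data rows.
# Adding coverage means adding rows, not writing new script logic.

def apply_discount(price, percent):
    """Toy application logic under test (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

# Business analysts can extend this table without touching the driver.
cases = [
    # (price, percent, expected)
    (100.0, 10, 90.0),
    (250.0, 0, 250.0),
    (20.0, 50, 10.0),
]

def run(cases):
    """Generic driver: execute every row, collect mismatches."""
    failures = []
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    return failures

print(run(cases))  # [] when every row passes
```

This is also why, as noted below, skilled testers are still needed: someone must design the rows (boundary values, error cases) for the table to be worth running.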

Position and Adoption Speed Justification: Although these tools reduce the amount of code to be written, they don't remove the need for skilled testers. Scriptless testing makes it easier for business analysts to be involved in testing efforts, but the analysts must still be paired with quality engineers to drive testing effectiveness. This is especially important with packaged applications.

The emergence and changing nature of SOA and the tools supporting it will extend the time needed for this market to mature, and additional areas (such as data management) will suppress the expected results. Tools and users will reach the Slope of Enlightenment during the next two years and take another three to five years to reach the Plateau of Productivity.

The promise of being "script-free" has existed for several years; however, although improvements have been made, it's unlikely that all scripts can be removed for all applications. Expect the greatest benefits to come from domain-limited tools. Tools will also gain capabilities as model-oriented approaches appear, but these will require skills and model management to be effective.

User Advice: Evaluate tools that reduce the cost of testing. In addition, recognize that these tools aren't meaningfully integrated with leading application life cycle management suites, which reduces a team's ability to coordinate effectively. Although these tools will reduce the need for scripting, well-designed tests still require skill — and business users typically don't have the right skills and mind-set for this.

Business Impact: Scripting-centric tools are labor-intensive not only for the initial creation, but also for maintenance. Scriptless testing will reduce overall testing costs and enable better coverage, which should lead to improved defect detection earlier in the development cycle (thus further reducing overall application costs). However, expectations should be managed. Organizations still need qualified testers, and tools continue to have limitations.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Agitar Software; HP; Original Software; Worksoft

Architected, Model-Driven SODA Analysis By: David Norton

Definition: AMD project approaches to SODA are appropriate for applications, services and components that require robust analysis to understand business rules and requirements, and to automate their design and delivery for maximum reuse and performance.

Model-driven development can be subdivided into two styles: a transformation approach, in which 100% code generation is the norm or the models themselves are executable, and an elaboration approach, in which patterns and frameworks are used to partially generate the implementation.

Position and Adoption Speed Justification: Business process automation, UML methodologies and best practices are still evolving. They must capture service-oriented business models and rules at a sufficient level of detail for integrated tools to automate or facilitate the generation of Enterprise JavaBeans (EJB) and C# components based on them. The more-widely used ARAD tools are increasingly adding integrated UML and business process modeling capabilities, and bidirectional bridges to leading modeling tools — evolving their technologies beyond ARAD into AMD. Moreover, as users of traditional client/server integrated model-driven development tools migrate their application portfolios to new SOAs, they are expected to replace those tools with the next generation of SODA AMD tools.

User Advice: Organizations that use traditional client/server integrated model-driven development tools should consider using a next-generation AMD tool when developing new or replacement SOA applications. Organizations that have no experience with integrated model-driven technologies are advised to evolve to AMD approaches as they extend the use of their ARAD tools into more model-driven SODA projects. Other organizations that have committed to top-down enterprise or business architecture modeling efforts should strongly consider adding an AMD tool to leverage their models through code or rule automation to improve productivity, quality and compliance. Mature AMD organizations developing applications in legacy third-generation languages and fourth-generation languages (4GLs) need to assess migration to new AMD tools carefully. The newer tools support a more-bidirectional model-code, elaboration approach compared with the 100% transformation method that some older tools use.

Warning: Model-driven development requires a level of sophistication beyond the capability of most developers. Therefore, be selective in the applications you choose to implement and in the people you staff the effort with. The time to achieve improvement with AMD is more than two years; consider it a long-term but major improvement.

Business Impact: Design tools, coupled with code-rule generators, are used to ensure compliance with business and technical models and architectures, while providing productivity and quality improvements. Coupled with a service-oriented and component-based methodology focused on reuse, and an established base of reusable business and technical artifacts, productivity gains of 10 times or more across the development life cycle are common — but this generally takes three or more years to achieve. Moreover, AMD approaches are appropriate only for a subsection of the application portfolio, so they should be coupled with ARAD and other rapid development tools as part of an application development tool suite.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: CA; Compuware; IBM; Mia-Software; Oracle; Telelogic; Wyde

Enterprise Architecture Tools Analysis By: Greta James

Definition: Enterprise architects need to bring together information on a variety of subjects, including business processes, organization structures, applications, data (structured and unstructured), technology of various kinds and interfaces. Architects need to understand and represent the relationships between this information and communicate it to their stakeholders. EA tools address this need by storing information in a repository and providing capabilities to structure, analyze and present the information in a variety of ways.

An EA tool should also have a metamodel that supports the business, information and technology viewpoints, as well as the solution architecture. The repository should support relationship integrity among and between objects in these viewpoints/architectures. It should also have the ability to create or import models and artifacts and to extract repository information to support stakeholder needs, including extracts in graphical, text and executable forms.
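The repository capability described above — objects from different viewpoints linked by relationships whose integrity the tool maintains — can be sketched minimally. The class and relationship verbs below are invented; commercial EA tools use full metamodels and much richer analysis.

```python
# Minimal sketch of an EA repository with relationship integrity:
# objects are stored by id, and a relationship can only be created
# between objects that both exist in the repository.

class EARepository:
    def __init__(self):
        self.objects = {}    # id -> {"type": ..., "name": ...}
        self.relations = []  # (source_id, verb, target_id)

    def add(self, obj_id, obj_type, name):
        self.objects[obj_id] = {"type": obj_type, "name": name}

    def relate(self, source, verb, target):
        # Relationship integrity: both endpoints must already exist.
        if source not in self.objects or target not in self.objects:
            raise KeyError("both endpoints must exist in the repository")
        self.relations.append((source, verb, target))

    def supporting(self, target):
        """Everything that directly supports a given object."""
        return [s for s, v, t in self.relations
                if t == target and v == "supports"]

repo = EARepository()
repo.add("proc1", "business_process", "Order to Cash")
repo.add("app1", "application", "Billing System")
repo.relate("app1", "supports", "proc1")

print(repo.supporting("proc1"))  # ['app1']
```

Queries like `supporting` are the sketch-level analog of the analysis and stakeholder-communication capabilities the text lists: once relationships are stored with integrity, impact questions ("what supports this process?") become traversals.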

Position and Adoption Speed Justification: Most tools have come from a modeling or a metadata repository origin, with the modeling-heritage tools having better visualization capabilities, and the repository-heritage tools generally having better import/export and management capabilities. As the market has matured, vendors have rounded out their capabilities. Small, private companies, several of which are European, predominate in this market. While there were two mergers/acquisitions in 2005, more activity of this kind is anticipated. We expect large technology vendors such as IBM, Microsoft and Oracle to enter this market, most likely through acquisition. Although all vendors have sales offices in North America and Europe, only four companies — ASG, IDS Scheer, Sybase and Telelogic — have an extensive direct presence elsewhere. This is unlikely to change in the short term without additional acquisition activity.

This market has gradually matured over the past year, with vendors continuing, by and large, to enjoy healthy license revenue growth. Vendors have also continued to add features to their products, such as improving their ability to import information about packaged applications and to analyze information in or derived from their repositories. Vendor support for developing Web sites that make their repository information available and understandable to a range of stakeholders has increased in power and become more widespread.

User Advice: When choosing an EA tool, consider five broad functional capabilities: the ability to flexibly structure information in a repository in meaningful ways; the ability of the tool to exchange information with other related tools, possibly supplemented by the ability to generate models and other artifacts within the tool; the ability to analyze the information in the repository; the ability to communicate information to address the needs of EA stakeholders; and the ability to administer and manage information in the repository.

As well as the tool functions, consider other factors such as the viability of the vendor; the availability and capability of the vendor's sales and support organization; the vendor's experience in your industry and any related tool capabilities, such as support for an industry-specific architecture framework; and the vendor's understanding of EA.

Business Impact: Business strategists, planners and analysts can derive considerable benefit from an EA tool, because it helps them to better understand the complex system of IT resources and its support of the business. Crucially, this visibility helps to better align IT with the business strategy, as well as providing other benefits, such as improved disaster recovery planning.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: ASG; Casewise; IDS Scheer; Mega International; Proforma; Sybase; Telelogic; Troux Technologies

Recommended Reading: "Telelogic's System Architect for Enterprise Architecture"

"Follow These Best Practices to Optimize Architecture Tool Benefits"

"Troux: Innovative Enterprise Architecture Tools"

"Cool Vendors in Enterprise Architecture, 2007"

Application Security Testing Analysis By: Joseph Feiman; Neil MacDonald

Definition: Application security testing is the detection of applications' conditions that are indicative of exploitable vulnerabilities.

Position and Adoption Speed Justification: Two technology markets for application security testing have been evolving rapidly — static application security testing (SAST) and dynamic application security testing (DAST). SAST is a source code and binary code testing technology market. Its technologies are applicable at the construction and testing phases of the application life cycle. DAST is a dynamic, black-box application testing (the source code is unavailable to DAST tools) technology market. DAST technologies are applicable at the testing and operation phases of the application life cycle.
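The SAST idea — inspecting source artifacts for conditions indicative of exploitable vulnerabilities — can be illustrated with a deliberately tiny scanner. Real SAST tools parse code and trace data flow; the single regex rule below is only a sketch, and the rule itself is a simplification.

```python
# Tiny SAST-style sketch: scan source text for a pattern that suggests
# an exploitable condition. Here, string-concatenated SQL inside an
# execute() call is flagged as a possible SQL injection risk.

import re

# Hypothetical rule: a quoted SQL string concatenated with "+" inside
# an execute(...) call.
SQL_CONCAT = re.compile(r'execute\s*\(\s*["\'].*["\']\s*\+')

def scan(source_lines):
    """Return (line_number, line) pairs matching the vulnerability rule."""
    findings = []
    for n, line in enumerate(source_lines, start=1):
        if SQL_CONCAT.search(line):
            findings.append((n, line.strip()))
    return findings

sample = [
    'query = "SELECT * FROM users"',
    'cursor.execute("SELECT * FROM users WHERE name = \'" + name + "\'")',
    'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))',
]

print(scan(sample))  # flags only line 2, the concatenated query
```

DAST, by contrast, would exercise the running application (for example, by submitting crafted inputs to its Web interface) with no access to this source at all — which is why the text positions the two markets at different life cycle phases.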

The adoption of SAST and DAST is impeded by a lack of application security competence and resources in application development organizations. The solution to this problem is coming in the form of emerging security-as-a-service offerings from technology and service providers, whereby providers test applications (often remotely) and supply application development organizations with vulnerability reports and security breach remedies.

The speed of adoption for application security testing is accelerating because of a pressing need to resolve the collision of two trends: the growing exposure of e-business applications on the Web and the relentless attacks on these applications. The plateau of technology productivity will be reached in two to five years.

User Advice: Enterprises must adopt application-testing technologies and processes, because the need is strategic. Yet, they should use a tactical approach to vendor selection, because of the immaturity of this emerging market. Application development organizations should accept that they, not network security specialists, are responsible for the adoption of application security discipline.

Business Impact: Enterprises adopting application security testing technologies and processes will benefit from risk and cost reductions, because these technologies and processes provide early detection and correction of vulnerabilities before applications move into production and become open to attack.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: Acunetix; Cenzic; Coverity; Fortify Software; Klocwork; Ounce Labs; SPI Dynamics; Veracode; Watchfire

Recommended Reading: "MarketScope for Web Application Security Vulnerability Scanners, 2006"

“Market Definition and Vendor Selection Criteria for Source Code Security Testing Tools”

Sliding Into the Trough

Project and Portfolio Management Analysis By: Daniel Stang

Definition: Project and portfolio management (PPM) systems support the business process of effective allocation of capital to projects. They also track and monitor the use of time, people and money to deliver different types of "work." In IT organizations, work can include strategic and nonstrategic IT and non-IT projects, new and existing applications, and new and existing IT services made up of software services and technology, application change or enhancement requests, bug and error fixes, routine maintenance procedures, and help desk and trouble tickets. By tracking work demand and execution against the resources (time, people and money) used to complete the work, PPM systems provide visibility into work performance and allow for more-effective planning, decision making and management of strategic and operational work delivered by IT departments.

Position and Adoption Speed Justification: PPM is first and foremost about changing work execution behaviors. The most robust PPM systems are sophisticated enough to manage everything from time reporting through top-down portfolio analysis, optimization and planning, and can manage various types of work items — from simple IT service requests to multiyear, formally defined projects and programs. Organizations interested in these PPM systems, however, are not in a position to support all the processes these systems suggest and, therefore, cannot realize all the benefits without undergoing significant change management.

PPM systems have been available for years and have grown in maturity, but the intended audience remains immature in its PPM processes. The acquisition of PPM technology is steady, but implementation times can be slowed considerably owing to PPM process immaturity and/or lack of management buy-in. Midmarket solutions are emerging in response to the resonance of the PPM value proposition with smaller IT organizations (fewer than 100 resources in the IT organization), and alternative deployment models are appearing in the marketplace. In addition, PPM systems are tracking more than just projects, and we expect them to continue expanding from the project portfolio level to support, track and monitor work from the IT service management and application life cycle management portions of the IT function. It will be another two or three years before end users and PPM systems reach the Slope of Enlightenment, and another two to three years after that before they reach the Plateau of Productivity.


User Advice: IT project organizations should pursue a PPM initiative that includes three parts:

• Organizational and process definition and adoption

• PPM process automation via technology

• A parallel program management office creation or improvement initiative

The PPM system evaluation is an important part of a PPM initiative, but it is not the only part, and the change management required for a successful PPM initiative should not be ignored. PPM systems are not "off the shelf" systems. Initial implementations should be controlled and narrow, reflecting the commonly low PPM process maturity levels among IT departments.

Business Impact: Visibility into work demand, committed work and resources (time, people and money) used and available is the first step toward cost savings and a more-effective response to rapid business change in IT departments. PPM systems help organizations lower investment risk, reduce operational costs, increase work execution efficiency and quality, and more effectively align IT output with business strategy.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Early mainstream

Sample Vendors: CA; Compuware; HP; IBM; Planview; Primavera Systems

Business Application Package Testing Analysis By: Thomas Murphy

Definition: These are testing tools for verifying the functionality and performance of packaged applications, such as SAP, Oracle and PeopleSoft. The products generally provide prebuilt routines for package functions to reduce the time needed to test effectively, and they may also provide data model support or integration with package vendor tools. Packaged applications pose challenges distinct from those of custom development, and these tools offer support tailored to specific packages.

Position and Adoption Speed Justification: Most of the core technology is robust, but users are generally disappointed with the manual scripting and maintenance costs of the tools, which can often exceed the package implementation costs. The broadest uses are for performance testing and to help ensure that upgrades don't create regression. A growing number of vendors and services take advantage of better integration with the package, creating much faster return on testing efforts and reducing test maintenance costs. During the next three years, we expect that the majority of users of Oracle and SAP application stacks will go through major upgrades, and these will place a strain on testing resources. Companies that lack effective testing strategies and technologies will ultimately experience higher implementation costs and rollout delays. These will drive increased adoption of tools, but the initial costs will surprise many organizations that look at a package as already tested.

User Advice: The cost to test package software can often exceed implementation costs, slowing deployments and maintenance updates significantly. This constrains the agility of the business. Purpose-specific testing tools should be used for any package deployment with complex integrations or process customization. Organizations should seek to build automation for key scenarios to prevent regressions during maintenance and to ensure that geography-specific rules are accurate. The amount and type of testing that an organization will require is driven by the complexity of the implementation: multiple geographies, workflow customization, integration with other applications and regulatory compliance.

Business Impact: Performance tuning can save considerable amounts on deployment hardware (as much as 50%). Effective functional testing early in development will reduce the cost of repairs and rollout failures. Building automation tests for key workflows will enable faster rollout of maintenance releases and add-on functionality while reducing the risk of application failure.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Arsin; Gamma Technologies; HP; Newmerix; SAP; Sucid; Worksoft

Agile Development Methodology Analysis By: Matt Light; James Duggan

Definition: Agile development methodology is a highly accelerated, iterative development process with frequent monthly, weekly and even daily deliverables of priority requirements as captured in user stories, often documented in test cases prior to coding.

The principles of collaboration, re-factoring and promoting ownership are key differentiators. Agile methods prefer to define themselves in terms of "values, principles and best practices," rather than "process and procedure." Agile methods attempt to establish a high level of collaboration among developers and business users. They also attempt to flatten the project and organizational structure, often through self-organizing teams. They embrace the notion that requirements change, unexpected requirements appear, priorities shift and that development practices must enable quick, accurate adaptation to those changes.
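
The practice of capturing a user story in test cases prior to coding, mentioned in the definition above, can be illustrated with a minimal, hypothetical example. The story, function names and discount rule below are invented for illustration; they are not drawn from the report.

```python
# In agile methods, a story's acceptance criteria can be captured as
# tests before any code exists. Hypothetical story: "A shopper gets a
# 10% discount on orders over $100."

# Step 1: write the tests first, from the story.
def test_large_order_gets_discount():
    assert abs(discounted_total(200.0) - 180.0) < 1e-9

def test_small_order_pays_full_price():
    assert discounted_total(50.0) == 50.0

# Step 2: write just enough code to make the tests pass.
def discounted_total(subtotal: float) -> float:
    """Apply the story's rule: 10% off orders over $100."""
    return subtotal * 0.9 if subtotal > 100 else subtotal

# Step 3: run the tests; refactor freely with this safety net in place.
test_large_order_gets_discount()
test_small_order_pays_full_price()
print("story verified")
```

The tests, not a requirements document, record what "done" means for the story, which is how agile teams keep documentation light while retaining daily control.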

Position and Adoption Speed Justification: All but the most aggressive adopters (that is, Type B and Type C development organizations) are encountering difficulty adopting agility judiciously and effectively. A high level of suspicion of and resistance to agile methods exists among development management in large organizations. Project management and quality functions frequently find it difficult to redefine their roles in an agile context. There are also significant barriers to agile development when multisourcing. The popularity of other iterative approaches, including more-traditional RAD, also influences the expected pace of adoption in organizations trying agile approaches. Through 2008, 90% of agile development efforts will be confined to departmental or workgroup product development (0.7 probability).

Many organizations doubt agile methods' effectiveness because of the perception that agile methods have no structure. Unlike "cowboy coding," agile approaches actually have, in many ways, closer control (daily control) of development activities and offer clear practices, while shunning some of the traditional formalism (for example, big documents) that have traditionally provided comfort, but have not resulted in improved execution. Agile techniques are gradually transforming more-traditional RAD approaches, such as Rational Unified Process (which is incorporated in the official IBM incarnation of many of the techniques and practices of agility). Many of the practices — more consistent unit testing, daily builds, scrum meetings, being use-case-driven — have been put in place in successful application development organizations. Too often, less-disciplined developers have honored such practices more in the breach. Only true adoption of the spirit of agility, and consistent use of its practices, will lead to broader and more-successful use.


User Advice: Adopt agile approaches judiciously, on internal projects with sophisticated teams that are not averse to process discipline. "Coding cowboys" are not the developers who will capture lessons learned from agile development projects and help build agile capability in the organization. Agile projects need smart, disciplined developers who understand patterns. Agile proponents state that they value working software "over" process and documentation, whereas less-sophisticated developers value it to the exclusion, or "instead of," process. A key driver of agility is the increasing number of tools that enable agile practices and provide the "comfort" that a bigger process would otherwise provide. That comfort comes from real information and real data, not just a big, static document. Examples are code review tools, unit testing tools, continuous integration tools, metrics tools and project management tools that help automate and drive information into repositories to support management views of project status.

Business Impact: In some Type A development organizations, agile approaches on some projects are already delivering the benefits of fast, accurate delivery of priority application requirements. Although rarely the dominant approach, even the lessons that aggressive adopters (Type A) learned from agile development have a positive impact on other methodological approaches in the development organization. Tight business collaboration (on-site customer) is a key success factor with agility, but it's also the most broken principle ("You can have my domain expert for three weeks, no more"). The successful adoption of agility will require more business involvement in the development process than most organizations have committed or experienced in older methods.

Benefit Rating: Moderate

Market Penetration: One percent to 5% of target audience

Maturity: Early mainstream

Sample Vendors: Borland; IBM; Jaczone AB; Rally Software Development; Thought Equity Motion; ThoughtWorks; VersionOne

Recommended Reading: "Borland Runs With the Continuous Integration Gauntlet"

"Pairing Agility with Quality: Gartner's 10 Principles of NeoRAD"

"Agile Requirements Definition and Management Will Benefit Application Development"

"Agile Development: Fact or Fiction"

Unit Testing Analysis By: Thomas Murphy

Definition: Unit-testing tools are designed to test application functionality at the "unit" of modularity. In object-oriented programming, this is generally at the class level; in SOA, it is a service. Unit testing is advocated in agile development methods and is put to use by some organizations as part of test-driven development. These tools provide frameworks for automatically generating unit tests, managing and running suites of tests, and reporting results.
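
A minimal class-level unit test, in the spirit of the definition above, can be sketched with Python's standard unittest framework. The Stack class is a hypothetical unit under test; the point is the framework's three roles named above: housing tests, running suites and reporting results.

```python
import unittest

# Hypothetical unit under test: a class, the typical "unit" of
# modularity in object-oriented programming.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_is_a_boundary_condition(self):
        self.assertRaises(IndexError, Stack().pop)

# The framework manages the suite, runs it and reports results.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Commercial and IDE-integrated tools layer test generation and coverage reporting on top of frameworks like this one.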

Position and Adoption Speed Justification: Tools in this space began in the open-source market more than five years ago and are commonly used. Environments now provide support for unit tests as part of check-in criteria and have the capability to discover needed tests and boundary conditions. In addition, tools are appearing on top of basic unit testing frameworks to support the creation of user interface functional tests. Driving testing early into the life cycle and running the test fixtures frequently can result in dramatic quality improvements and greatly reduced costs by detecting defects early.


Although unit testing is relatively well-known and adoption of the core frameworks is mainstream, the use of unit testing is expanding as new solutions are layered on top of current frameworks, pushing unit testing to do more — for example, functional testing. In addition, tools that automate the generation of unit tests promise big results but suffer in performance. Because most organizations use a test-second rather than test-driven approach, they do not gain all the promised benefits of unit testing.

User Advice: Drive the use of unit testing in your organization, and train developers on the effective use of these tools to help in design discovery and refactoring. Use these tools to ensure that application code presents contract-oriented interfaces, which will enable smoother transitions toward SOAs. Although open source is cost-efficient, the automation provided in integrated development environments and by commercial products improves productivity and reduces the amount of code that must be created.

Business Impact: Unit testing can reduce the amount of time needed to functionally test an application, and can also provide a solid baseline for functional and load testing. The appropriate use of the technology can drive out defects earlier in the development process, thus reducing the cost to fix defects.

Benefit Rating: Moderate

Market Penetration: Twenty percent to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Agitar Software; Eclipse; Instantiations; Microsoft; Open Source Applications Foundation; Parasoft; United Binary

ARAD SODA Analysis By: David Norton; Mark Driver

Definition: Architectural and design patterns are modified by an organization's technical architects to adhere to the company's architectural standards. All code involving the architecture is generated in applications in a compliant manner as part of the ARAD process for SODA.
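
The pattern-driven generation that ARAD tools automate can be sketched in a deliberately simplified form: a design-level model element is run through an architect-approved template, so every generated artifact follows the same architectural conventions. The template, entity name and DAO pattern below are illustrative assumptions, not any vendor's actual mechanism.

```python
# A toy sketch of ARAD-style generation: the architect defines the
# template once, and every entity in the model gets a compliant,
# uniformly structured data-access class generated from it.

DAO_TEMPLATE = '''class {entity}DAO:
    """Data-access object generated from the {entity} model element."""

    def find_by_id(self, id):
        raise NotImplementedError  # generated stub

    def save(self, {var}):
        raise NotImplementedError  # generated stub
'''

def generate_dao(entity: str) -> str:
    """Emit a DAO class that conforms to the house pattern."""
    return DAO_TEMPLATE.format(entity=entity, var=entity.lower())

code = generate_dao("Customer")
print(code.splitlines()[0])  # class CustomerDAO:
```

The productivity claim for ARAD rests on exactly this leverage: the architectural decision is made once, in the template, rather than re-implemented by every developer.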

Position and Adoption Speed Justification: Architectural standards continue to evolve. Technologies that can use these standards to generate Java 2 Platform, Enterprise Edition (J2EE) and .NET code are moving toward mainstream use. As mainstream users of RAD tools experience the steep learning curves and lost productivity associated with building more-complex service-oriented applications, they will look to ARAD tools for relief.

User Advice: To reduce the learning curve and increase the productivity of J2EE or .NET developers, or to ease the transition of traditional client/server developers using COBOL, C or 4GLs, enterprises should consider providing these developers with ARAD methods and tools.

Business Impact: Design tools, coupled with code generators, are used to ensure compliance with business and technical models and architectures, while providing productivity and quality improvements. Gartner studies indicate that productivity gains using ARAD methods and tools are typically 30% to 40% compared with traditional client/server methods and tools. Application and development managers should look for a return within 12 months of ARAD implementation.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent


Sample Vendors: CA; Codagen Technologies; Compuware; IBM; Interactive Objects; Mia-Software; Microsoft; ObjectVenture; Wyde

SOA Analysis By: Roy Schulte; Yefim Natis

Definition: SOA is a style of application architecture. An application is an SOA application if it is modular; the modules are distributable; software developers have written or generated interface metadata that specifies an explicit contract so that another developer can find and use the service; the interface is separate from the implementation (code and data) of the service provider; and the services are shareable — that is, designed and deployed in a manner that enables them to be invoked successively by disparate consumers. Unlike some other types of distributed computing, services in SOA can be shared across applications running on disparate platforms and are inherently easier to integrate with software from other development teams.
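
The separation of contract from implementation described above can be illustrated with a minimal sketch. The service names are hypothetical, and in a real SOA the contract would be expressed in interface metadata such as WSDL rather than in code; the sketch only shows the structural principle.

```python
from abc import ABC, abstractmethod

# The contract: an explicit interface, kept separate from any
# particular implementation, that consumers can discover and use.
class QuoteService(ABC):
    @abstractmethod
    def get_quote(self, symbol: str) -> float:
        """Return the current price for a ticker symbol."""

# One provider of the contract. Because consumers depend only on
# QuoteService, this implementation can be swapped or shared across
# applications on disparate platforms.
class StaticQuoteService(QuoteService):
    def __init__(self, prices: dict):
        self._prices = prices

    def get_quote(self, symbol: str) -> float:
        return self._prices[symbol]

def display_price(service: QuoteService, symbol: str) -> str:
    # A consumer: it knows only the interface, not the provider.
    return f"{symbol}: {service.get_quote(symbol):.2f}"

svc = StaticQuoteService({"ACME": 101.5})
print(display_price(svc, "ACME"))  # ACME: 101.50
```

The shareability requirement in the definition follows directly from this structure: any number of disparate consumers can invoke the same provider through the contract.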

Position and Adoption Speed Justification: The use of SOA is accelerating in response to escalating business requirements, the emergence of Web and Web services standards (such as WSDL and SOAP) and the improving availability of SOA-capable development tools and applications. Competition, globalization and technology advances are driving companies to change their products, business processes and prices more frequently than they did before the mid-1990s. The growing use of BPM and BAM is also causing companies to use more SOA because BPM and BAM are more-effective and easier to develop when using SOA. Vendors of middleware, development tools and packaged applications have committed to moving to SOA, and their product lines are well into the transition. User companies are moving more slowly, on average, and they are experiencing varying degrees of difficulty in ramping up their use of SOA. These difficulties hinder, but will not prevent, the spread of SOA throughout the application portfolios of large companies. The growing, if limited, practical experience with SOA has demonstrated the real costs and benefits of the transition to SOA. SOA skepticism is gradually giving way to a realistic anticipation of costs and benefits. Development and management best practices for SOA are still not fully mature, but companies are largely satisfied with their experience with it.

User Advice: Use SOA when designing new business applications, particularly those whose life spans are expected to be more than three years and that will undergo continuous refinement, maintenance or enlargement. SOA is especially well-suited to building composite applications. When buying packaged applications, rate those that implement SOA more highly than those that do not. Also, use SOA in application integration scenarios that involve composite applications that tie new logic to purchased packages, legacy applications or services offered by other business units. However, do not discard non-SOA applications in favor of SOA applications just on the basis of architecture. Discard non-SOA applications only if there are compelling business reasons why the non-SOA application has become unsatisfactory. Continue to use non-SOA architecture styles for some new, tactical applications of limited size and complexity, and for minor changes to installed non-SOA applications. Recognize that there are multiple patterns within SOA (such as multichannel applications, composite applications, multistep process flows and event-driven SOA), and each of these has its own best practices for design, deployment and management.

Business Impact: SOA is a durable change in application architecture, like the relational data model and the graphical user interface. The main benefit of SOA is that it reduces the effort and time needed to change application systems to support changes in the business. The implementation of the first SOA application in a business domain will generally be as difficult as, or more difficult than, building the same application using non-SOA designs. However, subsequent applications and changes to the initial SOA application are easier, faster and less expensive because they leverage the SOA infrastructure and previously built services. SOA is an essential ingredient in strategies that seek to enhance the agility of a company. SOA also reduces the cost of application integration, especially after enough applications have been converted or modernized to support an SOA model.

Benefit Rating: Transformational

Market Penetration: Twenty percent to 50% of target audience

Maturity: Early mainstream

Recommended Reading: "Five Principles of SOA in Business and IT"

"SOA: Where Do I Start?"

"Applied SOA: Transforming Fundamental Principles Into Best Practices"

Climbing the Slope

Enterprise Software Change and Configuration Management Analysis By: James Duggan

Definition: Enterprise software change and configuration management is an integrated business process for software change and configuration management, evolving to encompass a full range of change processes and governance. The functionality supports the deployment of extended life cycle management processes, which extend and integrate with requirements elicitation and tracing tools, test planning tools and project portfolio tools. Bridges to production release and distribution processes are being improved. Organizations are seeking the key enablers of analysis and control regarding how development changes affect production. In addition, organizations are moving to standard solutions across teams and locations to improve control, reduce training costs, and improve the capability to support multisourcing and other geographically distributed development strategies.

Position and Adoption Speed Justification: Smaller vendors are completing offerings, but lack of user readiness will slow the implementation of integrated business processes.

User Advice: Changes come in steps in this technology, and are driven by changes in sourcing and delivery. Identify the software delivery processes (used for the governance and management of version, configuration and change) that will have to be upgraded. Create policies and schedules that will encourage incremental adoption as increased maturity is required.

Business Impact: Explicit processes reduce cost and risk, and enhance agility in response to business needs. Sarbanes-Oxley and similar initiatives have caused IT auditing standards to be raised, thus demanding a chain of evidence.

Benefit Rating: Moderate

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: CA; IBM; MKS; Serena; Telelogic

Enterprise Portals Analysis By: David Gootzit


Definition: A portal is Web software infrastructure that provides access to and interaction with relevant information assets (such as information/content, applications and business processes), knowledge assets and human assets by select targeted audiences, delivered in a highly personalized manner. Enterprise portals may face different audiences, including employees (business-to-employee — B2E), customers (business-to-consumer) or business partners (business-to-business). B2E portals are the most relevant type of enterprise portal to the high-performance workplace, but portals serving other audiences also play important roles.

Position and Adoption Speed Justification: Portals continue to be one of the most highly sought interfaces across Fortune 2000 enterprises. They're fundamental technical components of the high-performance workplace, whether as a replacement for a first-generation intranet, the cornerstone of knowledge management initiatives, or a B2E portal that serves as the primary way for employees to access and interact with back-end systems and repositories.

User Advice: Enterprise portal use has reached mainstream enterprises. B2E portal deployments are no longer limited to early adopters and technologically aggressive enterprises. Organizations are evaluating horizontal portal technology to augment their customer and partner-facing Web presences. The personalized delivery of and interaction with relevant applications, content and business processes can yield many benefits at the enterprise level, primarily focused on reducing process cycle times and improving the quality of process execution.

Many enterprises that have deployed portals find themselves facing multiple, siloed deployments using different portal frameworks. These enterprises should investigate appropriate portal containment and rationalization policies. Enterprise portals are incorporating RIA technologies, primarily in the form of Ajax, to improve the quality of the user experience they deliver.

Business Impact: The benefits of enterprise portals include controlling the "infoflood," providing single sign-on, enhancing customer support and enabling tighter alignment with partners. The benefits of internally facing portals include cost avoidance via employee self-service, but the most compelling business impact can be improved business agility, velocity and throughput. Externally facing portals can lead to increased revenue and profitability. Enterprise portals' biggest business impact is in reducing cycle times and improving the quality of process execution.

Benefit Rating: Transformational

Market Penetration: Twenty percent to 50% of target audience

Maturity: Early mainstream

Sample Vendors: BEA Systems; BroadVision; Fujitsu; IBM; Microsoft; Oracle; SAP; Sun; Tibco Software; Vignette

Recommended Reading: "Magic Quadrant for Horizontal Portal Products, 2006"

"Hype Cycle for Portal Ecosystems, 2006"

Microsoft .NET Application Platform Analysis By: Yefim Natis; Mark Driver

Definition: Microsoft does not identify a product that is an application server. However, its Microsoft Windows offering contains all the functional elements of an enterprise application server (EAS). Despite the lack of a named EAS product, Microsoft clearly (and effectively) competes against the leading EAS vendors. We refer to the subset of Microsoft Windows, together with the other Microsoft offerings that amount to an EAS and a larger application platform suite, as the Microsoft Application Platform (MSAP). Today, MSAP is based on the .NET architecture. MSAP includes the .NET Framework (which includes ASP.NET), Internet Information Server, enterprise services (such as COM+), Microsoft Message Queuing, BizTalk Server, Office SharePoint and more. The release of Windows Server 2008 (formerly known as Longhorn) will likely make some changes and additions to this list, including, most notably, Windows Communication Foundation. It is already clear that the .NET Framework 3.0 adds significant enrichment to the previous versions of MSAP in the areas of productivity, support of SOA, innovations in support of modern user experience and multiprotocol integration.

MSAP competes against Java Platform, Enterprise Edition (Java EE) and other enterprise platform architectures in high-end enterprise projects. MSAP also competes against the PHP platform, Ruby on Rails, ColdFusion and open-source Java frameworks, such as Spring and Struts, in lower-end, productivity-oriented enterprise projects.

Position and Adoption Speed Justification: The technical quality and quality of service of .NET are suitable for the majority of enterprise projects. However, Microsoft's business strategy for mission-critical projects, including support, account management, long-term continuity in the relationship and its software offerings, lags behind leading enterprise vendors' strategies. Microsoft does not have a good reputation for seeing its customers through long-term software architecture endeavors. Its exclusive commitment to Windows as the only OS platform reduces the appeal of MSAP in large-scale enterprise settings. The impending replacement of Windows Server 2003 in the 2008 time frame, and the fundamental platform changes anticipated with it, continue to remind high-end enterprise platform users that MSAP has not yet reached dependable maturity levels, despite its increasing technical strengths.

However, in lower-end enterprise settings, where MSAP dominates with its productivity, ubiquity of skills and relatively low cost, MSAP remains a strong and growing option for the majority of projects. Yet, MSAP's market share has not been increasing dramatically lately because of the growing number of Java and other high-productivity alternatives, including open source.

MSAP is a technically strong platform option. It is used widely for small and midsize software projects and, on occasion, for very large enterprise projects. Because of business strategy shortcomings, MSAP is not considered for high-end projects as often as its technical capabilities would justify. MSAP's market share is stable and will likely remain at present levels until the next generation of MSAP is available (and proven) with the release of Windows Server 2008, and until Microsoft's business strategy in high-end enterprise settings better reflects the requirements of that environment.

User Advice: If you choose to use MSAP, then you have to use the Windows OS platform. If that is acceptable, then consider MSAP for small and midsize software projects without limitations. For large-scale projects, recognize that the technical quality of the MSAP technology is likely sufficient for the requirements of the project. However, to achieve the "whole product" experience, you will need to rely on third-party system integrators.

Microsoft excels at providing strong developer productivity at the cost of long-term flexibility. Consider MSAP as one element in a larger IT strategy (for example, one that might include Java EE) or as the principal strategic platform when a Microsoft-centric strategy is acceptable or preferable.

Business Impact: MSAP frees IT organizations from lock-in to a single programming language (such as Java or PHP), but it locks the project into the Windows OS and only the hardware options available for Windows. MSAP can be a lower-cost option for enterprise platform technologies. However, in larger settings, this assumption is not always true and must be verified with real numbers. The history of significant discontinuities as Microsoft releases major new versions of its MSAP technologies has also increased the cost of long-term use of MSAP. Above all, however, users who are happy to use Windows infrastructure for all their development needs find MSAP and its development environment easy to learn, easy to use and easy to staff, which are major factors in reducing the costs and improving the time-to-market of software projects.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Early mainstream

Sample Vendors: Microsoft

Recommended Reading: "Magic Quadrant for Enterprise Application Servers, 2Q06"

OOA&D Methodologies Analysis By: Michael Blechar; David Norton

Definition: Object-oriented analysis and design (OOA&D) methodologies are used by IT professionals to convert business processes and requirements into software solutions in the form of objects, components and services.

Position and Adoption Speed Justification: Virtually every major software vendor is advising its customers to move to OOA&D methods, in conjunction with moving to service-oriented development targeting J2EE and .NET platforms. Most organizations are adhering to this advice by converting to OOA&D methods and tools to document requirements and perform logical design as a front end to Java and C# code construction. The UML, owned by the OMG consortium, is the de facto standard for OOA&D.

Domain-Specific Language (DSL) methods are emerging as a natural extension of and complement to UML. DSL is helping to remove the semantic gap between problem and solution domains by using tailor-made syntax and semantics. However, DSL-based modeling is largely seen as a Microsoft initiative — part of its Software Factories approach. This has split the modeling community, and as a result, a period of instability will delay the broad adoption of DSL.

Microsoft believes that you shouldn't bother with UML for automated model-based development, but should instead use purpose-built DSLs right from the start: in its view, UML is for documentation, and DSLs are for development.

User Advice: Implement OOA&D methods and tools in conjunction with the development of service-oriented applications. As a best practice, use BPA/modeling methods and tools at a conceptual/planning and analysis level to define business processes, workflows and events; then bridge those models to OOA&D tools to continue IT modeling for the processes to be implemented in software solutions.

DSLs should be considered as a complementary extension to UML. Most organizations will find that they need both techniques to tackle the range and complexity of projects, and to balance modeling return on investment against productivity and quality.
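The semantic-gap argument above can be illustrated with a tiny internal-DSL sketch. The Process/step vocabulary below is hypothetical; it stands in for the tailor-made syntax a real DSL tool would provide, and is not drawn from any vendor's product.

```python
# Illustrative sketch: a minimal internal DSL for describing a business
# process. All names (Process, step, describe) are invented for this example.

class Process:
    """Collects named steps so a process definition reads like a domain statement."""

    def __init__(self, name):
        self.name = name
        self.steps = []

    def step(self, description):
        self.steps.append(description)
        return self  # return self to allow chaining, so the model reads top-to-bottom

    def describe(self):
        return f"{self.name}: " + " -> ".join(self.steps)

# The same flow in a generic (UML-like) vocabulary would need activity,
# transition and guard objects; the DSL keeps only domain words.
claims = (Process("Claim handling")
          .step("receive claim")
          .step("validate policy")
          .step("pay or reject"))
print(claims.describe())
```

The point of the sketch is the vocabulary: a business analyst can read and verify the chained model directly, which is what the "tailor-made syntax and semantics" claim amounts to in practice.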

Business Impact: Improved specification languages close the gap among business requirements, designs and executables.

Benefit Rating: Moderate

Market Penetration: Twenty percent to 50% of target audience

Maturity: Early mainstream


Sample Vendors: Borland; Embarcadero Technologies; IBM; Microsoft; Sparx Systems; Sybase; Telelogic

Linux as a Mission-Critical DBMS Platform Analysis By: Donald Feinberg

Definition: The Linux OS environment can be used as a platform for DBMS deployments. This includes all the necessary infrastructure involved with managing and supporting the Linux environment in conjunction with a DBMS to support all types of applications, from simple to complex, as well as providing high availability, disaster recovery and clustered environments.

Position and Adoption Speed Justification: Again, during the past year, we saw an increase in the adoption rate of Linux as a DBMS platform, primarily because of the increase in support from hardware and management tool vendors, the increasing maturity and manageability of the DBMS engines, and the increase in expertise levels of IT staff with Linux. In addition, there has been a marked increase in the adoption of Oracle RAC on Linux, and Teradata released a Linux Version as well. All the DBMS vendors with Linux versions are showing increases in Linux revenue (see "Market Share: Relational Database Management Systems by Operating System, Worldwide, 2006"). The speed of adoption will continue to increase as skill levels increase, management tools mature and risk levels drop.

User Advice: Unless your enterprise is a "Microsoft Windows only" environment, you must begin using Linux as a deployment choice for simple, noncritical DBMS applications. This will build your skills and knowledge base with Linux and DBMSs so that, as the risk level drops, you can use Linux as a regular platform choice. Do not change to Linux, your DBMS version and your hardware all at the same time.

If you are using Linux today, then expand its use to take advantage of commodity hardware and its associated cost savings. Using it in mission-critical and highly available environments is acceptable if you have the proper Linux resources.

If your enterprise is not risk-averse and is interested in adopting new features early, then Linux is a necessity. Use care in choosing the Linux hardware and software environment: many options from small vendors tie cost savings to greater risk, and they carry the added risk of not being certified by the DBMS vendors.

If your organization is risk-averse, then choose hardware and software options from larger vendors with proven configurations on Linux, such as Dell, HP and IBM, and be sure your total configuration is certified by your DBMS vendor.

Business Impact: The No. 1 advantage to Linux as a DBMS platform is the promise of lower total cost of ownership. In enterprises starting out with Linux, these cost savings may be hidden for the first several years as resources become trained and competent in Linux, and upfront purchases of management tools are made. Most organizations report cost advantages after attaining these skills and tools. There is also a greater level of flexibility with a Linux platform because of the number of different hardware choices available. Finally, larger and more-affordable configurations can be achieved with commodity blades and clustered build-out technology of many nodes, each with fewer CPUs than the maximum in a symmetric multiprocessing environment. Oracle RAC is being implemented in this environment with success.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience


Maturity: Early mainstream

Sample Vendors: IBM; MySQL; Oracle; Sybase; Teradata

Recommended Reading: "Market Share: Relational Database Management Systems by Operating System, Worldwide, 2006"

Performance Testing Analysis By: Thomas Murphy

Definition: Performance testing tools are used to simulate production loads to ensure that the system being tested will perform adequately. This includes stress testing to ensure application stability in adverse conditions. Leading tools are integrated into test management solutions and support data retrieval from a wide variety of sources. Although the concept and tools are generally mature, the shift toward Web 2.0 and SOA will continue to create challenges during the next two years.

Position and Adoption Speed Justification: This technology is well-understood and provided by all major players. Outsourced performance testing has also been used successfully. However, organizations still struggle to achieve accurate results. SOA increases the potential for nonlinear processes, making performance testing more complex than a simple end-to-end script.

User Advice: Define best practices to follow when using these tools, and investigate virtualization and simulation tools to improve the completeness of tests. Use production insight to understand true workloads and test in an environment that matches production as closely as possible.
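As a rough illustration of the advice above, the sketch below simulates concurrent load against a stand-in request handler and reports latency percentiles. handle_request and the workload numbers are placeholders for a real system under test, not part of any vendor's tool.

```python
# Hedged sketch: drive a stand-in request handler with concurrent load and
# summarize latency. In real use, handle_request would call the actual service.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_i):
    start = time.perf_counter()
    time.sleep(0.001)  # stand-in for real service work
    return time.perf_counter() - start

def run_load(concurrency, requests):
    # Fan requests out over a thread pool to approximate concurrent users.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(handle_request, range(requests)))
    latencies.sort()
    return {
        "median": statistics.median(latencies),
        "p95": latencies[int(0.95 * len(latencies)) - 1],
        "max": latencies[-1],
    }

stats = run_load(concurrency=8, requests=200)
print(stats)
```

Reporting percentiles rather than averages matches the "understand true workloads" advice: a healthy median with a poor p95 is exactly the kind of result a production-like test is meant to expose.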

Business Impact: Proper load testing ensures that you don't overspend on hardware and software licenses during deployment, and that application failures are minimized.

Benefit Rating: Moderate

Market Penetration: Twenty percent to 50% of target audience

Maturity: Early mainstream

Sample Vendors: AdventNet; Borland; Compuware; Empirix; HP; IBM; Microsoft; RadView Software; Worksoft

Open-Source Development Tools Analysis By: Mark Driver

Definition: Open-source tools for analyzing, designing, developing, testing and debugging programs.

Position and Adoption Speed Justification: Open-source application development tools have existed for many years. Until recently, however, most were focused on low-level functions, such as compilers and parsers. As the open-source model matures and expands, so do the tools that support it. In particular, mainstream IT developers have been introduced to open-source development tools through the use of dynamic languages such as Perl, PHP, Python and Ruby. No open-source toolset has had as much influence in recent years as the Eclipse project (www.eclipse.org). Eclipse serves as the foundation for most large-scale application development tool providers (with the exceptions of Microsoft and Sun). Eclipse has become the underlying framework and workbench for nearly all next-generation SOA and Web development efforts. An alternative OSS IDE, NetBeans (www.netbeans.org), is gaining momentum as well, proving that more than one open-source solution can prosper in a market.

Even as some tools mature, others begin to emerge within mainstream efforts. Open-source testing, modeling and configuration tools have begun to put serious pressure on closed-source incumbent market leaders and show promise to further expand market influence in the coming years.

User Advice: Open-source application development tools can provide a compelling balance among cost, performance and features. However, most (including Eclipse) come with higher levels of self-support efforts than closed-source alternatives. Consider direct adoption of open-source tools when they fit a strong need, but also consider tools that are based on open-source technologies as well. Finally, consider the need for external service and support for all application development tools when appropriate — just because open source offers the option of self-support does not mean that every application development organization should look toward this strategy exclusively.

Business Impact: As in other markets, open-source development tools will commoditize market dynamics, making a universally accessible level of technology available to developers and technology providers alike. Vendors that find a way to coexist with (and even synergize with) open source will build the strongest long-term market presence. Vendors that choose to compete directly against these open-source tools will find it increasingly difficult to maintain their market positions.

Benefit Rating: Moderate

Market Penetration: Twenty percent to 50% of target audience

Maturity: Early mainstream

Sample Vendors: ActiveState; Apache Software Foundation; CollabNet; Eclipse Foundation; Free Software Foundation; Interface21; NetBeans.Org; Zend

Recommended Reading: "Managing Open-Source Service and Support"

"Learn the Basic Principles of Open-Source Software"

Business Process Analysis Analysis By: Michael Blechar

Definition: Gartner defines BPA as the business modeling space in which business professionals (that is, business users, business process architects and business process analysts) and IT designers collaborate on business process designs and architectural frameworks, generally in support of business process improvement initiatives. Awareness of the details of the business process flow improves the quality of the requirements assessment for custom development, as well as that of as-is and to-be gap analysis in packaged implementations. Furthermore, BPA can serve as a bridge to improve the alignment of IT efforts with business initiatives.

Position and Adoption Speed Justification: Process modeling as part of BPA is becoming a starting point for the growing number of BPM and compliance projects. Conversely, the majority of BPM projects start with workflow tools in support of redesign, and subsequently add a BPA tool to better understand their processes and simulate possible changes. Most BPA tools include business activity monitoring (BAM) capabilities, or their vendors partner with BAM tool vendors, providing executive dashboards into business processes.
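The "simulate possible changes" step above can be sketched with a toy as-is versus to-be comparison. The step names and durations below are invented for illustration; a real BPA tool would model volumes, branching and resource contention as well.

```python
# Toy sketch: compare cycle time of an as-is process against a proposed
# to-be redesign. All step names and hour figures are invented examples.
AS_IS = {"receive": 2.0, "manual review": 8.0, "approve": 1.0}
TO_BE = {"receive": 2.0, "auto screen": 0.5, "manual review": 3.0, "approve": 1.0}

def cycle_time(process):
    # Sum per-step durations (hours) for a single pass through the process.
    return sum(process.values())

print("as-is cycle time :", cycle_time(AS_IS))
print("to-be cycle time :", cycle_time(TO_BE))
print("projected saving :", cycle_time(AS_IS) - cycle_time(TO_BE))
```

Even this trivial comparison shows the value of making the model explicit: the candidate change (adding an automated screen to shrink manual review) can be argued with numbers rather than opinions before any software is built.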


User Advice: Use BPA tools to support EA and BPM initiatives, and as a starting point for architected, model-driven application development and integration projects. BPA tools bring a powerful, visual comprehension capability to a broad audience, so use them to improve process understanding, the clarity and quality of requirements, and the credibility of business process improvement projects.

Business Impact: Understanding complex business processes is a significant challenge. Although BPM is a discipline in its own right, tool assistance is essential for modeling processes and realizing cost and time savings. IT organizations may have to contribute substantial modeling and operational support to enable BPA capabilities for the business.

Benefit Rating: Moderate

Market Penetration: Five percent to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Casewise; iGrafx; IBM; IDS Scheer; Mega; Microsoft; Proforma; Telelogic

Recommended Reading: "Magic Quadrant for Business Process Analysis Tools, 2006"

"Consider Eight Functionality Selection Criteria When Choosing BPA Tools"

"Focusing on a Business Process Analysis Tool Acquisition"

Entering the Plateau

Automated Testing Analysis By: Thomas Murphy

Definition: Functional testing tools designed to automate retesting and find regressions; they are used to assess progress for every build of the project. These tools are being integrated into the life cycle for improved reporting capabilities. They are also beginning to address packaged applications and SOA (detailed in separate profiles), as well as emerging, next-generation, scriptless-testing tools.

Position and Adoption Speed Justification: This is a mature technology with broad adoption and an understanding of the capabilities and limitations of what can be automated, the cost of automation and maintaining automation suites. Improved integration into the life cycle has enhanced the value of automated testing as organizations have shifted toward more-iterative development paradigms. We are seeing a follow-on wave of products for SOAs and packages, as well as the removal of the need to maintain scripts. Note that automated testing replaces the application quality ecosystem item from the 2005 Application Development Hype Cycle. We have expanded the point from the 2006 Hype Cycle to better represent where different parts of the overall quality ecosystem are located in 2007.

User Advice: Organizations can reduce the cost of retesting software and better understand the status of a project by using automated testing tools. Although automated, these tools can require a good amount of manual intervention to maintain test scripts, limiting the ability of these products to keep up with rapidly changing code.
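The retesting cost trade-off above can be seen in even a minimal regression suite. price_order below is a hypothetical unit under test, invented for this sketch; the suite pins its behavior so every build can be checked automatically, and any change that breaks a pinned behavior fails the run.

```python
# Sketch of an automated regression check using the standard unittest module.
# price_order is a hypothetical unit under test.
import unittest

def price_order(quantity, unit_price, discount=0.0):
    if quantity < 0 or unit_price < 0 or not 0.0 <= discount <= 1.0:
        raise ValueError("invalid order")
    return round(quantity * unit_price * (1.0 - discount), 2)

class PriceOrderRegression(unittest.TestCase):
    # Each test pins current behavior; a later code change that alters it
    # surfaces here as a regression on the next build.
    def test_basic(self):
        self.assertEqual(price_order(3, 9.99), 29.97)

    def test_discount(self):
        self.assertEqual(price_order(10, 5.00, discount=0.2), 40.00)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            price_order(-1, 5.00)

# Run the suite programmatically, as a build script would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceOrderRegression)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all regressions passed:", result.wasSuccessful())
```

The maintenance cost the profile warns about shows up here too: every intentional change to price_order forces an edit to the pinned expectations, which is the script-maintenance burden scaled down to three tests.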

Business Impact: High numbers of defects still seep through to deployment because of inadequate testing. Automation and metrics provide a way to manage the software delivery process and drive higher quality. This results in reduced operational and maintenance costs.

Benefit Rating: High


Market Penetration: Twenty percent to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Borland; Compuware; Empirix; IBM; Mercury; Original Software; RadView; Seapine Software

Java Platform, Enterprise Edition Analysis By: Yefim Natis

Definition: Java EE is a Sun-sponsored and Java Community Process (JCP)-managed architecture and programming model for multiplatform Java business applications. (Prior to this version, the architecture was named J2EE). Java EE is implemented as Java EE application servers by many commercial and some open-source, community-based vendors. JCP provides compliance test cases and reference implementations for Java EE components. Many vendors claim that their Java EE-compliant application servers pass all the test cases. The test cases are available to licensees at a substantial fee. Some smaller vendors avoid the cost and claim compatibility without having passed the tests.

A Java EE application server offers several container models (for example, Servlets, session EJB, entity EJB, Java Connector Architecture and Message-Driven Beans) and other APIs, such as Java Management eXtensions, Java Messaging Service (JMS) and Java API for XML-Web services. It is a comprehensive platform architecture for modern application designs that:

• Provides unprecedented application portability

• Can offer near-mainframe-level quality of service (in some more-advanced implementations)

• Is backed by a reasonably independent and large consortium of vendors, the members of JCP

• Is too complex for many software projects (although Java EE 5 is an improvement) and supports only one programming language (Java)

• In its complete rendition, offers too much functionality for many business applications

• Provides basic support for EDA via the MDB and JMS programming models

• Is designed well for distributed homogeneous applications, but lacks completeness for full support of heterogeneous service-oriented applications

Position and Adoption Speed Justification: The Java EE programming model dominates high-end enterprise software projects. Thousands of mainstream enterprises use Java EE as the platform for their mission-critical applications. Leading application ISVs use Java EE for their new software development. The platform implementations range from high-cost, high-end versions to low-cost, mass-market versions. Closed-source and open-source options compete for new projects.

The Java EE specification changes have become gradual, incremental and infrequent. Discontinuous innovation is rare and typically addresses isolated specification shortcomings. The leading products are proven, dependable and interchangeable for most applications. The use practices and the development methodologies are well-established, and the skilled resources are broadly available and steadily increasing. The product is clearly near its plateau of productivity, although not quite there yet. The new Java EE 5 has introduced a mild disruption to the installed base. Introduction of EJB v.3 is technically incremental, but, in effect, discontinuous. It is a major re-architecture of the Java EE component model and, although the previous versions are required to be supported by compliant implementations, no new development should invest in anything but the more-efficient and easier-to-use EJB v.3, which, in turn, requires new skills and design practices.

User Advice: Most mainstream business applications do not need the full power of a high-end Java EE platform. Considering the high degree of compatibility between implementations of Java EE, use the proven low-cost offerings for less-demanding parts of the application, and invest in the high-end alternative platforms only for select high-demand parts of the application environment.

• Consider the open-source option as a viable alternative to the more-established, closed-source implementations.

• Expect price decreases in the low end of Java EE and price increases in the less-standard, high-end and extended Java EE arena.

• Expect continuing emergence of new easy-to-use programming model frameworks for Java (such as Interface21's Spring Framework) and other core languages (such as PHP, Ruby and Groovy), which are likely to confine the full and native Java EE implementations to mostly high-end application projects.

Business Impact: In mainstream enterprise practice, outside Windows-dedicated software engineering, Java is approaching the status that COBOL held in the 1970s and 1980s. The significant degree of portability and the well-established interoperability of the licensed Java EE implementations reduce costs of software engineering by making vendor implementations and skilled engineering resources readily available in-house, through system integrators, or through offshore outsourcing.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Apache Software Foundation; BEA Systems; IBM; Oracle; Red Hat (JBoss); SAP; Sun

Recommended Reading: "Key Issues for Platform Middleware"

"Magic Quadrant for Enterprise Application Servers, 2Q06"


Appendices

Figure 3. Hype Cycle for Application Development, 2006


[Figure 3 is a Hype Cycle chart plotting visibility against time through the phases Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity, with a legend for years to mainstream adoption (less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; obsolete before plateau). As of December 2006, the chart plots: Automated Testing; .NET Managed Code Platform; Business Process Analysis; Basic Web Services; Performance Testing; Business Rule Engines; UML Methodologies; Enterprise Software Change and Configuration Management; ARAD SODA; Business Application Package Testing; Advanced Web Services; Business Process Management; Unit Testing; SODA, Composite Applications and ISE; Architected, Model-Driven SODA; Enterprise Architecture Tools; Scriptless Testing; BPM Suites; Offshore Outsourced Testing; Metadata Repositories; Collaborative Tools for the Software Development Life Cycle; Application Quality Dashboards; SOA Testing; SDLC Security Methodologies.]

Source: Gartner (December 2006)


Hype Cycle Phases, Benefit Ratings and Maturity Levels

Table 1. Hype Cycle Phases

Phase Definition

Technology Trigger A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial, off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters the Plateau.

Years to Mainstream Adoption The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (January 2007)

Table 2. Benefit Ratings

Benefit Rating Definition

Transformational Enables new ways of doing business across industries that will result in major shifts in industry dynamics

High Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise

Moderate Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise

Low Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings

Source: Gartner (January 2007)

Table 3. Maturity Levels

Maturity Level | Status | Products/Vendors

Embryonic | In labs | None

Emerging | Commercialization by vendors; pilots and deployments by industry leaders | First generation; high price; much customization

Adolescent | Maturing technology capabilities and process understanding; uptake beyond early adopters | Second generation; less customization

Early mainstream | Proven technology; vendors, technology and adoption rapidly evolving | Third generation; more out of box; methodologies

Mature mainstream | Robust technology; not much evolution in vendors or technology | Several dominant vendors

Legacy | Not appropriate for new developments; cost of migration constrains replacement | Maintenance revenue focus

Obsolete | Rarely used | Used/resale market only

Source: Gartner (January 2007)

RECOMMENDED READING

"Understanding Gartner's Hype Cycles, 2007"


REGIONAL HEADQUARTERS

Corporate Headquarters 56 Top Gallant Road Stamford, CT 06902-7700 U.S.A. +1 203 964 0096

European Headquarters Tamesis The Glanty Egham Surrey, TW20 9AW UNITED KINGDOM +44 1784 431611

Asia/Pacific Headquarters Gartner Australasia Pty. Ltd. Level 9, 141 Walker Street North Sydney New South Wales 2060 AUSTRALIA +61 2 9459 4600

Japan Headquarters Gartner Japan Ltd. Aobadai Hills, 6F 7-7, Aobadai, 4-chome Meguro-ku, Tokyo 153-0042 JAPAN +81 3 3481 3670

Latin America Headquarters Gartner do Brazil Av. das Nações Unidas, 12551 9° andar—World Trade Center 04578-903—São Paulo SP BRAZIL +55 11 3443 1509