
The performance of Web services applications in Windows 2000: monitoring ASP and COM+

Mark B. Friedman
Demand Technology

1020 Eighth Avenue South, Suite 6
Naples, FL USA 34102
[email protected]

Abstract. Microsoft's proprietary ASP, COM+, and .Net software development technologies provide a powerful set of application runtime services that target the development of enterprise-class applications. When it comes to deploying ASP, COM+, and .Net applications, the Microsoft strategy is noticeably less coherent. Performance monitoring and capacity planning for the Microsoft application development platform is challenging due to significant gaps in the measurement data available and familiar problems correlating data from distributed transaction processing components running across multiple machines. This paper focuses on the current ASP and COM+ runtime environment and the problems that arise in monitoring the performance of web-based transaction processing applications that rely on these services.

Introduction. Viewed as a whole, Microsoft's proprietary development technology is noteworthy for both its breadth and depth. Supplying both development tools and runtime services, Microsoft provides developers with an integrated software development platform for writing applications that run on top of its basic operating system services. ASP, COM+, and the .Net initiatives from Microsoft - combined with a powerful set of COM+ application runtime services - target the development of enterprise-class applications. According to one observer, the goal of the COM+ runtime services is "to simplify the development of highly concurrent systems."[1] By focusing on the productivity of software developers, Microsoft tries to make it easy to develop applications that run on its server platform.

When it comes to deploying ASP, COM+, and .Net applications and monitoring the performance of the mission-critical production systems built to exploit these services, the Microsoft strategy is noticeably less coherent. Performance monitoring and capacity planning for the Microsoft application development platform is challenging due to significant gaps in the measurement data and familiar problems correlating data from distributed transaction processing components running across multiple machines. While Microsoft and various third-party vendors are working to fill these gaps, it appears likely that this environment will continue to prove challenging for the foreseeable future.

This paper focuses primarily on the current ASP and COM+ runtime environment and the problems that arise in monitoring the performance of transaction processing applications that rely on these services. It will discuss the performance monitoring limitations common to both environments that make it difficult to configure and tune large-scale transaction processing application systems today. The current situation being far from hopeless, the paper will then describe the interfaces that are available to gather additional application runtime performance statistics. Finally, it will review promising new developments that should bring some relief to systems professionals responsible for maintaining adequate computer and network capacity for applications on the Microsoft platform.

Background. Over the past several years, Microsoft Corporation has diligently pursued a strategy to develop an application development platform on top of its Windows NT operating system architecture. This application development platform

• relies on object-oriented programming paradigms,

• exploits a variety of industry-standard, open protocols like HTTP, XML and SOAP, and

• is wrapped into a proprietary framework called COM for building reusable code components.

COM, which stands for the Component Object Model, is an object-oriented programming (OOP) environment. COM is also the centerpiece of a broader application development framework designed to support large-scale, enterprise applications. As COM has morphed into COM+ and now .Net, Microsoft has made available extensive COM component runtime services, including an SQL database management system (MS SQL Server) and a transaction monitor called Microsoft Transaction Server (MTS), which can be readily accessed by COM programs.

Originating on Intel servers with inherently limited scalability, the Microsoft strategy embraces distributed processing. To enhance the scalability of its platform using a distributed processing model, Microsoft provides an inter-process or inter-computer message queuing service called MSMQ and three forms of clustering technology:

• Network Load Balancing (NLB), which operates a cluster of up to 32 front-end machines that share a single, virtual IP address,

• Component Load Balancing (CLB), which operates exclusively on COM+ middleware programs, and

• Microsoft Cluster Server (MCS), aka Wolfpack, which provides a high availability (HA), fault tolerant cluster involving two similarly configured machines.

In addition, a broad range of Independent Software Vendors (ISVs) provide a variety of pre-packaged COM components that support the Microsoft development platform. Together, these products assist a thriving community of application developers who work with Microsoft's proprietary application development technology.

Microsoft also builds a complete set of application development tools, including compilers, editors, debuggers, and a code performance profiler. In a break with a long industry tradition of promoting interoperability at the computer programming language level, Microsoft has chosen to build application development tools that specifically support its proprietary application development platform. Microsoft's latest addition to its C and C++ language family, a new language known as C# (pronounced "C Sharp"), is designed specifically to support COM+ and .Net program development.

Windows NT application servers. The client side of a COM+ transaction processing application normally uses the User Interface services associated with the Internet; namely, HTTP running on top of TCP/IP. Of course, the original focus of the Microsoft Windows operating system was its Graphical User Interface (GUI), modeled on research into high resolution graphics and mouse-manipulated menus performed at Xerox PARC circa 1980 and later commercialized in the Xerox Star computer systems and derivative products at Apple Computer, including the Lisa and Macintosh computers. This is a much different lineage than the document-oriented markup languages such as SGML and the hypertext paradigm credited to Ted Nelson that the designers of HTML adopted. As many organizations are heavily invested in the Microsoft development platform at various stages in its evolution, it may be helpful to trace some of the major milestones in this line of development as Microsoft has shifted its aim over time to focus on web-enabled applications.

Early versions of Microsoft Windows grafted graphical elements onto Microsoft's DOS operating system (MS-DOS), which supported Intel-compatible 16-bit, and later 32-bit, computers. When the joint operating system development project between Microsoft and IBM that produced OS/2 versions 1 and 2 foundered, Microsoft initiated an ambitious effort to produce an advanced operating system on its own, which became Windows NT version 3, introduced to the public in 1993. As discussed in [2], the goals of the Windows NT development project were to build a reliable and secure computing platform on top of next generation computer hardware. The clear intent was ultimately to replace MS-DOS and early versions of Windows, which were welded on top of a DOS single-user kernel, as a foundation for future application development.

With Microsoft focused on its huge installed base of desktop applications that exploited the Windows GUI interface, Windows NT had to be able to run Windows desktop apps. From an initial version of the OS that was essentially agnostic with regard to which GUI ran on top of it, Windows NT version 4 was redesigned and re-positioned as "a better Windows than Windows," as Microsoft standardized its GUI application programming interface (API) into a common specification called Win32 that was supported on all 32-bit forms of Windows, including Windows 9x and ME and Windows NT. From the beginning, Windows NT also supplied a significantly more elaborate set of application runtime services, suited more to multi-user server-based applications than the traditional desktop applications associated with Microsoft Windows. The NT operating system's support for a program thread Scheduler with priority queuing and pre-emptive multitasking in version 3, followed by symmetric multiprocessing (SMP) support in version 4, made it far more suitable for enterprise-class server applications than MS-DOS based versions of Windows. Following the initial delivery of its Back Office suite of server applications for Windows NT version 3.5, including MS SQL Server and MS Exchange, Microsoft succeeded in delivering an initial set of integrated runtime services capable of supporting large-scale, mission-critical, enterprise applications. Microsoft championed these internally-developed, multi-user server applications publicly to showcase the full range of its application runtime services [3].

Web services. In a well-publicized, sudden change in direction [4], Microsoft maneuvered in the late 1990s to absorb Web-based technology and the services associated with the burgeoning Internet into its primary application development platform. Microsoft moved quickly on a number of fronts to incorporate Internet technology. These included providing support for the Internet communication protocols associated with TCP/IP as an alternative to LAN-oriented communications protocols like Novell's IPX and NetBIOS, which prevailed in the early days of MS-DOS and Windows. This transformation was so complete that by the time Windows 2000 was released, TCP/IP had become the default networking protocol used by Microsoft-based clients [5].


In its effort to support TCP/IP networking protocols, Microsoft was able to move as rapidly as it did because it could adapt proven, open source software to the proprietary Windows NT environment. The availability of high quality, open source software implementing the Internet application protocols, including FTP, SMTP, and HTTP code, assisted the effort to add Internet Mail support to Exchange, for example, and to deliver a fully functional Web server called IIS (Internet Information Server). Ultimately, Microsoft bundled IIS into Server versions of every copy of its Windows NT (and 2000 and XP) operating system.

Meanwhile, Microsoft made a determined effort to stake out a leadership position with its Internet Explorer (IE) web browser. Exploiting Internet connectivity, browser software developed rapidly into a universal telecommunications client application, both widely available and capable of accessing the unprecedented range of "content" available on the public access World Wide Web and private "intranets." Microsoft entered the market for web browsers by licensing the source code for Mosaic, one of the more popular browsers of its day, which the company then integrated into Windows 9x and Windows 2000. Microsoft's decision to "bundle" IE into its operating systems was later challenged by its competitors, leading the US Department of Justice to prosecute Microsoft for violating the country's antitrust laws. (The court-imposed remedy for these actions is still being debated as this paper is being written.)

The legality of Microsoft's actions in this arena aside, the result of this aggressive program to incorporate Web technology into its operating systems was that Windows 2000, when it first became Generally Available, was a model TCP/IP networking client, while Microsoft's bundled Internet Explorer browser program and IIS web server application were also both widely deployed. The extent to which the technology associated with the Internet is pervasive across every major computing platform today makes Microsoft's aggressive decision to bundle basic web services into its operating system software look infallibly prescient. It was hardly a sure thing when Microsoft began the effort.

While Microsoft's actions to make IE the dominant browser program Windows-based clients use to access the Internet have been widely analyzed and discussed, Microsoft's efforts to extend its development tools to support Internet technology are often ignored. Microsoft's initial stance suggested using web-based user interfaces only as a "thin client" alternative to the more functional Win32 GUI applications typified by Microsoft's own Office productivity apps. While single user desktop applications that rely on the Win32 GUI services, like MS Word and Excel, continue to play a key role in Microsoft's application portfolio, Microsoft's adoption of web technology is so complete in its current .Net platform that web browser-based application development is considered the norm, rather than the exception. Recognizing that the Internet protocols are capable of interconnecting client-server applications with a virtually unlimited range of client devices, including full-featured desktop computers, laptops, handheld devices like phones and PDAs, and even intelligent home and industrial appliances, Microsoft's focus in .Net is on building tools to develop and deliver web services applications almost exclusively. To appreciate the full scope of Microsoft's achievements in this area, it will help to consider the evolution of the Web server environment from delivery of primarily static information contained in read-only files to web services that perform dynamic information exchange.

Dynamic web content. The shift from the display of web content contained in static files to dynamic web content generated programmatically is still underway. Initially, web content was limited to delivering static pages in standard HTML (HyperText Markup Language) format. Functionally, the web browser sends an HTTP (hypertext transfer protocol) Get Request referencing a specific html file (or the site-defined default one) identified by a URL (Uniform Resource Locator) to a web server application (e.g., Apache, Netscape, Websphere, or IIS), which returns the appropriate file data using standard TCP/IP session-oriented connections. HTML codes inserted into the file contain formatting instructions for the browser so that the data returned can be displayed properly. The hypertext aspects of HTML permit one file to reference other files designed to be embedded in the display (gif or jpeg format graphics, for example), which are known as inline files. In this situation, several Get Requests from the client to the server are required before the full display can be constructed. See Figure 1. Embedded hypertext links also support transfer of control to related web pages upon user command.

FIGURE 1. WEB SERVER PROCESSING OF STATIC HTML REQUESTS.

From the standpoint of the web server, individual Get Requests are processed independently – HTTP is both a connectionless and sessionless protocol. However, HTTP does sit in the protocol stack on top of a TCP-oriented session that is responsible for maintaining a unique connection between client and server. A unique TCP port number is assigned to each sessionless (i.e., stateless; no client state information is retained between successive interactions) connection, which, apart from periodic TCP Keep Alive messages, can remain idle for minutes, hours, and even days with no other consequences.
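The request/reply exchange described above can be sketched in a few lines of socket code. This is an illustrative model, not IIS or browser internals; the host names and paths used with it are placeholders:

```python
import socket

def build_get_request(host, path="/"):
    """Format a minimal HTTP/1.0 Get Request. Each such request is
    handled independently by the server (HTTP is stateless), even
    though it travels over a stateful TCP connection."""
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def http_get(host, path="/", port=80):
    """Send the request over a fresh TCP session and collect the reply,
    headers and file data together, until the server closes the socket."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_get_request(host, path).encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```

A page with inline gif or jpeg files would require one `http_get` call for the HTML file plus one per embedded image, which is exactly why several round trips are needed before the full display is constructed.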

While these basic building blocks of HTML-based web content are adequate for constructing useful and attractive web sites, static HTML technology is not powerful enough to build interactive web content that can be shaped by user input. The first step taken by the Internet community to augment static HTML with programmable capabilities to alter content dynamically was allowing the web server to launch a program or script in response to user input using the Common Gateway Interface (CGI). With CGI scripts capable of processing forms, a rudimentary transaction processing capability became a reality, using web browser clients to initiate programs executed on the web server, which then fashions an appropriate reply.
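A minimal CGI-style forms handler might look like the following sketch. The `name` form field and the greeting are invented for illustration; the point is simply that the server launches a program which reads the submitted input and fashions an HTML reply:

```python
import os
from urllib.parse import parse_qs

def render_reply(query_string):
    """Fashion an HTML reply from submitted form data, the way an
    early CGI script would. 'name' is a hypothetical form field."""
    fields = parse_qs(query_string)
    name = fields.get("name", ["stranger"])[0]
    body = f"<HTML><BODY>Hello, {name}!</BODY></HTML>"
    # A CGI program writes an HTTP header block, a blank line, then the body.
    return f"Content-Type: text/html\r\n\r\n{body}"

if __name__ == "__main__":
    # For a Get Request, the web server passes form input to the CGI
    # process in the QUERY_STRING environment variable.
    print(render_reply(os.environ.get("QUERY_STRING", "")))
```

Note that the server creates a separate process for every such request, which is the per-transaction overhead that ISAPI, discussed below, was designed to avoid.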

The HTTP specification also provides a simple mechanism which CGI scripts can use to store and retrieve state information about a client session on the client side of the connection. A server, when returning an HTTP object to a client (the browser), may also send a piece of state information that the client will retain locally, usually by hardening it on disk. Included in that state object is a description of the range of URLs for which that state is valid. Any future HTTP requests made by the client which fall in that range will include a transmittal of the current value of the state object from the client back to the server. For no especially compelling reason, this state object is called a cookie. The ability to maintain persistent state information about the status of a client session extends the capabilities of Web-based client/server applications into the realm of transaction processing systems.
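The cookie round trip can be modeled in a few lines. The session identifier and the `/shop` path below are invented for illustration:

```python
from http import cookies

def issue_session_cookie(session_id, path="/shop"):
    """Server side: attach a piece of state to the reply. The Path
    attribute is the 'range of URLs' for which the state is valid."""
    jar = cookies.SimpleCookie()
    jar["session"] = session_id
    jar["session"]["path"] = path
    return jar.output(header="Set-Cookie:")

def cookie_applies(cookie_path, request_url_path):
    """Browser side: the cookie is transmitted back to the server only
    on requests whose URL falls within the path it was issued for."""
    return request_url_path.startswith(cookie_path)
```

So a cookie issued for `/shop` travels with a later request for `/shop/cart.asp` but not with one for `/news/today.asp`, which is how a server can recognize successive interactions as belonging to one client session.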

Rudimentary forms processing using a combination of CGI scripts and HTTP cookies inevitably generated demand for even more Web programming capability to harness the potential power of this pervasive computing platform. Microsoft, in particular, has been active in developing several non-standard and proprietary extensions to the basic functions available in the original HTTP specification. These proprietary extensions are designed to improve web site programming using Microsoft's own development tools. Below, we will review the functional characteristics of the most important of these initiatives, including ISAPI, ASP, and COM+.

Microsoft's web application programming extensions.

ISAPI. In addition to supporting standard CGI scripts, Microsoft's IIS web server supports a proprietary ISAPI (Internet Server Application Programming Interface) interface that allows processing of HTTP Get and Post Requests using Win32 programs packaged as DLLs (dynamic link libraries). Because ISAPI extension DLLs are executables built by a compiler, they must be written in a programming language like C++. Unlike CGI requests, which require a separate process to execute, ISAPI DLLs can be multithreaded. Depending on the level of application protection chosen for running ISAPI DLLs in IIS version 5.0, an ISAPI extension can be executed

• using a thread within the IIS inetinfo.exe address space thread pool,

• on a thread from a thread pool inside a container process known as dllhost.exe, or

• on a thread in a separate process (which also happens to be an instance of dllhost.exe) dedicated to that execution instance (similar to CGI scripts).

The lowest level of protection provides the highest level of performance because the ISAPI extension DLL is dispatched on an existing thread from inetinfo's internal worker thread pool. However, this exposes the entire web site to problems if the ISAPI extension DLL misbehaves. Using a medium level of protection, IIS transfers control to a thread from a pool inside the dllhost.exe container process. At the highest level of protection available, each ISAPI DLL is executed in a separate, dedicated instance of dllhost.exe. This thread dispatching scheme is summarized in Table 1. Inetinfo invokes a COM object called the Web Application Manager, or WAM (wam.dll), to provide a consistent linkage mechanism between IIS and the ISAPI extension DLL, wherever it happens to be loaded.
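The three dispatching choices can be modeled roughly as follows. This is a sketch, not the Win32 implementation: the function names and pool sizes are invented, and a real dedicated instance is a separate dllhost.exe process rather than another in-process pool:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the IIS 5.0 execution contexts. A real
# ISAPI extension is a compiled Win32 DLL; here it is just a function.
shared_pool = ThreadPoolExecutor(max_workers=4)   # models inetinfo's pool (Low)
pooled_host = ThreadPoolExecutor(max_workers=4)   # models one shared dllhost.exe (Medium)

def dispatch(extension, request, protection="Medium"):
    """Route a request to an execution context by protection level."""
    if protection == "Low":
        return shared_pool.submit(extension, request)   # inside the web server itself
    if protection == "Medium":
        return pooled_host.submit(extension, request)   # shared container process
    # High: a dedicated context per application (modeled here as a fresh
    # pool; IIS actually spawns a dedicated dllhost.exe process).
    dedicated = ThreadPoolExecutor(max_workers=4)
    return dedicated.submit(extension, request)

def echo_extension(request):
    return f"processed {request}"
```

The model makes the accounting problem discussed next easy to see: once several extensions share `shared_pool` or `pooled_host`, nothing about a worker thread identifies which application it is serving at any instant.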

Besides the obvious relationship between the application protection level and web site reliability, the choice of where to run ISAPI extension applications also has distinct performance implications. At the highest protection level,

Application Protection Level    Thread execution context
Low                             inetinfo.exe shared thread pool
Medium (Pooled)                 dllhost.exe shared thread pool
High                            dedicated dllhost.exe thread pool

TABLE 1. ISAPI APPLICATION PROTECTION LEVELS IN IIS 5.0.


ISAPI applications are run in dedicated processes reminiscent of the way CGI scripts are executed. IIS does provide a configuration parameter that allows the dllhost processes created to persist for some amount of time following execution of the ISAPI DLL, so that not every new transaction provokes process creation and destruction. Using the medium protection level, a rogue ISAPI DLL cannot bring down IIS completely, but it certainly can damage any other applications that are sharing the dllhost.exe container process. Assuming process creation and destruction is not too big a factor, the CPU overhead considerations for a single shared dllhost container process are roughly comparable to running dedicated dllhost.exe processes, assuming there are sufficient process-level worker threads available to handle the workload in the case of a single instance of dllhost. In either case, WAM must perform an out-of-process call from inetinfo to dllhost and back to process the request. These out-of-process calls are made using very expensive COM+ runtime plumbing from a WAM instance inside inetinfo to another WAM object inside dllhost.exe, as illustrated in Figure 2.¹

Resource accounting is problematic when ISAPI extension applications execute in their default medium (or pooled) protection mode or use the lowest protection level. When ISAPI DLLs are all executing inside the same container process, it is very difficult to figure out which application is responsible for consuming which computer resources. While process-level resource usage statistics are available for both CPU and Memory consumption, it is not possible to apportion that usage data accurately among multiple applications running inside the same process. Thread CPU usage data is also available, but it is not any more useful. At any point in time, any worker thread in the

FIGURE 2. THE ARCHITECTURE OF ISAPI EXTENSION DLLS WHEN MEDIUM (OR HIGH) APPLICATION PROTECTION IS SET.

1 The mtx.exe container process is used instead in IIS version 4.0.

2 The use of I/O Completion Ports by the IIS-managed thread pool almost ensures that a dispatched thread cannot be expected to service any one application continuously. I/O Completion Ports are a mechanism to re-dispatch a worker thread that would otherwise block because it is performing I/O. A thread pooling application that utilizes an I/O Completion Port tends to make efficient use of its worker threads by maintaining a minimum number of active threads.

thread pool can be processing any application request, so it is difficult to know what to do with this data, too. Because thread pooling is used inside dllhost.exe (and, for that matter, inside inetinfo.exe, too, when the lowest protection level is specified), Thread performance Counters cannot be relied upon to identify any single application DLL for very long.² For an appreciation of the difficulties involved, see [6] for a procedure to determine which ISAPI application program is responsible for a runaway thread inside either inetinfo or dllhost.exe. This debugging procedure, which requires freezing the dllhost.exe process using WinDebug, is clearly not suitable for troubleshooting most high volume, production web sites.

While running ISAPI extension DLLs in isolation solves one aspect of the resource accounting problem, it leaves another significant difficulty unresolved. With multiple copies of dllhost.exe often active, it is still almost impossible to correlate process-level resource consumption statistics with individual applications, except via the debugging procedure referenced above. The process and thread level performance statistics that Windows NT/2000 provides do not identify which active application DLL is running inside each specific instance of the dllhost container process. One approach is to process the transaction-oriented data that is written to the web log and try to correlate that with the process-level statistics that the performance monitoring interface does provide. We will review the contents of the IIS extended format web logs in a moment. Another approach extends the standard performance monitoring instrumentation so that the active application DLLs running inside container processes can be identified [7].

Several tuning parameters are available to adjust the size of the IIS thread pool where ISAPI extension DLLs are executed. These include MaxPoolThreads, which determines the number of threads per processor in the inetinfo thread pool, and PoolThreadLimit, which sets an upper bound on the number of threads in the inetinfo process address space. Both these values are added to the Registry at the HKLM\System\CurrentControlSet\Services\InetInfo\Parameters key. The level of application protection influences the way MaxPoolThreads works. At the highest level of protection, each dllhost.exe container process allocates at most MaxPoolThreads per processor. At lower levels of protection, MaxPoolThreads applies to the single thread pool that all ISAPI extension applications share.
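A registry script for pre-setting these parameters might look like the sketch below. The DWORD values shown are placeholders, not recommendations; MaxPoolThreads is a per-processor count, while PoolThreadLimit caps the whole process:

```
Windows Registry Editor Version 5.00

; Hypothetical tuning values -- adjust to the measured workload.
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\InetInfo\Parameters]
"MaxPoolThreads"=dword:00000004
"PoolThreadLimit"=dword:00000080
```

As with any registry tuning, changes should be verified against the thread Counters for inetinfo and dllhost.exe under load before being rolled into production.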


ASP. In an effort to make web application programming easier than the low-level ISAPI interface, Microsoft subsequently introduced Active Server Pages (ASP), another proprietary web application programming technology that only Microsoft platforms support.³ Active Server Pages contain a mixture of HTML codes and script code that is executed by an IIS facility at runtime. IIS supports scripts written in either VBScript or JavaScript, or in PERL, Python, TCL, and REXX, assuming you have the appropriate language script interpreter program installed. The HTML code contained in an ASP page normally serves as a template that the script shapes dynamically, based on the current context, to render the final HTML response message that a web browser client can understand and display. The following simple example illustrates the basic capability of ASP scripting to generate HTML on the fly:

<% If Hour(Now) < 12 Then %>
  <FONT COLOR=YELLOW>Good Morning!</FONT>
<% ElseIf Hour(Now) < 18 Then %>
  <FONT COLOR=LIME>Good Afternoon!</FONT>
<% Else %>
  <FONT COLOR=ORANGE>Good Evening!</FONT>
<% End If %>

Here the script generates an appropriate greeting message based on the current time of day. Note: because the output of ASP scripts is generated dynamically, IIS sets HTTP cache-control to prevent browsers from caching HTML output generated by Active Server Pages scripts, because there is no way to guarantee that an ASP page will look the same the next time it is requested. Since the HTML output created by ASP scripts cannot be cached locally, by either the client browser or by intermediate cache engines and proxy servers, the performance of ASP applications depends almost exclusively on server capacity, along with the usual network performance factors that predominate when Internet protocols are used [8].

Microsoft implemented the Active Server Pages feature as an ISAPI extension DLL. Consequently, IIS processing for Get and Post Requests for ASP files (which are identified by an .asp filename suffix) is quite similar to other ISAPI extensions. The ISAPI interface passes all ASP requests to asp.dll, which is also responsible for loading the appropriate script engine. Depending on the application protection level chosen, asp.dll is loaded in-process inside inetinfo or out-of-process inside dllhost.exe. See Figure 2 for a picture of this web application architecture, which illustrates the medium or pooled level of protection to execute ASP scripts using threads from a pool inside the dllhost.exe container process. Figure 2 also illustrates the use of wam.dll running inside both inetinfo.exe and dllhost.exe to link to ASP application scripts regardless of which process is hosting them. The presence of wam.dll, which is a COM component (discussed below), inside dllhost.exe can be used to associate a specific instance of the dllhost process with ASP application processing.

The first time an ASP script is executed, the script code must be interpreted by the script engine. Compiled script code is saved and stored in a memory-resident cache called the Script Engine Cache, which saves time if the application is re-executed again soon. The size of the Script Engine Cache is limited by the amount of RAM installed and a tuning parameter that sets an upper limit on the number of compiled scripts that can be retained at any one time. See Figure 3. A performance Counter in the ASP Object called Script Engines Cached reports the current number of files in the Script Engine Cache. There is also a smaller Template Cache which is used to cache handles (pointers) to the compiled scripts themselves. Hit ratio statistics for the Template Cache are provided, but the Template Cache, which manages handles, is not a large consumer of RAM. Somewhat inexplicably, the more important (and considerably larger) Script Engine Cache is not instrumented. The best way to ensure that the Script Engine Cache is effective is to inventory the number of ASP scripts that are being executed (this can be tabulated from the web log) and verify that the Script Engines Cached counter equals this number.
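The inventory check described above can be scripted. The sketch below is illustrative only: the field names follow the W3C extended log format (cs-uri-stem, time-taken), but the sample log lines and values are hypothetical. It tabulates the distinct ASP scripts referenced in a web log so the count can be compared against the Script Engines Cached counter.

```python
# Count distinct ASP scripts referenced in a W3C extended-format web log,
# for comparison against the ASP "Script Engines Cached" counter.
from collections import Counter

def count_asp_scripts(log_lines):
    """Return a Counter of .asp URI stems found in the log."""
    fields = []
    hits = Counter()
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]      # field names follow "#Fields:"
            continue
        if line.startswith("#") or not line.strip():
            continue                       # skip other directives and blanks
        row = dict(zip(fields, line.split()))
        uri = row.get("cs-uri-stem", "")
        if uri.lower().endswith(".asp"):
            hits[uri.lower()] += 1
    return hits

sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status time-taken",
    "2001-05-01 05:11:28 10.0.0.5 GET /iccm.asp 200 5078",
    "2001-05-01 05:11:29 10.0.0.5 GET /images/logo.gif 200 15",
    "2001-05-01 05:11:30 10.0.0.5 GET /iccm.asp 200 4031",
    "2001-05-01 05:12:02 10.0.0.6 GET /order.asp 200 312",
]

scripts = count_asp_scripts(sample)
print(len(scripts))   # distinct scripts; compare with Script Engines Cached
```

If the distinct-script count exceeds the Script Engines Cached counter, some compiled scripts are being evicted and recompiled.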

Since Active Server Pages were designed to make it easy to build transaction-processing applications, they are session-oriented. Each client that initiates an ASP page is

3 A product called Chili!Soft ASP can host ASP pages and components on a variety of non-Microsoft Web servers without any changes to application script code.

FIGURE 3. IIS 5.0 SETTINGS THAT CONTROL ASP COMPILED SCRIPT CACHING.


assigned a Session ID, which is stored by the client as an HTML cookie. The programmer can write an ASP event handler in global.asa to process the Session OnStart event that is triggered the first time a user runs any page in the site. The OnStart event is often used to establish a value for the TimeOut property of the user Session and to store any additional identification data needed in the Session ID cookie. Utilizing the Session object to store session state creates special considerations that impact network load balancing schemes such as round-robin DNS that allow a series of IIS web servers to appear as a single virtual IP address. The proper technique for preserving Session state that works with network load balancing is known as ASP session-aware load balancing. This involves invoking the load balancing scheme to distribute the initial application request only and redirecting all subsequent requests to the computer assigned to that client session. The Network Load Balancing feature of Windows 2000 Advanced Server, which allows you to create an IIS server cluster containing up to 32 machines, provides an option to specify that all connections from the same client IP address be handled by a particular server to preserve Session state.
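The per-client-IP affinity option can be sketched as a simple hashing scheme: every request from the same address maps to the same server, so Session state stored there remains reachable. This is an illustration of the idea only, not the actual Windows 2000 Network Load Balancing algorithm, and the server names are hypothetical.

```python
# Sketch of client-IP affinity: hash the client address so that every
# request from one IP lands on the same cluster member, preserving ASP
# Session state. Not the real NLB algorithm; illustrative only.
import zlib

SERVERS = ["web01", "web02", "web03", "web04"]   # hypothetical cluster

def pick_server(client_ip, servers=SERVERS):
    # A stable hash keeps the client-to-server mapping consistent
    # across all of the client's subsequent requests.
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]

first = pick_server("192.168.1.10")
second = pick_server("192.168.1.10")
print(first == second)   # → True: the session stays on one server
```

The obvious drawback, as with any affinity scheme, is that load is balanced per client address rather than per request, so a few very active clients can skew the distribution.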

Because the internal architecture of ASP script processing is identical to ISAPI extension DLLs, capacity planners face the same difficulties trying to associate applications that use Active Server Pages technology with the computer resources they consume. With multiple copies of dllhost.exe often active, correlating process-level resource consumption statistics with individual applications remains problematic. A combination of running ASP applications in isolated processes and analyzing the transaction-oriented data that is written to the web log is the only viable option currently available.

COM and MTS. Active Server Pages technology, where scripts and HTML coding can be embedded in the same file, is quite handy for crafting dynamic web pages. But developing scripts has its limitations when it comes to programming complex applications. Of particular concern is the fact that scripting languages do not support the type of modularization that developing complex applications usually requires. In the Microsoft framework, ASP provides the presentation layer for what Microsoft describes as a three-tiered approach to web application development. The second layer is for business logic, which Microsoft suggests should be performed using COM modules and MTS runtime services. (In Windows 2000, COM and MTS are effectively merged into a single runtime service called COM+.) A third and final layer is for the back-end database processing where all persistent state information about active transactions is usually stored. We will not attempt to discuss back-end database performance topics here; the interested Reader should refer to [9]. See Figure 4 for an illustration that shows how applications using this framework can be clustered for both high availability and performance. In this illustration, middle-tier business logic is distributed across a series of server machines that are executing COM+ components. Component Load Balancing, a COM+ application clustering feature included in Application Center 2000, handles routing to the middle tier. Before we consider some of the performance implications of COM+, we need to understand what it is and what it does.

Originally, COM was the runtime infrastructure associated with ActiveX components, a development technology introduced at the same time that Active Server Pages scripting was released.4 It was designed initially as an object-oriented facility to supersede OLE (Object Linking and Embedding) as a way for one application in one process to call another application in a different process. (Think MS Word calling Excel to edit an Excel-generated chart embedded in a Word document.) As an interprocess communication (IPC) mechanism, the most salient feature of COM, compared to more conventional methods like RPC and the shared DLLs that were already in widespread use on the Microsoft platform, was its support for versioning. COM programs, which are also packaged as DLLs, support a required interface called IUnknown that allows the caller to discover the Methods and Properties of the program being called at runtime. The calling program can also use the IUnknown interface to verify the version of the module being called before transferring control to it.

A complete discussion of COM+ programming is well beyond the scope of the present paper. Readers interested in pursuing this topic can refer to [10] and any number of other good books on this subject. The discussion here will be limited to the use of COM+ as the middle tier of a

FIGURE 4. THE MICROSOFT FRAMEWORK FOR DEVELOPING SCALABLE, 3-TIERED WEB SERVICES APPLICATIONS.

4 With this momentary penchant for calling all its new features “Active” this or that (e.g., Active Directory), it seems likely that someone in the Microsoft Marketing department responsible for product naming was once brutalized by a high school English teacher for using passive voice.


transaction processing runtime environment, supporting concurrent execution of component programs. Enhancing the scalability of the Microsoft web services platform, modularized COM+ components can be executed in an extraordinary variety of configurations, either within the same computer or on a remote computer using the DCOM (Distributed COM) protocol. For example, when requested by a calling program, a COM+ program can execute in several different runtime environments, including

• in process in the same thread as the calling program,

• in process in a new thread separate from the calling program,

• in a new thread in a different process (out of process) in the same computer, and

• in a thread in a process running on a remote computer.

To a remarkable degree, a COM+ program can be written without regard to how the application is actually deployed across this range of computing environments. For example, whether a COM+ program executes in-process as a library application or in a separate container process (the ubiquitous dllhost.exe) as a server application is a decision made when the compiled runtime component is installed. When the COM+ component is installed, the System Administrator sets its activation property, which determines whether the program runs on that machine as a library application or a server application. For the most part, the program itself can be developed independently of this deployment decision.

Naturally, how the COM+ runtime infrastructure services a request that activates a component is important from a performance perspective. If the module requested is installed as a library application, COM+ merely transfers control in-line so that the component executes on the caller’s thread. Obviously, this is the most efficient linkage. However, if the caller’s thread is not set up to provide all the COM+ services the requested module requires, COM+ will transfer control instead to a new thread from the same process thread pool. If the module requested is installed as a server application, COM+ will transfer control from a thread in one process to a thread in another process, with the COM+ runtime being responsible for constructing a new dllhost.exe container process, if necessary. If the requested component is only available remotely – again, a configuration decision to isolate the programs incorporating the business logic from the presentation-level, front-end scripts – the dynamic module linkage is automatically resolved using the DCOM (Distributed COM) protocol, a close cousin of RPC.

Of course, there is a considerable difference in the overhead associated with these three variations. A capacity planning white paper published by Microsoft [10] references measurements taken comparing performance in all three environments: in-process calls, out-of-process calls in the same box, and calls to a remote computer, as follows:

As Table 2 illustrates, distributed processing explicitly trades off application response time, which elongates as a result, against overall throughput, always a tricky decision to make. Quite obviously, the overhead associated with a DCOM call to a component executing on a remote machine represents a serious performance penalty. However, if a single machine cannot sustain the transaction throughput required by the application, there may be little alternative in the Microsoft environment other than distributing the work across multiple machines. The distributed processing architecture depicted in Figure 4 may be the only feasible way to handle the transaction volume the application’s user population generates.

A COM+ component is further characterized by a number of runtime attributes, including its threading model, serialization requirements, concurrency level, and transactional support. These are all program attributes which are set declaratively so that the program itself can be written and executed (largely) independent of the implementation considerations. From this perspective, COM+ is essentially a runtime service that allows one program to call another without the calling program needing to establish the proper runtime environment for the called function to execute. Instead, COM+ runtime services provide the linkage from caller to callee, ensuring that the component requested executes in a context that offers all the services it requires.

Similar to other transaction monitors like Tuxedo and CICS, COM+ is also designed to simplify the development of scalable transaction processing applications by masking the inherent difficulties in building multithreaded application programs that execute correctly in that complex environment. COM+ applications can be written as if they are running single-threaded, but are executed in a runtime environment that supports multithreaded, concurrent processing. To accomplish this, the COM+ runtime has several noteworthy features, including multithreading, object pooling, and built-in transactional support. Next, we will consider each of these features briefly and their impact on the performance of web services applications.

Threading. In theory, COM+ is an attempt to let developers ignore serialization considerations in building scalable, multithreaded applications. In practice, serialization is too implementation-dependent to jump from the particular to the general in every case. But Microsoft’s achievements in this most difficult area of application development technology should still be applauded. COM+

                                                        Calls per second   Relative speed

COM+ server application, run over a 10BaseT network            625               1

COM+ server application, out-of-process, same machine         1923            3.08

COM+ server application, in-process, same machine             3333            5.33

TABLE 2. THE RELATIVE SPEED OF COM+ CALLS.
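The relative-speed column of Table 2 can be recomputed directly from the calls-per-second figures, taking the 10BaseT network case as the baseline:

```python
# Recompute the relative-speed column of Table 2 from the measured
# calls-per-second figures, with the network case as the baseline.
calls_per_sec = {
    "network (10BaseT)": 625,
    "out-of-process, same machine": 1923,
    "in-process, same machine": 3333,
}
baseline = calls_per_sec["network (10BaseT)"]
relative = {k: round(v / baseline, 2) for k, v in calls_per_sec.items()}
print(relative)  # in-process ≈ 5.33x and out-of-process ≈ 3.08x the network case
```

In other words, an in-process call runs more than five times faster than the same call made over a 10BaseT network, and still 1.7 times faster than an out-of-process call on the same machine.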


runtime services manage both an application’s threading behavior and its serialization requirements, which are both established by the programmer declaratively.

COM+ programming supports the full range of threading options available to the Win32 application programmer. As discussed earlier, the COM+ routine can be loaded directly in the calling process and can execute in either the caller’s thread or a separate thread. COM+ Objects can also be executed out-of-process, which means that they are loaded into a separate container process, similar to ISAPI extension DLLs. Each calling thread can acquire a separate copy of the COM+ module running in a dedicated single-threaded apartment, or a single copy of the COM+ component can service multiple callers concurrently if it is running in a multithreaded apartment. Instead of invoking the serialization services of the Win32 API directly, the COM+ programmer sets the synchronization property of the application declaratively. For example, a database update program is set to run in a single-threaded apartment with synchronization required, which ensures that only one database update at a time is performed.

Thread pooling. At least one aspect of writing scalable, multithreaded applications can be generalized, namely, that a runtime environment that draws available resources from a common thread pool is an effective way to build client/server applications that scale from small machines with single processors to large machines with multiple processors. The general structure of a client/server-oriented thread pooling application is one that fields work requests and matches them to a pool of available processing threads. On larger systems that can handle more requests, one simply increases the size of the thread pool to increase application throughput. In addition to IIS and ASP, Microsoft has developed several other commercial, thread pooling applications for Windows 2000, including the network file sharing service known as Server, the SQL Server database engine, and the MS Exchange messaging and mail server.

Because thread pooling is so fundamental to multi-user server application scalability, Microsoft decided to offer developers a pre-packaged set of robust thread pooling services to incorporate into their applications. If they choose, developers can take advantage of COM+ runtime features that provide generic thread pooling, instead of painstakingly developing their own thread pooling logic. This feature of COM+ is known as object pooling. Using object pooling, the COM+ runtime environment is capable of managing the concurrency level of re-usable (and reentrant) multi-threaded components automatically.

A component that is enabled for object pooling establishes minimum and maximum pool limits that determine how many instances of the object can run concurrently. A third pooling parameter determines how long a request for a COM+ component that is already running at its maximum level will be queued before the request is timed out.

These runtime parameters can be set by the System Administrator, assuming the application has its ObjectPoolingEnabled property set. See Figure 5.

In this example, the System Administrator has used the Component Services Explorer (CSE) applet to establish a lower limit on the number of instances of this component that the COM+ runtime environment will maintain in memory once the application is started. (CSE provides the administrative interface for all the COM+ program parameters that can be set at runtime.) The object pooling runtime parameters are logically equivalent to the tuning knobs that Microsoft provides for its internally-developed server applications. For example, earlier we discussed tuning parameters that can be used to override system defaults that control the behavior of IIS thread pooling. The Windows 2000 file Server and MS SQL Server DBMS also have similar thread pooling controls [12].

The reason for using object pooling is to improve performance. A pooled component is maintained by the COM+ runtime in a state that is ready to use, saving the time it takes to create a new dllhost container process and initialize the component inside it each time the component is requested by a caller. When a pooled object is requested, the COM+ runtime calls the object’s Activate Method to transfer control from the Requestor program to the pooled component. Instead of terminating conventionally, the component calls Deactivate when it is finished processing the caller’s request. To determine if a deactivated component can then be returned to the pool where it can be re-used, the COM+ plumbing calls the component’s CanBePooled Method. The component program returns True if it is okay to re-use the object instance. If the program returns a value of False, that instance of the COM+ object is unloaded and deleted. COM+ runtime services always maintain the minimum number of ready object instances specified and will manufacture a new instance of the object if a request occurs, all currently activated objects are busy, and the component is not running at its specified maximum.

FIGURE 5. OBJECT POOLING PARAMETERS ARE SET FOR EACH COM+ PROGRAM USING THE COMPONENT SERVICES EXPLORER (CSE) SNAP-IN.
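The pooling mechanics described above can be modeled as a toy sketch. The method names below mirror the COM+ lifecycle calls (Activate, Deactivate, CanBePooled), but the pool bookkeeping is a deliberately simplified illustration, not the actual COM+ implementation, and the size limits are hypothetical.

```python
# Toy model of COM+ object pooling: the pool keeps between min_size and
# max_size instances; CanBePooled decides whether a deactivated instance
# returns to the ready pool or is discarded. Illustrative only.
class PooledComponent:
    def __init__(self):
        self.reusable = True
    def Activate(self):        # called when handed to a requestor
        pass
    def Deactivate(self):      # called when the request is finished
        pass
    def CanBePooled(self):     # True: return to pool; False: discard
        return self.reusable

class ObjectPool:
    def __init__(self, min_size, max_size, factory=PooledComponent):
        self.max_size = max_size
        self.factory = factory
        self.idle = [factory() for _ in range(min_size)]  # ready instances
        self.active = 0
    def acquire(self):
        if self.idle:
            obj = self.idle.pop()
        elif self.active + len(self.idle) < self.max_size:
            obj = self.factory()           # manufacture a new instance
        else:
            raise RuntimeError("pool at maximum; request would be queued")
        self.active += 1
        obj.Activate()
        return obj
    def release(self, obj):
        obj.Deactivate()
        self.active -= 1
        if obj.CanBePooled():
            self.idle.append(obj)          # back to the ready pool
        # otherwise the instance is simply dropped and reclaimed

pool = ObjectPool(min_size=2, max_size=3)
a, b, c = pool.acquire(), pool.acquire(), pool.acquire()
pool.release(a)
print(len(pool.idle))   # → 1: one ready instance awaiting reuse
```

A fourth concurrent acquire in this sketch raises an error where the real runtime would queue the request until the timeout parameter expires.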

Idle objects above the minimum concurrency level persist for about 5 seconds, according to [1]; there is, unfortunately, no runtime parameter to control the length of time idle objects are retained. Nor is any instrumentation provided that monitors the concurrency level of active pooled objects. However, assuming a pooled component is installed as a server application and the specific instance of the dllhost container process can be identified (as described in [7]), the dllhost process Thread Count is a valid indicator of concurrency.

Transactions. The final COM+ runtime service discussed here is transactional support. Within COM+, the term transaction is used to refer to a logically distinct unit of work with specific database recovery requirements.5 If a COM+ program that is a transaction fails to complete properly, or aborts, the transaction support in COM+ ensures that all database updates associated with the failed transaction are backed out by any Resource Managers (typically, connections to a DBMS) that have already applied partial updates. COM+ implements the well-known two-phase commit process to ensure that database updates can be applied consistently even in a distributed database environment where updates to multiple Resource Managers need to be synchronized.
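The two-phase commit control flow can be sketched as follows. This is a minimal illustration of the prepare-then-commit protocol only; the real DTC/Resource Manager interfaces differ, and the class and manager names here are hypothetical.

```python
# Minimal sketch of two-phase commit: every Resource Manager must vote
# yes in the prepare phase before any of them is told to commit;
# otherwise all of them roll back. Illustrative only.
class ResourceManager:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.state = name, healthy, "idle"
    def prepare(self):            # phase 1: vote on the outcome
        self.state = "prepared" if self.healthy else "failed"
        return self.healthy
    def commit(self):             # phase 2a: make updates permanent
        self.state = "committed"
    def rollback(self):           # phase 2b: back out partial updates
        self.state = "rolled back"

def two_phase_commit(managers):
    if all(rm.prepare() for rm in managers):
        for rm in managers:
            rm.commit()
        return True
    for rm in managers:           # any failed vote aborts everyone
        rm.rollback()
    return False

ok = two_phase_commit([ResourceManager("orders-db"),
                       ResourceManager("audit-db")])
print(ok)   # → True: both managers voted yes, so both commit
```

The essential guarantee is atomicity across managers: either every participating database applies its updates, or none does.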

A COM+ component’s transaction attribute is set declaratively, using the CSE, although there are alternate methods that are programmatic. Once the transaction attribute is set, the COM+ runtime ensures that the component, when it is activated, is dispatched in a transactional context. Running inside a transaction, the component program calls the SetComplete or SetAbort Methods of the ObjectContext object to indicate either a successful or unsuccessful outcome. If the component issues a SetComplete to indicate successful completion status, the COM+ runtime communicates with the appropriate Resource Manager(s) to commit the changes that occurred within the transaction boundary. Alternatively, a call to SetAbort signals the Resource Managers to roll back any database updates that were partially applied. The COM+ runtime effectively insulates the application programmer from having to understand many of the details of the commit/rollback logic required by various DBMSes, which can become quite complicated, especially in the case of distributed database transactions.

IIS extends COM+ transactional support to ISAPI extension DLLs, which by implication means that Active Server Pages can be transactions. Similar to COM components, an ASP application script becomes a transaction declaratively. In VBscript, for example, this is accomplished with a declaration at the beginning of the page, as follows:

<%@ LANGUAGE=VBSCRIPT TRANSACTION=REQUIRED %>

This declaration causes the COM+ runtime to create an instance of ObjectContext that the page’s script can then reference by calling the SetComplete or SetAbort Methods. One ASP limitation is that the page (nee, script) is the boundary of the transaction – a transaction cannot span ASP pages. The transactional unit of work is always the ASP page which contains the Transaction=Required declaration. Building COM+ components from scratch, there is a great deal more flexibility. If a component has a Transaction=Supported attribute, for example, it can participate in the transaction context of the calling program. The COM+ runtime services, naturally, are responsible for implementing the transition between one COM+ component that is not a transaction that calls another component that is. A concrete example is an ASP script that is not a transaction that calls a COM+ component program that is a transaction, which is then responsible for posting a customer order to a database. COM+ ensures that a transactional component always executes in the appropriate context.

Summary. The foregoing discussion highlights the COM+ runtime services that hide much of the complexity associated with developing and deploying scalable COM application component programs. The flexibility of the Microsoft Distributed Networking Architecture raises initial questions about which pieces of which applications should be deployed where. In addition, specific COM+ deployment decisions, e.g., object pooling, raise obvious capacity planning questions. Establishing minimum and maximum pool sizes and monitoring pooling concurrency levels are unmistakable concerns. Due to the degree that Microsoft has succeeded in hiding the details of the COM+ runtime plumbing from developers, it is difficult to formulate reliable answers to these questions today.

Monitoring the performance of COM+ components can be quite challenging, as we will discuss further below. In a

5 The definition of a transaction from [13] is a unit of work defined as having the following properties: atomic, consistent, isolated, and durable, also depicted as ACID attributes. For example, the fact that transactions are atomic means that either all the work associated with a transaction is performed or none of it is. COM+’s use of the term transaction is consistent with this standard usage. Unfortunately, computer performance professionals often use the term transaction in a queuing model theoretic context as a way of denoting units of work that arrive from a client to a server for processing. This ambiguity can cause considerable confusion.


production environment, it is not unusual to see many instances of dllhost.exe running concurrently, each with multiple threads. To monitor and tune a complex web services environment, it is important to be able to determine which COM+ components are running where and map these applications to their resource consumption profile. We will take some preliminary steps in that direction in the next section where we discuss the current state of the performance data available for web services applications in ASP, COM+ and .Net.

Web application services performance monitoring.

Web application services using Active Server Pages scripts and COM+ components have well-understood performance monitoring requirements to support both tuning and capacity planning. The principal requirement is that measurement data be on hand that captures a client’s-eye view of the human-computer interaction. In any client/server web application, the user participates in an interactive computer session broken into discrete request/response sequences. We denote each discrete request/response sequence as a transaction. (In this section it is necessary that we adopt a different definition of a transaction than COM+ uses.) Here, a transaction corresponds to an end user’s perception of an individual request/response sequence. In the case of a typical web services application, consider one such sequence where the end user initiates a transaction by positioning the computer’s mouse pointer on a button on a form marked “Submit” and clicking on it. From an end-to-end response time perspective, everything that happens next that involves the network and the computers involved in processing this client request represents processing associated with this transaction.

In any client/server transaction processing environment, we endeavor to measure the delay or latency involved in processing the user request on both the client and server machines, as well as assessing the network latency involved in the communication of the request and the server’s reply. For the purpose of the present discussion, however, we will ignore the network latency associated with transporting a web client’s request to a web server and back. This network latency is primarily a function of distance, but can be influenced by a variety of internetworking infrastructure issues that are beyond the scope of the present discussion. Here we intend to focus only on what happens to the request from the time it arrives at an IIS web server for processing until IIS returns an HTTP Response message.

Using the Microsoft web services application architecture, we have seen that the server-side processing of the client request is normally broken into a number of discrete processing components. The anatomy of a typical (and relatively simple) ASP/COM+ transaction is depicted in Figure 6. Upon pressing the Submit button, the end user initiates an HTTP Post Request referencing the order.asp script. IIS processes this HTTP Request, transferring control to the dllhost container process (via wam) where the order.asp script is executed. The script then calls a COM+ program in a separate dllhost container process that is responsible for updating the back-end database with information from the customer’s order request. When the database updates complete, the COM+ transactional component makes a SetComplete Method call to commit the database update, and then returns to the script. The script then fashions an appropriate HTTP Response message, which is relayed back to IIS. Finally, IIS sends the HTTP Response message that the script generated back to the client.

We have simplified this picture of the transaction processing performed on the server considerably, leaving out the COM+ plumbing used to transfer control from the caller’s thread to the called program and back again, the database connection processing, the fact that additional HTML, gif, and jpeg files might be embedded in the HTML response message, the processing of the HTTP requests by lower levels of the TCP/IP protocol stack, and other salient details. Instead of making one call to a COM+ component, the order processing script might invoke several component modules. Moreover, there might well be multiple trips back and forth to the database. We have also chosen to ignore for now the possibility that the ASP script, the COM+ components, and the back-end database might all reside on different machines, causing the transaction to encounter additional networking delays.

Nevertheless, the simplified picture of a web services transaction should suffice to illustrate the need to instrument each discrete component involved in processing the transaction. Furthermore, the discrete component-level measurements themselves ultimately must be correlated to reflect all the processing that was performed. This correlation of independent measurements applied to discrete events and components is a major technical challenge on any of the major computer

FIGURE 6. THE ANATOMY OF A WEB SERVICES APPLICATION TRANSACTION.


platforms [14]. The goal of this paper is to show just how big a challenge this problem poses for performance analysts responsible for web services applications built for the Microsoft platform. In the sections that follow, we will describe the measurement data that is currently available for HTTP Requests, ASP scripts, and COM+ components.

Web event logging. If an IIS application server is enabled to generate extended format web logs, it is possible to collect data on server response time for individual HTTP Requests. The time taken field in the log, which is not among the data fields enabled by default, reports internal processing time for each HTTP Request in milliseconds. Figure 7 shows sample output from the log for an HTTP Request. The date and time of the request, the IP address of the client, and the specific resource being requested are shown. At 5:11:28, a Get Request for iccm.asp is processed. According to the log, it took slightly more than 5 seconds to process this request. The HTTP Response message that iccm.asp generated evidently referenced several embedded graphic files, which accounts for a number of subsequent Get Requests from the same client. This spotlights one difficulty with this response time measurement data – it is difficult to tell a priori exactly which Get Requests correspond to a discrete transaction as perceived by the end user. If only the main in-line processing by the iccm.asp script code to create an HTTP Response message is significant, it is possible to consider just the time taken processing that Request. But if you want to measure the end user experience, it makes sense to consider all the elements of the rendered web browser display. This suggests summarizing the time taken processing all the Get Requests associated with a single web page display. In this simple example, with only one user request active, that seems like a relatively simple thing to do. In the log on a production web server that is servicing concurrent requests from multiple connections simultaneously, the processing of events from different user requests is intermixed. Under those more typical circumstances, the boundaries between end user requests are likely to blur.

A second consideration is problems interpreting the time-taken data alongside the timestamp information in the log, which, according to Microsoft documentation, represents the time when the log record was written. If you sum up all the time taken processing the Get Requests shown in Figure 7, the result is a total of 11,359 ms. That is difficult to reconcile with the timestamps of the log records reported in column 2, which show all the Get Requests being processed over a span of just 3 seconds.

A final concern is how the event data recorded in the IIS log corresponds to the performance data available from other sources, including the performance data the System Monitor provides. The log data should be well correlated with other web performance data since they are all derived by the same underlying measurement procedure. The HTTP Method calls reported in the log, for example, should correlate well with the counters reported in the Web Service Object, including Get Requests/sec, Post Requests/sec, etc. However, as of this writing, there is no published work that has analyzed the validity of this data rigorously.

Interval performance data. Using the System Monitor, or some similar tool, it is possible to report on several indicators of web server transaction load on an interval basis. For most web servers, IP Datagrams Received/sec and TCP Segments Received/sec show quite similar request rates, as illustrated in Figure 8, which is an example showing TCP and IP activity for a busy commercial web site. This is because HTTP Request packets are usually small enough (< 1500 bytes) to fit in a single Ethernet segment. That is something that can be verified against the cs-bytes field in the web log, which reports the length of the HTTP Request.
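That verification can be scripted directly against the log. A sketch, using the cs-bytes values from the Figure 7 excerpt (the list of sizes is transcribed from that sample, not read from a live log):

```python
ETHERNET_MTU = 1500  # maximum Ethernet payload, in bytes

# cs-bytes (HTTP Request length) values from the Figure 7 log excerpt
cs_bytes = [414, 517, 414, 411, 432, 432, 432, 432,
            432, 432, 432, 416, 413, 415, 419]

# Count requests too large to fit in a single Ethernet segment
oversize = sum(1 for b in cs_bytes if b >= ETHERNET_MTU)
print(oversize)  # 0: every request in the sample fits in one segment
```

If the count is near zero, as here, it is reasonable to treat TCP Segments Received/sec as a proxy for the HTTP Request arrival rate.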

Figure 9 illustrates the relationship between HTTP Method calls (GET, POST, etc.) and ISAPI calls, reported by the Web Service performance Object, compared to the ASP Requests/sec Counter. In this instance, only about 1/10 of all IP packets received and processed by the web server represent HTTP Requests. This reflects the normal amount of network traffic associated with establishing TCP connections, TCP Acknowledgement packets that must be processed, keep-alive messages, and other overhead functions, some of which can be minimized by tuning; see [5]. However, all five indicators of the transaction load are autocorrelated. This is a useful finding in case not all the metrics happen to be available, for one reason or another. Moreover, as Figure 10 shows, the relationship between the transaction load and processor resource consumption at this site, for example, is quite obvious. Certainly, this is an unsurprising result on a machine that is dedicated to a web services application, as in this case.

FIGURE 7. OUTPUT FROM THE EXTENDED FORMAT IIS WEB LOG.

date       time     c-ip           cs-method  cs-uri-stem                                             sc-status  sc-bytes  cs-bytes  time-taken
5/16/2002  5:11:28  216.23.50.222  GET        /iccmFORUM/public/img/ICCMLink.gif                      200        4150      414       516
5/16/2002  5:11:28  216.23.50.222  GET        /iccm.asp                                               200        42104     517       5015
5/16/2002  5:11:28  216.23.50.222  GET        /iccmFORUM/public/img/RedArrow.gif                      200        1076      414       218
5/16/2002  5:11:28  216.23.50.222  GET        /iccmFORUM/public/img/Clear.gif                         200        267       411       235
5/16/2002  5:11:29  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0101.gif    200        17854     432       875
5/16/2002  5:11:29  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0102.gif    200        17309     432       625
5/16/2002  5:11:29  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0103.gif    200        9680      432       375
5/16/2002  5:11:29  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0104.gif    200        17717     432       625
5/16/2002  5:11:30  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0106.gif    200        16351     432       500
5/16/2002  5:11:30  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0105.gif    200        16923     432       828
5/16/2002  5:11:30  216.23.50.222  GET        /iccmforum/public/Capacity/CPUResource/Wicks0107.gif    200        7820      432       313
5/16/2002  5:11:30  216.23.50.222  GET        /iccmFORUM/public/img/innovation.gif                    200        3531      416       344
5/16/2002  5:11:30  216.23.50.222  GET        /iccmFORUM/public/img/Eval3x1.gif                       200        2772      413       219
5/16/2002  5:11:30  216.23.50.222  GET        /iccmFORUM/public/img/BlueArrow.gif                     200        297       415       328
5/16/2002  5:11:30  216.23.50.222  GET        /iccmFORUM/public/img/BlueArrowLeft.gif                 200        1077      419       343

Correlating the response time measurements reported in the web log against other transaction statistics available in the System Monitor is problematic because, unfortunately, this critical measure of service levels is generally not available for Microsoft web services applications. The one measure of transaction response time that is available in the System Monitor is supplied by the ASP Request Execution Time and Request Queue Time counters. The value reported, which is also in milliseconds, is the time of the last ASP Request that executed. In an environment such as the web site we have been discussing, where ASP transactions arrive at a rate of up to 5 per second, the ASP Request Execution Time and Request Queue Time counters are properly viewed as a sampling mechanism.

As discussed in [12], there is sufficient ASP measurement data to calculate the mean response time of ASP Requests using Little's Law, as follows:

MEAN RESPONSE TIME = (REQUESTS EXECUTING + REQUESTS QUEUED) / REQUESTS/SEC
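A minimal sketch of this calculation (the function and variable names are our own; the inputs correspond to the ASP Requests Executing, Requests Queued, and Requests/sec counters):

```python
def asp_mean_response_time(requests_executing, requests_queued, requests_per_sec):
    """Estimate mean ASP Request response time in seconds via Little's Law.

    N = average number of requests in the system (executing + queued),
    X = throughput in requests/sec; then R = N / X.
    """
    if requests_per_sec == 0:
        return 0.0  # idle interval: no throughput, no meaningful estimate
    return (requests_executing + requests_queued) / requests_per_sec

# e.g., 4 executing + 6 queued at 5 Requests/sec -> 2.0 seconds
print(asp_mean_response_time(4, 6, 5))
```

In practice the inputs would be interval averages of the sampled counters, which is why the result is a mean value rather than a per-transaction measurement.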

Figure 11 shows the estimated ASP Request mean response time for this site, calculated using the Little's Law formula. Note that this is a mean value, appropriate for many mean value analysis (MVA) performance modeling techniques, but limited for service level reporting. We recommend that the average response time calculated using this formula always be validated against the average Execution Time and Queue Time calculated from the sampled performance Counters.

FIGURE 8. MEASUREMENTS FOR IP DATAGRAMS RECEIVED/SEC AND TCP SEGMENTS RECEIVED/SEC ARE USUALLY ISOMORPHIC DUE TO THE RELATIVELY SMALL SIZE OF HTTP REQUEST PACKETS.

FIGURE 9. COMPARING THE RATE OF HTTP METHOD CALLS, ISAPI CALLS, AND ASP REQUESTS/SEC.

Transaction decomposition. As we have seen, both the web log event data and the interval performance data provide measures of overall ASP transaction response time, at least from the standpoint of the IIS server. But these measurements provide no insight into the impact of delays associated with the processing by middle tier COM+ components and/or calls to back-end database connections, assuming the web services application is architected to take advantage of these facilities. In the case of calls to COM+ applications, an event-oriented trace mechanism is available that can be used to determine the rate of requests for component activation and the service time of those requests. This facility is known as COM+ Events. At least one commercial software package currently uses the COM+ Events trace facility to generate transaction load and response statistics to support tuning and capacity planning. In contrast, no transaction-level or user-level measurement data currently exists to monitor calls to an MS SQL Server 2000 back-end database. Moreover, there is no documented facility currently available in that product that could be exploited to gather the required SQL Server measurement data.

The good news is that at least the performance of COM+ components can be monitored. However, doing so requires a 3rd party package. When a COM+ component is installed with support for events and statistics enabled (reference the Concurrency tab in the COM component Property page), transaction load and service time information is forthcoming. The COM+ statistics that are available are provided using the COM+ Events tracing facility. When Events are enabled, each call and return to a COM+ interface method is traced, which corresponds to the arrival rate and completion rate of specific transactions, respectively. Using a correlation ID, a COM+ Event tracing application can also determine how long an interface method executed.
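The call/return pairing described above can be sketched as follows. The event tuples and field names here are hypothetical stand-ins for COM+ Events trace records; only the pairing-by-correlation-ID logic is the point:

```python
# Each traced event: (correlation_id, kind, timestamp_ms), where kind is
# "call" on method entry and "return" on method exit.
events = [
    ("A1", "call", 1000),
    ("B2", "call", 1010),
    ("A1", "return", 1250),
    ("B2", "return", 1400),
]

pending = {}    # correlation_id -> timestamp of the method call
durations = {}  # correlation_id -> method execution time in ms

for corr_id, kind, ts in events:
    if kind == "call":
        pending[corr_id] = ts              # arrival: remember entry time
    else:
        durations[corr_id] = ts - pending.pop(corr_id)  # completion

print(durations)  # {'A1': 250, 'B2': 390}
```

Counting "call" events per interval gives the arrival rate, counting "return" events gives the completion rate, and the paired durations give the per-method service time distribution.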

As a trace facility exclusive to the COM+ runtime, COM+ Events are not integrated into the Windows Management Instrumentation (WMI) framework via either a WMI provider or a perflib DLL. There are, however, several specific system management tools that enable COM+ Event tracing and report the results. The most familiar is the Transaction Statistics display in the Component Services Explorer (CSE), illustrated in Figure 12. The Transaction Statistics display shows some interesting current and aggregate statistics. However, it is of very limited value for performance monitoring since what you see illustrated in Figure 12 is all you get. There is no ability to break out statistics by component, no ability to gather interval statistics, and no ability to access individual transaction-level data. It is worth noting that a COM+ component does not have to support Transactions in order to be counted in the Transaction Statistics display; this is one area where COM+'s use of the terminology is inconsistent.

The Visual Studio Analyzer, an application performance profiling tool, also supports COM+ Event tracing to allow developers to determine how well their COM+ applications are performing. This is a useful tool that developers can use, but it is of little help in a production environment. The Component Load Balancing (CLB) clustering facility in Application Center 2000 is the other Microsoft application that relies on COM+ Event tracing. CLB's control component pulls transaction-level statistics from each remote machine in the cluster every 800 milliseconds in order to make optimal routing decisions.

FIGURE 10. CPU UTILIZATION AS A FUNCTION OF ASP REQUESTS/SEC.

FIGURE 11. ASP REQUEST RESPONSE TIME ESTIMATED USING LITTLE'S LAW.

A third party COM+ performance monitoring application called AppMetrics, developed by Extremesoft, also exploits the COM+ Events trace facility. AppMetrics is the only robust performance monitoring application currently available for COM+ componentware. As Figure 13 illustrates, the AppMetrics software breaks out the transaction statistics by component, reporting both arrival rates and service times. When the AppMetrics data is collected for identifiable COM+ server applications, it is also possible to gather associated data on processor and memory utilization by collecting Process counters for those specific instances of dllhost. Together, this measurement data is robust enough to apply a variety of traditional performance engineering and modeling techniques to COM+ web services application programs.

While the AppMetrics COM+ transaction load and response time data supplies a critical piece of the performance monitoring puzzle, it does not encompass the full picture of web services application performance for the Microsoft architecture depicted back in Figure 4. As noted earlier, timing and resource utilization information on database calls per transaction is sorely lacking when MS SQL Server is the back-end DBMS. Moreover, the measurement data we described that does exist for HTTP Method calls, ASP scripts, and COM+ transactions cannot easily be correlated.

Nevertheless, even when the resource accounting data cannot be attributed to specific user transactions precisely, in many circumstances reasonable steps can be taken to apportion resource consumption per transaction approximately. Apportionment techniques are a well-established practice in similar situations on other platforms whenever resource accounting is not as complete as desired. Apportionment would permit capacity planners to employ a broad set of analytic techniques to resolve performance issues with confidence at sites deploying Microsoft's web services application platform.
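One simple apportionment scheme divides a process-level resource total, such as the CPU time measured for a dllhost instance, across transaction classes in proportion to their observed counts. A sketch under that assumption (the transaction class names are purely illustrative):

```python
def apportion(total_cpu_seconds, txn_counts):
    """Split a process-level resource total across transaction classes
    in proportion to transaction counts (count-based apportionment).

    A more refined scheme would weight by measured service time per
    class rather than by raw counts.
    """
    total = sum(txn_counts.values())
    return {name: total_cpu_seconds * n / total
            for name, n in txn_counts.items()}

# 10 CPU seconds split across two hypothetical transaction classes
print(apportion(10.0, {"OrderEntry": 300, "Lookup": 700}))
# {'OrderEntry': 3.0, 'Lookup': 7.0}
```

The per-class CPU estimates produced this way can then feed standard analytic queueing models even though no per-transaction resource accounting exists.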

Summary. Capacity planners responsible for web services applications on the Microsoft Windows 2000 platform face difficult, but not unfamiliar, problems with inadequate performance data. As Microsoft has rapidly evolved its software development platform for delivery of dynamic web-based content, deployment-oriented tools for web services application performance monitoring have lagged behind. This paper describes the basic architecture of web services applications on the Microsoft platform and discusses the performance monitoring data that can currently be gathered to manage the deployment of these applications.

Conceptually, the Microsoft platform consists of three application processing tiers that represent (1) the presentation layer, (2) the business logic, and (3) the back-end database processing. The presentation layer currently relies on Active Server Pages (ASP) scripts, a Microsoft-proprietary technology that permits HTML codes to be intermixed with VBScript or JavaScript code to generate HTTP Response messages. The business logic layer relies on COM+, a set of object-oriented Component Services that simplify the job of building multi-threaded transaction processing applications. COM+ is also a Microsoft-proprietary technology. The back-end database processing can be handled by any of a variety of DBMS engines, including Microsoft SQL Server and Oracle. Optionally, these processing components can be deployed in a wide variety of runtime environments, including multiple machines that are clustered for scalable performance.

Microsoft built ASP on top of an existing Internet Information Server (IIS) web server facility called ISAPI that was designed to allow web pages to be created programmatically. Standard web server transaction logging facilities can be used to monitor the arrival rate and service time of ASP scripts, similar to the way other HTTP Method Calls are instrumented. In addition, interval-oriented ASP transaction statistics are also available from a standard Windows 2000 performance monitor, such as the bundled System Monitor application. The ASP transaction statistics include a measure of script execution service time and queue time, but these are properly understood as a sampling technique that measures the response time of the last ASP script execution. Using Little's Law, it is also possible to calculate average response times for ASP scripts.

Resource accounting for ASP application scripts is confounded by the Application Protection parameter, new in IIS version 5.0, which sets the execution environment of ASP scripts. They can execute inside the IIS inetinfo.exe process address space, inside a shared instance of the dllhost.exe container process, or in isolated instances of dllhost. It is not currently possible to determine which scripts are executing in which instances of the dllhost container process. This makes it impossible to associate process-level resource utilization statistics such as CPU and Memory consumption with the execution of specific application scripts.

Depending on whether they are defined as library applications or server applications, COM+ component programs can execute either inside the calling application process or out-of-process inside the ubiquitous dllhost.exe container process. While the tools Microsoft provides are woefully inadequate to the task of monitoring COM+ transaction processing programs, 3rd party tools are beginning to rise to the challenge. One third party tool utilizes the COM+ Events tracing facility to track COM+ transaction arrival rates and service times by application. Another third party tool can determine which COM+ application modules are resident in which dllhost.exe container process so that adequate resource accounting can be performed. These are both steps in the right direction.

FIGURE 12. THE TRANSACTION STATISTICS DISPLAY IN COMPONENT SERVICES EXPLORER.

The capability to process web services applications using multiple-machine clusters presents a more formidable performance monitoring challenge. In clustered environments, measurement data from multiple machines needs to be integrated to capture all application processing components. Moreover, transaction-level measurement data from the three different processing tiers (the presentation layer, the business logic, and the back-end DBMS) needs to be correlated. This transaction-level data, where it exists today, is piecemeal. Currently, no facilities of the Microsoft web services application runtime environment are available that could be exploited to provide an integrated view of application performance. This defect may prove to be a significant obstacle to adoption of the Microsoft application development framework for enterprise-ready, mission critical applications.

FIGURE 13. APPMETRICS COM+ TRANSACTION PERFORMANCE MONITORING.

References

[1] Tim Ewald, Transactional COM+: Building Scalable Applications. Boston, MA: Addison-Wesley, 2001.

[2] G. Pascal Zachary, Showstoppers: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft. New York: Simon and Schuster, 1994.

[3] "Writing great Windows NT server applications," Microsoft Corporation, Jan 18, 1995.

[4] Ken Auletta, World War 3.0: Microsoft vs. the U.S. Government, and the Battle to Rule the Digital Age. New York: Broadway Books, 2000.

[5] Thomas Lee and Joseph Davies, Windows 2000 TCP/IP Protocols and Services: Technical Reference. Redmond, WA: Microsoft Press, 2001.

[6] Anand Rajagopaian, "Debugging distributed Web applications," MSDN, Jan 2001.

[7] Performance SeNTry version 2.4 User's Manual. Naples, FL: Demand Technology Software, 2002.

[8] Daniel A. Menasce and Virgilio A. F. Almeida, Scaling for E-Business: Technologies, Models, Performance, and Capacity Planning. Upper Saddle River, NJ: Prentice-Hall PTR, 2000.

[9] Kalen Delaney, Inside SQL Server 2000. Redmond, WA: Microsoft Press, 2000.

[10] Don Box, Essential COM. Boston, MA: Addison-Wesley, 1998.

[11] "Load Balancing COM+ Components," MSDN, March 2002.

[12] Mark Friedman and Odysseas Pentakalos, Windows 2000 Performance Guide. Sebastopol, CA: O'Reilly & Associates, 2002.

[13] Jim Gray and Andreas Reuter, Transaction Processing: Concepts and Techniques. San Francisco, CA: Morgan Kaufmann, 1993.

[14] Mark W. Johnson, "Application Response Measurement (ARM) API, Version 2," CMG Proceedings, 2000. Also available at http://regions.cmg.org/regions/cmgarmw/marcarm.html.