WHITE PAPER

ERDAS APOLLO 2011 Standalone and Cluster Server

Imagery Performance Benchmark


Contents

Introduction

Purpose

Methods

Test Setup

Hardware and Software Topology

Fitness Test

Load Runner Scenario Setup

Gridded Data

Standalone vs. 2 Nodes Cluster setup

Results

Analysis

Conclusion

Appendix A – WMS Fitness Test


Introduction

Following the Information Technology (IT) trend of centralization, virtualization and interoperability, a fundamental shift is underway in the geospatial software space: from vendor-driven desktop software packages to enterprise Service Oriented Architectures (SOA) and Web Services. This shift provides:

• improved security and governance of business critical data;

• the necessary failover, redundancy and scalability for the system under design;

• reduced system administration;

• platform independence; and

• commoditization of software components and features.

Together, these capabilities yield a greater Return on Investment (ROI) on software and hardware systems. Concurrently, the geospatial software industry shows increased acceptance of, and momentum behind, the standardization of technologies supporting interoperable data models, data types and application profiles published by internationally accepted governing bodies, specifically the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO). These published standards are commensurate with the IT trends and technologies used to support enterprise Geospatial Intelligence (GI) software capabilities.

Fast local desktop access to proprietary data formats has created a performance expectation for geospatial software: it must meet the demands of end-user workflows efficiently and satisfy users' perception of usability. Critical to the market acceptance of enterprise software and GI standards is the performance of software implementing enterprise and standards-based technologies. This white paper publishes the results of a performance test delivering map images through the Web Mapping Service (WMS) interface using an ERDAS APOLLO Advantage Tier setup. The test provides a real-world user scenario on a quantity and scale of data common in the market.

ERDAS APOLLO Advantage 2011 is an OGC/ISO compliant, enterprise-class gridded data management and delivery system. It supports the Web Mapping Service (WMS), Web Coverage Service (WCS) and Catalogue Service for the Web (CSW) interfaces for delivery of massive volumes of gridded data through an IT-standard, OGC/ISO-compliant SOA.

Purpose

1. To determine the throughput and expected user load in a market repeatable performance benchmark test for ERDAS APOLLO Advantage 2011 on a single server for a given hardware set and configuration on a real world quantity and scale of data.

2. To determine the throughput and expected user load in a market repeatable performance benchmark test for ERDAS APOLLO Advantage 2011 on a cluster composed of two server nodes, for a given hardware set and configuration on a real world quantity and scale of data.

Methods

A recommended hardware and software topology was built for the system under design, an 8-core system (equal to two 4-core ERDAS APOLLO licensing units). Additional hardware and software was deployed to run the HP LoadRunner software against the system under design. See the “Hardware and Software Topology” section below for a detailed table of the hardware and software deployment. This setup


was added to an existing network on an isolated subnet and network switch to avoid other network activity from interfering with the performance test results.

The ERDAS APOLLO Advantage 2011 server was deployed with post installation tuning performed to scale the software to the server capabilities. The following post installation tuning was implemented:

1. The RDS Process min. value was set to 10 and max. value was set to 20 in the processmanager.properties file.

2. The RDS JVM options were set to:

rds.jvm.options=-Xms64m -Xmx128m -XX:+UseParallelOldGC -XX:ParallelGCThreads=8

3. The min and max database poolsize parameters were modified in the apollo-ds.xml file and set to 10 and 300 respectively.

A fitness test was developed and deployed to each Load Generator machine. The fitness test simulates the process of generating secured WMS requests to the ERDAS APOLLO server (see Appendix A). The Load Generator machines execute this fitness test repeatedly for each user loaded onto the system at runtime of the scenario.

440 multispectral (6-band) LANDSAT 7 images in ERDAS IMAGINE (.img) format at 28.5-meter resolution, covering the entire lower 48 states of the US, were copied to the file server. Each LANDSAT image is in its respective UTM zone projection, and each individual image contained IMAGINE desktop-generated pyramid files (.rrd).

ERDAS MosaicPro was used to create a “low resolution” mosaic image in IMG format of the coverage area at 85.5 meter resolution. IMAGINE pyramids (.rrd) were also generated for this image.

The ERDAS APOLLO Data Manager was used to create a single US aggregate in the Geographic SRS (EPSG:4326) within the data model. All 440 images were harvested and cataloged within this aggregate. The low-resolution mosaic image generated in ERDAS MosaicPro was attached to the US aggregate as an existing overview of the aggregate.

The ERDAS APOLLO Data Manager was used to generate database statistics on the data model using the “Optimize Database” command.

A Load Runner scenario was developed to simulate a real-world user load on the system under design. The Load Runner Controller was configured to ramp up by 25 users every 90 seconds until 500 users were loaded, run at that load for 10 minutes, and then remove all users from the system. The wait time in the Controller was set to seven seconds to simulate the time between map requests for each user (seven seconds to view the map image before the next WMS request was performed for that user).

The Load Runner scenario was run; Windows resource monitors and the performance metrics for each request to the server were collected for all users loaded onto the system.


Test Setup

Hardware and Software Topology

The following hardware and software topology was used for the performance benchmark system under design.

Table 1 – Performance test system setup: machine description, processor, RAM, storage, operating system, and performance-testing and ERDAS software deployed.

| Description | Processor | RAM | Storage | OS | Software |
|---|---|---|---|---|---|
| File Server | Dual Xeon 2.8 GHz | 4 GB | 5 Seagate Cheetah 300 GB, 10k, Ultra320 SCSI disks, RAID 5 | Win 2003 Server Standard SP2 (32-bit) | |
| Database | Dual Xeon 2.8 GHz | 2 GB | 555 GB | Win 2003 Server Standard SP2 (32-bit) | Oracle 11g, one shared database instance and Oracle Listener |
| APOLLO 2011 (Node 1, standalone or cluster setup) | 2 Xeon (quad-core) 2.5 GHz | 16 GB | 1.9 TB | Win 2003 Server Standard SP2 (64-bit) | JBoss Application Server (4.2.2.GA), ERDAS APOLLO Advantage 2011, JDK 6.0 |
| APOLLO 2011 (Node 2, cluster setup) | 2 Xeon (quad-core) 2.5 GHz | 8 GB | 140 GB | Win 2003 Server Standard SP2 (64-bit) | JBoss Application Server (4.2.2.GA), ERDAS APOLLO Advantage 2011, JDK 6.0 |
| Load Generator | Dual Xeon 3.16 GHz | 4 GB | 233 GB | Win 2003 Server Standard SP2 (32-bit) | LoadRunner Load Generator |
| Load Generator | Core2Duo 2.66 GHz | 2 GB | 233 GB | Win 2003 Server Standard SP2 (32-bit) | LoadRunner Load Generator |
| Controller | P4 (2 cores) 3.06 GHz | 2 GB | 140 GB | Win 2003 Server Standard SP2 (32-bit) | LoadRunner Controller |
| Load Generator | Core2Duo 1.66 GHz | 2 GB | 233 GB | Win 2003 Server Standard SP2 (32-bit) | LoadRunner Load Generator |

An HP ProCurve 1800-24G 10/100/1000 Ethernet switch connected all hardware to the performance test subnet.

Fitness Test

A fitness test was developed to simulate an end-user experience with a WMS client. The fitness test was designed to generate 512 x 512 WMS map requests against a secured ERDAS APOLLO Advantage 2011 server. It includes a related data table containing the required WMS GetMap request parameters (extent, layers) appropriate for the dataset coverage. The following WMS request parameter constants were used: height=512, width=512, image format=image/jpeg, SRS=4326, Style=default, username=admin, password=leica123. The fitness test was deployed to each Load Generator client machine in the system under design.
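To make the shape of these requests concrete, the sketch below assembles a WMS 1.1.1 GetMap query string from the constant parameters listed above. This is a minimal illustration, not the actual fixture code used in the benchmark; the class name and query layout are our own, and the SRS is written in the `EPSG:4326` form that WMS 1.1.1 expects.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WmsGetMap {

    // Builds a WMS 1.1.1 GetMap query string using the fitness test's
    // constant parameters (512 x 512, JPEG, EPSG:4326, default style).
    // The extent is BBOX order: minx, miny, maxx, maxy.
    public static String buildQuery(String layers, double[] extent) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("SERVICE", "WMS");
        p.put("VERSION", "1.1.1");
        p.put("REQUEST", "GetMap");
        p.put("LAYERS", layers);
        p.put("STYLES", "default");
        p.put("SRS", "EPSG:4326");
        p.put("BBOX", extent[0] + "," + extent[1] + "," + extent[2] + "," + extent[3]);
        p.put("WIDTH", "512");
        p.put("HEIGHT", "512");
        p.put("FORMAT", "image/jpeg");

        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : p.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Full-CONUS extent used repeatedly by the fitness test.
        double[] conus = {-126.71795914208141, 23.55641843312,
                          -65.4008174684951, 49.9487598904901};
        System.out.println(buildQuery("MSI", conus));
    }
}
```

One such query string, appended to the server's WMS endpoint URL, is what each virtual user issues once every wait-time interval.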


Load Runner Scenario Setup

HP LoadRunner is end-to-end performance and diagnostics software.

1. A Load Runner Scenario was created to execute the fitness test with a seven second “wait time” simulating the time between each WMS request for a given user in the system.

2. The scenario was set up to load 25 users every 90 seconds up to 500 users, then run at 500 users for 10 minutes. At that point, the Controller removes all users from the system.
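The ramp-up schedule above implies a fixed warm-up duration. The small sketch below derives it, assuming (our assumption, not stated in the paper) that the first batch of 25 users starts at t = 0, so the 20th and final batch starts after 19 intervals of 90 seconds, or about 28.5 minutes.

```java
public class RampSchedule {

    // Seconds until the final batch of users starts, assuming the
    // first batch starts at t = 0 (hypothetical interpretation).
    static int rampSeconds(int targetUsers, int usersPerStep, int stepSeconds) {
        int steps = targetUsers / usersPerStep;   // 500 / 25 = 20 batches
        return (steps - 1) * stepSeconds;         // 19 * 90 = 1710 s
    }

    public static void main(String[] args) {
        // Prints the ramp-up length in minutes for this scenario.
        System.out.println(rampSeconds(500, 25, 90) / 60.0 + " minutes");
    }
}
```

This 28.5-minute figure is consistent with the approximately 28-minute ramp-up observed in the Analysis section.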

Gridded Data

LANDSAT 7 scenes covering the entire continental US (440 scenes) in IMAGINE image (.img) format, with pre-generated pyramids (.rrd) and in UTM projection, were loaded onto the file server.

• Pre-generated RRD Pyramids for all image datasets

• 6 band Multi-Spectral Images

• 28.5 Meter resolution

• UTM Projections

ERDAS MosaicPro was used to generate the overview image for the aggregated imagery at 85.5-meter resolution, which was attached to the single aggregate in the ERDAS APOLLO data model. See the white paper “Data Model for Gridded Data” for a detailed description of the ERDAS APOLLO data model and its capabilities.

Standalone vs. 2 Nodes Cluster setup

ERDAS APOLLO is a scalable enterprise product: the ERDAS APOLLO system can be scaled to user demand through clustering. Multiple server nodes, each hosting one ERDAS APOLLO instance, can be deployed together in a cluster to multiply the system's computational power and the number of supported concurrent users. Users see the cluster as a single APOLLO system, as the nodes are kept synchronized and hidden behind a load-balancing system.
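The paper does not name the load-balancing software used, so as a purely illustrative sketch, the dispatcher below shows the simplest scheme a balancer in front of two APOLLO nodes could use: round-robin, sending each incoming WMS request to the next node in turn. The class and node names are hypothetical.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin dispatcher sketch. This is NOT the balancer used
// in the benchmark; it only illustrates how requests can be spread
// evenly across cluster nodes while clients see one logical endpoint.
public class RoundRobin {

    private final List<String> nodes;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<String> nodes) {
        this.nodes = nodes;
    }

    // Returns the node that should receive the next request.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), nodes.size());
        return nodes.get(i);
    }

    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(List.of("apollo-node1", "apollo-node2"));
        for (int r = 0; r < 4; r++) {
            System.out.println("request " + r + " -> " + lb.pick());
        }
    }
}
```

With two equally capable nodes, each node sees roughly half the virtual users, which is why the cluster sustains a higher concurrent load before crossing the usability threshold.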

In this benchmark we will measure two different ERDAS APOLLO setups:

• Standalone setup: one Single ERDAS APOLLO deployed on a single 8 cores server

• 2 Nodes Cluster setup: two ERDAS APOLLO deployed on two 8 cores server nodes, acting as one single homogeneous setup


Results

Graph 1 – Number of Virtual concurrent users vs. Elapsed Scenario Time (Standalone & 2 Nodes Cluster setup)

Table 1 – Table of statistics for virtual concurrent users data for the run scenario


Graph 2a – Average Transaction Response Time vs. Elapsed Scenario Time (Standalone setup)

Table 2a – Table of statistics of response times for each WMS request extent (Standalone setup)


Graph 2b – Average Transaction Response Time vs. Elapsed Scenario Time (2 Nodes Cluster setup)

Table 2b – Table of statistics of response times for each WMS request extent (2 Nodes Cluster setup)


Graph 3 – Percent Processor Time vs. Elapsed Scenario Time (Standalone vs. 2 Nodes Cluster setup)

Table 3 – Statistics of Percent Processor Time for the scenario run (Standalone vs. 2 Nodes Cluster setup)


Graph 4a – Average Response Time (s) vs. Elapsed Scenario Run Time (Standalone)

Table 4a – Statistics of each WMS extent average response times for the scenario run (Standalone)


Graph 4b – Average Response Time (s) vs. Elapsed Scenario Run Time (2 Nodes Cluster)

Table 4b – Statistics of each WMS extent average response times for the scenario run (2 Nodes Cluster)


Analysis

The fitness test is designed to exercise the ERDAS APOLLO Advantage 2011 features known to have high CPU requirements under user load. The test exercises the server's capability to reproject and mosaic the LANDSAT imagery “on the fly”: all UTM images are reprojected into the WMS request SRS (EPSG:4326), and all 1:1-level WMS requests require mosaicking of multiple images.
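A rough resolution calculation shows why the 1:1 requests force full-resolution mosaicking while zoomed-out requests can be served from the low-resolution overview. The sketch below estimates meters per pixel for a 512-pixel-wide request, using the crude approximation of ~111 km per degree of longitude (our simplification; it ignores latitude-dependent convergence).

```java
public class RequestResolution {

    // Approximate ground sample distance (meters per pixel) of a WMS
    // request: extent width in degrees, divided by pixel width, scaled
    // by ~111,000 m per degree. A rough equatorial approximation.
    static double metersPerPixel(double minx, double maxx, int widthPx) {
        return (maxx - minx) / widthPx * 111_000.0;
    }

    public static void main(String[] args) {
        // Full-CONUS extent at 512 px: far coarser than 28.5 m, so the
        // 85.5 m overview can satisfy the request.
        System.out.println(metersPerPixel(-126.717959, -65.400817, 512));

        // The 1:1 DC extent at 512 px lands near the 28.5 m source
        // resolution, forcing mosaicking of full-resolution scenes.
        System.out.println(metersPerPixel(-77.115255, -76.962179, 512));
    }
}
```

By this estimate the CONUS request is on the order of 13,000 m per pixel while the 1:1 DC request is roughly 33 m per pixel, close to the native 28.5 m data, which matches the higher response times observed for mosaic-on-the-fly requests.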

Graph 1 displays the load of Virtual Users (VUsers) vs. elapsed time for the test scenario. The test ramped up to 500 virtual users in approximately 28 minutes. Graph 1 depicts the VUser ramp-up for both the standalone and 2 nodes cluster installations.

A “usability threshold” of two seconds was set for the average response times to establish the point at which users would be satisfied or dissatisfied with the performance of the mapping experience.

Standalone installation

• 260 concurrent users: all requests sub 2 seconds

• 300 concurrent users: all requests sub 3 seconds

• 350 concurrent users: all requests sub 4 seconds

Graph 2a shows that the average request time throughout the ramp up at a 260 user load was below the two second usability threshold for all WMS requests on the Standalone setup. At 300 users, all response times are below 3 seconds and most are below 2 seconds.

For the standalone installation, as the number of users increased to 500, the average response times for each extent exceeded the usability threshold. The majority of map requests ranged between 6 and 9 seconds at the 500-concurrent-user load, and the time per request for each WMS extent begins to show a larger average variance. WMS requests that require the mosaic-on-the-fly process have higher average response times than those that do not exercise this feature.

2 Nodes cluster installation

• 300 concurrent users: all requests sub second

• 380 concurrent users: all requests sub 2 seconds

• 440 concurrent users: all requests sub 3 seconds

• 480 concurrent users: all requests sub 4 seconds

The 2 nodes cluster setup performs better than the standalone installation. Graph 2b shows that the average request time throughout the ramp-up at a 380-user load was below the two-second usability threshold for all WMS requests on the 2 nodes cluster setup. At 440 users, all response times are below 3 seconds and most are below 2 seconds.

For the 2 nodes cluster installation, as the number of users increased to 500, the average response times for each extent remained below 5 seconds. As a result, the 2 nodes cluster installation performs far better than the standalone installation.


Conclusion

Standalone installation

In the standalone setup scenario, the maximum number of concurrent users that achieved an acceptable mapping experience was 300. This setup represents an ERDAS APOLLO Advantage 2011 server with a 2-unit license. The server demonstrated stability beyond its maximum throughput capacity, with no failed WMS map requests throughout the entire scenario even after response times surpassed the established usability threshold. WMS extent requests that required the mosaic-on-the-fly feature displayed predictably higher average response times, as expected.

2 Nodes cluster installation

In the 2 nodes cluster setup scenario, the maximum number of concurrent users that achieved an acceptable mapping experience was 440. This setup represents a cluster of two ERDAS APOLLO Advantage 2011 servers, each with a 2-unit license. The cluster demonstrated stability beyond its maximum throughput capacity, with no failed WMS map requests throughout the entire scenario even after response times surpassed the established usability threshold. WMS extent requests that required the mosaic-on-the-fly feature displayed predictably higher average response times, as expected.

Further tests are needed to determine the maximum sustainable transactions per second, by removing the wait time from the scenario. The same scenario should also be run with the other WMS encoding formats (png, png8 and gif) to determine the effect of the encoding format on performance. This test did not exercise the ortho-on-the-fly feature, where the server orthorectifies images with sensor models for each WMS request, so further tests must be run to determine the server load when that feature is used. Further tests are also needed to determine the load with different image heights and widths requested through WMS.


Appendix A – WMS Fitness Test

/*
 * Code generated by Java-API Inspector for LoadRunner Version 8.1.0
 * Session was recorded on: Tue Apr 17 11:42:55 2007
 *
 * Using VM version  : JDK version 1.5.0
 * Script Author     : aream
 * Script Description: This script is used for making a WMS request
 */
import lrapi.lr;

import com.lggi.esp.fixtures.imagearchive.WMSFixture;
import com.lggi.esp.fixtures.security.LoginFixture;

public class Actions {

    WMSFixture fix;

    /**
     * Initialization
     */
    public int init() throws Throwable {
        fix = new WMSFixture();
        fix.username = "qaadmin";
        fix.password = "qaadminpw";
        fix.server = "swengeaserver";
        fix.iasUrl = "http://swengeaserver:8080/ionicias/ias/IAS";
        fix.createMapServer();
        return 0;
    }

    public void makeRequest(String savePath, double[] extent, String transactionName) throws Exception {
        int[] size = {256, 256};
        fix.srs = "4326"; // change to output EPSG ID
        fix.granuleName = "MSI";
        fix.outputFormat = "image/png";
        fix.savePath = savePath;
        fix.size = size;
        fix.extent = extent;

        // Make WMS request
        lr.start_transaction(transactionName);
        String result = fix.makeRequest();
        if (result.equals("PASS")) {
            lr.end_transaction(transactionName, lr.PASS);
        } else {
            lr.end_transaction(transactionName, lr.FAIL);
            System.out.println(result);
        }
        fix.saveResult();
        lr.think_time(8);
    }

    public int action() throws Throwable {
        String savePath = "C:\\output\\wms\\MSI";

        // DC AOI
        double extent1[]  = {-126.71795914208141, 23.55641843312, -65.4008174684951, 49.9487598904901};
        double extent2[]  = {-126.71795914208141, 23.55641843312, -65.4008174684951, 49.9487598904901};
        double extent3[]  = {-126.449527, 30.762817, -87.294104, 49.983251};
        double extent4[]  = {-109.392146, 28.270413, -89.865786, 37.824731};
        double extent5[]  = {-89.826576, 29.897926, -80.024186, 34.710211};
        double extent6[]  = {-80.471797, 34.711256, -75.595159, 37.111832};
        double extent7[]  = {-77.345637, 37.115220, -74.891510, 38.320032};
        double extent8[]  = {-76.042495, 38.325897, -74.808080, 38.928299};
        double extent9[]  = {-76.654800, 38.627829, -76.040041, 38.929034};
        double extent10[] = {-76.962179, 38.777691, -76.654800, 38.928299};
        double extent11[] = {-77.115255, 38.853183, -76.962179, 38.928299};

        // SD AOI
        double extent12[] = {-126.71795914208141, 23.55641843312, -65.4008174684951, 49.9487598904901};
        double extent13[] = {-126.71795914208141, 23.55641843312, -65.4008174684951, 49.9487598904901};
        double extent14[] = {-125.358891, 30.448521, -86.329724, 49.499718};
        double extent15[] = {-125.358891, 39.939198, 39.939198, 49.499718};
        double extent16[] = {-124.950575, 37.389905, -115.175150, 42.171508};
        double extent17[] = {-121.533609, 34.982985, -116.644553, 37.384532};
        double extent18[] = {-119.099826, 33.779525, -116.641867, 34.982985};
        double extent19[] = {-117.869503, 33.180482, -116.639181, 33.782212};
        double extent20[] = {-117.257028, 32.879617, -116.641867, 33.180482};
        double extent21[] = {-117.259715, 32.729184, -116.953477, 32.879617};
        double extent22[] = {-117.267774, 32.653968, -117.117341, 32.729184};

        // NY AOI
        double extent23[] = {-126.71795914208141, 23.55641843312, -65.4008174684951, 49.9487598904901};
        double extent24[] = {-126.71795914208141, 23.55641843312, -65.4008174684951, 49.9487598904901};
        double extent25[] = {-126.212144, 30.836189, -87.103232, 49.925418};
        double extent26[] = {-103.045218, 39.090991, -83.333698, 48.752861};
        double extent27[] = {-83.255166, 39.090991, -73.438672, 43.910201};
        double extent28[] = {-73.409222, 42.297935, -68.510792, 44.713402};
        double extent29[] = {-71.823858, 41.090201, -69.359918, 42.300866};
        double extent30[] = {-73.050920, 40.877675, -71.826313, 41.478610};
        //double extent31[] = {-73.664451, 40.579406, -73.049693, 40.879141};
        double extent32[] = {-73.972444, 40.641331, -73.665678, 40.791932};
        double extent33[] = {-74.125826, 40.640232, -73.972750, 40.715532};

        // DC AOI requests
        makeRequest(savePath + "1.gif", extent1, "First_Transaction");
        makeRequest(savePath + "DC_EX.gif", extent2, "MSI_Images_DC_EX");
        makeRequest(savePath + "DC_a.gif", extent3, "MSI_Images_DC_ZP2a");
        makeRequest(savePath + "DC_b.gif", extent4, "MSI_Images_DC_ZP2b");
        makeRequest(savePath + "DC_c.gif", extent5, "MSI_Images_DC_ZP2c");
        makeRequest(savePath + "DC_d.gif", extent6, "MSI_Images_DC_ZP2d");
        makeRequest(savePath + "DC_e.gif", extent7, "MSI_Images_DC_ZP2e");
        makeRequest(savePath + "DC_f.gif", extent8, "MSI_Images_DC_ZP2f");
        makeRequest(savePath + "DC_g.gif", extent9, "MSI_Images_DC_ZP2g");
        makeRequest(savePath + "DC_h.gif", extent10, "MSI_Images_DC_ZP2h");
        makeRequest(savePath + "DC_1to1.gif", extent11, "MSI_Images_DC_1to1");

        // SD AOI requests
        makeRequest(savePath + "SD_1.gif", extent12, "First_Transaction");
        makeRequest(savePath + "SD_EX.gif", extent13, "MSI_Images_SD_EX");
        makeRequest(savePath + "SD_a.gif", extent14, "MSI_Images_SD_ZP2a");
        makeRequest(savePath + "SD_b.gif", extent15, "MSI_Images_SD_ZP2b");
        makeRequest(savePath + "SD_c.gif", extent16, "MSI_Images_SD_ZP2c");
        makeRequest(savePath + "SD_d.gif", extent17, "MSI_Images_SD_ZP2d");
        makeRequest(savePath + "SD_e.gif", extent18, "MSI_Images_SD_ZP2e");
        makeRequest(savePath + "SD_f.gif", extent19, "MSI_Images_SD_ZP2f");
        makeRequest(savePath + "SD_g.gif", extent20, "MSI_Images_SD_ZP2g");
        makeRequest(savePath + "SD_h.gif", extent21, "MSI_Images_SD_ZP2h");
        makeRequest(savePath + "SD_1to1.gif", extent22, "MSI_Images_SD_1to1");

        // NY AOI requests
        makeRequest(savePath + "NY_1.gif", extent23, "First_Transaction");
        makeRequest(savePath + "NY_EX.gif", extent24, "MSI_Images_NY_EX");
        makeRequest(savePath + "NY_a.gif", extent25, "MSI_Images_NY_ZP2a");
        makeRequest(savePath + "NY_b.gif", extent26, "MSI_Images_NY_ZP2b");
        makeRequest(savePath + "NY_c.gif", extent27, "MSI_Images_NY_ZP2c");
        makeRequest(savePath + "NY_d.gif", extent28, "MSI_Images_NY_ZP2d");
        makeRequest(savePath + "NY_e.gif", extent29, "MSI_Images_NY_ZP2e");
        makeRequest(savePath + "NY_f.gif", extent30, "MSI_Images_NY_ZP2f");
        //makeRequest(savePath + "NY_g.gif", extent31, "MSI_Images_NY_ZP2g");
        makeRequest(savePath + "NY_h.gif", extent32, "MSI_Images_NY_ZP2h");
        makeRequest(savePath + "NY_1to1.gif", extent33, "MSI_Images_NY_1to1");

        return 0;
    }

    /**
     * End
     */
    public int end() throws Throwable {
        return 0;
    }
}


For more information about Intergraph, visit our Web site at www.intergraph.com. For more information about ERDAS, visit our Web site at www.erdas.com

Intergraph, the ERDAS logo and the Intergraph logo are registered trademarks of Intergraph Corporation. Other brands and product names are trademarks of their respective owners. Intergraph believes that the information in this publication is accurate as of its publication date. Such information is subject to change without notice. Intergraph is not responsible for inadvertent errors. ©2011 Intergraph Corporation. All Rights Reserved.