Session 3.7 - Short Presentations


  • 9th International Conference on Urban Drainage Modelling Belgrade 2012


Evaluating the impact of climate change on urban scale extreme rainfall events: Coupling of multiple global circulation models with a stochastic rainfall generator

Gonzalo Peña 1, Assela Pathirana 2

1 MSc Research Fellow - UNESCO-IHE, Westvest 7, Delft, The Netherlands, [email protected]
2 Senior Lecturer in Urban Drainage and Sewerage - UNESCO-IHE, Westvest 7, Delft, The Netherlands, [email protected]

    ABSTRACT

Assessing the impact of climate change in an urban drainage context requires tools and methodologies that work at the local scale and at fine temporal resolution. This paper provides a methodology for downscaling the output of an ensemble of Global Circulation Models (GCMs) in order to produce synthetic point, urban scale rainfall series using an hourly stochastic rainfall generator. In order to combine the results of several GCMs, a previously developed methodology makes use of a Bayesian approach to produce a probabilistic distribution of the factors of change for different statistical properties. These factors are applied to the statistical properties of the observed data in order to re-evaluate the parameters of the rainfall generator.

Hourly rainfall data from the city of Kochi (Japan) for the period 1976-2000 have been used to assess the proposed methodology. The stochastic downscaling process uses twelve GCMs as input for the periods 2046-2065 and 2081-2100 and the scenarios A1b, B1 and A2 as adopted by the Intergovernmental Panel on Climate Change (IPCC). Results are derived from multiple future realizations of climate for the different scenarios and periods.

The rainfall generator used is proficient in reproducing extreme events for different rainfall durations, but is limited to return periods of ten years and smaller. The methodology produces results that are consistent across the different scenarios, but because only the mean factor of change of the probability distribution is used, the results presented are only indicative of a likely scenario. Further uncertainties present in the described methodology are discussed.


    KEYWORDS

    Climate change, extreme events, rainfall generator, stochastic downscaling

    1 INTRODUCTION

    Growing cities are becoming more vulnerable to damaging flood events as a result of rapid urbanization, population growth and changes in extreme precipitation frequency and intensity due to climate change. In the development of prevention and mitigation strategies against floods and the design of urban drainage, knowing the impact of climate change is of paramount importance.

Climate change can be defined as a change in the state of climate statistics and trends, over the period of analysis, for the entire globe. This definition takes into account both the changes caused by the natural variability of the earth system and the changes induced by human activity, also known as anthropogenic factors (UN 2007).

Growing concern about climate and humankind's role in changing it fostered the creation of the Intergovernmental Panel on Climate Change (IPCC) in 1988. The aim of this international body is to provide a scientific basis on the current state of climate change as well as on its potential social, environmental and economic impacts. This is achieved by means of periodic reports that aggregate the findings of different international research groups and at the same time define a unified and coherent framework for climate impact assessment (IPCC 2011a).

    1.1 Tools in the assessment of climate change: GCMs

GCMs are numerical models that represent the different physical processes that take place in the atmosphere, ocean, cryosphere and land surface. These processes include the exchange of momentum, heat and moisture between atmospheric layers, plus diffusion and advection in oceanic layers (IPCC 2011b).

GCMs are defined on coarse three-dimensional grids covering the entire globe, with a horizontal resolution that varies between 250 and 600 km (at the equator) and up to 20 and 30 layers in the atmosphere and oceans, respectively (Viner n.d.). Being numerical models, they need initial and boundary conditions in order to be executed, and their resolution is dictated by the computing power and storage available.

Many physical processes, including clouds, cannot be directly represented at the scale at which GCMs work, so parameterization is needed in order to average their properties. This represents a source of uncertainty in every model. Other sources arise from the fact that different models use different mechanisms to take into account, for instance, water vapour and warming, clouds and radiation, ocean circulation, and ice and snow albedo. For this reason, different models developed by different working groups, using the same set of forcing conditions, can produce divergent output, adding to the uncertainty, as illustrated by Figure 1. At the same time, initial conditions play an important role, so small differences may have a dramatic impact as GCM simulations go further in time.


Figure 1. Time evolution of the globally averaged temperature change relative to the 1961-1990 mean, for GCMs forced with the IS92a scenario (G: greenhouse gas forcing only). [Source: Adapted from (IPCC 2001a)]

    1.2 Special report on emission scenarios (SRES)

The team designated by the IPCC to be in charge of the SRES development envisioned a set of emission scenarios covering a wide range of driving forces, including agriculture, population, economy, technology and available sources of energy. Each of these scenarios develops from a set of storylines that take into account how future developments could increase or decrease greenhouse gas (GHG) emissions (IPCC 2001b). The scenarios are described in Table 1 and schematically represented in Figure 2.

Table 1. Description of the SRES scenarios

A1: A world of very rapid economic growth, a global population that peaks in mid-century and rapid introduction of new and more efficient technologies

A2: A heterogeneous world with high population growth, slow economic development and slow technological change

B1: A convergent world, with the same global population as A1, but with more rapid changes in economic structures toward a service and information economy

B2: A world with intermediate population and economic growth, emphasising local solutions to economic, social and environmental sustainability

[Source: Adapted from (IPCC 2001b)]


Figure 2. SRES scenarios. A1FI, A1T and A1B refer to alternative directions of technological change and stand for fossil intensive, non-fossil intensive resources and balanced between all sources, respectively. *Scenario B2 is not evaluated in this study.

According to IPCC (2007), it is very likely that heavy precipitation events will increase in frequency over most areas of the globe. Ongoing research in the field of climate change impact assessment has focused on the regional, catchment hydrological scale, until recently ignoring the impact on urban catchments at the local scale (Fowler et al. 2007; Willems et al. 2012).

    The development of a downscaling methodology to provide synthetic rainfall series at the urban scale from GCM output represents the initial step of any future urban climate change impact assessment study. The subsequent development of a tool will naturally lead to a better understanding of the present and future risks and vulnerabilities of large cities when complex infrastructure faces extreme precipitation events.

    1.3 Downscaling alternatives

Different methodologies can be applied in the downscaling process. A brief summary is given in the following paragraphs; the stochastic rainfall generator employed in this work is described in more detail in Section 1.4.

    1.3.1 Dynamic downscaling

This method refers to the use of regional climate models (RCMs) in a nested modelling technique, where the output of GCM simulations provides the time-varying boundary and initial conditions around a finite domain or region of interest inside the GCM.

RCMs are also mathematical/numerical models, which resolve a finer grid resolution in latitude and longitude. The resolution of these models has increased in recent years thanks to advances in computing power, and nowadays RCMs can routinely achieve resolutions of 10 km. This increased resolution allows a better representation of regional phenomena like orographic precipitation, extreme climate events and regional scale climate anomalies, as well as effects such as the El Niño Southern Oscillation (ENSO) (Fowler et al. 2007; Sunyer et al. n.d.).

    1.3.2 Statistical downscaling

This method refers to the statistical correlation between the coarse-scale (in both space and time) state of the atmosphere simulated by GCMs, called the predictor variables, and the small scale precipitation, called the predictand variable. The statistical model is based on historical time series, so an assumption taken by these methods is that


the predictor/predictand relationship will not change under changing climate conditions, also known as the stationarity condition (Bierkens et al. 2000). These methods can be further classified into:

    Transfer functions

    Resampling methods

    Stochastic downscaling

The latter set of techniques, also known as stochastic rainfall generators or weather generators, consists of a mixture of empirical statistical relations and physical methods. The idea is to use stochastic models that are guided by descriptions of the physical phenomena of the modelled process (Fatichi et al. 2011).

    Stochastic processes deal with systems that develop in time and space following probabilistic laws (Cox and Miller 1977). They can be applied to the description of rain cells and clustering, the dependence between precipitation and cloudiness and dependences between temperature and radiation among others.

Stochastic generators are traditionally parameterized using existing long historical series. This is done in order to adapt the underlying probabilistic functions so that they reproduce, to the best possible extent, basic statistical properties like the mean and higher order moments such as the variance, the skewness and the kurtosis (Semenov 2011).

    1.4 Poisson cluster processes

Stochastic precipitation generators have been known for almost 20 years, and several enhancements have been developed, increasing their complexity and at the same time improving their ability to represent rainfall processes. The Neyman-Scott Rectangular Pulse (NSRP) generator and the Bartlett-Lewis Rectangular Pulse (BLRP) generator are among such stochastic methods aiming to produce rainfall based on a probabilistic model. In their basic specification both models follow a similar process, described in the following steps with reference to Figure 3:

1. Storm origins arrive according to a Poisson process (occurrences follow a Poisson distribution in time)

2. Each origin generates a random number of rain cells

    3. The duration of each rain cell is exponentially distributed

    4. The intensity of each rain cell is exponentially distributed as well

    5. The total intensity is equal to the sum of active rain cells

The key difference between the two models lies in the reference point from which rain cells are generated: relative to the storm origin in the NSRP, and relative to each consecutively generated rain cell in the BLRP. In its basic form the model is defined by five parameters. Modifications have been made to the models allowing cell duration to vary from storm to storm, hence creating the need for an additional parameter, and further statistical properties have been derived, including the skewness and the proportion of dry days (Cowpertwait 1998). These modified models have also been subject to additional configurations, like superposition to create mixed rectangular pulses in order to better represent different types of rainfall such as convective and stratiform (Cowpertwait 2004).
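As an illustration of the five-parameter process described above, the short Python sketch below simulates a simplified NSRP sequence; the parameter names and values are illustrative assumptions only and do not correspond to the calibrated Kochi parameters.

# Minimal sketch of a Neyman-Scott Rectangular Pulse (NSRP) simulation.
# Parameter names and values are illustrative assumptions, not calibrated values.
import numpy as np

rng = np.random.default_rng(42)

def simulate_nsrp(hours, lam=0.02, mu_c=5.0, beta=0.1, eta=0.5, xi=1.5):
    """Return an hourly rainfall series (mm/h) of length `hours`.

    lam  : storm origin arrival rate (1/h), Poisson process
    mu_c : mean number of rain cells per storm
    beta : rate of the exponential cell displacement from the storm origin (1/h)
    eta  : rate of the exponential cell duration (1/h)
    xi   : rate of the exponential cell intensity (h/mm)
    """
    rain = np.zeros(hours)
    # 1. Storm origins arrive as a Poisson process in time.
    n_storms = rng.poisson(lam * hours)
    origins = rng.uniform(0.0, hours, n_storms)
    for t0 in origins:
        # 2. Each storm origin generates a random number of rain cells,
        #    displaced relative to the storm origin (the Neyman-Scott convention).
        n_cells = rng.poisson(mu_c)
        starts = t0 + rng.exponential(1.0 / beta, n_cells)
        # 3. Cell durations and 4. cell intensities are exponentially distributed.
        durations = rng.exponential(1.0 / eta, n_cells)
        intensities = rng.exponential(1.0 / xi, n_cells)
        # 5. The total intensity is the sum of all active rectangular pulses
        #    (pulses shorter than one hour are truncated in this hourly sketch).
        for s, d, i in zip(starts, durations, intensities):
            a, b = int(s), int(min(np.ceil(s + d), hours))
            rain[a:b] += i
    return rain

hourly = simulate_nsrp(hours=25 * 365 * 24)   # roughly 25 years of synthetic rainfall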


The most recent development in the field is an improved, modified Bartlett-Lewis pulse model (BLP) for the generation of sub-hourly rainfall. This model discards the rectangular pulse approach and replaces it with a Poisson process generating instantaneous depths, which better represents fine scale rainfall processes (Cowpertwait et al. 2011).

    Figure 3. Schematic representation of NSRP and BLRP model

    1.5 Multimodel ensembles

A great wealth of information regarding climate impact studies is reflected in the different worldwide projects involving intercomparison of GCM output. The Working Group on Coupled Modelling (WGCM) has been instrumental in this endeavour by collecting the data available from research centres all around the world and making it available through the WCRP Coupled Model Intercomparison Project (CMIP3) multi-model dataset.

According to Smith et al. (2009), uncertainties in climate projections can be roughly classified into:

    Natural climate variability

    Uncertainties in the way forcing factors, like the changes in levels of greenhouse gases, affect the climatic response

    Uncertainties on what the future will be like in terms of future emissions scenarios and the like.

For the latter, the divergent results obtained from different models and different forcing scenarios have created the need to evaluate uncertainty within a framework that assigns a weight to each of the models used (Tebaldi and Knutti 2007). These approaches have shown better results when compared to simple averages where every model is given an equal weight.


Several methods exist today to explicitly take into account the variability between models, including the Reliability Ensemble Average (REA), which proposes a direct estimation of bias and convergence for the different models. Another methodology is Bayesian analysis, where the unknown quantities are treated as random variables in order to derive a posterior probability distribution function of all the uncertain quantities of interest (Tebaldi et al. 2005a).

These methodologies rely on the assumption of independence between models, which is not strictly guaranteed given that models based on a physical description of natural phenomena use similar mathematical descriptions of atmospheric and oceanic processes. Nevertheless, by increasing the number of models used, narrower posterior probability distributions are obtained, hence producing better estimates with reduced uncertainty.

    1.6 Main objective

The overall objective of the presented work is to develop and extend a downscaling methodology applicable to general circulation model output, taking into account the impact of climate change under different emission scenarios, in order to produce urban point-scale rainfall data that allows the estimation of extreme events for different return periods. The work also included the exploration of an open source environment for scientific computing, making use of the Python programming language.

    2 METHODOLOGY

Figure 4. A scheme of the stochastic downscaling methodology:

1. Estimation of monthly rainfall generator parameters for the case study location (Kochi, Japan) using statistical properties of historical data (1976-2000).

2. Evaluation of monthly climate statistical properties for an ensemble of climate models and three different emission scenarios (SRES-A1b, SRES-A2 and SRES-B1).

3. Evaluation of factors of change for the different statistical properties at different aggregation intervals using a Bayesian ensemble approach.

4. Calculation of mean factors of change for each statistical property and calculation of future statistical properties (for aggregation intervals >= 24 hours).

5. Extension of the future statistical properties to finer aggregation scales (< 24 hours).

6. Calculation of a new set of modified parameters for the rainfall generator per month.

7. Simulation of an ensemble of future synthetic rainfall using the modified generator (200 realizations per scenario).

8. Evaluation of the impact on extreme rainfall events using IDF curve analysis.


The methodology employed in this work closely follows the methodology proposed in (Fatichi et al. 2011) and extends it to make use of 12 GCMs, which is expected to reduce the uncertainty in the results and at the same time produce more robust results. In addition, several emission scenarios, namely SRES A1b, A2 and B1, are evaluated to offer further insight into the different possible evolutions of rainfall extreme events. Figure 4 illustrates the main steps of the downscaling process, which are described in more detail in the subsequent sections.

    2.1 Case study exploratory analysis

The basic rainfall data used in this study were required at the hourly scale. Various case study locations were selected in Japan, covering different latitudes and climatic conditions and ranging from Sapporo in the north to Naha in the south, to check the performance of the stochastic rainfall generator.

For the present evaluation of the influence of climate change on extreme rainfall, only the city of Kochi is analyzed, as it presented the highest intensity events in the historical series. All of the information was obtained through the Automated Meteorological Data Acquisition System (AMeDAS), from 1976, when the system was launched, up to 2010.

In order to be consistent with the 20th-century simulations of the GCMs, a decision was made to use only the records from 1976 up to 2000, as this period is covered by the simulations of the different models. The impact of this decision is discussed in the results.

Figure 6 summarizes the general trends observed in the time series. For the annual rainfall regime, a tendency to increase can be observed in the period from 1995 to 2000. The highest amounts of rainfall occur in the months of June and September, which can be seen in the multiannual monthly and daily statistics. Finally, for the mean wet and dry spell durations, there is no appreciable change in trend over the recorded series.

    Figure 5. Case study locations in Japan


    Figure 6. Exploratory analysis of historical data in Kochi Japan [1976-2000]

The total annual rainfall does display slight non-stationary behaviour, whereas the wet and dry spell durations do not. Because of these minor variations, the stationarity assumption, which underlies the parametrization of the stochastic rainfall generator and the subsequent application of the change factor concept, is not expected to induce significant errors.

    2.2 GCM Output

The CMIP3 dataset includes output from 25 different GCMs, but not all models have data available for the same periods of time or even the same scenarios. From these 25 models a first selection was made, keeping those with information for all the scenarios and for both periods to be analyzed, namely 2046-2065 and 2081-2100. As mentioned earlier, the Bayesian ensemble concept used, described in full detail in (Tebaldi et al. 2005a), assumes the independence of the climate models forming the ensemble.

    From the preselection of models a final ensemble was picked out taking into account:

If different models come from the same working group or research centre, only the latest or most complete model (in terms of data availability) is used.


If a model contains several runs for the same period, only the latest run is used, provided that this run has a complete data record for the periods of interest.

Figure 7 summarizes the final list of models selected and the information available for the different emission scenarios; all output is available at the daily scale.

    Figure 7. GCM output data available for past and future runs for the selected GCM models

2.3 Parametrization of the Neyman-Scott Rectangular Pulse (NSRP) model

    The stochastic model used in this research corresponds to the rainfall component used in the Advanced Weather Generator (AWE-GEN). A full description of the governing equations can be found in (Cowpertwait 2004; Fatichi et al. 2011).

To find the parameters, the approach suggested in (Cowpertwait 2004) was followed: fitting the statistical properties at different aggregation intervals, ensuring the preservation of a wide range of rainfall scales. The dimensionless statistical properties used to fit the model are:

    The normalized standard deviation: Coefficient of variation

    The third moment around the mean: Skewness

    The dry spell fraction

    The lag-1 autocovariance

This process implied the use of an objective function of the form of equation (1) to minimize the differences between the observed statistics and the ones produced by the model.

    (1)
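Based on the description of the weights and the reciprocal terms given in the next paragraph, and on the formulation in (Cowpertwait 2004; Fatichi et al. 2011), equation (1) is plausibly of the form

Z(\theta) = \sum_{k}\sum_{h} w_{k,h}\left\{\left[1-\frac{f_{k,h}(\theta)}{\hat{f}_{k,h}}\right]^{2}+\left[1-\frac{\hat{f}_{k,h}}{f_{k,h}(\theta)}\right]^{2}\right\}

where \hat{f}_{k,h} is the observed value of statistic k at aggregation interval h, f_{k,h}(\theta) the corresponding value produced by the model with parameter vector \theta, and w_{k,h} a weight coefficient; the exact expression used in the study may differ slightly.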


This function was applied over all the aggregation intervals used in calibration and all the aforementioned statistical properties. Weight coefficients can be applied to individual statistics or aggregation intervals in order to give more importance to the higher order moments, which are of interest when investigating extreme values. Biases in the estimation of the function are partially avoided by making use of the reciprocal terms of the fitted and observed statistics (inside the square brackets).

For the current models the same weight was given to all the statistics evaluated, and a constrained optimization routine based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method was used.

    To account for seasonality a set of parameters was defined per calendar month and different aggregation intervals were evaluated. The final set of aggregation intervals used was 1, 6, 12 and 96 hours.
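A minimal sketch of this calibration loop is given below, assuming an observed hourly series `hourly` (such as the synthetic one generated in the sketch of Section 1.4) and a hypothetical helper `nsrp_statistics` returning the model statistics for a trial parameter vector; SciPy's L-BFGS-B routine is used here as a readily available bound-constrained BFGS variant, which may differ from the exact routine used in the study.

# Sketch of the moment-fitting calibration for one calendar month.
# `nsrp_statistics(theta, h)` is a hypothetical helper that would return the
# analytical NSRP statistics (coefficient of variation, skewness, dry fraction,
# lag-1 autocorrelation) at aggregation interval h; it is not implemented here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skew

AGGREGATIONS = [1, 6, 12, 96]                   # hours, as used in this study

def observed_statistics(hourly, h):
    """Dimensionless statistics of an hourly series aggregated to h hours."""
    n = (len(hourly) // h) * h
    agg = hourly[:n].reshape(-1, h).sum(axis=1)
    cv = agg.std() / agg.mean()                  # coefficient of variation
    sk = skew(agg)                               # skewness
    dry = np.mean(agg == 0.0)                    # dry spell fraction
    lag1 = np.corrcoef(agg[:-1], agg[1:])[0, 1]  # lag-1 autocorrelation
    return np.array([cv, sk, dry, lag1])

def objective(theta, obs_stats):
    # Equal weights for all statistics and aggregation intervals, with the
    # reciprocal terms that partially avoid bias in the fit.
    z = 0.0
    for h, f_obs in zip(AGGREGATIONS, obs_stats):
        f_mod = nsrp_statistics(theta, h)        # hypothetical helper
        z += np.sum((1.0 - f_mod / f_obs) ** 2 + (1.0 - f_obs / f_mod) ** 2)
    return z

obs_stats = [observed_statistics(hourly, h) for h in AGGREGATIONS]
theta0 = np.array([0.02, 5.0, 0.1, 0.5, 1.5, 1.0])   # illustrative starting point
bounds = [(1e-6, None)] * len(theta0)                # keep parameters positive
res = minimize(objective, theta0, args=(obs_stats,), method="L-BFGS-B", bounds=bounds)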

    2.4 Multiple realizations

The stochastic nature of the rainfall model produces a different set of extreme rainfalls each time a simulation is run. If a sufficiently long simulation is executed, the statistics of the generated synthetic rainfall (if appropriately fitted) would match the observed statistics; however, this approach cannot be used when analyzing annual extremes.

To account for extremes, several runs must be performed. A set of 200 simulations covering the same period as the observations (1976-2000) was run, and the median and a 95% percentile band were compared to the observed maxima using a reduced Gumbel variate, as displayed in Figure 8 and Figure 9.

    Figure 8. Comparison of annual maxima using reduced variate plots. Historical data and multiple synthetic rainfall runs. 1 and 12-hourly aggregation

The model is able to correctly represent extreme events with return periods of 10 years or less, but underestimates values for return periods of 20 years or more. Nevertheless, the observed values are bounded by the 95% percentile band, which confirms that the model is able to reproduce the observed events.
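A sketch of the comparison is given below, assuming an array `obs_max` of observed annual maxima for one duration and a 2-D array `sims` of shape (200, n_years) holding the corresponding simulated annual maxima; the Gringorten plotting position is an assumption, as the formula used in the study is not stated here.

# Reduced Gumbel variate comparison of observed and simulated annual maxima.
# `obs_max` (n_years,) and `sims` (200, n_years) are assumed to be available.
import numpy as np

def reduced_variate(n):
    """Gumbel reduced variate for ranked annual maxima (Gringorten positions)."""
    ranks = np.arange(1, n + 1)
    prob = (ranks - 0.44) / (n + 0.12)           # non-exceedance probability
    return -np.log(-np.log(prob))

y = reduced_variate(len(obs_max))                # x-axis of the reduced variate plot
obs_sorted = np.sort(obs_max)

sims_sorted = np.sort(sims, axis=1)              # rank the maxima of each realization
median = np.median(sims_sorted, axis=0)          # central estimate over the 200 runs
lo, hi = np.percentile(sims_sorted, [2.5, 97.5], axis=0)   # 95% percentile band

# The observed curve (obs_sorted vs y) should fall mostly inside [lo, hi];
# the return period associated with a reduced variate y is T = 1 / (1 - exp(-exp(-y))).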


    Figure 9. Comparison of annual maxima using reduced variate plots. Historical data and multiple synthetic rainfall runs. 24 and 72-hourly aggregation

    2.5 Factors of change

In order to include the impact of climate change, factors of change have to be calculated for each calendar month; they can be applied to all the statistical properties employed in the calibration of the stochastic model.

Figure 10 depicts the concept of the factor of change derived from a non-stationary climate, where the values of a statistic over the future and the historical period of study are used to derive a factor as their quotient. The curving line represents a single GCM output for both the 20th century and beyond.


    Figure 10. Stationarity assumption and change factor concept

This relation can then be applied to the observed statistics at the location to recalibrate the NSRP model parameters at the corresponding aggregation interval.

    (2)
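From the description above, the change factor and its application presumably take the multiplicative form

CF_{k,h} = \frac{S^{\mathrm{GCM}}_{k,h,\mathrm{fut}}}{S^{\mathrm{GCM}}_{k,h,\mathrm{hist}}}, \qquad S^{\mathrm{loc}}_{k,h,\mathrm{fut}} = CF_{k,h}\, S^{\mathrm{obs}}_{k,h}

where S denotes the value of statistic k at aggregation interval h over the historical and future GCM periods and for the observed local series; the exact expression of equation (2) may differ in detail.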


    2.6 Bayesian ensemble approach

Using the methodology of (Tebaldi et al. 2005b), the simulations from different GCMs can be combined through Bayes' theorem, shown in equation (3):

p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta) \qquad (3)

where y represents the available data and θ the parameters.

In general terms, the Bayesian ensemble methodology produces a probability density function (PDF) of both the historical (μ) and the future (ν) value of the statistical property considered, at the specified aggregation interval.

To produce this, the probabilistic model makes use of the future and historical values at the grid point where the case study is located for all 12 GCMs, but also takes into account the observed data at the location. Because the observed data enter the conditioning, the PDFs produced for different locations lying within the same GCM grid cell would still yield different results.
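A minimal sketch of such a univariate Bayesian combination, written for the PyMC 2 package mentioned in Section 2.9, is given below. The priors, the shared precision structure and the placeholder data are simplifying assumptions and do not reproduce the full hierarchy of Tebaldi et al. (2005).

# Simplified Bayesian ensemble for one statistic at one aggregation interval.
# x[i]: historical-period value from GCM i; y[i]: future-period value from GCM i;
# x0: value computed from the observed record. All values below are placeholders.
import numpy as np
import pymc as pm   # PyMC 2.x

x = np.random.normal(3.0, 0.4, 12)      # placeholder historical GCM values
y = np.random.normal(3.4, 0.4, 12)      # placeholder future GCM values
x0 = 3.1                                # placeholder observed value

mu = pm.Uniform('mu', lower=0.0, upper=50.0)                  # true historical value
nu = pm.Uniform('nu', lower=0.0, upper=50.0)                  # true future value
lam = pm.Gamma('lam', alpha=0.001, beta=0.001, size=len(x))   # per-model precisions
lam0 = pm.Gamma('lam0', alpha=0.001, beta=0.001)              # observation precision

x_lik = pm.Normal('x_lik', mu=mu, tau=lam, value=x, observed=True)
y_lik = pm.Normal('y_lik', mu=nu, tau=lam, value=y, observed=True)   # shared precision: a simplification
x0_lik = pm.Normal('x0_lik', mu=mu, tau=lam0, value=x0, observed=True)

mcmc = pm.MCMC([mu, nu, lam, lam0, x_lik, y_lik, x0_lik])
mcmc.sample(iter=50000, burn=10000)

# Posterior samples of the change factor; the study then uses its mean value.
cf_samples = mcmc.trace('nu')[:] / mcmc.trace('mu')[:]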

After obtaining the PDFs of the historical and future values, the PDF of the change factor can be calculated as the quotient of the two. Figure 11 shows the results for the mean 24-hourly aggregated rainfall in the month of January for scenarios A1b and B1 in the period 2046-2065.

    Figure 11. Bayesian Ensemble results. January 24-hourly rainfall mean scenarios A1b and B1


Additional assumptions have been made in the application of this procedure. For instance, the different resolutions of the GCMs were not taken into account; instead, the value over the entire grid cell of each model was considered representative of the location. Although this might be seen as a crude assumption, it is a compromise made in order not to introduce further uncertainties into the calculations; besides, it is worth noting that the exact values of μ and ν are not of direct interest, only the change between the two.

With the PDF of the factor of change at hand, a Monte Carlo type simulation could be performed to account for the uncertainties arising from the use of an ensemble of GCMs. This was, however, out of the scope of this research; instead the mean factor of change was used, which represents a very likely outcome for each of the scenarios.

    2.7 Extension of statistical properties to finer scales

The Bayesian ensemble procedure was applied only to aggregation intervals of 24 hours and larger, as daily data was the finest resolution available in the GCM results dataset.

    2.7.1 Mean

As illustrated by Figure 12, the factor of change for the mean follows a linear relationship across aggregation intervals, so a straightforward extension to finer scales can be assumed.

    Figure 12. Factor of change for the mean for different aggregation steps for the months of September, October, November and December. Scenario A1b, period 2046-2065

    2.7.2 Dry spell fraction / probability of dry spell

The various historical datasets analyzed displayed a clear exponential decay of the dry spell probability (DSP) with aggregation interval (Figure 13).

Theoretically, as the aggregation interval tends to zero, the dry spell fraction should tend to one.

The DSP fractions obtained from the factors of change for aggregations of 24, 48, 72 and 96 hours were used to fit equation (4) by a least squares method.

    (4)
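Given the observed exponential decay and the requirement that the dry fraction tends to one as the aggregation interval h tends to zero, equation (4) plausibly has a form such as

\phi_{\mathrm{dry}}(h) = \exp\left(-a\, h^{b}\right), \qquad a, b > 0

with a and b estimated by least squares; the exact parametrization used in the study may differ.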

    Figure 13. Dry spell fraction variation. August


    2.7.3 Skewness

There is no fully satisfactory methodology for extending the skewness to finer scales. Nevertheless, there appears to be a correlation between the shape of the coefficient of variation and the shape of the skewness across aggregation intervals (Figure 14). The value for sub-daily scales was therefore assumed to remain equal to the observed values.

    Figure 14. Variation of coefficient of variation and skewness of observed data for various aggregation intervals

    2.7.4 Variance

For this extension, a theoretical derivation by Marani (2003, 2005) was used, which was tested successfully by Fatichi et al. (2011) for aggregation intervals between 1 and 6 hours.

    2.8 Extreme rainfall analysis

The final step in the downscaling procedure was to generate a set of Intensity-Duration-Frequency (IDF) curves for different return periods using standard methodology. Figure 15 displays the results for the Kochi location without any type of adjustment or smoothing.

    Figure 15. IDF curves, historical data [1976-1999]


For the final results, equation (5) was used as a generalized expression to adjust the values; the fitting was accomplished by minimizing the error with a bound-constrained nonlinear optimization routine, similar to the one used in the parametrization of the NSRP model.

    (5)
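A commonly used generalized IDF expression, adopted here only as an assumed illustration of equation (5), is

i(d, T) = \frac{a(T)}{(d + b)^{c}}

where i is the rainfall intensity, d the duration, T the return period, and a, b and c are parameters estimated per return period by the optimization routine mentioned above.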

    2.9 Development infrastructure and tools used

Once the downscaling methodology was defined, and in order to carry out the different tasks and fulfil the proposed objectives, a programming language and development environment were selected.

Python, a high-level interpreted language which is now mature after more than 20 years of continuous development (Rossum and Drake 2011), presented itself as the best alternative, offering the following characteristics:

    Freely available, open source and cross platform

    Easy to use and scalable

    Powerful enough to carry out heavy numerical scientific computing.

For the latter, Python is enhanced by additional modules and packages with specific purposes, like NumPy, which is used for numerical array computation, side by side with SciPy, which offers standard scientific computing routines including statistics and optimization algorithms. The processing of the GCM output files was done with the netCDF4 Python package. To perform the Bayesian ensemble, the PyMC package (Patil et al. 2010) was used, which proved to greatly simplify any future modification of the probability model employed.
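As an example of the GCM output processing step, the short sketch below reads daily precipitation from a CMIP3-style netCDF file with the netCDF4 package; the file name is hypothetical, and the variable name 'pr' and its units follow the usual CMIP conventions rather than a file verified here.

# Read daily precipitation from a CMIP3-style netCDF file and extract the
# grid cell containing the study location (file name is hypothetical).
import numpy as np
from netCDF4 import Dataset

ds = Dataset('pr_sresa1b_2046-2065.nc')          # hypothetical file name
lats = ds.variables['lat'][:]
lons = ds.variables['lon'][:]
pr = ds.variables['pr'][:]                       # (time, lat, lon), in kg m-2 s-1

# Nearest grid cell to Kochi (approximately 33.6 N, 133.5 E)
i = np.abs(lats - 33.6).argmin()
j = np.abs(lons - 133.5).argmin()
daily_mm = pr[:, i, j] * 86400.0                 # convert to mm/day
ds.close()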

Finally, a graphical environment resembling the MATLAB console style was also tested and used (Raybaut 2009). Figure 16 displays the main packages and the dependencies between them.

    Figure 16. Schematic representation of Python packages dependencies and Graphical Interface

    2.9.1 Parallel computations

Although one of the main advantages of stochastic downscaling is its comparatively small computational effort, assessing uncertainty and evaluating the goodness of fit of different parameter sets over several case studies and scenarios calls for the use of parallel computations. By parallel it is meant here the distribution of several repetitive tasks over a multi-core system or a cluster of computers, rather than the parallelization of a single task itself.


Parallelization of multiple tasks was achieved using the Parallel Python (pp) package (Vanovschi 2012), which allowed a seamless implementation without the need to drastically modify existing code. The computer used had 8 computational cores (2 x 4-core Xeon processors) running Linux Mint 10.
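A sketch of how the repeated, independent simulations can be distributed with Parallel Python is shown below; `run_realization` is a stand-in name for the actual simulation routine and its contents are only indicative.

# Distribute independent NSRP realizations over the available CPU cores with
# Parallel Python (pp). `run_realization` stands in for the real simulation code.
import pp

def run_realization(seed, params):
    # ...generate one synthetic series with the given seed and return its annual maxima
    return seed

job_server = pp.Server()                         # uses all detected cores by default
jobs = [job_server.submit(run_realization, (s, None), modules=('numpy',))
        for s in range(200)]                     # 200 realizations per scenario
results = [job() for job in jobs]                # collect results as jobs finish
job_server.print_stats()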

    3 RESULTS AND DISCUSSION

For the month of September at the Kochi location it was particularly difficult to obtain a good fit between the median of the 200 realizations and the observed statistics. This is expected to heavily influence the results since, as discussed in the exploratory analysis section, September historically showed the highest intensity events. This month also presents the largest changes in variance across aggregation intervals, variance that the six-parameter NSRP might not be able to capture adequately.

    The use of a mixed rectangular pulse model might have better captured the different rainfall types occurring in the month of September, and this hypothesis should be explored in further investigations dealing with extremes.

    Figure 17. Statistical properties fit for multiple realizations. Observed data


Despite this limitation, the model displayed a good fit in terms of extremes when constructing the IDF curves, as can be appreciated in Figure 18 and Figure 19. The solid black line, which represents the IDF curve derived from the median reduced variate of the 200 simulations, closely matches the shape and values of the dashed line representing the IDF curve produced from the observed data.

The aforementioned figures also contain the expected IDF curves for the different scenarios. For the Kochi location, the general trends regarding changes in extremes show that for a return period of 2 years the expected behaviour diverges between the scenarios, whereas for long duration rainfall the expected differences between scenarios are negligible.

    Figure 18. IDF Curve for the 2 year return period simulated events in the period of 2046-2065

For the period of 2081-2100, scenario A2 diverges more strongly from B1 and A1b, but all scenarios predict less intense rainfall events.

    Figure 19. IDF Curve for the 2 year return period simulated events in the period of 2081-2100


Figure 20 shows an increase of up to 25% in the intensity of short duration events for the 20-year return period under scenario B1 and almost no change in intensity under scenario A2. On the other hand, the downscaling model predicts a decrease in the intensity of extreme rainfall events for the different rainfall durations analyzed.

    Figure 20. Expected change in intensity for period [2046-2065] 2 and 20 years return period

For the late 21st century period, all scenarios predict an overall decrease in the intensity of rainfall events, especially marked for long duration rainfall and the 2-year return period. For the 20-year return period, all scenarios predict a decrease in intensity for short duration events but a considerable increase under scenario A2 for long duration events.

    Figure 21. Expected change in intensity for period [2081-2100] 2 and 20 years return period

    The results display a wide variety of possible futures in which short duration rainfall extremes can be expected to both increase and decrease in the order of 20% with respect to observed historical series.

    For the period of 2081-2100 an increase in emissions is predicted compared to both present time and the period of 2046-2065. This trend is not directly reflected in the results obtained by the proposed methodology and is particularly evident for short duration rainfall extremes.


This can be a direct result of the method used for extending the statistics to finer temporal resolutions, in particular the variance estimation. In general terms, when analyzing larger regions at the continental scale, different GCMs usually agree on an overall increase or decrease in the trends of climatic variables for the same scenario and period. This, however, cannot be assumed to be the case when analyzing grid scale events, as there is no consensus on what the expected impact of a given scenario and period would be at a given point location. The Bayesian ensemble methodology aims to overcome this situation by giving a bigger weight to GCMs that tend to produce similar results, but these weights may vary between periods of analysis and between the statistics analyzed.

Finally, the fit of the stochastic rainfall model is expected to be affected by shortening the available time series from 1976-2010 to 1976-2000. This not only reduces the available pool of data but also limits the range of return periods that can be accurately represented.

    4 CONCLUSIONS

    A methodology based on the use of a weather generator in climate impact studies was extended to include several GCMs, several scenarios and several future periods. A stochastic rainfall generator based on the NSRP was implemented and further developments were included in the downscaling procedure. By means of multiple realizations of synthetic rainfall series, the influence of climate change was evaluated on extreme values and the associated return periods.

The uncertainty arising from the use of different GCM outputs could be assessed by implementing a Monte Carlo type simulation in which random samples are drawn from the PDFs of the factors of change. This approach, however, requires a clear definition of the covariance matrix between the different statistics at different aggregation intervals in order to condition the Monte Carlo sampling.

Large uncertainties remain in the developed procedure that are inherent to the Bayesian ensemble approach, including the assumption of independence between different GCMs and the mismatch between the GCM grid cell size and the point location of interest. A more robust ensemble could be produced if more models had several runs for the same periods and scenarios, as these could serve as additional input to the probability model.

Furthermore, the present research carried out multiple realizations of a single NSRP model using 1, 6, 24 and 96 hourly rainfall as aggregation intervals in the evaluation of rainfall extremes and IDF curves. This set was selected as the one showing the best fit in terms of extremes. However, it is almost certain that a different set of calibration statistics or parameters would produce different results, and this variability between parametrizations amounts to an extra layer of uncertainty.

Finally, the use of freely available and open source tools was successfully explored, with the additional benefit of applying a simple parallelization scheme on a multi-core computer. The use of these tools should be greatly encouraged in and by the research community as a means to reach a wider audience.

    5 FURTHER RESEARCH

The presented results offer an approach that can be applied to other locations in the world where rainfall data at the hourly scale are available. Further research needs to be pursued in this direction in order to assess the uncertainty of the obtained extreme rainfall events. In order to apply the methodology in places where only daily data are available, a disaggregation mechanism could be


implemented making direct use of the rainfall generator (Frost et al. 2004) or through a multifractal cascading scheme (Pathirana et al. 1999, 2003). However, this procedure would introduce an extra degree of uncertainty into the calculations. Instead, it would be desirable to have a single stochastic model able to represent the rainfall characteristics at aggregation periods ranging from the sub-hourly to the daily scale.

Seasonality has traditionally been handled on a monthly basis, by parametrizing a different rainfall generator for each calendar month and calculating and applying the factors of change at the same scale (Fatichi et al. 2011). Although any variation of this scheme would also be somewhat arbitrary, there is a need to assess the impact of using a different grouping to represent seasonality. This will be limited by the available data, as a smaller grouping scheme would yield less data for statistical analysis, while too large a grouping scheme might not represent seasonality adequately.

Given the uncertainties that arise in studying climate, there is a need to follow an approach that considers not only different scenarios and GCMs but also several statistical downscaling techniques (Willems et al. 2012). This can be accomplished by making use of the different variants of Poisson cluster processes developed in recent years (Cowpertwait 2009; Cowpertwait et al. 2011), or by including the same model calibrated with different sets of aggregation intervals.

    6 ACKNOWLEDGEMENTS

This paper is part of an MSc research carried out at UNESCO-IHE, Delft, The Netherlands, and was funded by a mixed private and public Colombian fund through the COLFUTURO Loan/Scholarship program. We would like to acknowledge the Program for Climate Model Diagnosis and Intercomparison (PCMDI) as well as the different working groups and research institutions for sharing their results with the research community, finally made available through the Coupled Model Intercomparison Project 3 (CMIP3). An additional word of gratitude goes to the Japanese Automated Meteorological Data Acquisition System (AMeDAS), which supplied the rainfall data used in this project. Our most sincere appreciation goes to Dr. Toshiyuki Nakaegawa of the Japan Meteorological Agency and Dr. Tsuichi Tsuchiya of the National Institute for Land and Infrastructure Management of Japan. Finally, we would like to thank Dr. Simone Fatichi for his support and guidance in the development of this research.

    7 REFERENCES

Benestad R.E., Hanssen-Bauer I. and Chen D. (2008). Empirical-Statistical Downscaling. World Scientific Publishing, Singapore.

Bierkens M.F.P., Finke P.A. and Willigen P. (2000). Upscaling and Downscaling Methods for Environmental Research, 1st edn. Springer.

Burton A., Fowler H.J., Blenkinsop S. and Kilsby C.G. (2010). Downscaling transient climate change using a Neyman-Scott Rectangular Pulses stochastic rainfall model. Journal of Hydrology, 381(1-2), 18-32.

Cowpertwait P.S.P. (1998). A Poisson-cluster model of rainfall: some high-order moments and extreme values. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1971), 885-898.

Cowpertwait P.S.P. (2004). Mixed rectangular pulses models of rainfall. Hydrology and Earth System Sciences, 8(5), 993-1000.


Cowpertwait P.S.P. (2009). A Neyman-Scott model with continuous distributions of storm types. ANZIAM Journal, 51(SUPPL.), C97-C108.

Cowpertwait P.S.P., Xie G., Isham V., Onof C. and Walsh D.C.I. (2011). A fine-scale point process model of rainfall with dependent pulse depths within cells. Hydrological Sciences Journal, 56(7), 1110-1117.

Fatichi S., Ivanov V.Y. and Caporali E. (2011). Simulation of future climate scenarios with a weather generator. Advances in Water Resources, 34(4), 448-467.

Fowler H.J., Blenkinsop S. and Tebaldi C. (2007). Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological modelling. International Journal of Climatology, 27(12), 1547-1578.

Frost A.J., Cowpertwait P.S.P. and Srikanthan R. (2004). Stochastic Generation of Point Rainfall Data at Subdaily Timescales: A Comparison of DRIP and NSRP. CRC for Catchment Hydrology, Clayton, Vic.

IPCC (2001a). IPCC Third Assessment Report - Climate Change 2001 - Complete online versions. UNEP/GRID-Arendal. http://www.grida.no/publications/other/ipcc%5Ftar/?src=/climate/ipcc_tar/wg1/index.htm. (Accessed 5 Oct 2011)

IPCC (2001b). An Overview of Scenarios. In: Special Report on Emissions Scenarios, GRID-Arendal, The Hague.

    IPCC (2011a). IPCC - Intergovernmental Panel on Climate Change. http://www.ipcc.ch/organization/organization_history.shtml. (Accessed 4 Oct 2011)

    IPCC (2011b). IPCC DDC: What is a GCM. http://www.ipcc-data.org/ddc_gcm_guide.html. (Accessed 27 Sep 2011)

Marani M. (2003). On the correlation structure of continuous and discrete point rainfall. Water Resources Research, 39, 8 pp.

Marani M. (2005). Non-power-law-scale properties of rainfall in space and time. Water Resources Research, 41, 10 pp.

Pathirana A., Herath S. and Yamada T. (1999). Estimating rainfall distributions at high temporal resolutions using a multifractal model. Hydrology and Earth System Sciences, 7(5), 668-679.

Pathirana A., Herath S. and Yamada T. (2003). On the modelling of temporal correlations in spatial-cascade rainfall downscaling. IAHS-AISH Publication, (282), 74-84.

Patil A., Huard D. and Fonnesbeck C.J. (2010). PyMC: Bayesian stochastic modelling in Python. Journal of Statistical Software, 35(4), 1-81.

Raybaut P. (2009). Spyder - Documentation. Spyder v2.1 documentation. http://packages.python.org/spyder/. (Accessed 18 Nov 2011)

Rossum G. van and Drake F.L.J. (2011). The Python Language Reference Manual. Network Theory Limited.

Semenov M. (2011). LARS-WG stochastic weather generator. http://www.rothamsted.bbsrc.ac.uk/mas-models/larswg.php. (Accessed 6 Oct 2011)

Smith R.L., Tebaldi C., Nychka D. and Mearns L.O. (2009). Bayesian modeling of uncertainty in ensembles of climate models. Journal of the American Statistical Association, 104(485), 97-116.

Sunyer M.A., Madsen H. and Ang P.H. (n.d.). A comparison of different regional climate models and statistical downscaling methods for extreme rainfall estimation under climate change. Atmospheric Research, in press.


Tebaldi C. and Knutti R. (2007). The use of the multi-model ensemble in probabilistic climate projections. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1857), 2053-2075.

Tebaldi C., Smith R.L., Nychka D. and Mearns L.O. (2005a). Quantifying uncertainty in projections of regional climate change: A Bayesian approach to the analysis of multimodel ensembles. Journal of Climate, 18(10), 1524-1540.

Tebaldi C., Smith R.L., Nychka D. and Mearns L.O. (2005b). Quantifying uncertainty in projections of regional climate change: A Bayesian approach to the analysis of multimodel ensembles. Journal of Climate, 18(10), 1524-1540.

UN (2007). Climate Change 2007: The Physical Science Basis. Intergovernmental Panel on Climate Change.

    Vanovschi V. (2012). Parallel Python - Home. http://www.parallelpython.com/. (Accessed 18 Apr 2012)

    Viner D. (n.d.). CRU Information Sheet no. 8: Modelling Climate Change. http://www.cru.uea.ac.uk/cru/info/modelcc/. (Accessed 27 Sep 2011)

Willems P., Arnbjerg-Nielsen K., Olsson J. and Nguyen V.T.V. (2012). Climate change impact assessment on urban rainfall extremes and urban drainage: Methods and shortcomings. Atmospheric Research, 103, 106-118.

Willems P. and Vrac M. (2011). Statistical precipitation downscaling for small-scale hydrological impact investigations of climate change. Journal of Hydrology, 402(3-4), 193-205.

  • 9th International Conference on Urban Drainage Modelling Belgrade 2012


Characterization of road-deposited sediments in different land-use types in Tehran, Iran

    Fatemeh Kazemiparkouhi1 , Masoud Tajrishy2, Masoud Kayhanian3

    1 Sharif University of Technology, Tehran, Iran, [email protected] 2 Sharif University of Technology, Tehran, Iran, [email protected] 3 University of California, Davis, California, U.S.A., [email protected]

    ABSTRACT

This study investigates the characteristics of road-deposited sediments (RDS) from selected impervious surfaces in the metropolitan city of Tehran, Iran. A total of 19 RDS samples were collected from three different land-use types, herein denoted as residential, intense traffic and educational areas. The samples were fractionated into seven grain-size ranges (1000-2000, 600-1000, 300-600, 150-300, 75-150, 45-75, and <45 µm).


    wear and de-icing operations. These pollutants are washed off during rain events and because of the large volume and short duration, practical treatment is harder.

    Urban pollution can be investigated as dissolved and particle bound. Because nearly all best management practice (BMP) treatment systems employed for urban runoff treatment are physically based unit operation, investigation of particle size distribution and size resolved pollutants is immensely valuable. To make our results more relevant to the literature, we tried to concentrate our review on available literature on areas in Asia comparable to Tehran. Many researchers have studied size-fractionated road sediments and evaluated their metal elemental compositions. Bian and Zhu (2008) found that 60-80% of RDS samples in Zhenjiang, China consist predominantly of particles


sampling sites were located in Azadi St. at latitude 35°42′ N, longitude 51°20′ E and latitude 35°41′ N, longitude 51°20′ E, respectively. The mean traffic density in Azadi St. is about 6000 vehicles/h. The educational sampling site was the campus of Tehran University at latitude 35°42′ N, longitude 51°23′ E. The RDS samples were collected in September 2011 before midnight, when road cleaning was not practiced. In the residential and intense traffic land use areas, RDS samples were taken every night over a one-week period (7 samples from each site), and in the educational land use area samples were taken on workdays (5 samples). Each sample, approximately 100 to 400 g, was collected from the curbside using a clean plastic dustpan and a brush, as described by Bian and Zhu (2008). The samples were stored separately in plastic bags and labelled before being transported to the laboratory, where each sample was subsequently processed and analysed chemically.

    Figure 1. Map of study area

    2.3 Sample processing and chemical analysis

The RDS samples were dried for 48 h at 105 °C. They were stored in a cool, dark place before further fractionation and analysis (Bian and Zhu, 2008). Generally, particles larger than 2000 µm are of limited importance in transporting adsorbed metals in urban systems or are easier to remove by conventional BMPs (McKenzie et al., 2008; Kayhanian and Givens, 2011). Therefore, metal concentration analysis was only performed for particles smaller than 2000 µm. The dried RDS samples were sieved using a 2-mm nylon mesh to remove gravel-sized materials and then screened into seven size fractions using standard sieving methods prior to metal fraction analysis. The size fractions and their descriptive classifications following the United States Department of Agriculture (USDA) system were: 2000-1000 µm (very coarse sand), 1000-600 µm (coarse sand), 600-300 µm (medium sand), 300-150 µm (fine sand), 150-75 µm (very fine sand), 75-45 µm (silt and clay) and <45 µm.


    3 RESULTS AND DISCUSSION

    3.1 Particle size distribution

The average results of particle size distribution (PSD) for all collected RDS samples are shown in Figure 2a. As shown, the trend in PSD for all samples within all land use areas is similar. The histogram of mean mass percentage versus grain size fraction is shown in Figure 2b. From these results it can be seen that the mass percentage for particles of 75-150 µm shows the opposite trend. The mass percentages of particles


    3.2 Total concentrations of heavy metals in RDS

The mean values of the total heavy metal mass concentrations (mg/kg) in the initial RDS collected from the three land use areas are presented in Table 1. Zn showed the highest mean concentration (536.4 mg/kg), followed by Pb, Cu, Ni and Cd, with total concentrations of 422.4, 210.3, 96.6 and 22.8 mg/kg, respectively. In addition, spatial variability of the heavy metal concentrations was observed among the different land use areas, with lower mean concentrations in the educational land use area than in the intense traffic land use area.

Table 1. Total heavy metal concentrations of RDS sampled from the three land use areas

Heavy metal   Total particulate mean concentration (mg/kg) a
              Intense Traffic   Residential   Educational
Zn            536.4             319.3         430.4
Pb            422.4             502.9         219.4
Cu            210.3             114.2         189.8
Ni            96.6              124.6         92.1
Cd            16.6              30.3          16.3

a The concentration was measured based on a single composite sample prepared from a mixture of equal mass from each size fraction for each land use area.

The impact of the various land uses on pollutant concentration is beyond the scope of this study. However, as mentioned before, a variety of sources could contribute to the presence of heavy metals in the monitored areas. Examples of known sources of metals such as Cd, Cu, Pb, Zn and Ni in urban areas include: the wear of tires and brake pads, fuel combustion, combustion of lubricating oils, metal finishing industrial emissions, corrosion of galvanized metals, corrosion of building parts, wear of moving parts in engines, metallurgical and industrial emissions, fungicides and pesticides, power plants and trash incinerators, and petroleum refineries (Makepeace et al. 1995; USEPA 1996; Hogan et al. 2011). Because of the intense traffic and the semi-industrial and commercial activities in and around the metropolitan area of Tehran, it is possible that most metals measured in our study are generated from a combination of the sources mentioned above. The exact source identification of pollutants is beyond the scope of this study and will be considered in our future monitoring study.

    3.3 Size resolved heavy metal concentrations

Fig. 3 shows the relationship between heavy metal concentrations and the different particle size fractions. The highest metal concentrations in the different size fractions (except for Cd at the residential area) occurred at the intense traffic area. Zinc was the most abundant metal element in the different size fractions at all land use areas. As shown in Fig. 3, the concentration of all heavy metals increased with decreasing particle size, a trait which was shown to be fairly consistent among all three land use areas. Other than a few exceptions, the highest concentrations were generally measured in the smallest particles.


    ranges, their mass contribution can be substantially lower (see figure 2a). However, the contribution of pollution related to these finer mass particles is still significant due to their bioavailability, mobility and transformation in both the atmospheric and aquatic environment. Higher concentrations can also pose toxicity problem when discharged into receiving waters. Hence, source reduction is a viable option since the treatment and removal of finer particles (


    4 CONCLUSION

1. Land use can play an important role in the particle size distribution of RDS. While the trend in particle size distribution was similar in all land use areas, their mass distributions were different, particularly for the finer particle fractions.

2. The ranking of mean total metal concentrations in the intense traffic and educational land use areas was Zn > Pb > Cu > Ni > Cd, while in the residential land use area it was Pb > Zn > Ni > Cu > Cd.

3. The metal concentration generally increased with decreasing particle size. Maximum average heavy metal concentrations frequently occurred in particle sizes smaller than 75 µm.

    5 ACKNOWLEDGEMENTS

The authors would like to thank the Material and Soil Laboratory, Department of Civil Engineering, Sharif University of Technology, and the Institute of Water and Energy, Sharif University of Technology, for their assistance in the laboratory.

6 REFERENCES

    Bian B. and Zhu W. (2008). Particle size distribution and pollutants in road-deposited sediments in different areas of Zhenjiang, China. Environ Geochem Health, 31, 511-520.

De Miguel E., Llamas J. F., Chacon E., Berg T., Larssen S., Røyset O. et al. (1997). Origin and patterns of distribution of trace elements in street dust: unleaded petrol and urban lead. Atmospheric Environment, 31, 2733-2740.

    Faiz Y., Tufail M., Chaudhry M. M. and Naila-Siddique (2009). Road dust pollution of Cd, Cu, Ni, Pb and Zn along Islamabad Expressway, Pakistan. Microchemical journal, 92, 186-192.

    Gourdeau J. (2004). Clouds and Particles. LaMP, Clermont-Ferrand, France.

    Harrison R. M. and Wilson S. J. (1985). The chemical composition of highway drainage waters. I: Major ions and selected trace metals. Sci of the Total Envir., 43, 63-77.

Hogan C. M., Draggan S. and Mineral Information Institute (2011). Nickel. Encyclopedia of Earth. Published January 18, 2008; last revised September 21, 2011; retrieved November 23, 2011.

USEPA (1996). http://www.epa.gov/otaq/regs/fuels/additive/lead/pr-lead.txt

Jiries A. G., Hussein H. H. and Halash Z. (2001). The quality of water and sediments of street runoff in Amman, Jordan. Hydrological Processes, 15, 815-824.

Kayhanian M. and Givens B. (2011). Processing and analysis of micro particles. Journal of Environmental Monitoring, 13, 2720-2727.

    Mahdavi Damghani A., Savarypour Gh., Zand E. and Deihimfard R. (2008). Municipal solid waste management in Tehran: Current practices, opportunities and challenges. Waste Management, 28, 929-934.


    Makepeace D. K., Smith D. W. and Stanley S. J. (1995). Urban stormwater quality: summary of contaminant data. Critical Rev. in Envir. Sci. and Technol, 25, 93-139.

    McKenzie R. E., Wong M. C., Green G. P., Kayhanian M. and Young M. T. (2008). Size dependent elemental composition of road particles. Sci of the Total Envir., 398, 145-153.

    Municipality of Tehran, (2009). Tehran. Retrieved 09 04, 2009, from http://en.tehran.ir/

    Sartor, J. D. and Boyd G. B. (1972). Water pollution aspects of street surface contaminants. U.S. Environmental Protection Agency. EPA-R2-72-081.

Zhao H., Li X., Wang X. and Tian D. (2010). Grain size distribution of road-deposited sediment and its contribution to heavy metal pollution in urban runoff in Beijing, China. Journal of Hazardous Materials, 10, 183-203.

9th International Conference on Urban Drainage Modelling Belgrade 2012


Flow Forecasting in Urbanized Catchments with Data Driven Models

    Lloyd H. C. Chua

    School of Civil and Environmental Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798. Tel: +65-6790-5249. e-mail: [email protected].

    ABSTRACT

The motivation for this paper was to assess the suitability of data driven models, in this case the artificial neural network (ANN), for forecasting applications in urban catchments. Results from two case studies are reported; the first is an experimental catchment measuring 25 m x 1 m comprising an impervious asphalt surface, and the second is an urbanized catchment of area 5.6 km2 within the Kranji reservoir catchment, Singapore. For the plane surface, the rainfall and runoff data collected for ten natural storm events were analyzed and the ANN model results compared with results obtained by a kinematic wave (KW) model and an autoregressive moving average (ARMA) time series model. The results show that ANN model forecasts compared favourably with KW and ARMA forecasts. Specifically, the ANN was found to be superior to the KW and ARMA models, especially at longer lead times. This is due to the errors in the forecast rainfall and loss estimates that are required by the KW model, and the recursive nature of the ARMA model, especially for longer lead times. For the Kranji sub-catchment, the results of this study show that discharge inputs significantly influence short lead time forecasts while rainfall inputs help improve longer lead time forecasts. In order to study the upper bound for longer lead time predictions, an approach incorporating forecast rainfall as inputs to the ANN model was also investigated.

    KEYWORDS

    Data driven model, drainage, flow forecasting, urban catchment

    1 INTRODUCTION

    Artificial neural networks (ANN) have gained increasing popularity in modelling the rainfall-runoff process in recent years. Neural networks have been demonstrated to be able to model the non-linear rainfall-runoff relationship in many studies including the works by Riad et al. (2004), Kumar et al. (2005) and Wu et al. (2005), carried out for a wide variety of watersheds. Another domain of ANN application in rainfall-runoff modelling is in flow forecasting. Although there are increasing efforts in combining physically-based distributed hydrological models with weather prediction models to


forecast runoff for a catchment (Koussis et al., 2003; Collischonn et al., 2005), these models demand substantial hydrological and meteorological data, in addition to information on the characteristics of the watershed. This is in comparison with the ANN, which does not require prior knowledge of the watershed characteristics, although both modelling paradigms would require rainfall and runoff data for calibration. In this regard, it can be argued that the reduced data requirement of the ANN renders it a more attractive option as a forecasting tool over deterministic models. In addition, as is often the case in flow forecasting, the operation needs to be carried out in real-time; the advantage in speed of the neural network over distributed models makes neural networks attractive for real-time flow forecasting. Some examples where ANNs have been used for flow forecasting include works by Wu et al. (2005) and Mukerji et al. (2009). Comparative studies have also been conducted to evaluate the performance of ANNs against existing flow forecasting methods (Elshorbagy et al., 2000; Raghuwanshi et al., 2006; Nilsson et al., 2006; Kisi, 2004; Pereira-Filho and dos Santos, 2006; Huang et al., 2004). These studies show that the ANN, in general, is able to achieve equal or better accuracy than the models it is compared with (Sivakumar et al., 2002).

    This paper investigates the forecast performance of the ANN for flow on an overland plane and a small urbanized catchment, using various combinations of rainfall and discharge as input to the ANN model. For the plane surface, the ANN model was compared against deterministic and time series forecast models. The forecast capability of the ANN was next tested on a small urbanized catchment. In order to extend the forecast lead-time, an approach incorporating forecast rainfall in the input of the network, analogous to the combined physically-based distributed hydrological and weather prediction models, was also investigated.

    2 DATA USED

    2.1 Experimental Plane

The data for the asphalt plane was obtained from an outdoor experimental station set up at the Nanyang Technological University. A schematic diagram of the experimental station is provided in Figure 1(a.) and further details of the experimental station can be found in Wong (2008). The experimental station comprises four 25 m long by 1 m wide testing sections. The test section chosen for this study is the asphalt-lined overland plane with a slope of 2%, surrounded by a 1 m high concrete wall along the two longer sides and the upstream end of the section. Runoff was allowed to discharge into calibrated weigh tanks to record the flow at the downstream end of the test section. The weigh tanks were calibrated prior to use, using an electromagnetic flow meter. Two rain gauges, placed at 6.25 m from each end of the test section, were used to record rainfall. The rainfall data was checked for consistency and the average rainfall was used in the analysis. Rainfall and runoff data were recorded at 15 sec intervals. Data for ten storm events were obtained for the period between October-December 2002. The time of concentration for the overland plane is estimated to range from 2.1 to 4.1 min.

    2.2 Kranji Sub-Catchment

Rainfall and flow data collected from a sub-catchment in Kranji, situated in the north-western part of Singapore (see Figure 1(b.)), were analyzed for this study. The sub-catchment has an area of about 5.6 km2; the land use consists of 33% high-density residential area, while the remaining areas are undeveloped and mainly covered by vegetation. Surface runoff in the study area is served by a concrete-lined drainage system, and the runoff from the catchment is inevitably flashy due to the short but intense tropical rains and the highly channelized drainage system.


A gauging station installed at CP1 is equipped with a Sigma 950 submerged pressure area-velocity flow meter and a tipping bucket rain gauge. Both the rainfall depth and flow measurements were taken at 5-minute intervals and stored in a data logger. The time series of rainfall depths and flows collected were used in the present study.


    Figure 1. Schematic diagram of (a.) asphalt plane, and (b.) Kranji sub-catchment.

    3 METHODOLOGY

    3.1 Models Used

    Three models were used for the analysis of the data for the asphalt plane. These are the autoregressive moving average (ARMA) model, the kinematic wave (KW) model and the ANN. The ARMA and KW models were used as a baseline to validate the ANN results in the case of a well-defined catchment. Only the ANN model was used for flow forecasts in the Kranji sub-catchment, studying the impacts of the inputs use on forecast accuracy.

    The general formula for the ARMA model can be stated as:

\hat{y}_t = c_1 y_{t-1} + \dots + c_i y_{t-i} + d_1 \epsilon_{t-1} + \dots + d_j \epsilon_{t-j}    (1)

where \hat{y}_t, the predicted value, is linearly related to the past observations y_{t-1}, ..., y_{t-i}, \epsilon is the error, i and j are the orders of the series, and c_1, ..., c_i and d_1, ..., d_j are constants obtained by optimization. Final model selection was based on an evaluation of the root mean square error (RMSE), Akaike's Information Criterion (AIC) and the Mean Absolute Error (MAE). The optimization procedure and selection of the final model were carried out using the System Identification Toolbox provided in MATLAB (2008).
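For illustration, a fit-and-forecast step of this kind can be sketched in a few lines; the code below uses the statsmodels ARIMA class (an ARMA model is ARIMA with d = 0) on a synthetic series, and the orders are arbitrary placeholders rather than the calibrated values from this study.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative 15-second discharge series (L/s); the measured data of the study are not reproduced here.
rng = np.random.default_rng(0)
q = np.cumsum(rng.normal(0.0, 0.05, 400)).clip(min=0.0) + 1.0

# ARMA(i, j) corresponds to ARIMA order (p=i, d=0, q=j); candidate orders would normally be
# compared using RMSE, AIC and MAE as described above.
result = ARIMA(q, order=(2, 0, 1)).fit()

# Multi-step forecasts, e.g. 1, 2, 4 and 8 steps ahead (0.25 to 2 min at 15-s resolution).
forecast = result.forecast(steps=8)
print(forecast[[0, 1, 3, 7]])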

    The kinematic wave equations for flow on an overland plane are:

\frac{\partial y}{\partial t} + \frac{\partial q}{\partial x} = i_n    (2a)

q = \alpha y^{5/3}    (2b)

where y is the flow depth, t is time, q is the discharge per unit width of the overland plane, x is the distance along the plane in the direction of flow, i_n is the net rainfall, \alpha = S^{1/2}/n, S is the overland slope, and n (= 0.011) is the Manning's roughness coefficient of the overland surface. For forecast applications, the forecast rainfall i_n^{t+L} is defined as the averaged rainfall intensity over the previous L time intervals (Brath et al., 2002), given by:

i_n^{t+L} = \frac{1}{L} \sum_{k=t-L+1}^{t} i_n^{k}    (3)
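A minimal explicit finite-difference sketch of equations (2a) and (2b) is given below. The plane length, slope and Manning's n follow the values quoted in the text, while the grid, time step and block rainfall are illustrative assumptions rather than the paper's calibrated setup.

import numpy as np

# Explicit (upwind) kinematic-wave sketch for an impervious plane.
plane_length, S, n_manning = 25.0, 0.02, 0.011   # plane length (m), slope (-), Manning's n
alpha = np.sqrt(S) / n_manning                   # so that q = alpha * y**(5/3)
dx, dt = 0.5, 0.1                                # space step (m), time step (s); illustrative
x = np.arange(0.0, plane_length + dx, dx)
y = np.zeros_like(x)                             # flow depth (m)

def net_rain(t):
    # Hypothetical 10-minute block rain of 50 mm/h, converted to m/s.
    return (50.0 / 1000.0 / 3600.0) if t < 600.0 else 0.0

outlet_q = []
for step in range(int(1200.0 / dt)):
    t = step * dt
    q = alpha * y ** (5.0 / 3.0)                 # discharge per unit width (m2/s)
    dqdx = np.diff(q, prepend=0.0) / dx          # upwind gradient; zero inflow at the upstream end
    y = np.maximum(y + dt * (net_rain(t) - dqdx), 0.0)
    outlet_q.append(1000.0 * alpha * y[-1] ** (5.0 / 3.0))   # outlet discharge, L/s per m width

print(max(outlet_q))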

    A multi-layer feed-forward back propagation ANN incorporating the Levenberg-Marquardt learning algorithm was used for training the network. The network consists of three layers, namely an input layer, a hidden layer and an output layer. The number of neurons in the input layer depends on the number of input variables used. The number of neurons adopted for the hidden layer was determined by trial, while the output layer was configured to give only one forecast flow as output. The early stopping technique was used to prevent over-fitting. The tan-sigmoid transfer function was used for the neurons in the hidden layer while linear transfer functions were used for the output neurons. The development of the ANN models was executed using the neural network toolbox in MATLAB (2008).
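A minimal sketch of a network of this type is shown below using scikit-learn. Note that scikit-learn's MLPRegressor offers gradient-based solvers rather than the Levenberg-Marquardt algorithm used here, so the sketch only approximates the described training scheme; the data and the hidden-layer size are illustrative.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Three-layer feed-forward network: tanh hidden layer, linear output, early stopping.
rng = np.random.default_rng(1)
X = rng.random((500, 10))                              # e.g. lagged rainfall/flow inputs (synthetic)
y = X @ rng.random(10) + 0.1 * rng.normal(size=500)    # synthetic target flow

ann = MLPRegressor(hidden_layer_sizes=(8,),   # hidden-layer size would be chosen by trial
                   activation='tanh',         # tan-sigmoid equivalent
                   solver='adam',             # not Levenberg-Marquardt
                   early_stopping=True,       # holds out data to prevent over-fitting
                   validation_fraction=0.2,
                   max_iter=5000,
                   random_state=0)
ann.fit(X, y)
print(ann.predict(X[:3]))                     # one forecast flow value per input pattern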

    3.2 Inputs to ANN Model

The cross-correlation technique was used to identify the appropriate number of time lags of rainfall to be included as inputs to the ANN models for the plane surface and the Kranji sub-catchment. Similarly, auto-correlation was carried out on the flow time series to identify the appropriate time lags of flow to be included as inputs to the ANN models.
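A sketch of this lag-selection step on synthetic data is given below; the lag window and the choice of the strongest lags are illustrative, not the selections made in the study.

import numpy as np
from statsmodels.tsa.stattools import acf, ccf

# Synthetic rainfall and a delayed flow response, standing in for the measured series.
rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 1.0, 1000)
flow = np.convolve(rain, np.array([0.1, 0.3, 0.4, 0.2]), mode='full')[:1000]

rain_flow_cc = ccf(flow, rain)[:12]   # correlation of flow with rainfall k steps earlier, k = 0..11
flow_ac = acf(flow, nlags=11)         # auto-correlation of flow at lags 0..11

print(np.argsort(rain_flow_cc)[::-1][:4])   # rainfall lags with the strongest correlation
print(np.round(flow_ac, 2))                 # candidate flow lags for the input vector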

    Table 1(a) shows the 3 different combinations of inputs adopted for the plane surface. The ANN9R model uses rainfall (R) only as inputs, ANN2Q uses discharge (Q) only as inputs and ANN9R1Q uses both rainfall and discharge as inputs. The last group of ANN models represents the traditional method of specifying inputs when the ANN model is to be used for forecasting when online measurements of rainfall and discharge are available. Distinguishing the input types between rainfall only, discharge only and a combination of rainfall and discharge allows one to study in isolation the effect on forecast results by the different ANN models. The forecasted flows are Qt+1, Qt+2, Qt+4 and Qt+8 corresponding to lead times of 0.25 min, 0.5 min, 1 min and 2 min, respectively.

Table 1. Inputs used in ANN models: (a.) overland plane, (b.) Kranji sub-catchment.

(a.) Overland plane
Model | Inputs
ANN9R | Rt, Rt-1, ..., Rt-8
ANN9R1Q | Rt, Rt-1, ..., Rt-8, Qt
ANN2Q | Qt, Qt-1

(b.) Kranji sub-catchment
Model | Inputs
R | Rt, Rt-1, ..., Rt-8
Q | Qt, Qt-1, Qt-2
RQ | Rt, Rt-1, ..., Rt-8, Qt, Qt-1, Qt-2
RiQ | Rt+i, Rt+i-1, ..., Rt+i-8, Qt, Qt-1, Qt-2

Table 1(b) shows the 4 different combinations of inputs adopted for the Kranji sub-catchment. The R, Q and RQ networks provide forecast flows using measured rainfall and flow data. These models forecast the flows Qt+1, Qt+3, Qt+6, Qt+8 and Qt+10, corresponding to forecast lead-times of 5 min, 15 min, 30 min, 40 min and 50 min, respectively. The RiQ networks incorporate rainfall forecasts in the input. The rainfall forecasts are assumed to be perfect, providing an upper bound to the results predicted by the ANN models.


Thus, when rainfall forecasts are considered, e.g. with a rainfall forecast up to the t+2 time step (10 min) in the R2Q network, only output flow forecasts for t + 3 (15 min) onwards will be presented.
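For illustration, the kind of helper used to assemble such lagged input matrices is sketched below; the lag counts follow Table 1, while the series and the lead time are synthetic placeholders.

import numpy as np

def make_inputs(rain, flow, rain_lags=9, flow_lags=3, lead=1):
    """Build an RQ-type input matrix (Rt, ..., Rt-8, Qt, ..., Qt-2) with target Qt+lead."""
    start = max(rain_lags, flow_lags) - 1
    rows, targets = [], []
    for t in range(start, len(flow) - lead):
        r_part = rain[t - rain_lags + 1:t + 1][::-1]   # Rt, Rt-1, ..., Rt-8
        q_part = flow[t - flow_lags + 1:t + 1][::-1]   # Qt, Qt-1, Qt-2
        rows.append(np.concatenate([r_part, q_part]))
        targets.append(flow[t + lead])                 # Qt+lead
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(3)
rain = rng.gamma(2.0, 1.0, 200)
flow = np.convolve(rain, np.array([0.2, 0.5, 0.3]), mode='full')[:200]
X, y = make_inputs(rain, flow, lead=3)   # e.g. a 15-minute lead at 5-minute resolution
print(X.shape, y.shape)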

    4 RESULTS AND DISCUSSION

    4.1 Experimental Plane

The Nash-Sutcliffe coefficient (NS), root mean square error (RMSE), mean absolute error (MAE) and coefficient of determination (R2) values for the different models are compared in Figure 2. The results obtained from the Constant (or Naïve) model are also included as a benchmark, where the Constant model states that Qt+L = Qt. The error bars indicate the range in results obtained for the 10 storm events. Figure 2 shows that the KW and ANN models perform adequately well for forecasts up to a lead time of 2 min (Qt+8), which is estimated to be approximately half the time of concentration for the overland plane. In addition, all ANN and KW models perform better than the Constant model except the ANN9R model for Qt+1 and the KW model for Qt+8 forecasts. This highlights the point that the ANN9R model is poor at short term forecasts. In addition, the KW model performs poorly for the Qt+8 forecast, presumably due to the errors in rainfall forecast, since rainfall forecast errors are expected to increase as lead time increases. Comparing between the ANN and ARMA models, it is evident that ANN9R model predictions are inferior to ARMA model forecasts for Qt+1 and Qt+2, where it is observed that NS and R2 are smaller for the ANN9R model compared to the ARMA model, and RMSE and MAE are larger for the ANN9R model compared to the ARMA model. In general, the goodness-of-fit and error statistics for the KW and ARMA models are similar up to Qt+4 forecasts. However, NS and R2, and RMSE and MAE are higher and lower, respectively, for the ARMA model than the KW model for Qt+8 forecasts. Comparing between the ANN and KW models, it is evident that the ANN9R1Q model is the best since the goodness-of-fit and error statistics compare favourably against the KW model over the entire range of forecasted flows (Qt+1 to Qt+8), although the ANN9R model appears to perform marginally better than the ANN9R1Q model for the Qt+8 forecast.

    Thus, comparing the goodness-of-fit and error statistics between the ANN models with the KW model, it can be summarized that: (i) ANN9R1Q and ANN2Q models are superior to the KW model for all forecast ranges while the ANN9R model is better only for long-term forecasts. The superiority of the ANN model forecasts is significant because the model parameters for the KW model are well defined and hence forecast errors arising from a wrong prescription of model parameters will not be significant; the greatest source of error for the KW model will probably be related to the rainfall forecasts. This is a disadvantage of the KW model; in comparison, the ANN model does not require rainfall forecasts. (ii) ANN model forecasts improve when discharge is used as input to the model. The inclusion of discharge as an input to the ANN models implies that discharge measurements must be available during the model simulation stage. This is a disadvantage of the ANN method, as the KW model does not have this requirement. In this regard, in terms of model inputs, a more equitable comparison would suggest that the KW be compared with the ANN9R model, since both of these models use rainfall only for simulations. In this case, the KW model results are better at short-term forecasts but poorer at long-term forecasts.
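For reference, the goodness-of-fit and error measures used in this comparison, together with the Constant (Naïve) persistence benchmark, can be written out in a few lines; the hydrograph below is illustrative only.

import numpy as np

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def mae(obs, sim):
    return np.mean(np.abs(obs - sim))

def r_squared(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

def naive_forecast(q, lead):
    # Constant model: the forecast of Q(t+L) is simply Q(t).
    return q[:-lead], q[lead:]          # (forecasts, matching observations)

q = np.array([0.1, 0.3, 0.8, 1.2, 0.9, 0.6, 0.4, 0.3])   # illustrative hydrograph (L/s)
sim, obs = naive_forecast(q, lead=2)
print(nash_sutcliffe(obs, sim), rmse(obs, sim), mae(obs, sim), r_squared(obs, sim))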


(Figure 2 panels compare NS, R2, RMSE and MAE for forecast discharges Qt+1, Qt+2, Qt+4 and Qt+8; legend: ANN9R, ANN9R1Q, ANN2Q, KW, Constant, ARMA.)

    Figure 2. Errors in forecast by different models. (a.) NS, (b.) R2, (c.) RMSE (L/s), and (d.) MAE (L/s).

    4.2 Kranji Sub-Catchment

The NS values obtained by the various ANN models are compared in Figure 3. The NS for the Q network is higher than the NS for the R network at shorter lead-times, and the NS for the R network is higher than the NS for the Q network at longer lead times, although it should be mentioned that the NS values for the R and Q networks are both generally low at longer lead-times. The RQ network inherits the positive features of the R and Q networks, and most ANN studies on flow forecasting reported in the literature adopt R and Q as inputs. The difference between the R and Q networks in the NS versus forecast lead time behaviour is significant. For the Q network, when the flow forecast lead-time L is small, the forecast flow (Qt+L) is strongly correlated with the latest value of flow used in the input, in this case Qt, and the NS is high. As L increases, the correlation between Qt and Qt+L decreases, resulting in the reduction in NS with increasing L. For the R network, NS is relatively insensitive to L when L ≤ 10 min. The initially constant value of NS up to L = 10 min is due to the delayed response of the catchment to rainfall, since the flow takes a finite time to reach the outlet. NS begins to decrease when L > 15 min, where it is noted that the decrease in NS with L is more gradual compared to the Q network. The findings from Figure 3 can be summarized by two main points. Firstly, it is not sufficient to use either rainfall or flow only as input; a combination of rainfall and flow is needed, since rainfall inputs improve flow forecasts at longer lead times while flow inputs improve flow forecasts at shorter lead times. Secondly, adopting NS ≥ 0.8 as a criterion for a reasonable fit, the R network is unacceptable, since the NS values for the R forecasts are less than 0.8.


In addition, the Q and RQ networks provide satisfactory forecasts up to approximately 12 min and 16 min, respectively.

(Figure 3 axes: NS versus forecast flow lead time, L (min); curves shown for the RQ, R and Q networks.)

    Figure 3. Forecast errors for various inputs.

Analyses using measured discharge and assuming perfect forecast inputs of rainfall were carried out for the R2Q, R5Q, R7Q, and R9Q networks, where perfect rainfall forecasts of 2, 5, 7 and 9 steps ahead (corresponding to 10, 25, 35 and 45 min) are used as inputs and the outputs are the discharges at 3, 6, 8 and 10 time steps (corresponding to 15, 30, 40 and 50 min) ahead. As shown in Figure 5, the NS values of the forecasted flows improve with the inclusion of forecast rain, since the NS values for all the RiQ networks lie above the NS values for the RQ network. For each of the networks considered, the best NS values attained are 0.85, 0.79, 0.79 and 0.78 for the R2Q, R5Q, R7Q, and R9Q networks. This is lower than NS = 0.96 obtained for the 1-step ahead forecast by the RQ network. With 2 steps ahead rainfall forecasts (R2Q), the forecast lead time is increased from 16 min to 21 min, for NS = 0.8. Furthermore, to achieve NS = 0.8 for L = 40 min forecasts, rainfall inputs of up to 7 steps ahead are required.

    Figure 5. NS for 3, 6, 8 and 10 steps ahead flow forecasts.

(Figure 5 axes: NS versus forecast lead time (min); a horizontal reference line marks NS = 0.8.)


    5 CONCLUSIONS

    The following can be concluded from this study:

    1. Results from the analyses of the asphalt plane data showed that ANN model results compared favourably with runoff predictions by the KW and ARMA model. Comparing between the ANN models where different inputs were used, the inclusion of the discharge as input was found to greatly improve the performance of the ANN. However, model improvements were less significant for longer forecast lead times. The ANN model which had rainfall as the only inputs performed better at long-term forecasts but poorer at short-term forecasts, compared to the KW model. The poorer performance of the KW model at longer lead times is probably due to the forecast rainfall errors. Inclusion of discharge significantly improves short lead time forecasts for the ANN.

2. The relative performance of the ANN models with rainfall alone, discharge alone and rainfall and discharge as inputs was confirmed when tested on data collected from a real catchment. Analyses using perfect forecast inputs of rainfall showed that with 2 steps ahead rainfall forecasts, the forecast lead time is increased from 16 min to 21 min, for NS = 0.8. Furthermore, to achieve NS = 0.8 for L = 40 min forecasts, rainfall inputs of up to 7 steps ahead are required.

    6 ACKNOWLEDGEMENTS

    Funding received from the DHI-NTU Water & Environment Research Centre and Education Hub and the School of Civil and Environmental Engineering, Nanyang Technological University is gratefully acknowledged.

    7 LIST OF REFERENCES

Brath, A., Montanari, A. and Toth, E. (2002). Neural Networks and Non-Parametric Methods for Improving Real-Time Flood Forecasting Through Conceptual Hydrological Models. Hydrol. Earth Sys. Sci., 6(4), 627-640.

    Collischonn, W., Haas, R., Anderolli, I., Tucci, C.E.M. (2005). Forecasting River Uruguay flow using rainfall forecasts from a regional weather prediction model. J. Hydrol, 305, 87-98.

    Elshorbagy, A., Simonovic, S.P., Panu, U.S. (2000). Performance evaluation of artificial neural networks for runoff prediction. J. Hydrologic Eng., 5(4), 424-427.

    Huang, W., Xu, B., Chan-Hilton, A. (2004). Forecasting flows in Apalachicola river using neural networks. Hydrolog. Process., 18, 2545-2564.

    Kisi, O. (2004). River flow modeling using artificial neural networks. J. Hydrologic Eng., 9(1), 60-63.

    Koussis, A.D., Lagouvardos, K., Mazi, K., Kotroni, V., Sitzmann, D., Lang, J., Zaiss, H., Buzzi, A., Malguzzi, P. (2003). Flood forecasts for urban basin with integrated hydro-meteorological model. J. Hydrologic Eng., 7(1), 1-11.

    Kumar, A.R.S., Sudheer, K.P., Jain, S.K., Agarwal, P.K. (2005). Rainfall-runoff modeling using artificial neural networks: comparison of network types. Hydrolog. Process., 19, 1277-1291.

Mukerji A., Chatterjee, C. and Raghuwanshi, A. S. (2009). Flood Forecasting Using ANN, Neuro-Fuzzy and Neuro-GA Models. J. Hydrologic Eng., 14(6), 647-652.


    Nilsson, P., Uvo, C.B., Berndtsson, R. (2006). Monthly runoff simulation: Comparing and combining conceptual and neural network models. J. Hydrol, 321, 344-363.

    Pereira Filho, A.J., dos Santos, C.C. (2006). Modeling a densely urbanized watershed with artificial neural network, weather radar and telemetric data. J. Hydrol, 317, 31-48.

    Riad, S., Mania, J., Bouchaou, L., Najjar, Y. (2004). Predicting catchment flow in a semi-arid region via an artificial neural network technique. Hydrolog. Process., 18, 2389-2394.

    Raghuwanshi, N.S., Singh, R., Reddy, L.S. (2006). Runoff and sediment yield modeling using artificial neural networks: upper Siwane river, India. J. Hydrologic Eng., 11(1), 71-79.

    Sivakumar, B., Jayawardena, A.W., Fernando, T.M.K.G. (2002). River flow forecasting: use of phase-space reconstruction and artificial neural networks approaches. J. Hydrol., 265, 225-245.

Wong T.S.W. (2008). Optimum Rainfall Interval and Manning's Roughness Coefficient for Runoff Simulation. J. Hydrol. Eng., ASCE, 13(11), 1097-1102.

    Wu, J.S., Han, J., Annambhotla, S., Bryant, S. (2005). Artificial neural networks for forecasting watershed runoff and stream flows. J. Hydrologic Eng., 10(3), 216-222.

9th International Conference on Urban Drainage Modelling Belgrade 2012


Urban Drainage Simulation Model Sensitivity Analysis on Runoff Control Elements

Željka Ostojić1, Sanja Marčeta2, Dušan Prodanović3, Ljiljana Janković4, Srđan Tomić5

1,2 Hidroprojekat saobraćaj, Serbia, [email protected], [email protected]; 3,4 University of Belgrade, Faculty of Civil Engineering, Belgrade, Serbia; 5 ACO, Belgrade, Serbia

    ABSTRACT

In current design practice, regardless of whether a classic or a sustainable drainage concept is applied in modeling the rainfall-runoff process, various drainage elements are used (gullies, catch basins, grates or slotted inlets with various types of outlets). These elements are drainage controls, and their characteristics have a significant impact on the capacity and performance of the system. The study of the hydraulic characteristics of these elements is mostly performed under simplified laboratory conditions, while the obtained performance data are used in the modeling of real and complex drainage systems.

Unfortunately, in most cases, these laboratory tests do not consider the use of these elements in terms of contemporary sustainable drainage, where it is allowed to retain part of the water on the streets and where the dual drainage concept is used, which allows two-way flow through those control elements during pressurized flow in the collectors. Also, modeling the system using an alternative concept includes drainage water infiltration at the source, which requires the application of new drainage elements in modeling (e.g. semi-permeable pavements and asphalt), and only some of those elements are built into the commercial software packages for storm water runoff modeling.

This paper examines the sensitivity of the commercial urban drainage model StormNET to the characteristics of the drainage elements used for surface drainage. The paper analyzes the results of the model and the uncertainty of the obtained results (flow, velocity), for different flow conditions (with free surface or pressurized flow, with or without ponding) in urban catchments of different sizes, as a function of the degree of ignorance (or error) of the individual drainage control elements' parameters.


    KEYWORDS

drainage control elements, StormNET, ACO-Hydro

    1 INTRODUCTION

    The development of a storm drainage design in urban areas requires a trial and error approach, with the aim to limit the amount of water flowing along the gutters or ponding at the low areas to rates and quantities that will not interfere with traffic. The most destructive effects of an inadequate drainage system are damage to surrounding or adjacent properties, deterioration of the roadway components, traffic hazard and delays caused by excessive, unnecessary ponding in sags or excessive flow along roadways, as well as more frequent overflow from combined sewer systems to the receiving waters.

An urban drainage simulation model is based on the link-node concept and consists of three sub-models (a hydrological model simulating precipitation, evaporation and infiltration; a surface runoff routing model simulating hydraulics on the catchment surface; and a pipe flow model simulating the hydraulics in the pipe system). The link-node concept is very useful in representing flow control devices, with the possibility to represent branched or looped networks, backwater due to tidal or non-tidal conditions, free surface flow, pressurized flow or surcharge, flow reversals, and flow transfers by weirs, orifices, pumping and storage. Each of the sub-models can be applied with different complexity and accuracy, depending on the problem that should be solved with the model and on the calibration possibilities.
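As a concrete illustration of the link-node concept, the sketch below shows one possible minimal data structure for such a network; the class and attribute names are hypothetical and are not taken from StormNET or any specific package.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    invert_level: float          # m above datum
    ponded_volume: float = 0.0   # m3 stored at the node, e.g. surface ponding

@dataclass
class Link:
    name: str
    upstream: Node
    downstream: Node
    diameter: float              # m
    length: float                # m

@dataclass
class Network:
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.name] = node

    def connect(self, link):
        self.links.append(link)

net = Network()
net.add_node(Node("inlet_1", 101.2))
net.add_node(Node("manhole_1", 100.8))
net.connect(Link("pipe_1", net.nodes["inlet_1"], net.nodes["manhole_1"], diameter=0.3, length=45.0))
print(len(net.nodes), len(net.links))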

Estimation of model output uncertainties is based on the errors and uncertainty contributions of the input data and model parameters resulting from the conventional deterministic modeling approach used as a starting point of the modeling process, as well as on sensitivity analysis, field measurements, literature reviews and model testing.

This paper presents a model of the existing drainage system of a warehouse in Kraljevo, together with an analysis of the effect of input data variations on the results. Two design rainfall events were analysed:

- Rainfall with a return period of 2 years, duration of 10 minutes and precipitation intensity of 206 l/s/ha

- Rainfall with a return period of 10 years, duration of 10 minutes and precipitation intensity of 311 l/s/ha

For each of the applied rain events, the hydraulic calculations were carried out for the most heavily loaded trench drains, both in the ACO-Hydro program, for non-uniform flow with lateral inflow, and in the commercial package StormNET (with different channel shapes and roughness coefficients, ponding and downstream conditions). The comparison of the obtained results, including elements of the sensitivity analysis, was done only for the rainfall with the 2-year return period.
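As a rough indication of the inflow magnitudes implied by these design intensities, the rational method Q = C·i·A can be applied; the runoff coefficient and contributing area in the sketch below are illustrative assumptions and are not taken from the study.

def rational_flow_lps(c, intensity_lps_per_ha, area_ha):
    """Peak runoff (L/s) from the rational method with intensity given in l/s/ha."""
    return c * intensity_lps_per_ha * area_ha

C = 0.9       # assumed runoff coefficient for a largely paved warehouse site (not from the study)
area = 0.5    # assumed contributing area in hectares (not from the study)

for label, intensity in (("2-year, 10-minute rain", 206.0), ("10-year, 10-minute rain", 311.0)):
    print(label, round(rational_flow_lps(C, intensity, area), 1), "L/s")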


    2 METHODOLOGY

    2.1 Storm drainage inlet structures modeling

Each drainage element of a real drainage system should have a representative object in the link-node conceptual model, or at least a control rule based on its behaviour in the real system. Stormwater inlets and pipe drains have to be designed together because the two systems interact as follows:

    1. If there is insufficient inlet capacity the pipes will not flow full, and

    2. Backwater effects from the pipe drainage system may reduce the effectiveness of the inlets, or cause them to surcharge instead of acting as inlets.

    The complexity of these interactions is such that in all but the simplest situations, the design task is best handled by computer models.

    2.2 The surface inlet structures

The surface inlet structures considered are grate inlets, curb-opening inlets, slotted inlets and combination inlets. These elements represent point drainage and allow surface water to enter the storm drainage system. They also serve as access points for cleaning and inspection.

The surface inlet structures can be modeled as special structures with one by-pass link for surface runoff and another link to the pipe drains, if they are mounted on grade, or as special structures with a single link to the pipe drains, if they are mounted in a sag. The design of storm drain inlets is often neglected or receives little attention during the design of a storm drainage system, although it directly impacts both the rate of water removal from the roadway (if the storm drain inlet is unable to capture the design runoff into the sewer system, roadway flooding and possibly hazardous conditions for traffic may occur during a storm event) and the degree of utilization of the storm drainage system (oversizing of sewer pipes downstream of the inlet, since the inlet cannot capture the design flow).

Manufacturers of drainage equipment have independently measured, in experiments, the hydraulic intake capacities of drain grates. Tests are usually carried out under various flow rates and catchment approach slopes, until bypass occurs (the point at which liquid passes across the grate). The angle of the approach section perpendicular to the grate can be critical for grate intake performance. The greater the angle, the greater the velocity of the liquid, and the geometry (sl