A heating and cooling load benchmark model for a Dutch office … · 2017-09-12 · 3 Abstract In...
A heating and cooling load benchmark
model for a Dutch office building using an
ANN approach
Master thesis MSc Business Information Management
Rotterdam School of Management, Erasmus University
Author: Damiën Horsten
Student number: 328815
Thesis coach: Dr Tobias Brandt
Co-reader: Professor Wolfgang Ketter
Submission date: 15-08-2017
Preface
The copyright of the master thesis rests with the author. The author is responsible for its
contents. RSM is only responsible for the educational coaching and cannot be held liable for
the content.
Abstract
In this thesis, we develop a data-driven model to benchmark the heating and cooling load for
a Dutch office building. This offers an alternative to the default option of building a physical
model, which is both time-consuming and expensive. For model training and testing, we use
one year’s worth of data, recorded at one-hour intervals over 2016. A significant
amount of missing data complicates more traditional time series analyses.
The object of research is located in the Netherlands, making it an interesting case study
because of the country’s mixed climate: most of the existing literature concerns prediction
models for either heating or (most often) cooling. The building is equipped with several
heating and cooling subsystems, as well as some energy-saving features. Apart from a large
number of heating/cooling-related parameters monitored by the building
management system, we also have access to some figures regarding electricity consumption.
We use this to create an approximation of occupancy which, as literature describes, is one of
the main drivers of building heating/cooling load. For the weather data we use the 2016 (and
2015) datasets that are made available by the KNMI. We use both the hourly and the daily
datasets for the corresponding models. Partly based on the literature review, we select a
small number of explanatory variables. Specifically, we achieved the best results when including as
weather variables mean and maximum temperature, global solar radiation, wind speed and
relative humidity.
We use an Artificial Neural Network (ANN) to construct the model and compare its
performance to a simpler linear regression model. Also, separate models are built for heating
and cooling and for hourly and daily predictions, so that we end up with eight models. Features
are selected based on physical principles. Lagged variables are used to account for thermal
inertia of the building. Apart from tuning the ANN, some techniques are introduced to
improve the results. We apply cross-validation to prevent overfitting to a specific training set
and for the daily heating model we apply zero-inflation post-processing.
The final results for the daily neural network models are promising. The results for the daily
heating and cooling models are comfortably within the bounds suggested by ASHRAE
guidelines. Also, both models outperform the existing physical model. The hourly models
are unable to achieve a level of accuracy that can be accepted under ASHRAE standards. An
attempt to validate the results further is made by using data on the last 45 days of 2015 as a
validation set. Because this period falls entirely within winter, the heating model
performs very well but the cooling model struggles.
The neural network models without exception significantly outperform the linear regression
models, leading us to conclude that the underlying physical principles are too complicated to
be (efficiently) modelled using linear regression. Likewise, adding the occupancy-related
parameters to the model consistently results in a significant improvement in prediction
accuracy. We conclude with some directions for further research that we feel might improve
results, but that we were unable to perform ourselves due to time limitations.
Table of Contents
Preface .................................................................................................................................................... 2
Abstract .................................................................................................................................................. 3
1. Introduction ................................................................................................................................... 7
2. Literature review I: types of models, feature selection, load prediction and relevance ... 10
2.1 Physical vs. data-driven models ....................................................................................... 10
2.2 Feature selection ................................................................................................................. 13
2.3 Load prediction research ................................................................................................... 14
2.4 Relevance ............................................................................................................................. 20
3. Literature review II: ANN’s & ANN parameters .................................................................. 22
3.1 ANN’s .................................................................................................................................. 22
3.2 ANN parameters ................................................................................................................ 24
4. Research objective and conceptual model .............................................................................. 26
4.1 Research objective .............................................................................................................. 26
4.2 Research questions ............................................................................................................. 26
4.3. Conceptual model .............................................................................................................. 27
5. Case study ................................................................................................................................... 28
5.1 Building and subsystems ................................................................................................... 28
5.2 Air handling units: heat recovery system ....................................................................... 29
5.3 Floor heating/cooling ......................................................................................................... 30
5.4 Residual electricity consumption ..................................................................................... 30
6. Data description .......................................................................................................................... 32
6.1 Missing values..................................................................................................................... 42
7. Methodology ............................................................................................................................... 45
7.1 Data preparation ................................................................................................................. 45
7.2 Seasonality ........................................................................................................................... 46
7.3 Feature selection ................................................................................................................. 46
7.4 Main models ........................................................................................................................ 49
7.4.1 Linear regression model ............................................................................................ 49
7.4.2 Neural Network model .............................................................................................. 50
7.5 Post-processing: ‘Zero-inflation’ ...................................................................................... 50
7.6 Performance measures ....................................................................................................... 51
8. Results .......................................................................................................................................... 52
8.1 Zero-inflation ...................................................................................................................... 57
8.2 Additional validation: 2015 data ...................................................................................... 58
8.3 Computational load ........................................................................................................... 59
9. Discussion .................................................................................................................................... 61
10. Theoretical and managerial contribution ............................................................................ 63
11. Limitations and suggestions for further research .............................................................. 64
11.1 Suggestions for further research....................................................................................... 64
Literature list ....................................................................................................................................... 67
Appendix A: AHU, Toutdoor, Toutdoor setpoint data descriptives .......................................... 71
Appendix B: Neural network visualisation and description for daily cooling model ............. 73
1. Introduction
For a variety of reasons, modelling the energy performance of buildings – whether it be for
instance cooling/heating load, electricity consumption or heat loss coefficient – is potentially
valuable to companies. For this and other reasons, it has become a popular research subject
over the past decades, and this interest appears to be growing still.
The most obvious business case one can think of would likely be reducing building energy
consumption by using such models. This in itself is a very relevant theme: according to
Pérez-Lombard et al. (2007), the contribution of buildings to total final energy consumption
in developed countries is estimated to lie between 20% and 40%. In Europe, 40% of total
energy use is attributable to buildings (Zhao & Magoulès, 2012). By 2015, however, Ürge-
Vorsatz et al. (2015) estimated the building sector to be the world’s largest consumer of
energy, accounting for 32% of final energy consumption. Consequently, the sector is
responsible for roughly 33% of all greenhouse gas emissions. According to Fan et al. (2017),
building energy consumption is for the largest part – ca. 50% – attributable to heating,
ventilation and air conditioning (HVAC) systems. This in turn leads to these types of services
offering the greatest saving potential. A decade earlier, Pérez-Lombard et al. (2007) estimated
that HVAC systems combined accounted for 50% of building energy consumption and for
20% of total consumption in the USA. Although these studies are not directly comparable,
they indicate the proportion of HVAC systems to total final energy consumption may be
rising. Also, these figures give a sense of just how much energy buildings, and HVAC
systems in particular, consume.
For this reason, and as mentioned earlier, a great amount of research has been conducted on
the topic of building energy consumption. Most of it is predictive in nature (e.g. Datta et al.
(2000), Ben-Nakhi & Mahmoud (2004), Neto & Fiorelli (2008), Taieb & Hyndman (2013), Fan
et al. (2014) and Fan et al. (2017)). Using models to forecast, for instance, the electricity
consumption of a building for the next 24 hours has benefits; the most notable, at least from
a business perspective, is the ability to purchase electricity at a better rate when the
estimated amount required is known in advance.
Prediction of heating or cooling load – defined by Merriam-Webster as ‘the quantity of
heat/cold per unit time that must be supplied to maintain the temperature in a building or
portion of a building at a given level’ – is equally common. Fan et al. (2017) state that short
term prediction of building cooling load is useful for among others fault detection and
diagnosis, demand-side management and control-optimization. The same holds for heating
load, but this specific research was conducted in a relatively hot part of the world
(Hong Kong). Regarding control strategy optimization, these authors, referring to Ben-Nakhi
& Mahmoud (2004), indicate that operating costs can be reduced whilst operating flexibility
can be increased. Regarding demand-side management, a lot of research has been performed
on the topic of load shifting to decrease costs, especially in the intersection of buildings and
smart grids (e.g. Xue et al., 2014).
In this thesis, we will be developing a heating/cooling load benchmark model for a medium-
size office building in the Netherlands. Two factors differentiate the present study from
existing research: first, most available studies were conducted in Asia, which means that they
usually focus on cooling load. Studies on heating load do exist, but
surprisingly little research appears to have been conducted in mixed-climate
environments like the Netherlands. Second, contrary to most other research, we will not be
focusing on forecasting ability. Rather, we will attempt to create a model that can function as
a benchmark. The purposes of this are numerous, as shall be illustrated.
This thesis has been written in collaboration with DWA. This Dutch HVAC consultancy firm
was founded in 1986 and consists of a staff of 110 professionals from a variety of disciplines.
The company focuses on sustainability matters and works for – among others –
governments, commercial companies, education and healthcare institutions and housing
corporations, especially where the environment, climate and energy are concerned. DWA
operates from four locations in the Netherlands, with their headquarters situated in
Bodegraven.
The remainder of this thesis is organised as follows: Chapters 2 and 3 give an overview of the
existing research on the topics of modelling energy consumption and ANN’s. Here, the
business benefits of the benchmark model that is developed will also be clarified. Chapter 4
contains the research objective and the conceptual model. In Chapter 5, the case study of the
Dutch office building is presented. In Chapter 6, the dataset that was made available to us by
DWA is discussed. In Chapter 7, a description of the methodology is given. In Chapter 8, we
present the results, which are thereafter discussed in Chapter 9. We conclude with
explaining the theoretical and managerial contribution of this study in Chapter 10 and with
some final remarks, limitations and recommendations for future research in Chapter 11.
2. Literature review I: types of models, feature selection, load
prediction and relevance
A large amount of research has been performed on the topic of building heating and cooling
load prediction. Common topics are finding the most important drivers thereof as well as the
optimal model type and design. In this chapter we will discuss research on the theme of
predicting energy load for buildings. Also, we will dedicate a section to discuss the relevance
of this research.
Given the ambiguity¹ that exists regarding the meaning of the word ‘prediction’, we want to
give a clarification of the meaning we intend (and indeed, this field of research appears to
intend) with it. Strictly speaking, one can argue that predicting something by definition
refers to determining something of which the value is not yet known. In temporal prediction
(for time series data, like in the present study) this would mean that in order to predict
something, an unknown value in the future needs to be calculated. This is not, however,
what the majority of the studies we encountered describe as ‘prediction’. Although some
studies do predict into the future (estimating the value of Y at t+1 from the value of X at
time t), mostly the term prediction is used to refer to a model that, using a training
and testing approach, estimates the value of Y at time t using factors at time t.
Therefore, we will conform to this meaning and strictly distinguish the
prediction/benchmark model we are building from forecasting models which,
unquestionably, refer to estimating future values.
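The distinction can be made concrete with a small pandas sketch; the column names and values below are purely hypothetical, not taken from the thesis data:

```python
import pandas as pd

# Toy hourly records: x is a same-time driver (say, outdoor temperature),
# y is the load to be explained.
df = pd.DataFrame({
    "x": [10.0, 12.0, 11.0, 9.0, 8.0, 7.0],
    "y": [5.0, 6.0, 5.5, 4.5, 4.0, 3.5],
})

# 'Prediction' in the sense used here: model y at time t from drivers at time t.
prediction_frame = df[["x", "y"]]

# Forecasting proper: model y at t+1 from information available at time t,
# i.e. shift the target one step into the future.
df["y_next"] = df["y"].shift(-1)
forecast_frame = df.dropna()  # the final row has no known future value
```

In the first frame, inputs and target are contemporaneous (the benchmark setting of this thesis); in the second, the target is the next hour's value.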
2.1 Physical vs. data-driven models
Generally speaking, there are two types of models that exist for the explanation and
prediction of building energy consumption (Fan et al., 2017). The first type, the so-called
white-box models are physical models. Although accurate results can be obtained using such
models, their main drawback is the complexity of generating them. Information on building
characteristics – such as the materials and insulation used, but also, for instance, air
penetration – down to the finest level is required for these types of models to perform well
(Wisse, 2017).
¹ E.g. https://stats.stackexchange.com/questions/65287/difference-between-forecast-and-prediction;
http://www.analyticbridge.datasciencecentral.com/forum/topics/difference-between-prediction;
https://www.linkedin.com/pulse/difference-between-forecast-prediction-lakshminarayanan-t-p.
Lai et al. (2008) describe how physical models are time-consuming to build, as ‘each
subsystem, each physical parameter’ needs to be carefully defined.
On the other hand, there are the data-driven models. These are regularly referred to as grey
or black-box models. These use data that was obtained during operation to model the
relationships between all sorts of operational variables and the heating/cooling/energy load
that needs to be explained or forecast. Their main advantage is the efficiency with which they
can be built, compared to physical models. Using advanced techniques such as ANN’s or
SVM’s, a high level of accuracy can be obtained using these models also.
Zhao & Magoulès (2012) state that in order to come up with a data-driven model, there are
three suitable methods for performing the analyses and the predictions:
A. Statistical methods. This is the classical way of developing a regression model that
links the independent variables to the dependent variable (energy consumption);
B. Support Vector Machines (SVM’s). Also perform well when dealing with non-linear
problems. A drawback to using SVM’s is that the training times can increase very
quickly once the datasets become larger (Meyer, 2017);
C. Artificial Neural Networks (ANN’s). Popular in this particular field of research
because of their good performance at solving non-linear problems.
SVM’s are binary classifiers that try to find the optimal hyperplane separating the two
classes, thereby maximizing the margin between the closest points of each class. The name is
derived from the fact that the points lying on the margin boundaries are called support
vectors (Meyer, 2017).
ANN’s are, as the name suggests, loosely based on the way our brains work, in terms of their
structure. They are organized in layers (input layer, hidden layers and output layer) and the
nodes within layers are interconnected. There are many indications for non-linear techniques
being able to outperform linear ones such as multiple linear regression and ARMA/ARIMA
(Fan et al., 2017). While it is possible to include non-linear relationships in regression models,
this needs to be done manually for such models. SVM’s or ANN’s, on the other hand, can by
themselves recognize and include non-linear relationships during model training.
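The point about non-linear relationships can be illustrated by fitting the three model families to a toy non-linear problem. The scikit-learn estimators and the synthetic sine data below are our own illustrative choices, not part of the studies cited:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

# Synthetic non-linear relationship: y = sin(x) plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(300, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)

models = {
    "statistical": LinearRegression(),
    "svm": SVR(kernel="rbf"),
    "ann": MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
}
# In-sample R^2 per model: the kernel SVM and the ANN can follow the sine
# shape on their own, while the linear model cannot.
scores = {name: model.fit(X, y).score(X, y) for name, model in models.items()}
```

On data like this, the SVM and ANN scores comfortably exceed the linear one, mirroring the literature's finding that the non-linear learners discover such relationships without manual specification.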
It is important to note that, although these are the most used methods, many others exist.
Taieb & Hyndman (2013) for instance very successfully applied gradient boosting machines
and extreme gradient boosting to a load forecasting problem.
As Neto & Fiorelli (2008) describe in their paper, ANN’s are, contrary to more traditional
methods, capable of ‘learning the rule that controls a physical phenomenon under
consideration from previously known situations’. This is precisely why they are so popular
in this field of research.
Neto & Fiorelli (2008) made a comparison between a physical principles (EnergyPlus) model
and a data driven model. Their object of research was the administration building of the
University of São Paulo, covering about 3.000 m² and with a building population of ca. 1.000
employees. Lowering of operational costs can be realised, according to these authors, in two
ways: by evaluating the energy end-users and implementing actions to reduce the
consumption, or by lowering the penalties that are often imposed by electricity operators
when peak loads exceed a certain, often contractually determined threshold. EnergyPlus is a
building simulation software that allows for implementation of geometry, materials, internal
loads, HVAC characteristics and so on. It supports daily and annual simulations. The
parameters of the simulation are determined by the user through weather parameter
profiles. Both the physical model as well as the basic ANN model were able to achieve a
good fit in 80% of the cases. The authors showed that dry-bulb temperature alone is a very
good predictor; adding other parameters (i.e. humidity and solar radiation) improved the
models, but not drastically so. The authors further show that creating separate models for
working days and non-working days can significantly improve results. An important lesson
from this paper is just how much uncertainty is being introduced by occupants’ behaviour.
They refer with this mainly to responses of occupants to a decreased thermal comfort, for
instance by opening the windows. Their most optimized neural network was able to achieve
an accuracy of about 90%.
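The working-day split that Neto & Fiorelli found valuable amounts to fitting one model per regime. A minimal sketch, with hypothetical column names and a fabricated linear load:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical daily records: outdoor temperature and the resulting load.
df = pd.DataFrame({"date": pd.date_range("2016-01-01", periods=60, freq="D")})
df["temp"] = range(60)
df["load"] = 100.0 - df["temp"]                 # toy linear relationship
df["working_day"] = df["date"].dt.dayofweek < 5  # Monday-Friday

# One model per regime rather than a single pooled model.
models = {
    is_working: LinearRegression().fit(group[["temp"]], group["load"])
    for is_working, group in df.groupby("working_day")
}
```

With real data, the two fitted models would differ because occupancy-driven load behaves differently on working and non-working days; here they coincide by construction.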
2.2 Feature selection
Feature selection refers to the process of selecting relevant predictors for a model. Accuracy
and computation-related benefits aside, one important reason for applying feature selection is
that overfitting risks are decreased significantly, compared to just using all available inputs
and lags (Fan et al., 2017). As Fay et al. (2003) discuss, feature selection is one of the most
important steps in building prediction models due to the poor generalisation that results
from selecting the wrong features. These authors also note that dimensionality reduction can
benefit neural network performance.
In the case of ANN’s, feature selection might be less of a necessity than for other types of
models. To quote Kalogirou et al. (1997): ‘[ANN’s] seem to simply ignore excess data that are
of minimal significance and concentrate instead on the more important inputs’. This has been
confirmed by other authors also. Nevertheless, feature selection is almost without exception
applied to ANN models also and usually does improve performance. Also, it has other
benefits such as a decrease in computational load.
Regarding their feature selection process, Fan et al. (2014) describe two main methods,
namely feature reconstruction methods such as principal component analysis and embedded
methods, such as RFE. The authors utilised a sophisticated ‘data driven input selection
process’, based on RFE to select relevant features. They further applied a clustering analysis
to detect underlying patterns in the data.
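A minimal sketch of RFE-style selection, assuming scikit-learn and synthetic data in which only two of six candidate inputs actually matter:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
# Only the first two columns drive the target; the other four are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

# RFE repeatedly fits the estimator and drops the weakest feature
# until the requested number of features remains.
selector = RFE(LinearRegression(), n_features_to_select=2).fit(X, y)
selected = sorted(np.flatnonzero(selector.support_).tolist())
```

On this toy problem the procedure recovers the two informative columns; Fan et al.'s pipeline applies the same idea with far more candidate inputs.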
The most popular methods of relevant feature selection are engineering knowledge (e.g.
outside temperature, occupancy) and simple statistical methods. However, more advanced
‘feature extraction’ methods have also been applied. Fan et al. (2017) used and compared
four types of feature extraction, namely engineering, statistical, structural and deep learning.
Structural in this context refers to the structural relationships that exist within the data over a
certain time period. The unsupervised deep learning approach applied by these authors in
this respect stood out in both performance as well as simplicity. Very little domain
knowledge was required for this method to extract relevant features.
2.3 Load prediction research
Yang & Becerik-Gerber (2017) estimate that 40 percent of energy consumption in the USA is
attributable to buildings, almost half of that to commercial buildings and 40 percent of that to
HVAC systems in commercial buildings. Improving this figure, the authors explain, can be
realised by improving the thermal properties of buildings (e.g. insulation) or by better
controlling the HVAC: oftentimes, these systems are not run in an energy-efficient manner.
An extreme example of this would be the heating and cooling systems continuously battling
each other to achieve a pleasant environment. A more real world example is given by Bauer
& Scartezzini (1998): some control systems heat a building up in the morning because of the
low outdoor temperature, only to cool it down again in the afternoon due to a high outdoor
temperature. This is called the pendulum effect and, although sometimes required for
thermal comfort, it can partly be handled more efficiently. Such inefficiencies can be
removed without losing thermal comfort, and far more cost-efficiently than through the first,
physical solution of improving the thermal properties of the building. The central theme in the Yang
& Becerik-Gerber (2017) paper are state transitions, referring to the switches such as arrival,
intermittent absence and departure. The authors measured such transitions through setpoint
analysis. Using an Enhanced Variable Neighbourhood Algorithm based on these transitions,
they were able to optimize (simulated) energy efficiency by between 10 and 28 percent.
Zhao & Magoulès (2012) state that the most suitable and popular methods to perform energy
load analyses are Support Vector Machines (SVM’s) and Artificial Neural Networks
(ANN’s). However, more conservative statistical (regression) methods have been used to
predict energy consumption with relatively simple input variables. This has been done by,
among others, Bauer & Scartezzini (1998) and Cho et al. (2004). Bauer & Scartezzini (1998)
indicate that – depending of course on the properties of the building envelope (glazing
especially) and the climate – during the summer, solar irradiation and other ‘free gains’ are
often enough to fulfil the entire heating demand. According to Zhao & Magoulès (2012),
predicting the energy consumption of buildings in general is complicated by the
variety of parameters that influence it. Regardless of the type of building, one can think of
ambient weather conditions, building characteristics (e.g. construction and thermal
insulation), lighting, HVAC and occupancy.
Ben-Nakhi & Mahmoud (2004) used neural networks to forecast next day cooling loads for
three public buildings in Kuwait. Their only input parameter was outdoor dry bulb
temperature. They had access to five years’ worth of data. 80% of the first four years was
randomly used for training, 20% for testing. The data on the last year was held out as
“production data”. The authors achieved excellent prediction accuracies with R2 values
exceeding 0,90. They conclude that NN’s are a relevant tool for cooling load prediction as
they require fewer input parameters than simulation models.
In the study of González & Zamarreño (2005) a two-phase neural network was built to
predict building energy load. The researchers used temperature parameters, time-related
parameters (to identify weekends and holidays) and load parameters. They found that the
load at time t was the most important predictor of the load at time t+1. Their neural network
utilised a hybrid algorithm, five neurons of the hyperbolic tangent type and 1000 training
cycles. The model performed very well. The authors noted that, for this type of
problem, neither many layers nor many neurons per layer appear to be required.
Cho et al. (2004) mention three popular weather parameters for prediction purposes, namely
dry-bulb temperature, solar radiation and humidity. These authors measured heating energy
consumption between November 2000 and March 2001 for an office building in Daejeon,
South Korea, and tested the influence of measurement length on prediction accuracy. Their
research shows the necessity of high quality input data that was gathered over as long a time
period as possible, as the outcomes indicated a 94 percentage point decrease in prediction
error between 1-day and 90-day measurements.
Interestingly, in the research of Datta et al. (2000) the best results were yielded using data
from ‘just’ the previous month. Training the network with more data (e.g. 4 months) resulted
in higher prediction errors. This somewhat contradicts the findings of Cho et al. (2004). The
most explanatory variable proved to be the time of day, followed by interior and exterior
‘environmental conditions’ such as humidity and temperature. The best performing ANN
was able to realise half hourly predictions with an accuracy of 95%.
Datta et al. (2000) applied an ANN with one hidden layer to predict supermarket energy
consumption. The study focused on electricity consumption and is very much practice-
oriented. The authors state that energy reduction can be accomplished only through better
insight into both consumption patterns and energy demanding equipment in relation to
external parameters. The ANN model performed much better than both a linear and a
polynomial regression model. Also, according to Datta et al. (2000), ANN’s can achieve
higher levels of accuracy than simulation models.
Yuce et al. (2014) used ANN’s to predict energy consumption for an indoor swimming pool
in near real time. The goal was to achieve energy savings. The authors used an ANN as it is
fast, something that was required because the predictions were used in the Building
Management System (BMS). Their object of research is located in Rome, Italy and includes a
25x16x1.6-2.1m pool and a 16x4x1m pool, as well as some other facilities. The building also
possesses a couple of energy generation systems such as a biomass plant, a diesel plant and
40 square metres of solar panels. Several subsystems of the building were taken into account,
including the air treatment subsystem, the water treatment subsystem and a building control
subsystem. No weather data were used. The dependent variables/objectives chosen by these
researchers were split into electricity consumption, thermal energy consumption and
thermal comfort (PMV). The researchers have determined and documented the ANN
settings and the best performing algorithms carefully, which is valuable for the present
research. The Levenberg-Marquardt backpropagation algorithm performed best. Further, the
authors yielded the best results using two hidden layers, 1000 training cycles and a learning
rate of 0,01.
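Translated into scikit-learn terms, the reported settings would look roughly as below. This mapping is our own, not the authors' implementation: scikit-learn offers no Levenberg-Marquardt solver (the default 'adam' optimiser stands in), and the layer widths are an assumption, since only the number of hidden layers was reported:

```python
from sklearn.neural_network import MLPRegressor

# Two hidden layers, 1000 training cycles and a learning rate of 0,01, as
# reported by Yuce et al.; the widths (16, 16) are assumed for illustration.
ann = MLPRegressor(
    hidden_layer_sizes=(16, 16),
    max_iter=1000,            # training cycles
    learning_rate_init=0.01,
    random_state=0,
)
```

Such a configured estimator would then be fitted on the subsystem and consumption data in the usual `fit`/`predict` fashion.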
Fan et al. (2014) developed models for the forecasting of both next-day energy consumption
as well as next-day peak power demand. They used a three-step approach consisting of
outlier detection, recursive feature elimination (RFE) and final model development. Most
detected outliers were related to public holidays. The RFE had a positive effect on both
computing times and model performance. It also helped prevent overfitting. Using an
ensemble model constructed of eight popular prediction algorithms, the researchers were
able to reach a high level of accuracy with a MAPE of 2.32% and 2.85% for total consumption
and peak demand respectively. For all chosen variables, lags for 1 to 15 days were also used
as potential input variables. For the separate models, support vector regression and random
forests performed best, whilst linear regression and ARIMA performed poorly. The best
performing separate models also contributed most to the ensemble models. The paper stands
out because of the high number of meteorological variables that was incorporated into the
model.
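Generating such lagged candidate inputs is mechanical; a sketch with a hypothetical daily consumption series and the 1-to-15-day lags used by Fan et al.:

```python
import pandas as pd

# Hypothetical daily consumption series; lagged copies of it become
# candidate input variables, here for lags of 1 to 15 days.
consumption = pd.Series(range(1, 31), name="consumption", dtype=float)
lags = pd.concat(
    {f"lag_{k}": consumption.shift(k) for k in range(1, 16)}, axis=1
)
# Keep only the rows for which the full 15-day lag history is available.
frame = pd.concat([consumption, lags], axis=1).dropna()
```

A feature-elimination step (such as the RFE discussed earlier) would then prune this candidate set down to the informative lags.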
Whilst most research focuses on a single type of object (e.g. office buildings, supermarkets),
the study of Kalogirou et al. (1997) stands out because the researchers trained an ANN using
data of 250 different cases varying from ‘small toilets to large classroom halls’. The DV is the
heating load and the IV’s consist solely of room characteristics such as area and the presence
of windows. Given the variety of data the model is trained and tested with, the accuracy is
remarkably high (over 90%). Also, the authors note that the ANN allows for very fast
prediction times. The authors used different activation functions depending on the layers:
their ANN consisted of an eight-neuron input layer, a ten-neuron Gaussian hidden layer, a ten-neuron tanh hidden layer, a ten-neuron Gaussian hidden layer and a one-neuron logistic
output layer.
Newsham & Birt (2010) specifically researched the influence of occupancy on ARIMA energy
(in this case electricity) forecasts. They gathered this data using a variety of sources,
including wireless sensors, door sensors, motion sensors, carbon dioxide sensors and the
number of logins to the network. They conclude that, although accuracy slightly improves
when occupancy levels are taken into account, only the login data had explanatory power.
All other occupancy measures proved to be ineffective. The authors explain this mainly
through the somewhat unique nature of the building and the fact that a lot of data had to be
imputed.
Lai et al. (2008) used SVM’s to forecast electricity consumption for residential buildings. 15
months’ worth of data were used to train and test the model. Climate data was incorporated
into the model, as well as building related characteristics (e.g. building date, total surface and
overall heat transfer coefficient). The model performed well. The most important driver was
the outdoor temperature.
Taieb & Hyndman (2013) participated in a 2012 Kaggle load forecasting competition.
Although this competition entailed prediction of electricity load, there are many potential
methodological similarities to the present study. Their data came from a US utility provider
and entailed 20 zones. Also available was temperature data for 11 weather stations. The
relative locations of both zones and weather stations were unknown and needed to be
discovered. The authors incorporated the current temperature and lagged temperatures into
the model. They also used the minimum and the maximum temperatures in the current and
previous day as well as the average temperature for the current and previous day and for the
previous seven days. Lastly, they included the lagged values of the demand itself into the
model (with the same temporal lags as for temperature). In the end, the authors used a total
of 24 models (one for each hour) based on a component-wise gradient boosting technique
with the main predictors being current and past temperatures and past demand. For past
demand and temperatures, lag variables of at most -7 days were used. Using separate
models for every hour has been applied successfully before, as the authors note, by, among others,
Ramanathan et al. (1997) and Fay et al. (2003). The results clearly show the predictive power
of the most recent previous demand(s). Especially for the first three hours ahead, this was by
far the strongest predictor. During working hours, however, (lagged) temperature variables
proved dominant.
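This kind of lag-based feature construction can be sketched as follows in pandas. The hourly series is synthetic and the exact column set is an illustration in the spirit of Taieb & Hyndman's features (same-hour lags up to 7 days back, plus previous-day statistics), not their actual feature matrix.

```python
import numpy as np
import pandas as pd

# Synthetic hourly demand series with a daily cycle.
idx = pd.date_range("2016-01-01", periods=24 * 30, freq="h")
demand = pd.Series(np.sin(np.arange(len(idx)) * 2 * np.pi / 24), index=idx)

features = pd.DataFrame({"demand": demand})
# Lagged demand: 1 to 7 days back, at the same hour of day.
for d in range(1, 8):
    features[f"demand_lag_{d}d"] = demand.shift(24 * d)
# Statistics over the previous day (the same pattern would be
# applied to temperature in their setup).
features["demand_prev_day_mean"] = demand.shift(1).rolling(24).mean()
features["demand_prev_day_max"] = demand.shift(1).rolling(24).max()

features = features.dropna()
print(features.shape)  # (552, 10): the 7-day lag costs the first 168 rows
```

Each row now contains the target value together with the recent history that, as the authors found, carries most of the predictive power for the first hours ahead.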
Fay et al. (2003) have dedicated their paper entirely to the aforementioned decision for the
hourly models: whether a comprehensive model for all hours of the day or a separate model
for each hour should be built. Their conclusion was that, except for four specific hours of the
day, the sequential approach (one model) outperformed the parallel approach (24 models).
Their final solution, a combination of the sequential and parallel model, performed even
better.
Fan et al. (2017) compared seven types of prediction techniques to predict 24 hour ahead
building cooling load. Multiple linear regression and elastic net functioned as performance
benchmarks. The focus of the paper is on deep learning methods. Unsupervised and
supervised deep learning methods were combined to enhance the prediction quality. The
object of research was an 11.000 m2, largely air-conditioned school building. One year of data (2015) was available at 30-minute intervals. Until this study, the potential of
applying deep learning methods to building energy load prediction was largely unknown.
Apart from the deep learning model performing very well, the paper stands out because of
the novel way in which feature selection was handled, as discussed in Section 2.2. Another
interesting characteristic of this study is the fact that the authors used an iterative approach to
predict the 24 hour ahead cooling loads. In essence this means that every one-hour
prediction is added to the 24 inputs used to predict the next one-hour prediction. Up to ten
hidden layers were used for developing the model. As the results show, adding more hidden layers does not necessarily result in better performance. Specifically, the best results were
achieved using just two hidden layers. These authors explain this behaviour through data
scarcity: their one year dataset with 30 min intervals is likely not large enough to apply NN
models with such depth to. Both linear methods performed poorly, which is easily explained by the underlying relationships not being linear. Overall, the “extreme gradient boosting” performed best with a CV-RMSE
(coefficient of variation of the RMSE, a way of comparing RMSE’s which in themselves are
scale-dependent) of 17,8% on one dataset. The computational load for the more complex
models is, however, much higher. Another interesting finding, in light of the feature
selection methods covered earlier, is the fact that the linear methods performed best on the
raw data, whilst the more advanced non-linear methods performed better on the high level,
nonlinear abstractions of the data generated through deep learning feature selection. Whilst
using engineering knowledge to select features can work better on one case than on another,
feature selection using deep learning will, these authors believe, almost always result in a
performance boost over using the raw feature set.
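The iterative (recursive) prediction scheme described above can be sketched as follows. The `MeanModel` is a stand-in assumption for the trained deep learning model; only the feedback loop is the point here.

```python
import numpy as np

def iterative_forecast(model, history, horizon=24, n_lags=24):
    """Recursive multi-step forecasting in the style of Fan et al. (2017):
    each one-hour prediction is appended to the input window used to
    predict the next hour. `model` is any object with a predict(x)
    method taking the last n_lags values."""
    window = list(history[-n_lags:])
    preds = []
    for _ in range(horizon):
        yhat = model.predict(np.array(window[-n_lags:]))
        preds.append(yhat)
        window.append(yhat)  # feed the prediction back as an input
    return preds

# Toy model standing in for the trained network: predicts the window mean.
class MeanModel:
    def predict(self, x):
        return float(np.mean(x))

history = np.ones(48) * 10.0
print(iterative_forecast(MeanModel(), history))  # 24 values, all 10.0
```

A known drawback of this scheme, worth noting, is that prediction errors can accumulate over the 24 steps, since later predictions are conditioned on earlier ones.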
A great unknown to models such as the one we will develop, is air infiltration (the air
leakage into the building). This is problematic because it has a significant impact on the
building’s energy demand. A very thorough review of this topic has been written by Younes
et al. (2012). Some important conclusions in this paper are that air infiltration accounts for
25% to 50% of heating and cooling load in commercial buildings, and that for low-rise
buildings, the wind gusts at the building façade are an important driver of air infiltration,
whilst for high-rise buildings, the most important factor is the so-called stack effect. This
effect describes that for such buildings, during summer, the hot air rises to the top of the
building, thus creating a higher pressure in the top of the building and a lower pressure at
the bottom. As a result of this, air will infiltrate the building in the bottom area to correct the
pressure difference between the inside and outside air, and leak out of the building in the top
area.
2.4 Relevance
One of the most important benefits of accurate energy consumption predictions is the ability
to better align local energy generation with local energy demand. Accurately predicting
energy demand also allows for an economical gain through energy purchasing plans. This
holds at multiple levels, for instance on the level of the consuming party but also on the level
of the power provider. The power provider also benefits from this kind of research because
once predictions become better, grids can be dimensioned more accurately (less over-
dimensioning is required). A good example of the relevance of energy load
prediction/forecasting is given by Datta et al. (2000). These authors indicate that electricity is
purchased by most UK supermarkets on a half-hourly basis; a penalty is incurred when the prediction is off. This is a clear example of the monetary incentive of being able to
forecast energy consumption. The study also indicates that some load-shifting to off-peak
periods can be realised through accurate forecasts.
The aforementioned benefits, however, mainly hold for electricity load prediction. This
particular study has as its largest stakeholder DWA. The main insight they will extract from
it is whether data-driven models can perform at a similar level to physical models for this
specific type of problem (predicting heating and cooling loads for benchmark purposes).
More generally, however, these prediction models can ‘evaluate a proper operation and
audit retrofit actions’ (Neto & Fiorelli, 2008). As Kalogirou et al. (1997) point out, data driven
models can (as opposed to simulation models) be particularly valuable for countries where
accurate thermal properties of building materials tend to be unavailable.
Energy prediction models can be used for ‘fault detection and diagnosis, operation
optimizations and interactions between buildings and smart grid’, according to Fan et al.
(2014). Energy prediction models can function as a performance benchmark, enabling firms
like DWA to quickly respond to faulty behaviour or component failure. This is extremely
relevant as it is known that the energy performance of buildings degrades significantly over
time. First and foremost, this is caused by components wearing out, thus losing efficiency
and eventually failing altogether. However, decreasing energy performance is also often
caused by building management and maintenance being handed over to another party that is not entirely acquainted with the equipment in that specific building (Wisse, 2017).
Although, as shown, data driven models have many benefits, there are also some drawbacks
to using them. One of the largest limitations is that the effects of retrofitting actions to the
building envelope cannot be accurately predicted. Using a physical model one can, with
relative ease, predict for instance what the energy savings of replacing all glazing with triple glazing will be. The same estimation cannot be given upfront through data-driven
models. Another limitation, which is very much applicable to the benchmark scenario for this
paper, is the fact that one can never reliably establish a baseline figure. In other words, when
the model is trained and the building at that point already malfunctions or contains
malfunctioning components, this behaviour is included into the model as being the
benchmark value. In this sense, a physical model remains valuable, as the performance with
all components functioning as they should can be predicted with relative accuracy. (Wisse,
2017).
3. Literature review II: ANN’s & ANN parameters
As ANN’s will form the backbone of our research, in this chapter, we shall discuss what
ANN’s are, how they work and which parameters are important in tuning them.
3.1 ANN’s
ANN’s exist in both hardware as well as software (algorithm) versions, with the former
being known as Hardware Neural Networks (HNN’s). We will be using an algorithmic ANN
for this research. A popular definition of a neural network, as given by Dr. Robert Hecht-Nielsen, the inventor of one of the first neurocomputers, is:
“… a computing system made up of a number of simple, highly interconnected
processing elements, which process information by their dynamic state response to
external inputs.”
(Caudill, 1989)
ANN’s are organized in layers. Each layer contains nodes that are connected to the nodes in
the adjacent layers, but not to the nodes in its own layer. Each of the nodes possesses an
activation function. A typical neural network is depicted in Figure 1.
Figure 1: Typical ANN structure.
In the illustrated ANN, there is one input layer and one output layer and there are three
hidden layers. Through the input layer, the patterns are presented to the ANN. The processing and model generation is performed by the hidden layers. The lines between
the hidden layers are weighted connections. In the output layer, the final solution (in our
case, the estimation of the energy loads) is given. The weights of the connections between the
hidden layers are determined by the learning rule that the user gives the ANN. In the
weights, the model’s ‘knowledge’ is stored (Kalogirou et al., 1997). Its structural resemblance to the human brain, as well as its ability to learn by itself, explains the term ‘neural’.
In the present study, we apply a backpropagation neural network (BPNN), meaning that
errors are being backpropagated. Backpropagation algorithms are the most popular
algorithms used for ANN’s (Kalogirou et al., 1997). This type of network makes use of the
‘delta rule’: each learning cycle the activation flow of outputs goes forward and the
adjustment of the weights goes backwards. In the first cycle, the ANN chooses the weights
randomly. In the second (and indeed, nth) cycle, it compares its output to the actual value
and adjusts the weights accordingly. (wisc.edu, 2017)
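A minimal sketch of this cycle, reduced to a single linear neuron without hidden layers for brevity, could look like this (the data and learning rate are illustrative assumptions):

```python
import numpy as np

# Delta-rule sketch for a single linear neuron: forward pass, error,
# backward weight adjustment.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = rng.normal(size=3)              # cycle 1: random weights
lr = 0.1                            # learning rate
for cycle in range(200):            # cycles 2..n: compare and adjust
    output = X @ w                  # forward: activation flows to the output
    error = output - y              # compare output to the actual value
    w -= lr * X.T @ error / len(X)  # backward: adjust the weights

print(np.round(w, 3))  # close to [1.0, -2.0, 0.5]
```

In a full BPNN the same compare-and-adjust principle applies, but the error is propagated backwards through every hidden layer via the chain rule.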
The activation function that exists at each node in the hidden and output layers is what gives
the ANN its non-linear capabilities. In the ANN we use, the activation function is a sigmoid
function, resulting in the input of that node (that can theoretically range from -∞ to ∞) being
mapped to a value between 0 and 1:
Φ(z) = 1 / (1 + e^(-z))
According to Ben-Nakhi & Mahmoud (2004), this activation function is the most popular and
oftentimes the most effective. In more advanced deep learning models, the activation
function can be different for each layer (Gelston, 2017). This has been applied for instance by
Kalogirou et al. (1997).
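In code, the sigmoid activation is simply:

```python
import math

def sigmoid(z):
    """Logistic activation: maps any real input to a value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))    # 0.5
print(sigmoid(10))   # close to 1
print(sigmoid(-10))  # close to 0
```

Large positive inputs saturate towards 1 and large negative inputs towards 0, which is exactly the squashing behaviour that gives the network its non-linearity.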
After having been trained, the neural network can be applied to unseen data. At this point,
no more backpropagation of errors is required (this would also be impossible, because the
actual values are unknown), so the network is set to perform forward propagation only.
The output now is the actual DV prediction.
Neural networks commonly have relatively few hidden layers. Deep learning models on the
other hand employ more hidden layers, assigning specific tasks to each layer (Fan et al.,
2017). Adding more hidden layers/neurons to the model does not necessarily improve
results. As Gelston (2017) describes, more hidden neurons result in the ability to solve more complicated problems. Too many hidden neurons, however, can result in overfitting or decreased model performance. Finding the right number of hidden layers, then, is – like much of the parameter tuning for an ANN – a process of trial and error.
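This trial-and-error search can be organised systematically with cross-validation. The sketch below is an illustration only (it uses scikit-learn and synthetic data, not our RapidMiner setup; the layer sizes are assumptions):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Synthetic regression data standing in for the building dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=200)

# Score one, two and three hidden layers of ten neurons each.
scores = {}
for layers in [(10,), (10, 10), (10, 10, 10)]:
    model = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000,
                         random_state=0)
    scores[layers] = cross_val_score(model, X, y, cv=3, scoring="r2").mean()

for layers, score in scores.items():
    print(len(layers), "hidden layer(s): R2 =", round(score, 3))
```

Cross-validated scoring makes the comparison between depths less dependent on a single lucky train/test split, which matters when, as in our case, the differences between configurations are large.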
BPNN’s have been criticized for being ‘the ultimate black boxes’, because a user has no
influence whatsoever on the model training, other than setting some initial parameters. Also,
depending on the number of training cycles, observations, input parameters and available
hardware, they can be quite slow. (wisc.edu, 2017)
3.2 ANN parameters
Tuning SVM’s and ANN’s is one of the most challenging parts of creating an accurate yet
generalizable model (Meyer, 2017). Two of the most important parameters of the ANN are
the number of hidden layers and the number of training cycles. In line with Fan et al. (2017),
we discovered that for this type of problem, using fewer hidden layers outperforms using
more of them. As these authors explain, the problem at hand is not so complex that adding
many hidden layers improves the results. In fact, our results became significantly worse,
once more than a single hidden layer was used: using the same dataset we experienced a
61% drop in accuracy with a second hidden layer and an additional 60% drop when adding a
third hidden layer.
Additionally, the results were influenced heavily by altering the number of training cycles.
Generally, the results improved by increasing this number. However, as Gelston (2017)
describes, overdoing this will specialize the model too much and prevent it from finding more general patterns in unseen data. Although we apply cross-validation to decrease
overfitting risks, we did limit the number of training cycles to 1000. Compared to 500
training cycles, this still gave a significant boost in accuracy, kept the computation times
reasonable and would, we believe, not overfit the model to the known data. Also, it was in
line with the findings of González & Zamarreño (2005) and Yuce et al. (2014), who also used
1000 training cycles.
Other relevant ANN parameters are the learning rate, the momentum and decay. The
learning rate refers to the amount of change that is allowed to happen to the weights in each
training cycle. We achieved the best results using a learning rate of 0,3 with a momentum of
0,2. The momentum of a neural network refers to the addition ‘of a fraction of the previous
weight update to the current one’ (rapidminer.com, 2017). Its purpose can be explained as
follows: when too high a learning rate is used, the “best” solution can be underestimated in
cycle x only to be over-estimated in cycle x+1. What follows is an ‘oscillating pattern’ and no
convergence to the optimum (McCaffrey, 2017). The momentum parameter exists to prevent
this. By adding a fraction of the previous weight update, the ANN is able to ‘jump over’ such
local error minima, thus not getting trapped. By the time the network is close to the global
minimum (the optimal solution), the previous weight update will be very small so that the
ANN does not overshoot the global minimum in the same way it does with local minima
(McCaffrey, 2017). Applying decay – meaning that towards the final iterations, the learning
rate is being decreased – had a significant positive effect on our results.
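The weight update with momentum and decay can be sketched as follows, using our settings (learning rate 0,3; momentum 0,2) on a toy one-dimensional problem. The linear decay schedule is an assumption for illustration; the software's internal schedule may differ.

```python
def momentum_step(w, grad, velocity, lr, momentum):
    """One weight update with momentum: a fraction of the previous
    update (the velocity) is added to the current one."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimise f(w) = w^2, whose gradient is 2w, starting from w = 5.
w, v = 5.0, 0.0
lr, mom = 0.3, 0.2
for i in range(100):
    grad = 2 * w
    current_lr = lr * (1 - i / 100)  # decay: learning rate shrinks over cycles
    w, v = momentum_step(w, grad, v, current_lr, mom)

print(round(w, 6))  # near the minimum at 0
```

Because the velocity carries information from previous updates, the iterate can roll through small local bumps, while the decaying learning rate damps the oscillations near the optimum.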
4. Research objective and conceptual model
In this chapter, the research objective and research questions are outlined, as well as
the conceptual model.
4.1 Research objective
The objective of the present study is to come up with a benchmark model for the heating and
cooling load of a Dutch office building that approximates physical reality with sufficient
accuracy.
4.2 Research questions
The question that we want to answer in this thesis is the following:
How can we come up with a model that accurately predicts daily/hourly
heating/cooling loads for a Dutch office building using external conditions?
The following sub-questions can be formulated:
(1) Which factors drive heating/cooling load for this building?
(2) Which types of models can be used to model this behaviour?
(3) Which model suits this type of problem best?
(4) What level of accuracy can be obtained?
Throughout this paper, the questions will be addressed. In the conclusion, we shall explicitly
answer the sub-questions, and thereby the main research question.
4.3. Conceptual model
Figure 2 depicts the conceptual model.
Figure 2: Conceptual model.
As the graphic shows, we have, based on the literature review and the interviews with
DWA, identified several predictors that we expect to be relevant for the model. We have split
them into four subclasses: building envelope, internal variable parameters, weather
parameters and temporal parameters.
The model we construct will be building-specific, which means that (especially) the building
envelope will be ‘modelled’. The other categories (e.g. occupancy, weather, time) will be fed
to the model as input data, on which the heating/cooling load predictions will be based. The
temporal parameters in isolation have no effect, but will be included to account for
daily/weekly seasonality patterns that result from the other variables (e.g. occupancy,
working hours).
5. Case study
An overview of the building and its properties will be given in this chapter. Furthermore, the
two most important subsystems will be described in detail.
5.1 Building and subsystems
The object of research for this thesis is a medium size office building located in the
Netherlands. The gross floor area of the building is around 6.300 m2. The building is
comprised of a large hall with a large office section behind it. It is designed to accommodate about 210 people. The building is equipped with a variety of energy efficiency features which, although very effective, complicate the model building. All data we use was gathered by the monitoring software package Simaxx.
The illustration below, that was provided by DWA, shows a selection of climate control
systems that buildings can be equipped with (note that this building does not possess all the
‘options’) as well as the general system setup for this building.
Figure 3: General set-up and climate systems.
As Figure 3 shows, gas or electricity is drawn from the main grid and converted into heat or cold by some form of plant, which is usually situated at building level (e.g. boilers, chillers, heat pumps, aquifers) but sometimes centralised (city heating/cooling). This is then
transferred through the building by means of radiators, floor heating/cooling and Air
Handling Units (AHU’s). Other, more sophisticated techniques that DWA often uses in their
designs include concrete core activation, climate ceilings and active chilled beams.
In this paper, we will be focusing on the section indicated by the dashed blue line in Figure 3.
As a result, we will not be examining the heat/cold generation part, meaning that
inefficiencies in that part of the system will be outside of the scope of this document. Also, it
is repeated here that we will specifically be focusing on heating and cooling load, not on
electricity consumption.
In the case of our building, two specific subsystems will be analysed, namely the floor
heating and cooling and the AHU’s. Together, these comprise this building’s methods of heating and cooling. We will, however, give most attention to the aggregate
circuit, depicted in Figure 3 as the horizontal red and blue arrows. It is also noteworthy that
for heating, there is, for some special rooms in the building, a small group of
radiators/convectors present, as an addition to the AHU’s and the floor heating system.
However, these systems only make a small contribution to total heating load (about 7% of
the total; Wisse (2017)).
In a physical sense, heating and cooling demand are the same quantity with opposite signs. However, in an engineering sense, the cooling and
heating are delivered through two separate circuits. It is therefore possible for the building to
ask for heat and cold at the same moment. This would for instance happen when the AHU’s
ask for cold, but the floor heating system asks for heat at the same moment. (Wisse, 2017)
5.2 Air handling units: heat recovery system
The building possesses multiple heat recovery systems, but the most important one is a heat
recovery wheel. The function of this wheel is to transport heat from the return air flow to
heat the supply air flow (or vice versa). This thermal wheel, as it is also called, therefore is
located in the AHU circuit, not in the water circuit. To be able to transfer lots of heat energy,
the heat wheel consists of a very fine structure, in for instance a honeycomb pattern. The
wheel is driven slowly by an electric motor to allow heat to be transferred from
one flow to the other. The efficiency of the technique is rather good, compared to other air-
to-air heat exchangers. (Wisse, 2017)
5.3 Floor heating/cooling
The floor heating has been divided into two groups: hall and office. The reason for this is that
the central heating circuit feeds two AHU’s as well as two floor heating and cooling groups. One of these groups feeds the offices; the other feeds the hall. Again, it is possible that
one of the groups asks for heat, whilst the other group asks for cold. (Wisse, 2017)
5.4 Residual electricity consumption
As mentioned above, we will not research the electrical part of the system. There is one
exception to that statement though: as we saw in the literature review, it has been widely
accepted that occupancy is one of the most important drivers of building heating/cooling
load (e.g. Yang & Becerik-Gerber (2017) and Fan et al. (2017)). This is the case because of
metabolism but also because of use of equipment, expelling heat (Yang & Becerik-Gerber,
2017). Also, interior lighting – which usually correlates with occupancy, for instance through
proximity sensors – has a significant influence on the heating/cooling required (Wisse, 2017).
While, as Fan et al. (2017) explain, actual occupancy figures are rarely available, they can be
imputed from other, known data, namely time variables. This is the case because occupancy
is usually very much correlated with time. Whilst we will be using time variables in our
models, we also have available two other indicators of occupancy, namely electricity
consumption and setpoints.
Although the measured electricity consumption is, in essence, an aggregate number,
comprising many occupancy-dependent and occupancy-independent components, the
electricity consumed by the heat pumps – by far the largest occupancy-independent
component – is also measured. By subtracting this from the aggregate number, we expect to
get a rough approximation of occupancy. The largest limitation of using this data is the fact
that the power consumption of the AHU’s is also included in this number, and accounts for a
relatively large share of it. Since no data on AHU power consumption is available for this
building, we cannot correct for this and thus need to treat the residual electricity
consumption as a rough estimator of occupancy.
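The residual computation itself is a simple subtraction; a sketch with hypothetical hourly readings:

```python
import pandas as pd

# Hourly electricity readings (illustrative values, kWh).
df = pd.DataFrame({
    "total_kwh":     [120.0, 95.0, 160.0, 80.0],
    "heat_pump_kwh": [ 60.0, 55.0,  70.0, 50.0],
})

# Residual consumption as a rough occupancy proxy: subtract the largest
# occupancy-independent component (the heat pumps) from the aggregate.
# Note that AHU consumption remains included, as discussed above.
df["residual_kwh"] = df["total_kwh"] - df["heat_pump_kwh"]
print(df["residual_kwh"].tolist())  # [60.0, 40.0, 90.0, 30.0]
```

The resulting series can then be fed to the model as an input variable alongside the weather and time parameters.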
Through the setpoints/thermostats, we have available the number of rooms asking for
heating/cooling at any given moment. This too can be used as an occupancy approximation,
as it is correlated strongly with building occupancy.
Of course, using all three estimators (time variables, electricity consumption and setpoint
data) is likely to introduce multicollinearity into the model. We chose to use residual
electricity consumption in favour of setpoint data, as it was somewhat unclear how well the
setpoint data reflected occupancy in the non-office area of the building.
6. Data description
In this chapter, an impression of the data that was made available to us by DWA will be
presented. Also, we will discuss missing data in Section 6.1.
The data we received for the analyses was gathered over a one-year period (01-01-2016 to 31-
12-2016) in a medium-sized office building located in the Netherlands. This location
differentiates the present study from most of the existing research because of the Dutch
climate. Whilst in other studies the research focused specifically on either heating or cooling
load, our dataset, as will be shown below, consists of a “hot” and a “cold” period,
complicating the analysis. For the building, DWA had a virtual physical model available.
Our objective was to approximate the performance of that model.
For the one-year period, data was collected with a one-hour interval. Since the year 2016 was
a leap year, and since one hour was not recorded at all, the dataset consisted of 8.783
datapoints.
The data was divided into several files, more specifically for the AHU’s, the floor heating
and cooling (again divided into an office part and a hall part), the outdoor and desired and
actual indoor temperatures and the central heating and cooling circuits. Our focus was on
the central circuit data: in it were both the dependent values we had to explain as well as the
factors that influenced it most notably. Additionally, data on the overall electricity
consumption and on the heat pump energy consumption was also used as described in
Section 5.4.
It is customary to express cooling demand as a negative value. This makes it immediately visible that a given number or graph depicts cooling demand.
The graph in Figure 4 depicts the relationship between outdoor temperature and
heating/cooling load. The figure shows that outdoor temperature is heavily correlated with
both. Furthermore, the relationship between heating and outdoor temperature appears to be
more linear than the one between cooling and outdoor temperature: the cooling load
displays a “fork”. The upper group contains mostly the weekend observations, when the
AHU’s are inactive, and the only cooling power is being drawn through the floor systems.
The lower group consists mostly of weekdays, when both systems are often active. Because
of the less linear relationship between cooling demand and temperature (compared to
heating demand), we expect the heating load models to perform better. This is in line with
the hypotheses of Kalogirou et al. (1997), who note that cooling load estimation is much
harder than heating load estimation.
Figure 4: Daily cooling and heating loads plotted against outdoor temperature.
The tables below depict all the variables we had available in the data files. For the main data
file and the weather file, also shown are some basic descriptive statistics. We have only
included in this section the information on the data parts that (a) we used in the main
analysis or (b) were in other ways informative. Descriptives of the other available data are
added in Appendix A.
[Figure 4 plots daily energy demand (kWh) against outdoor temperature (in 0,1 °C), with separate series for heating load and cooling load.]
Variable Unit of measurement Min. Max. Avg.
Date/time 01-01-2016 0:00 31-12-2016 23:00 -
Temperature supply 1 °C 0 18,95 14,58
Pressure difference 1 kPa 0 250 85,76
Temperature return 1 °C 0 22,91 18,19
Setpoint temperature supply °C 0 16 14,43
Setpoint pressure difference 1 kPa 110 110 110
Volume flow 1 m3/h 0 31,15 3,80
Pump 1 steering % 0 73,71 22,15
Pump 2 steering % 0 73,71 23,15
Temperature difference °C -9,8 0,15 -2,75
Energy demand kWh -148,02 4,04 -16,25
Table 1: Central cooling circuit variables.
The boxplots in Figure 5 display the average cooling load and the variation therein for every
day of the week. Notably, the median value for the cooling demand is highest on Tuesdays,
but the interquartile range and the lower whisker are highest on Wednesdays. The latter can
be explained through meetings taking place on Wednesdays in the building, leading to
occupancy until 20:00 instead of 18:00. Only two values are marked as outliers,
which is expected as the number of observations is small. The cooling demand on Saturdays and Sundays is not only the lowest, but also by far the most stable, as depicted by the narrow interquartile ranges.
Figure 5: Boxplot of the daily cooling loads.
The hourly cooling boxplots in Figure 6 also show some interesting characteristics in the
data. First of all, a large increase in the cooling demand arises on hour 8, spanning between
7:00 AM and 8:00 AM. This is the moment the system is set to start conditioning the building.
The cooling demand continues to rise throughout the day and peaks between 15:00 and 16:00, after which it drops back steadily. A lot of outliers are present in the 7th, 19th and
20th hours. The first could originate from users who are in before 7:00 AM and manually start
demanding cooling. The latter two can be explained through the working days ending later
on Wednesdays, as mentioned earlier.
Figure 6: Boxplot of the hourly cooling loads.
Variable Unit of measurement Min. Max. Avg.
Date/time 01-01-2016 0:00 31-12-2016 23:00 -
Temperature supply 1 °C 19,2 49,26 29,67
Pressure difference 1 kPa 0 70,62 31,40
Temperature return 1 °C 0 34,92 24,27
Setpoint temperature supply °C 20 45 28,37
Setpoint pressure difference 1 kPa 70 70 70,00
Volume flow 1 m3/h 0 20,51 2,83
Heating power delivered kWh 0 242,78 30,97
Pump 1 steering % 0 100 23,99
Pump 2 steering % 0 100 26,28
Temperature difference °C -32,2 37,61 3,89
Energy demand kWh 0 242,73 21,83
Table 2: Central heating circuit variables.
Some remarks now follow regarding the meaning of and relationships between selected
variables for the main heating and cooling data.
‘Temperature supply 1’ refers to the temperature of the water that flows towards the air
handling units and the floor heating/cooling system. ‘Pressure difference 1’ indicates
whether the system is in use; since the temperature variable has a value regardless of system
activity, the pressure difference is needed to determine this. ‘Temperature return 1’ refers to
the temperature of the water coming back from the building. This, too, is a single value
covering the water returning from the AHUs as well as from the floor heating/cooling
system. ‘Setpoint temperature supply’ is the setpoint for the temperature of the supplied
water, whilst ‘Setpoint pressure difference’ is a fixed target value of 70 kPa. ‘Volume flow 1’
refers to the flow rate of the water in the main circuit; when it exceeds 0,1 m3/h, heat/cold
demand is present, and below that, the system can be regarded as idle. ‘Heating power
delivered’ (for heating only) is the energy being delivered, as monitored by the system.
‘Pump 1/2 steering’ is the steering signal for the heat pumps; a value above 0 indicates
system activity. ‘Temperature difference’ refers to the difference between the supply and the
return water flow. For this variable as well, a value above 0,1 °C can be treated as a real
heating/cooling demand being present. Finally, ‘Energy demand’ is our dependent variable.
The energy demand variable (our DV) was calculated as follows: ED = 1,16 * temperature
difference * volume flow 1. This formula can be physically explained: 1,16 is approximately
the density of water multiplied by its specific heat capacity, converted to kWh/(m³·K). For
the heating data, a measured value was also available. The measured values were very close
to the calculated values, with only small deviations, and we were instructed to use the
calculated values.
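As a minimal illustration, this calculation can be expressed in Python (the function name and type hints are ours):

```python
# Sketch of the energy demand calculation described above.
# 1.16 kWh/(m3*K) approximates the density of water (~1000 kg/m3) times
# its specific heat capacity (~4186 J/(kg*K)), divided by 3.6e6 J/kWh.

def energy_demand_kwh(temp_diff_c: float, volume_flow_m3h: float) -> float:
    """Energy demand in kWh over one hour, from the supply/return
    temperature difference (deg C) and the volume flow (m3/h)."""
    return 1.16 * temp_diff_c * volume_flow_m3h
```

For example, a temperature difference of 10 °C at a flow of 1 m³/h corresponds to 11,6 kWh.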
Figure 7 shows that the daily heating load median value peaks on Wednesdays. Interestingly,
the heating demand for Saturdays and Sundays is far less stable than the cooling demand, as
shown by the larger boxes. No outliers are present.
Figure 7: Boxplot of the daily heating loads.
The boxplots for the hourly heating load displayed in Figure 8 show a steady increase
between 0:00 and 10:00. After this, the demand drops until 15:00, when it starts to rise again
until 17:00. After that, the heating demand decreases to a median value of zero at midnight.
A surprisingly high number of outliers is present between 20:00 and 01:00. This indicates
very unstable behaviour in the hourly values and will likely cause these models to perform
worse than the models for the more stable daily loads.
Figure 8: Boxplot of the hourly heating loads.
Variable Unit of measurement
Date/time
Setpoint secondary supply heating °C
Setpoint secondary supply cooling °C
# of askers for heat
# of askers for cold
Primary return temperature °C
Secondary supply temperature °C
Supply valve steering signal 1 %
Supply valve steering signal 2 %
Cold demand Binary2
Heat demand Binary
Table 3: Hall floor heating/cooling variables.
2 Although these variables are binary in nature, certain datapoints contain a decimal value,
because the variable depicts the average value over the measured hour.
Variable Unit of measurement
Date/time
Setpoint secondary supply heating °C
Setpoint secondary supply cooling °C
# of askers for heat
# of askers for cold
Primary return temperature °C
Secondary supply temperature °C
Supply valve steering signal 1 %
Supply valve steering signal 2 %
Cold demand Binary
Heat demand Binary
Table 4: Office floor heating/cooling variables.
The data for the floor heating and cooling subsystems reveal an additional layer of
‘complexity’ in the system, namely the fact that the subsystem works in a democratic
manner. In other words: it cannot deliver heating to certain users and cooling to other users
at the same time. It therefore incorporates a ‘voting system’ that decides what will be
supplied, based on the number of askers for heating and cooling, respectively. This is also
the reason for ‘Cold demand’ and ‘Heat demand’ being binary values: a 1 means that this
type of conditioning has, for the moment, won. Having this data on individual rooms (e.g.
setpoint temperatures, number of askers) is valuable for the research: as Zhao & Magoulès
(2012) point out, one of the largest problems with this type of analysis has traditionally been
the lack of data on subsections of buildings. Having it present can thus be beneficial.
In addition to the data provided by DWA, we also used weather data from the Dutch
national weather forecasting institute, the KNMI. There were two reasons for this:
(1) The DWA data only provided us with outdoor temperature measurements, whilst the
literature review showed that other variables such as solar radiation, wind speed and
humidity could also influence energy demand significantly, and;
(2) The recorded temperatures were not necessarily perfectly accurate; KNMI data
would probably be more reliable.
We retrieved the 2016 hourly and daily weather data datasets from the KNMI website for the
station nearest to the building. Table 5 below depicts a summary of the variables that were
available. The yearly weather data dataset was even more extensive, but did not contain
additional useful features.
Variable Unit of measurement Min. Max. Avg.
Date(/time) 20160101 20161231 -
Avg. wind direction over last 10 minutes of past hour ° (990 for variable) 0 990 194,38
Avg. wind speed over past hour 0,1 m/s 0 130 34,00
Avg. wind speed over last 10 minutes of past hour 0,1 m/s 0 140 34,11
Max. wind gust over past hour 0,1 m/s 0 250 61,00
Temperature at 1,50 m 0,1 °C -101 349 108,17
Min. temp. at 10 cm in past 6 hrs 0,1 °C -123 298 76,38
Dew point temp. at 1,50 m 0,1 °C -107 220 73,31
Sunshine duration over past hour 0,1 hour 0 10 2,06
Global solar radiation over past hour J/cm2 0 326 43,55
Precipitation duration over past hour 0,1 hour 0 10 0,81
Hourly precipitation amount 0,1 mm (-1 for <0,05 mm) -1 181 0,88
Air pressure at mean sea level 0,1 hPa 9.790 10.456 10.162,06
Horizontal visibility [categorical] 0 83 61,76
Cloud cover Octants (9 for sky invisible) 0 9 5,78
Relative atmospheric humidity at 1,50 m % 23 100 80,97
Present weather code [categorical] 1 92 29,31
Fog Binary 0 1 0,05
Rain Binary 0 1 0,21
Snow Binary 0 1 0,00
Thunder Binary 0 1 0,00
Ice formation Binary 0 1 0,02
Table 5: Weather data summary.
Although this is the best data available, significant deviations are still expected, given the
distance of ~30 km between the KNMI station and the building. In particular, we expect the
deviations in the solar radiation figures to have a large impact on the accuracy of the hourly
predictions (Wisse, 2017).
6.1 Missing values
Missing values were, at least in the main dataset, a problem: of the 8.783 values that should
have been recorded, 2.170 were missing due to failures of the measurement software. This
left us with only 6.613 useful hourly observations. Furthermore, the missing data was mostly
concentrated in two blocks, and we only had data for one year available. For these reasons,
imputing values would not be statistically valid.
The missing hourly data affected 124 days, meaning we had 242 daily observations available.
The ‘gaps’ in the data are clearly visible in the graphs below.
When we look at the daily central cooling load graph depicted in Figure 9, we see that the
cooling load is, of course, significantly higher during the summer months. There is, however,
a demand for cooling throughout the year.
Figure 9: Daily data for the central cooling circuit.
The same cannot be said for the heating demand, which is depicted in Figure 10. During the
summer months, roughly June until September, there is no heating demand for most of the
time. Also, again as expected, the heating demand increases sharply during the winter
period.
Figure 10: Daily data for the central heating circuit.
Within the weather dataset, not all parameters had been recorded for every hour, but for the
variables we needed, there was a valid observation available for all 8.783 hours. Figure 11
below depicts what we expect to be the most important weather variable for predicting the
cooling and heating demand: mean outdoor temperature.
Figure 11: Daily outdoor temperature data.
The two variables most at risk of being collinear are temperature and solar radiation.
However, solar radiation was expected to be an important driver of the heating and cooling
loads (Wisse, 2017). The graph below shows that although the two series clearly move
together, their behaviour is dissimilar enough to allow both variables into the model.
Moreover, the variables have been used alongside each other in numerous previous studies
(e.g. Neto & Fiorelli, 2008; Cho et al., 2004).
Figure 12: Daily outdoor temperature plotted against global solar radiation.
7. Methodology
In this chapter, we will discuss the methodology that has been used to create the models.
Figure 13 depicts a schematic overview of the process. The steps will be elaborated upon in
the following sections.
Figure 13: Schematic overview of methodology.
7.1 Data preparation
Before building the models, we made some modifications to the data. First of all, as it was
dispersed over many files (e.g. year quarters, subsystems), we integrated everything into one
file. Next, we manually inserted an (empty) observation for January 1, 2016, 02:00, as this
was the only value that was missing; it would have caused problems if not all days contained
24 observations. We combined the hourly data with the hourly KNMI dataset. Further, we
reconciled the differences between the DWA data and the KNMI data in handling Daylight
Saving Time (DST). Using pivot tables, we converted the hourly data (the only granularity
available for the DWA datasets) to daily data. This daily data was then combined with the
daily KNMI dataset. Lastly, we altered the date variables to a more generic format and
extracted the weekday variable (Mon-Sun) for all data as well as the hourcode variable (1-24)
for the hourly data. We did not remove any outliers as
there was no valid reason to assume that these observations had been caused by system
malfunctioning. Also, as was discussed in Section 6.1, no imputations were made for missing
values, because (a) the missing data mostly occurred in large blocks and (b) we only had data
for one year. These factors combined made meaningful imputation impossible.
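The hourly-to-daily conversion and the weekday/hourcode extraction described above were done with pivot tables; an equivalent sketch in Python with pandas (column names and example data are illustrative, not the actual dataset) could look as follows:

```python
import pandas as pd

# Illustrative pandas equivalent of the pivot-table aggregation described
# above: hourly observations are summed into daily energy demand, and
# weekday / hourcode features are derived from the timestamp.
hourly = pd.DataFrame(
    {"energy_demand": [10.0] * 48},  # two days of dummy hourly values
    index=pd.date_range("2016-01-01", periods=48, freq="h"),
)
hourly["weekday"] = hourly.index.day_name()          # Mon-Sun feature
hourly["hourcode"] = hourly.index.hour + 1           # 1-24 feature

daily = hourly["energy_demand"].resample("D").sum()  # hourly -> daily
```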
7.2 Seasonality
The data analysis in Chapter 6 revealed a strong weekly seasonality: the average energy
demand for cooling and heating would peak on Tuesday and Wednesday, respectively, and
be lowest on Saturday and Sunday. The plots also displayed an obvious yearly pattern:
cooling demand would peak in the summer (specifically somewhere in July), and heating
demand would peak in the winter months.
For the hourly data, the heating demand was on average highest between 8:00 AM and
9:00 AM. The cooling demand, on average, was highest in the afternoon, around 16:00.
Since the outside temperature and the solar radiation figures peaked before that time
(around 14:00), this already indicated thermal inertia in the building.
Taieb & Hyndman (2013) state that it is important to consider lagged temperatures in a
demand forecasting model due to thermal inertia in buildings. These researchers further
applied a log transformation to the demand, to stabilise the variance over time. Their results
show that boosting techniques perform very well for energy explanation/prediction models,
particularly because they appear to be relatively resistant to overfitting. Inspired by these
authors, we also took lagged variables for all our DVs into account. The temporal lags were
chosen based on the literature review (e.g. Fan et al., 2014; Taieb & Hyndman, 2013).
7.3 Feature selection
In this paper, we used engineering knowledge as the main instrument for feature selection.
We did not feed the models with all available features, because many of the parameters
physically could not have a relationship with the dependent variable, and including all
would make the models prone to overfitting. We therefore hypothesized a selection of
potentially relevant parameters based on the literature review and the interviews with DWA.
We did not opt for the deep learning methods used by Fan et al. (2017), because the higher-
level, more abstract feature classes these would generate would introduce yet another ‘black
box’ and complicate deploying the model for benchmark purposes using a certain input
scenario.
Initially, we selected the following features for our models:
Daily models Hourly models
Avg. wind speed Avg. wind speed
Avg. temperature Temperature @ 1,50 m @ tobs
Global radiation Global radiation
Air pressure Air pressure
Rel. humidity Rel. humidity
Residual electricity consumption3 Residual electricity consumption
Weekday Weekday
Hourcode
Table 6: Initial features.
Air pressure, although (rarely) used in other papers, was excluded afterwards, because its
influence could not easily be physically explained (Wisse, 2017) and because it had an
adverse effect on the performance of the models. For the daily models, one variable was
added, namely maximum temperature; this had a significant positive effect on model
performance. Initially, cloud cover was also taken into account as a variable. Since this is, in
the KNMI dataset, an ordinal variable (expressed in octants, 1-8), dummy variables were
created for it. However, all of the models performed slightly worse than they did without it,
and it showed strong collinearity with the solar radiation parameter that was also present.
As we learned from the study by Younes et al. (2012), air infiltration is an important
determinant of heating/cooling load and thus needs to be taken into consideration. For the
building presently studied – which is a low-rise building – there is no “measurement” of air
infiltration. To account for this, we included wind speed into the model. The aforementioned
3 No actual occupancy data is available, thus an approximation (residual electricity consumption) was
used. This is described in more detail in Section 5.4.
study showed that wind gusts at the façade are the main cause of air infiltration for low-rise
buildings.
The following features were selected to be included in the final models:
Daily models Hourly models
Avg. wind speed Avg. wind speed
Avg. temperature Temperature @ 1,50 m @ tobs
Max. temperature Global radiation
Global radiation Rel. humidity
Rel. humidity Residual electricity consumption
Residual electricity consumption Weekday
Weekday Hourcode
Table 7: Selected features for daily/hourly models.
Weekday and hourcode were primarily chosen as an approximation of occupancy, and to
incorporate the clear weekly seasonality into the model. Residual electricity consumption
(total building electricity consumption minus the electricity consumed by the heat pump)
was chosen as a more specific measure of occupancy because, as discussed, it fluctuates with
the level of occupation inside the building (e.g. through lighting and devices). Analysis did
not show substantial collinearity with the weekday/hourcode variables, so we could safely
use all three.
Initially, we also included lagged variables of the heating/cooling demand itself in the
model. Literature (e.g. Taieb & Hyndman, 2013) showed that these would likely be good
predictors of current demand. However, we removed these lags for two reasons. First, they
completely dominated all other variables: the predicted energy demand became so
dependent on past energy demands that the external variables (weather, occupancy, time)
became almost redundant, which also introduced large prediction errors during transition
periods. Second, after discussing this tactic with DWA, we decided it was preferable to feed
the model with only external variables. This way, a certain real-world scenario could easily
be given to the model as input in roughly the same way as for the physical model, which
DWA indicated would significantly increase the model’s value. This also makes intuitive
sense, as a rising energy demand due to component wear could simply become part of the
baseline through the lags, invalidating the model for benchmark purposes.
7.4 Main models
We chose to build two types of models. The first was a linear regression model, which
functioned as the benchmark model. The second was a model based on an artificial neural
network (ANN). Apart from the fact that they suit the problem, we had a couple of reasons
for choosing an ANN approach over an SVM approach. First, according to Zhao &
Magoulès (2012), SVMs can suffer from slow running speeds, which could be a problem if
the analysis has to be executed regularly (for instance, at the level of a building management
system). Furthermore, as Yuce et al. (2014) point out, ANNs are particularly suitable for this
kind of research as they can handle noisy and incomplete data, and our data had many
missing observations. ANNs do appear to have a high risk of overfitting when trained too
long, even more so than SVMs or more traditional statistical methods. We monitored this by
always testing on a holdout set and by keeping the number of training cycles modest.
7.4.1 Linear regression model
To provide a baseline, we built a linear regression model to explain the drivers behind the
energy demand. This meant that we built four separate regression models, namely two daily
ones (heating and cooling) and two hourly ones. All models were built using the RapidMiner
7.5 software.
In the model, dummy variables were generated for the nominal variables in the data, namely
weekday in case of the daily data and weekday and hourcode in case of the hourly data. As
is customary, one of the dummy variables was removed, as it would be fully explained
through (the absence of) the others. Next, lag variables were created. The tables below depict
the lags we used for the daily and hourly data, respectively.
Daily models Hourly models
Avg. wind speed (1, 2 days) Avg. wind speed (1, 2, 4, 24 hours)
Avg. temperature (1, 2 days) Temperature @ 1,50 m @ tobs (1, 2, 4, 24 hours)
Max. temperature (1, 2 days) Global radiation (1, 2, 4, 24 hours)
Global radiation (1, 2 days) Rel. humidity (1, 2, 4, 24 hours)
Rel. humidity (1, 2 days) Residual electricity consumption (1, 2, 4, 24 hours)
Residual electricity consumption (1, 2 days)
Table 8: Lag variables for daily and hourly data.
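The lag construction in Table 8 can be sketched with pandas `shift` (the helper name and example data are ours):

```python
import pandas as pd

# Sketch of the lag construction in Table 8: for each hourly feature,
# shifted copies at t-1, t-2, t-4 and t-24 are added as extra columns.
def add_lags(df: pd.DataFrame, column: str, lags=(1, 2, 4, 24)) -> pd.DataFrame:
    out = df.copy()
    for lag in lags:
        out[f"{column}_lag{lag}"] = out[column].shift(lag)
    return out

hourly = pd.DataFrame({"temperature": range(30)})
lagged = add_lags(hourly, "temperature")
# Rows without a full lag history contain N/A values; such examples
# were filtered out before modelling.
lagged = lagged.dropna()
```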
The resulting feature set was filtered to exclude N/A examples. 10-fold cross-validation was
applied to (a) get the maximum out of the limited data available (especially daily) and (b)
remove the possibility of getting an overly optimistic or pessimistic outcome for the
performance measures through a “lucky” training/testing split. The folds were split using
stratified sampling. The linear regression model was configured to itself use the M5_prime
algorithm for further feature selection, and to eliminate collinear features with a minimum
tolerance of 0.05.
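The models themselves were built in RapidMiner; purely to illustrate the evaluation scheme, a rough scikit-learn analogue of the cross-validated linear regression (on synthetic stand-in data, and with plain shuffled folds rather than RapidMiner's stratified sampling) might look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

# Illustrative analogue of the set-up above: a linear regression
# evaluated with 10-fold cross-validation, so every observation is
# predicted exactly once by a model that never saw it during training.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 5))  # stand-in for the feature set
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=240)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(LinearRegression(), X, y, cv=cv)
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```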
7.4.2 Neural Network model
To make as fair a comparison as possible to our benchmark model, we used the same
features and largely the same preparation steps for the neural network. As described in
Section 3.2, the neural network was set up to use 1000 training cycles with a learning rate
(indicating how much the weights may change at each step) of 0.3 and a momentum (a
fraction of the previous weight update added to the current one, to prevent getting stuck in
local minima) of 0.2. Enabling decay, whereby the learning rate decreases during learning,
turned out to improve performance considerably. Lastly, shuffling of the input data
(required for ordered data) was used, and the input data was normalised, which is necessary
since the sigmoid activation function requires values between -1 and 1.
To make the ANNs more tangible, the visual representation and the description of the daily
cooling neural network are made available in Appendix B.
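For readers more familiar with open-source tooling, this configuration can be approximated with scikit-learn's `MLPRegressor`. This is our sketch, not the RapidMiner setup itself, and the parameter mapping is approximate (e.g. `invscaling` as a stand-in for decay):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Rough analogue of the ANN configuration described above: SGD with
# momentum, a decaying learning rate, shuffled input, and inputs
# normalised to [-1, 1] for the sigmoid-like activation.
model = make_pipeline(
    MinMaxScaler(feature_range=(-1, 1)),
    MLPRegressor(
        solver="sgd",
        max_iter=1000,               # 1000 training cycles
        learning_rate_init=0.3,      # how much weights may change per step
        momentum=0.2,                # fraction of previous update added
        learning_rate="invscaling",  # decay: learning rate shrinks over time
        shuffle=True,                # shuffle the (ordered) input data
        random_state=0,
    ),
)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # stand-in feature matrix
y = X.sum(axis=1)              # stand-in target
model.fit(X, y)
```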
7.5 Post-processing: ‘Zero-inflation’
Already in early stages, it became apparent that the models did not handle the long span of
absent heating demand in the summer well. To give an impression: for the heating/hourly
dataset, 3.430 datapoints had the value zero, more than half of the total datapoints available.
To deal with this, we decided to try a two-stage approach for the (stable) daily heating
model. The first stage would be the actual (regression/neural network) model, after which
all values estimated below 250 would be set to zero. This threshold was chosen somewhat
arbitrarily, guided mainly by the graphs.
Whether this second stage would improve prediction accuracy was uncertain. Because the
estimated values without such an approach are usually close to zero, it might not result in a
significant improvement of the model (Wisse, 2017). Also, the second stage obviously
introduces erroneous corrections of its own, potentially with a larger impact than the
original, minor estimation errors. Nonetheless, we wanted to find out whether this approach
could impact the model positively, and tested it using both the benchmark linear regression
heating model and the neural network heating model.
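The second stage itself is a simple thresholding step; a minimal sketch (the function name is ours):

```python
import numpy as np

# Sketch of the two-stage post-processing step: first-stage predictions
# below the (admittedly arbitrary) 250 kWh threshold are set to zero.
def zero_inflate(predictions: np.ndarray, threshold: float = 250.0) -> np.ndarray:
    return np.where(predictions < threshold, 0.0, predictions)

first_stage = np.array([12.3, 480.0, 249.9, 1200.5])
final = zero_inflate(first_stage)
```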
7.6 Performance measures
We used three measures of performance. First, the Root Mean Square Error (RMSE) was
used as an expression of the typical magnitude of the errors the model makes. This is a
useful statistic, but it is scale-dependent: in this case, it is expressed in kWh. To overcome
this, two other, scale-independent, statistics were also considered. The R2 value indicates
how much of the total variance in the DV is explained by the model. The CV-RMSE is an
extension of the RMSE, commonly used in the field of building energy prediction. It stands
for coefficient of variation of the RMSE and is calculated as follows:
CV-RMSE = RMSE / ȳ
where ȳ is the average actual heating/cooling load.
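The three measures can be computed as follows (a small sketch with made-up actual/predicted values):

```python
import numpy as np

# The three performance measures used in this thesis.
def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def r_squared(actual, predicted):
    ss_res = np.sum((actual - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)    # total sum of squares
    return float(1 - ss_res / ss_tot)

def cv_rmse(actual, predicted):
    # RMSE divided by the mean actual load -> scale-independent
    return rmse(actual, predicted) / float(np.mean(actual))

actual = np.array([100.0, 200.0, 300.0, 400.0])     # made-up loads (kWh)
predicted = np.array([110.0, 190.0, 310.0, 390.0])  # made-up predictions
```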
8. Results
In this chapter, we will discuss the results that were achieved using the regression models
and the neural networks. For each result, we will differentiate between excluding and
including the occupancy-related parameters (weekday, residual electricity consumption,
hourcode in case of hourly data) to examine the impact of adding these parameters.
The results we achieved with the final models, but without the occupancy-related
parameters, are summarised in Table 9.
Dataset Method RMSE R2 CV-RMSE
Daily cooling LR 306.907 0.642 0.594
Daily cooling NN 288.578 0.678 0.558
Daily heating LR 265.579 0.875 0.367
Daily heating NN 174.451 0.944 0.241
Hourly cooling LR 20.941 0.512 0.980
Hourly cooling NN 17.958 0.641 0.840
Hourly heating LR 36.304 0.411 1.179
Hourly heating NN 33.083 0.510 1.074
Table 9: Results without occupancy-related parameters.
Several observations can be made. First, in all cases (daily/hourly, heating/cooling,
RMSE/R2/CV-RMSE) the neural network models outperform the linear regression models.
Second, without the occupancy parameters, all models struggle to achieve acceptable levels
of accuracy; the only exception is the daily heating neural network model, which comes
reasonably close to the ASHRAE-recommended threshold of 20% (discussed in Chapter 9).
Third, as was hypothesised earlier, the daily heating demand is much more easily explained
by the weather parameters, whilst the models struggle to achieve a decent result for the
cooling variables. The hourly demand, too, is very difficult to predict with any accuracy
using just the weather parameters. Last, it is obvious that the hourly models, especially for
heating, perform poorly.
The results with the complete, final models (thus including occupancy parameters) are
summarized in Table 10.
Dataset Method RMSE R2 CV-RMSE
Daily cooling LR 212.529 0.829 0.390
Daily cooling NN 103.966 0.958 0.191
Daily heating LR 238.000 0.898 0.341
Daily heating NN 141.595 0.962 0.203
Hourly cooling LR 18.075 0.644 0.830
Hourly cooling NN 10.976 0.868 0.504
Hourly heating LR 29.027 0.626 0.938
Hourly heating NN 23.242 0.760 0.751
Table 10: Final results, including occupancy-related parameters.
Again, these statistics reveal some interesting properties of the models. First, it is obvious
that the occupancy parameters add explanatory power to the model, as all models perform
better than their counterparts without occupancy parameters. Second, even with the
occupancy parameters included, the hourly models fail to achieve decent levels of accuracy.
Lastly, the cooling models appear to benefit more from adding the occupancy parameters
than the heating models.
The graphs below visualize the performance of the daily regression models and the daily
neural network models.
Figure 14: Results of the daily cooling regression model.
Figure 15: Results of the daily heating regression model.
As the above graphs show, the regression models perform reasonably well in situations
where heating/cooling demand is substantial, but fail to do so when loads are small or zero.
In the cooling model, a significant exaggeration of zero-load values is visible, whilst in the
heating model it is apparent that the model struggles to predict low demands.
Figure 16: Results of the daily cooling neural network model.
Figure 17: Results of the daily heating neural network model.
The ANN models, however, perform much better in these situations. For instance, the
predicted values for the heating demand are far closer to zero. Apart from the prediction
errors in the low-demand regions, the overall prediction accuracy is also significantly better.
Figures 18 and 19 visualise the performance of the hourly models. We randomly selected 2%
of the dataset (~130 hours), as displaying all 6.496 available datapoints in a meaningful
manner was not feasible within a single chart.
Figure 18: Partial results of the hourly cooling neural network model.
Figure 19: Partial results of the hourly heating neural network model.
8.1 Zero-inflation
As discussed, because almost no heating demand is present during the summer, a significant
part of the errors for the heating model appeared to originate from incorrect zero-predictions.
To improve this, we applied the zero-inflation tactic discussed in Section 7.5 to the daily
models. The results are summarised below:
Dataset Method RMSE R2 CV-RMSE
Daily heating NN 140.701 0.993 0.095
Table 11: Results of the zero-inflated daily heating neural network model
Figure 20: Results of the zero-inflated daily heating neural network model.
The performance improvement is substantial. As the graph shows, almost all the errors of
the model around the zero-line have disappeared, whilst almost no new errors have been
introduced. The CV-RMSE value being below 10 percent indicates an excellent fit.
Overall, we can conclude that the heating model performs better than the cooling model. We
expected this to be the case, as the cooling demand is less linearly related to driving
variables such as temperature.
8.2 Additional validation: 2015 data
After the models had been built, we received data from DWA on the last 45 days of 2015.
Similar to Ben-Nakhi & Mahmoud (2004), we wanted to use this as “production data”, since
it was completely new to the neural network. Although for most of 2015 the control
strategies for the AHUs had been different, invalidating that data for this purpose, the
control strategies from November 16 onward were similar to the 2016 setup. As a result, we
were able to validate the models on an entirely independent test set. We focused on the
neural networks and on the daily models. The performance we achieved using this test set is
summarised in Table 12, and visualised in Figures 21 and 22.
Dataset Method RMSE R2 CV-RMSE
Daily cooling NN 94.817 0.626 0.476
Daily heating NN 185.973 0.759 0.177
Table 12: Neural network performance on the 2015 validation set.
Figure 21: Results of the neural network on the 2015 validation set (cooling).
Figure 22: Results of the neural network on the 2015 validation set (heating).
These results show that the heating model performs very well, but the cooling model
struggles. Note that although the absolute error is lower for the cooling model, this is
because the cooling demand itself is much lower. This behaviour can be explained by the
2015 test set consisting entirely of winter months.
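For reference, the error measures reported in Table 12 can be computed as follows. This is a minimal Python sketch; the load values shown are illustrative, not taken from our dataset:

```python
import math

def rmse(actual, predicted):
    # root-mean-square error between measured and predicted loads
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def r_squared(actual, predicted):
    # coefficient of determination
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def cv_rmse(actual, predicted):
    # coefficient of variation of the RMSE: RMSE normalised by the measured mean
    return rmse(actual, predicted) / (sum(actual) / len(actual))

# illustrative daily heating loads (not thesis data)
actual = [1000.0, 1200.0, 900.0, 1100.0]
predicted = [950.0, 1250.0, 880.0, 1150.0]
print(round(cv_rmse(actual, predicted), 3))  # 0.042
```

The CV-RMSE normalisation by the measured mean is what makes the 20%/30% ASHRAE thresholds comparable across buildings of different sizes.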
8.3 Computational load
The models should at some point be able to run as part of a Building Management System
(BMS), which requires them to be computationally efficient. Although this was not the
primary focus of this study, Table 15 below lists the computation times for each of the final
models (including occupancy-related parameters). All figures are combined training and
testing times; testing normally accounts for only a small share of the total. Moreover, the
model does not need to be completely retrained for every prediction. The models were built
using an Intel quad-core processor running at 3.60 GHz with 4 GB of memory.
Model Computation time
Regression – heating – daily 0m4s
Regression – cooling – daily 0m4s
Regression – heating – hourly 1m31s
Regression – cooling – hourly 1m40s
Neural Net – heating – daily 0m22s
Neural Net – cooling – daily 0m30s
Neural Net – heating – hourly 48m59s
Neural Net – cooling – hourly 50m22s
Table 15: Computation times per model.
9. Discussion
ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers),
an American organisation that leads in developing standards and guidelines for (among
others) HVAC systems, states in Guideline 14-2002 that the CV-RMSE of baseline models
needs to be below 20% for daily models when fewer than 12 months of data are available
(Bing, 2005) and below 30% for hourly models (Yin et al., 2010). These guidelines are very
strict: according to Kalogirou et al. (1997), accuracies above 90% are easily within the
acceptable bounds for design engineering purposes for these sorts of problems. As the
results depict, the linear regression models fail to achieve any decent levels of accuracy for
all four datasets. We expected this to be the case because the linear regression models are
unable to model the complex underlying interactions between the input factors. Also, from
the data analysis and the literature review we already learned that many relationships in this
context appear to be non-linear.
The neural networks performed far better, and the daily models achieved a level of accuracy
that is compatible with the ASHRAE guidelines. The hourly models were unable to do so: as
we saw in Chapter 6, the hourly values are far more volatile than the daily values. Although
the hourly neural network models improved on the regression models, they also did not
reach the ASHRAE-recommended threshold of 30%.
Including the occupancy-related parameters appears to have a greater positive effect on
cooling load prediction accuracy than on heating load prediction accuracy. This may indicate
that the heating demand results more from a fixed schedule, whilst cooling is more often
explicitly demanded by the users of the building.
Applying the models that were trained (using cross-validation) on the 2016 data to the data
for the last 45 days of 2015 led to some interesting observations. First, the heating model
performed far better than the cooling model. This can be explained by these months all
being winter months, during which heating is more predictable from external parameters
than cooling. We believe the results would have been reversed, had the holdout set been for
a summer period.
The performance of the neural network should improve further as it learns from more data
becoming available. Frequent network retraining will be necessary to benefit from this.
Returning to our original research objective and questions, we can present the following.
A variety of factors drives heating and cooling load for buildings. The most important
weather-related variables are likely average and maximum ambient temperature, solar
radiation, wind speed and humidity. Also, occupancy and building envelope are important
factors. For the building researched in this study, the weather parameters as well as the
occupancy-related parameters proved to have significant explanatory power.
To model heating and cooling load for commercial buildings, the most popular data-driven
solutions are ANNs and SVMs, although other types of models have also been used
frequently and successfully. We have not been able to compare all relevant types of models
in this study, but we can confirm that linear regression techniques are unable to model the
complex, non-linear relationships that underlie building energy load prediction problems.
In terms of accuracy, the daily models we built outperformed the physical principles model.
Depending on the model, accuracies in excess of 95% can be obtained.
10. Theoretical and managerial contribution
This paper contributes to an already rich field of research in several ways. First, it is
distinctive because of its object of research. Not many heating/cooling load prediction
studies have been performed in the Netherlands, or even, surprisingly, in mixed climate
environments. We believe we have made a solid starting point, although we have
suggestions for improvement (as will be discussed in the next chapter). The other
characteristic that distinguishes the present study from most existing literature is its
objective. The majority of existing research that involves ANNs uses prediction and
forecasting models to accomplish savings through, for instance, purchasing contracts or load
shifting. The primary goal of this study, on the other hand, was to come up with a
benchmark model that can be ‘fed’ with a certain real-world scenario and will give a baseline
figure that can be used to for instance detect component malfunctioning.
From a societal perspective, research in this field can result in savings in energy consumption
and thus in CO2-emissions. As we discussed, a huge share of total energy consumption
originates from (commercial) buildings, making it a very relevant sector to achieve
significant savings through optimising the buildings’ energy performance.
From a firm perspective also, there are benefits from this and similar studies. First, it is
obviously in every firm’s best interest to keep its building energy consumption to a
minimum, without sacrificing comfort. An ‘easy’ way of doing this is carefully monitoring
the energy performance of the building. Our data-driven models show that a benchmark
model for this purpose can be set up even without in-depth engineering knowledge and
with relatively few resources.
Second, firms like DWA could benefit from this research. Such firms often need to adhere to
energy performance contracts, meaning that a certain energy performance of the building is
‘guaranteed’ by the HVAC consultancy firm. Currently, energy performance is being
monitored and compared to a physical principles benchmark model to detect deviations.
Our models appear to be able to do the same job, but are far less time-consuming to build.
Also, they are more accurate and they can be re-trained regularly to increase performance
further.
11. Limitations and suggestions for further research
Although in this thesis we were able to create a model that is sufficiently accurate for daily
predictions of both heating and cooling loads, some limitations exist and we have
suggestions regarding potential future research. Both will be presented in this chapter.
The first limitation that needs to be acknowledged concerns the data that we had available.
Although a year’s worth of data is not necessarily too little, the long periods during which
the measurement system failed to record measurements do pose a problem. After all, it
means that for these periods (spanning multiple months) the model cannot be trained in an
accurate manner. Another clear limitation, which we discussed in Section 2.4, concerns data-driven
benchmark models in general: faulty behaviour can already be included in the ‘baseline’. In
other words, for buildings as large as the one in this study, there is no way of knowing for
certain that all components work as they should at the time the data is being gathered. Even
a new building can already have some degree of malfunctioning present. A way of dealing
with this is to maintain a physical principles model alongside the data-driven model. Of
course, this somewhat defeats the primary purpose of the data-driven model in the first place,
which is saving resources. However, given that we were able to outperform the physical
model in certain aspects, having both models as complements could still be a viable solution.
11.1 Suggestions for further research
For further research, we have the following suggestions.
First, looking at the results achieved on the 2015 holdout set, we believe that separate
summer and winter models would benefit accuracy greatly, because cooling and heating
behave completely differently depending on the season. As such, the cooling models
struggle during the winter period, and the heating models during the summer period.
Second, it would be interesting to find out the effect of applying the zero-inflation
upfront, that is, before the main models. That way, an initial model could be applied to classify
a certain prediction in either the zero or non-zero class. For the non-zero predictions, the
main models could be applied. This may give different results compared to the post-model
zero-inflation tactic we applied.
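A sketch of this upfront approach, with hypothetical stand-in models (the 18 °C cut-off and the linear "main model" are ours, purely for illustration, not part of the thesis implementation):

```python
def two_stage_predict(features, classify_zero, regress):
    """Zero-inflation applied upfront: first classify each observation as
    zero or non-zero demand, then run the main model only on the latter."""
    predictions = []
    for x in features:
        if classify_zero(x):
            predictions.append(0.0)         # stage 1: predicted zero-demand day
        else:
            predictions.append(regress(x))  # stage 2: main (e.g. ANN) model
    return predictions

# hypothetical stand-ins: classify by mean daily temperature (index 0)
classify_zero = lambda x: x[0] > 18.0      # assume no heating above 18 degrees C
regress = lambda x: 120.0 * (18.0 - x[0])  # toy linear heating model

days = [[21.0], [15.0], [5.0]]
print(two_stage_predict(days, classify_zero, regress))  # [0.0, 360.0, 1560.0]
```

In practice the stage-1 classifier would itself be trained, and misclassifications there put a hard ceiling on overall accuracy, which is exactly the trade-off this suggestion would investigate.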
Third, it would be equally interesting to find out the effect of calculating the daily loads as
the sums of the hourly loads. Although our hourly predictions are not yet on a suitable level,
in aggregate their accuracy might be better than for the current daily models.
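Aggregating hourly predictions into daily figures is straightforward; a small sketch (hour-of-day alignment is assumed):

```python
def daily_from_hourly(hourly_preds):
    """Sum each consecutive block of 24 hourly load predictions into one
    daily figure; positive and negative hourly errors may partially cancel."""
    assert len(hourly_preds) % 24 == 0, "expects whole days of hourly values"
    n_days = len(hourly_preds) // 24
    return [sum(hourly_preds[d * 24:(d + 1) * 24]) for d in range(n_days)]

hourly = [10.0] * 24 + [20.0] * 24   # two hypothetical days
print(daily_from_hourly(hourly))      # [240.0, 480.0]
```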
Fourth, as Taieb & Hyndman (2013) suggest, both daily and hourly predictions could
potentially be improved by creating a separate model for each hour. In their research,
significantly higher levels of accuracy were achieved for the hourly predictions than in the
present study. This somewhat contradicts the findings of Fay et al. (2003), who found
that sequential models generally outperform parallel models. However, the latter authors
also successfully applied a combination of sequential and parallel models to achieve their
optimal results.
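The parallel approach could be sketched as follows; the mean-per-hour "model" is a hypothetical stand-in for a real learner such as an ANN:

```python
def fit_parallel_models(records, fit):
    """Train one model per hour of day (the parallel approach):
    records are (hour, features, load) tuples and `fit` turns a list of
    (features, load) samples into a predictor for that hour."""
    by_hour = {}
    for hour, x, y in records:
        by_hour.setdefault(hour, []).append((x, y))
    return {hour: fit(samples) for hour, samples in by_hour.items()}

# hypothetical stand-in learner: predict the mean load seen for that hour
def fit_mean(samples):
    mean = sum(y for _, y in samples) / len(samples)
    return lambda x: mean

data = [(9, None, 100.0), (9, None, 120.0), (14, None, 300.0)]
models = fit_parallel_models(data, fit_mean)
print(models[9](None))   # 110.0
```

Each hourly model then sees far less volatile targets than a single model trained on all hours, at the cost of 24 times fewer training samples per model.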
Fifth, a similar extension, creating separate models for working days and non-working days,
could improve results. This has been done by, among others, Neto & Fiorelli (2008).
Sixth, national holidays were not taken into account in this study. We expect such days to
behave similarly to weekend days in terms of heating/cooling demand, such that demand
has likely been overestimated when these holidays occurred on “working days”. Correcting
for this could yield a slight increase in performance.
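A sketch of such a correction, covering only the fixed-date Dutch national holidays (the movable, Easter-based holidays such as Ascension Day would require an additional date computation):

```python
from datetime import date

# Fixed-date Dutch national holidays: New Year's Day, King's Day,
# Christmas Day and Boxing Day; (month, day) pairs.
FIXED_HOLIDAYS = {(1, 1), (4, 27), (12, 25), (12, 26)}

def behaves_like_weekend(d: date) -> bool:
    """Treat national holidays like weekend days for the demand model."""
    return d.weekday() >= 5 or (d.month, d.day) in FIXED_HOLIDAYS

print(behaves_like_weekend(date(2016, 4, 27)))  # True: King's Day (a Wednesday in 2016)
print(behaves_like_weekend(date(2016, 4, 28)))  # False: an ordinary Thursday
```

The resulting flag could replace or complement the weekday dummies already in the feature set.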
Seventh, it would be interesting to apply ensemble techniques to this problem. Fan et al.
(2014) for instance applied a combination of (among others) multiple linear regression,
ARIMA, support vector regression, random forests and k-nearest neighbours to form an
ensemble model. This model significantly outperformed the separate models.
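A weighted-average combination of base learners, with hypothetical stand-ins for the component models, could look like this (real ensembles would tune the weights on a validation set):

```python
def ensemble_predict(models, x, weights=None):
    """Combine several base models' predictions by a (weighted) average."""
    preds = [m(x) for m in models]
    if weights is None:
        return sum(preds) / len(preds)
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

# hypothetical stand-in base learners (argument: mean temperature)
linear = lambda temp: 100.0 + 10.0 * temp   # toy linear model
nearest = lambda temp: 130.0                # toy nearest-neighbour-style model

print(ensemble_predict([linear, nearest], 5.0))              # 140.0
print(ensemble_predict([linear, nearest], 5.0, [3.0, 1.0]))  # 145.0
```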
Eighth, taking into account the wind direction might add explanatory power, especially in
combination with the wind speed variable we included. Given that air infiltration is such a
large predictor of heating and cooling load – as Younes et al. (2012) showed – we believe that
including the wind direction, which is available in the KNMI data, might improve results.
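Because wind direction is a circular quantity, it should not be fed to a model as raw degrees; a common encoding uses a sine/cosine pair, sketched below. (The KNMI data also contain special codes for calm and variable wind, which would need separate handling.)

```python
import math

def encode_wind_direction(degrees):
    """Encode wind direction (degrees, 360 = north) as a sine/cosine pair
    so that 10 and 350 degrees end up close together in feature space."""
    rad = math.radians(degrees)
    return (math.sin(rad), math.cos(rad))

north = encode_wind_direction(360.0)
almost_north = encode_wind_direction(350.0)
east = encode_wind_direction(90.0)

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# 350 degrees is near 360 degrees in the encoded space, unlike in raw degrees
print(distance(north, almost_north) < distance(north, east))  # True
```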
Lastly, the study would of course greatly benefit from having more and especially more
complete data available. The large blocks that are missing in the set we used are, we believe,
an important cause of the hourly models underperforming. Also, some additional variables –
that were not monitored in the present dataset – could prove important. For instance, the
electricity consumption of the heat pumps was measured, but the electricity consumption of
the AHUs was not. Thus, we could not subtract it from total power consumption to obtain as
close an approximation of occupancy as possible.
Literature list
Bauer, M. and Scartezzini, J.-L. (1998) ‘A simplified correlation method accounting for
heating and cooling loads in energy-efficient buildings’, Energy and Buildings 27(2), 147–154.
Ben-Nakhi, A.E. and Mahmoud, M.A. (2004) ‘Cooling load prediction for buildings using
general regression neural networks’, Energy Conversion and Management 45, 2127–2141.
Bing, D. (2005) Baseline models of whole building energy consumption, ch. 3
Caudill, M. (1989) ‘Neural Network Primer Part I’, AI Expert 02-1989.
Cho, S.H., Kim, W.T., Tae, C.S. and Zaheeruddin, M. (2004) ‘Effect of length of measurement
period on accuracy of predicted annual heating energy consumption of buildings’, Energy
Conversion and Management 45(18–19), 2867–2878.
Datta, D., Tassou, S.A. and Marriott, D. (2000) ‘Application of neural networks for the
prediction of the energy consumption in a supermarket’, Proc. CLIMA 1–10.
Fan, C., Xiao, F. and Wang, S. (2014) ‘Development of prediction models for next-day
building energy consumption and peak power demand using data mining techniques’,
Applied Energy 127, 1–10.
Fan, C., Xiao, F. and Zhao, Y. (2017) ‘A short-term building cooling load prediction method
using deep learning algorithms’, Applied Energy 195, 222-233.
Fay, D., Ringwood, J.V., Condon, M. and Kelly, M. (2003) ‘24-h electrical load data sequential
or partitioned time series?’, Neurocomputing 55(3-4), 469–498.
Gelston, L. (2017) ‘Neural network training’. Accessed on July 7 2017 on
http://lauragelston.ghost.io/neural-network-training/.
González, P.A., and Zamarreño, J.M. (2005) ‘Prediction of hourly energy consumption in
buildings based on a feedback artificial neural network’, Energy and Buildings, 37(6), 595–601.
Kalogirou, S., Neocleous, C. and Schizas, C. (1997) ‘Building Heating Load Estimation Using
Artificial Neural Networks’, Proceedings of the 17th International Conference on Parallel
Architectures and Compilation Techniques 1–8.
Lai, F., Magoulès, F., and Lherminier, F. (2008) ‘Vapnik’s learning theory applied to energy
consumption forecasts in residential buildings’, International Journal of Computer Mathematics
85(0), 1563–1588.
McCaffrey, J.D. (2017) ‘Neural Network Momentum’. Accessed on July 17 2017 on
https://jamesmccaffrey.wordpress.com/2017/06/06/neural-network-momentum/.
Meyer, D. (2017) ‘Support Vector Machines: The Interface to libsvm in package e1071’.
Accessed on February 26 2017 on ftp://cran.r-
project.org/pub/R/web/packages/e1071/vignettes/svmdoc.pdf
Neto, A.H., and Fiorelli, F.A.S. (2008) ‘Comparison between detailed model simulation and
artificial neural network for forecasting building energy consumption’, Energy and Buildings
40(12), 2169–2176.
Newsham, G.R., and Birt, B.J. (2010) ‘Building-level Occupancy Data to Improve ARIMA-
based Electricity Use Forecasts’, BuildSys 2010, 13–18.
‘E-Handbook of Statistical Methods’ (2012) Accessed on July 5 2017 on
http://www.itl.nist.gov/div898/handbook/
Pérez-Lombard, L., Ortiz, J. and Pout, C. (2008) ‘A review on buildings energy consumption
information’, Energy and Buildings 40(3), 394–398.
‘Neural Net’ (2017) Accessed on July 5 2017 on
https://docs.rapidminer.com/studio/operators/modeling/predictive/neural_nets/neural_net.h
tml.
Ramanathan, R., Engle, R. F., Granger, C.W.J., Vahid, F. and Brace, C. (1997) ‘Short-run
forecasts of electricity loads and peaks’, International Journal of Forecasting 13, 161–174.
Taieb, S.B. and Hyndman, R.J. (2013) ‘A gradient boosting approach to the Kaggle load
forecasting competition’.
Ürge-Vorsatz, D., Cabeza, L.F., Serrano, S., Barreneche, C. and Petrichenko, K. (2015)
‘Heating and cooling energy trends and drivers in buildings’, Renewable and Sustainable
Energy Reviews 41, 85–98.
Wisse, K. & Abdalla, G. (2017), interviews.
Xue, X., Wang, S.W., Sun, Y.J. and Xiao, F. (2014) ‘An interactive building power demand
management strategy for facilitating smart grid optimization’, Applied Energy 116, 297–310.
Yang, Z. and Becerik-Gerber, B. (2017) ‘Assessing the impacts of real-time occupancy state
transitions on building heating/cooling loads’, Energy and Buildings 135, 201-211.
Yin, R., Xu, P., Piette, M.A. and Kiliccote, S. (2010) ‘Study on Auto-DR and pre-cooling of
commercial buildings with thermal mass in California’, Energy and Buildings, 42, 967–975.
Yuce, B., Li, H., Rezgui, Y., Petri, I., Jayan, B. and Yang, C. (2014) ‘Utilizing artificial neural
network to predict energy consumption and thermal comfort level: An indoor swimming
pool case study’, Energy and Buildings 80, 45–56.
Younes, C., Shdid, C.A. and Bitsuamlak, G. (2012) ‘Air infiltration through building
envelopes: A review’, Journal of Building Physics 35, 267.
Zhao, H.X., and Magoulès, F. (2012) ‘A review on the prediction of building energy
consumption’, Renewable and Sustainable Energy Reviews 16(6), 3586–3592.
Appendix A: AHU, Toutdoor, Toutdoor setpoint data descriptives
Variable | Unit of measurement
Date/time | –
Fan return steering signal | %
Fan supply steering signal | %
Setpoint air pressure supply channel | Pa
Air pressure supply channel | Pa
Temperature air supply | °C
RV (relative humidity) return air | %
Setpoint air blow-in temperature | °C
Valve cooling steering signal | %
Valve steam steering signal | %
Valve heating steering signal | %
Heat wheel steering signal heating | %
Air return temperature | °C
Temperature cooling verw-battery entrance | °C
Temperature cooling verw-battery exit | °C
Table 16: AHU variables.
Variable | Unit of measurement
Date/time | –
Outside temperature | °C
Measured inside temperature | °C
Table 17: Toutdoor/Tindoor variables.
Variable | Unit of measurement
Date/time | –
Outside temperature | °C
Desired inside temperature | °C
Table 18: Toutdoor/Tindoor setpoint variables.
Most variables are self-explanatory, yet we will make some remarks regarding the heat
recovery system that is part of the AHU circuit.
First of all, a ‘heat wheel steering signal’ value > 0 indicates that the heat recovery system
is active. Although we do not use the AHU data directly in our model, it does lead to
important insights: the heat wheel, when active, further deteriorates the potential linear
relationship between outside temperature and energy demand. As we saw earlier, a certain
amount of inertia is present by definition because of the building characteristics. However,
the heat recovery system complicates this relationship slightly further. Heat recovered from
the return flow does not draw any energy from the central heating circuit, unlike the heating
coil directly after the recovery system, thus introducing more nonlinearity into the system.
This notion also holds for the cooling circuit. The effect of this, however, is relatively minor.
Once the value of the ‘Air pressure supply’ variable becomes greater than the ‘Setpoint air
pressure supply channel’ variable, this indicates that the AHUs are active. The same can be
deduced from the ‘Fan return steering signal’ being > 0. It is important to note, however,
that the fans of the AHU can be active while no heating or cooling energy is being absorbed
from the central system; the latter can be deduced from the ‘Valve cooling steering signal’
and ‘Valve heating steering signal’ variables.
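The deduction rules above can be summarised in a small sketch; the parameter names mirror the Table 16 variables and the numeric values are purely illustrative:

```python
def ahu_state(air_pressure, pressure_setpoint, fan_return_signal,
              valve_cooling_signal, valve_heating_signal):
    """Deduce (fans_active, drawing_central_energy) from the monitored signals."""
    # AHU fans active: supply pressure exceeds its setpoint, or return fan runs
    fans_active = air_pressure > pressure_setpoint or fan_return_signal > 0
    # fans can run while no heating/cooling energy is drawn from the central system
    drawing_energy = valve_cooling_signal > 0 or valve_heating_signal > 0
    return fans_active, fans_active and drawing_energy

# fans on, but both valves closed: air is moved, no central energy absorbed
print(ahu_state(210.0, 200.0, 45.0, 0.0, 0.0))  # (True, False)
```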
Appendix B: Neural network visualisation and description for daily
cooling model
Figure 21: Visualisation of neural network for the daily cooling model.
Hidden 1
========
Node 1 (Sigmoid)
----------------
Weekdag_vrijdag: -0.285
Weekdag_zaterdag: 0.069
Weekdag_zondag: 0.072
Weekdag_dinsdag: -0.214
Weekdag_woensdag: -0.191
Weekdag_donderdag: -0.253
Sum of Residual_elec: -0.421
FG: 0.026
TG: -0.333
TX: -0.330
Q: -0.146
UG: 0.131
Q-1: -0.072
Q-2: -0.005
TG-1: -0.185
TG-2: -0.093
UG-1: 0.036
UG-2: 0.018
FG-1: 0.050
FG-2: 0.085
Sum of Residual_elec-1: 0.008
Sum of Residual_elec-2: 0.135
TX-1: -0.258
TX-2: -0.142
Bias: 0.280
Node 2 (Sigmoid)
----------------
Weekdag_vrijdag: 0.090
Weekdag_zaterdag: 0.059
Weekdag_zondag: 0.140
Weekdag_dinsdag: 0.152
Weekdag_woensdag: 0.157
Weekdag_donderdag: 0.084
Sum of Residual_elec: -0.003
FG: 0.094
TG: -0.131
TX: -0.124
Q: 0.130
UG: 0.002
Q-1: 0.134
Q-2: 0.119
TG-1: 0.027
TG-2: 0.031
UG-1: -0.031
UG-2: 0.033
FG-1: 0.178
FG-2: 0.161
Sum of Residual_elec-1: -0.114
Sum of Residual_elec-2: -0.032
TX-1: -0.024
TX-2: 0.075
Bias: -0.159
Node 3 (Sigmoid)
----------------
Weekdag_vrijdag: 0.444
Weekdag_zaterdag: -0.045
Weekdag_zondag: -0.001
Weekdag_dinsdag: 0.252
Weekdag_woensdag: 0.560
Weekdag_donderdag: 0.387
Sum of Residual_elec: 0.738
FG: 0.003
TG: 0.622
TX: 0.689
Q: 0.211
UG: -0.193
Q-1: 0.185
Q-2: 0.156
TG-1: 0.542
TG-2: 0.380
UG-1: 0.061
UG-2: 0.060
FG-1: -0.098
FG-2: -0.072
Sum of Residual_elec-1: -0.304
Sum of Residual_elec-2: -0.402
TX-1: 0.532
TX-2: 0.373
Bias: -0.742
Node 4 (Sigmoid)
----------------
Weekdag_vrijdag: 0.140
Weekdag_zaterdag: 0.166
Weekdag_zondag: 0.173
Weekdag_dinsdag: 0.144
Weekdag_woensdag: 0.180
Weekdag_donderdag: 0.131
Sum of Residual_elec: -0.035
FG: 0.086
TG: -0.035
TX: -0.094
Q: 0.056
UG: -0.037
Q-1: 0.049
Q-2: 0.112
TG-1: 0.006
TG-2: -0.032
UG-1: -0.037
UG-2: 0.032
FG-1: 0.117
FG-2: 0.117
Sum of Residual_elec-1: -0.076
Sum of Residual_elec-2: -0.021
TX-1: 0.013
TX-2: 0.024
Bias: -0.241
Node 5 (Sigmoid)
----------------
Weekdag_vrijdag: -0.421
Weekdag_zaterdag: 0.048
Weekdag_zondag: 0.100
Weekdag_dinsdag: -0.251
Weekdag_woensdag: -0.343
Weekdag_donderdag: -0.372
Sum of Residual_elec: -0.612
FG: 0.005
TG: -0.511
TX: -0.490
Q: -0.228
UG: 0.154
Q-1: -0.158
Q-2: -0.076
TG-1: -0.329
TG-2: -0.248
UG-1: -0.029
UG-2: 0.031
FG-1: 0.073
FG-2: 0.024
Sum of Residual_elec-1: 0.126
Sum of Residual_elec-2: 0.229
TX-1: -0.356
TX-2: -0.216
Bias: 0.470
Node 6 (Sigmoid)
----------------
Weekdag_vrijdag: 0.221
Weekdag_zaterdag: 0.041
Weekdag_zondag: 0.135
Weekdag_dinsdag: 0.174
Weekdag_woensdag: 0.200
Weekdag_donderdag: 0.171
Sum of Residual_elec: 0.220
FG: 0.083
TG: 0.140
TX: 0.223
Q: 0.165
UG: -0.036
Q-1: 0.079
Q-2: 0.097
TG-1: 0.114
TG-2: 0.061
UG-1: 0.009
UG-2: -0.042
FG-1: -0.004
FG-2: 0.030
Sum of Residual_elec-1: -0.057
Sum of Residual_elec-2: -0.050
TX-1: 0.118
TX-2: 0.107
Bias: -0.308
Node 7 (Sigmoid)
----------------
Weekdag_vrijdag: 0.170
Weekdag_zaterdag: 0.150
Weekdag_zondag: 0.148
Weekdag_dinsdag: 0.165
Weekdag_woensdag: 0.157
Weekdag_donderdag: 0.170
Sum of Residual_elec: -0.020
FG: 0.131
TG: -0.078
TX: -0.001
Q: 0.104
UG: -0.051
Q-1: 0.087
Q-2: 0.095
TG-1: 0.014
TG-2: -0.016
UG-1: -0.004
UG-2: -0.025
FG-1: 0.099
FG-2: 0.167
Sum of Residual_elec-1: -0.095
Sum of Residual_elec-2: -0.024
TX-1: 0.052
TX-2: 0.072
Bias: -0.175
Node 8 (Sigmoid)
----------------
Weekdag_vrijdag: 0.160
Weekdag_zaterdag: 0.143
Weekdag_zondag: 0.173
Weekdag_dinsdag: 0.177
Weekdag_woensdag: 0.126
Weekdag_donderdag: 0.145
Sum of Residual_elec: -0.070
FG: 0.121
TG: -0.009
TX: -0.053
Q: 0.132
UG: -0.032
Q-1: 0.168
Q-2: 0.113
TG-1: 0.030
TG-2: 0.044
UG-1: 0.011
UG-2: -0.024
FG-1: 0.181
FG-2: 0.154
Sum of Residual_elec-1: -0.100
Sum of Residual_elec-2: 0.002
TX-1: -0.004
TX-2: 0.040
Bias: -0.187
Node 9 (Sigmoid)
----------------
Weekdag_vrijdag: 0.158
Weekdag_zaterdag: 0.120
Weekdag_zondag: 0.197
Weekdag_dinsdag: 0.168
Weekdag_woensdag: 0.134
Weekdag_donderdag: 0.148
Sum of Residual_elec: -0.003
FG: 0.124
TG: -0.052
TX: -0.023
Q: 0.104
UG: -0.043
Q-1: 0.098
Q-2: 0.163
TG-1: -0.038
TG-2: 0.026
UG-1: -0.007
UG-2: -0.037
FG-1: 0.168
FG-2: 0.172
Sum of Residual_elec-1: -0.042
Sum of Residual_elec-2: -0.071
TX-1: 0.012
TX-2: 0.020
Bias: -0.156
Node 10 (Sigmoid)
-----------------
Weekdag_vrijdag: 0.190
Weekdag_zaterdag: 0.100
Weekdag_zondag: 0.152
Weekdag_dinsdag: 0.182
Weekdag_woensdag: 0.163
Weekdag_donderdag: 0.126
Sum of Residual_elec: 0.006
FG: 0.052
TG: 0.010
TX: -0.023
Q: 0.118
UG: -0.026
Q-1: 0.052
Q-2: 0.110
TG-1: -0.036
TG-2: -0.011
UG-1: -0.002
UG-2: 0.023
FG-1: 0.142
FG-2: 0.127
Sum of Residual_elec-1: -0.007
Sum of Residual_elec-2: 0.008
TX-1: 0.065
TX-2: -0.004
Bias: -0.257
Node 11 (Sigmoid)
-----------------
Weekdag_vrijdag: 0.211
Weekdag_zaterdag: 0.102
Weekdag_zondag: 0.166
Weekdag_dinsdag: 0.186
Weekdag_woensdag: 0.119
Weekdag_donderdag: 0.166
Sum of Residual_elec: -0.047
FG: 0.079
TG: -0.018
TX: 0.020
Q: 0.112
UG: -0.021
Q-1: 0.141
Q-2: 0.080
TG-1: -0.037
TG-2: 0.042
UG-1: -0.023
UG-2: 0.012
FG-1: 0.071
FG-2: 0.163
Sum of Residual_elec-1: -0.002
Sum of Residual_elec-2: -0.048
TX-1: -0.006
TX-2: 0.021
Bias: -0.202
Node 12 (Sigmoid)
-----------------
Weekdag_vrijdag: 0.016
Weekdag_zaterdag: 0.099
Weekdag_zondag: 0.099
Weekdag_dinsdag: 0.033
Weekdag_woensdag: 0.083
Weekdag_donderdag: -0.002
Sum of Residual_elec: -0.089
FG: 0.098
TG: -0.182
TX: -0.112
Q: 0.018
UG: 0.045
Q-1: 0.035
Q-2: 0.147
TG-1: -0.079
TG-2: 0.059
UG-1: -0.049
UG-2: -0.016
FG-1: 0.151
FG-2: 0.156
Sum of Residual_elec-1: -0.046
Sum of Residual_elec-2: -0.034
TX-1: -0.015
TX-2: -0.015
Bias: -0.075
Node 13 (Sigmoid)
-----------------
Weekdag_vrijdag: 0.208
Weekdag_zaterdag: 0.077
Weekdag_zondag: 0.143
Weekdag_dinsdag: 0.171
Weekdag_woensdag: 0.147
Weekdag_donderdag: 0.138
Sum of Residual_elec: 0.088
FG: 0.100
TG: 0.048
TX: 0.023
Q: 0.096
UG: -0.005
Q-1: 0.067
Q-2: 0.062
TG-1: 0.085
TG-2: 0.047
UG-1: -0.000
UG-2: 0.006
FG-1: 0.075
FG-2: 0.072
Sum of Residual_elec-1: -0.025
Sum of Residual_elec-2: -0.007
TX-1: 0.110
TX-2: 0.075
Bias: -0.252
Node 14 (Sigmoid)
-----------------
Weekdag_vrijdag: 0.066
Weekdag_zaterdag: 0.048
Weekdag_zondag: 0.156
Weekdag_dinsdag: 0.102
Weekdag_woensdag: 0.177
Weekdag_donderdag: 0.055
Sum of Residual_elec: -0.008
FG: 0.110
TG: -0.133
TX: -0.103
Q: 0.083
UG: 0.036
Q-1: 0.080
Q-2: 0.139
TG-1: -0.066
TG-2: 0.009
UG-1: -0.028
UG-2: -0.023
FG-1: 0.175
FG-2: 0.119
Sum of Residual_elec-1: -0.046
Sum of Residual_elec-2: -0.005
TX-1: -0.037
TX-2: 0.112
Bias: -0.148
Output
======
Regression (Linear)
-------------------
Node 1: 0.290
Node 2: 0.291
Node 3: -1.237
Node 4: 0.158
Node 5: 0.672
Node 6: -0.092
Node 7: 0.195
Node 8: 0.259
Node 9: 0.233
Node 10: 0.150
Node 11: 0.180
Node 12: 0.167
Node 13: 0.082
Node 14: 0.246
Threshold: -0