Linking Data: Semantic enrichment of the existing building
geometry
Jeroen Werbrouck
Supervisors: Prof. Pieter Pauwels, Willem Bekers
Counsellor: Mathias Bonduel
Master's dissertation submitted in order to obtain the academic degree of
Master of Science in de ingenieurswetenschappen: architectuur
Department of Architecture and Urban Planning
Chair: Prof. dr. ir. Arnold Janssens
Faculty of Engineering and Architecture
Academic year 2017-2018
The author gives permission to make this master dissertation available for consultation and to copy parts of this master dissertation for personal use.
In the case of any other use, the copyright terms have to be respected, in particular with regard to the obligation to state expressly the source when quoting results from this master dissertation.
Jeroen Werbrouck,
Ghent, 31 May 2018
Acknowledgements
This thesis would not have been what it is without the people who made their
time and experience available to provide feedback on the various topics that meet in
this thesis. First of all, I would like to thank Prof. Pieter Pauwels for being my supervisor
during this research, for his expertise and feedback during almost an entire year and
for his willingness to plan feedback sessions on very short notice. Many thanks also to
Mathias Bonduel, who spent a lot of time preparing the source files for this thesis, for
his continuous guidance on both remote sensing techniques and Linked Data, his
comments on the draft and his quick answers to my mails, whether during
the week or on Sundays. Thanks to Willem Bekers, who was not involved in the thesis
from the very beginning, but jumped in when the time was right to provide new
perspectives on the thesis during the feedback sessions.
I am also very grateful to Prof. Jakob Beetz, who coached me during my Erasmus
exchange at RWTH Aachen and without whom the course of the project would have
been entirely different. The insights from the M1 Projekt under his supervision proved of
great use to the research carried out in the context of this thesis. Thanks, Jakob, for
sparking my interest in ‘Architekturinformatik’, teaching me how to code, explaining
the basics of BIM and encouraging me to dive into the world of Linked Data.
Finally, I would like to thank my brother Andreas for taking the time to read the draft
version of this text and to provide it with useful comments and questions, both on
content and on writing style.
Personally, I took great pleasure in the diversity of the project, in which I could
acquire and combine knowledge about topics I had never heard of before. The
alternation between reading, writing and coding ensured the project did not bore me for a
second. I hope I have succeeded in presenting all these subjects in an interesting and
accessible way, and that this work can form a basis for future research.
Abstract

Building Information Modelling (BIM) has changed the way buildings are conceived,
planned and executed. Apart from its frequent use for as-planned buildings, BIM has
for some years also been used as a tool for digitizing existing buildings,
mostly by performing a ‘scan-to-BIM’ process. However, some inherent characteristics of
existing buildings complicate such a modelling process: uncertainties,
imperfections in the built object, and an interdisciplinarity that may go beyond the
traditional Architecture, Engineering and Construction (AEC) topics. In this thesis, a
method is proposed to integrate the concepts of scan-to-BIM into a Linked Data
context, which could provide some answers to these complications. Current modular
Semantic Web ontologies (BOT, PRODUCT, GeoSPARQL) are combined and
extended to provide an alternative, minimalist framework for creating a semantically
enriched building model of existing buildings, in the form of an RDF graph. This
‘scan-to-graph’ framework combines building topology, geospatial data and geometry
with product classes, source linking and the documentation of assumptions, with a
focus on point cloud sources. An example plugin for Rhinoceros 6 has been developed
to link such semantic information to the irregular geometries of real-world objects.
Finally, as a proof of concept, the process is demonstrated in a case study of the
Presence Chamber of the Gravensteen castle in Ghent. With point clouds as a basis, a
geometric 3D model of this room was created, semantically enriched and serialized to
an RDF graph, including geometry.

Keywords: Linked Data, existing buildings, scan-to-BIM, scan-to-graph, remote
sensing, geometry
Extended abstract (English)
In recent years, the AEC industry (Architecture, Engineering and Construction) has
made serious progress in terms of interoperability, construction planning and the execution
of building projects. This was made possible by the emergence of Building Information
Modelling (BIM): a semantically rich, digital model that contains information about
geometry, building physics, structure, the building’s execution and so on. Although
BIM is primarily used for planning and executing new building projects (‘as-planned’),
there is also an increased use of BIM for existing buildings (‘as-is’). Such a BIM of an
existing building can be used in renovation projects, heritage conservation, facility
management or management of the building life cycle, including demolition. The
primary source for ‘as-is’ building reconstruction is a point cloud, the result of remote
sensing techniques such as Terrestrial Laser Scanning (TLS) or photogrammetry. Therefore,
the process of creating a BIM with such point clouds as a blueprint is called ‘scan-to-
BIM’ (in the case of photogrammetry also sometimes called ‘photogrammetry-to-BIM’).
Several challenges currently prevent frequent application of the creation of an as-is
model (Volk et al., 2014):
(1) The costly, time-intensive and error-prone process of creating an as-built model, due to
imperfect real-world geometry. Nowadays, such conversions mostly happen manually or
semi-automatically;
(2) The lack of an efficient way of dealing with uncertain data (occlusions, internal
structure, etc.) in current BIM packages;
(3) The lack of an incentive for continuous information maintenance and assessment in
BIM.
Since BIM is primarily meant to be used in the AEC industry, additional challenges arise
when integrating knowledge from other disciplines into the model:
(1) How to deal with information about Cultural Heritage (e.g. historical events that shaped
the appearance of the building), GIS or other disciplines that are currently not (fully)
integrated in existing BIM packages (interdisciplinarity);
(2) In the case of heritage: how to deal with building elements that do not relate to
available product classes in BIM, because they are not used in modern-day
buildings.
Independent of BIM and the AEC industry, a global Semantic Web is emerging, in
which various disciplines can interconnect their knowledge at data
level, in contrast to the classic World Wide Web, where information is mainly linked
at document level. The principles of this Semantic Web are based on Linked Data
concepts, with the ‘Resource Description Framework’ (RDF) as the main ‘language’ for
making connected, semantic statements in the form of triples. The use of Linked Data
concepts specific to the AEC industry is currently still in the research phase, but the
prospects are promising, especially concerning enhanced interoperability, ‘linking across
domains’ and ‘logical inference and proofs’ (Pauwels et al., 2017).
This thesis investigates the use of Linked Data concepts as a way to overcome some of
the above mentioned challenges, more specifically dealing with uncertain data. The
integration of knowledge apart from typical AEC concepts (‘linking across domains’),
product classification and the intensive modelling process are also, to a lesser extent,
part of the research. Using existing Linked Data ontologies as well as a new ontology
that was constructed in the context of this research, a modular method will be proposed
in which the principles of scan-to-BIM are translated into a Linked Data context. This
process will be denoted as ‘scan-to-graph’.
Outline

A general introduction to the thesis is given in Chapter I. The start of the research
project involved a literature study on both remote sensing and Linked Data. The next
two chapters are conceived as brief introductions to these topics for the less acquainted
reader. Chapter II discusses the basics of Terrestrial Laser Scanning and
Photogrammetry and ends with a section on scan-to-BIM; Chapter III introduces the
main Linked Data concepts that are used in this thesis: RDF, RDFS, Ontologies,
SPARQL. It ends with an overview of current applications of Linked Data in the
AEC industry, and with a case in which Linked Data is applied to the 3D reconstruction of
built Cultural Heritage.
In Chapter IV, the development of a scan-to-graph framework is discussed. It starts
with a discussion of existing Linked Data ontologies which form the main basis of the
framework: BOT (Rasmussen et al., 2018) for the building topology, the Building
Product Ontology 1 (W3C Linked Building Data Community Group) for product
classification and GeoSPARQL (Open Geospatial Consortium) for georeferencing and
linking geometries. In addition to these existing ontologies, a specific ontology was
created in the context of this research for keeping track of sources, modelling remarks
and occlusions and the integration of 3D geometry in the graph. This ontology is by
definition compatible with the existing ones, since it likewise uses the RDF
framework, and is discussed next in this chapter. Then, a comparison between some open-
source geometry formats is made, based on storage size and quality, eventually selecting
one of them (STEP, Standard for the Exchange of Product Model Data) to include the
as-is 3D geometry in the graph. Chapter IV concludes with the combination of the
previously described topics in a proposed template for a Linked Data graph that
involves a scan-to-graph procedure.
1 https://github.com/pipauwel/product (accessed 25/05/2018)
Chapter V discusses a prototypical application that implements scan-to-graph
functionality. Such functionality is not included in existing BIM packages, which is
why the development of this application (as a plugin for Rhino 6) was considered part
of the research. The plugin provides a basic user interface for converting the as-is 3D
geometry and additional semantic attributes to a Linked Data graph, according to the
template of Chapter IV. The focus lies on the semantic enrichment of a CAD model, limited
to the topics discussed in Chapter IV. The plugin further includes functionality to
query the graph with SPARQL and to visualize the geometrical results of such a query in
Rhino. An indication of how topics that are not covered by the user interface may
nevertheless be implemented in the data graph is given at the end of the chapter.
Chapter VI covers a case study as a proof-of-concept. The first steps of a scan-to-graph
process do not differ that much from a normal scan-to-BIM process. The description of
the case study encompasses the 3D modelling process of an ‘as-is’ model of the presence
chamber in the Gravensteen Castle, Ghent (based on segmented point clouds),
geometric deviation analyses and finally the semantic enrichment.
A closing discussion is the topic of Chapter VII, along with a general conclusion and
an outlook to future work.
Deliverables

The deliverables of this thesis are:
(1) A modular and extensible ontology (STG, ‘scan-to-graph’) that provides
Linked Data definitions (classes and properties) to perform the proposed scan-
to-graph process;
(2) An extensible template for constructing a Linked Data graph of an as-is 3D
model;
(3) A plugin for Rhino 6 (IronPython 2.7 and Python 2.7) that:
- allows 3D geometry to be semantically enriched, in the end generating a Linked
Data graph of the model;
- provides functionality to query the graph and visualize geometrical results;
(4) A script to extract the geometric information contained in the graph, in
STEP format;
(5) The case study of the presence chamber of the Gravensteen Castle in Ghent
(this includes the final graph, and the geometry in a .3dm (Rhino) file).
Deliverables (1), (2), (3), and (4) are made accessible at the GitHub repository that
was created in the context of this thesis.2 The documents of the case study are not
openly accessible but can be provided on request, after permission by the city of Ghent.
2 https://github.com/JWerbrouck/scan-to-graph
Conclusion

The use of Linked Data concepts for semantic enrichment of the existing building
geometry seems a viable way of dealing with some of the challenges inherent to current
BIM for as-is reconstruction. The basic framework that is described in the text
combines existing ontologies that describe basic building data (topology, element
classification, georeferencing) and extends them by providing a way to link and manage
the sources used for reconstruction, and to keep track of general modelling remarks,
assumptions and occlusions. The implementation of geometrical information in the graph is a
valid concept, because the creation of a digital 3D model of the non-platonic geometries
inherent to existing building objects is a core objective of a scan-to-BIM process,
whether performed in a ‘classic’ BIM environment or in a Linked Data context.
The developed scan-to-graph framework and the plugin engage in current research
efforts for the development of a Linked Data version of BIM, based on multiple small,
modular ontologies, each one with its own functionality and use for a specific topic
(Rasmussen et al., 2018). It can easily be extended with other Linked Data ontologies
that provide ways to describe related topics (historical data, geographic or structural
information, etc.). In this way, this research may also serve as a reference project for
future master dissertations concerned with the semantics of existing buildings.
References
- Pauwels, P., Zhang, S., Lee, Y.-C., 2017. Semantic web technologies in AEC industry: A literature overview. Autom. Constr. 73, 145–165. https://doi.org/10.1016/j.autcon.2016.10.003
- Rasmussen, M., Pauwels, P., Hviid, C., Karlshøj, J., 2018. The BOT ontology: standards within a decentralized web-based AEC Industry. Under review.
- Volk, R., Stengel, J., Schultmann, F., 2014. Building Information Modeling (BIM) for existing buildings — Literature review and future needs. Autom. Constr. 38, 109–127. https://doi.org/10.1016/j.autcon.2013.10.023
Extended abstract (Nederlands)
In recent years, the construction industry has made great progress in the
collaboration on, planning of and execution of building projects. This was made possible
by the general emergence of Building Information Modelling (BIM): a semantically
enriched digital model that brings together information about building geometry, building
physics, structure, execution, etc. Although BIM is primarily used for planning and
executing new buildings (‘as-planned’), there is a tendency to use BIM techniques
for documenting existing buildings (‘as-is’) as well. Such a BIM model of an existing
building finds application in renovation projects, heritage care, Facility Management (FM)
and management of the Building Life Cycle (demolition included). The primary source
used for an ‘as-is’ 3D reconstruction is a point cloud, the result of remote sensing
techniques such as Terrestrial Laser Scanning (TLS) or a photogrammetric process.
Creating a BIM on the basis of such point clouds is therefore often called ‘scan-to-BIM’
(in the case of photogrammetry sometimes also ‘photogrammetry-to-BIM’). Nevertheless,
various obstacles hinder a breakthrough of scan-to-BIM (Volk et al., 2014):
(1) The time-intensive, error-prone and costly modelling process, owing to the
imperfect geometry inherent to existing buildings. At present, most modelling
processes are carried out manually or semi-automatically;
(2) The lack of an efficient way to keep track of uncertainties (occlusions, structure, etc.)
in the BIM model;
(3) The lack of an incentive to continuously update the information in a BIM.
When the assignment requires the integration of disciplines that are not directly related
to the AEC industry (Architecture, Engineering and Construction), some additional
challenges arise:
(1) How to deal with the modelling of cultural heritage (how do you integrate
historical information into the model?), and with the addition of information related
to GIS or to other disciplines;
(2) In the case of heritage: how to deal with objects that are no longer used
today and are therefore not represented by a ‘BIM
class’?
Independent of BIM (and of the construction industry as a whole), a worldwide
‘Semantic Web’ is being built, in which disciplines of all kinds can interconnect their
information at data level. This is in contrast to the familiar World Wide Web, in which
information is mostly linked at the level of documents. The principles of such a
‘Semantic Web’ are based on Linked Data, more specifically on the ‘Resource Description
Framework’ (RDF), the ‘language’ in which the information is described and connected
in the form of ‘triples’. Researchers within the AEC industry are joining this movement
as well. Although such research currently still takes place mainly at an academic level,
the prospects for construction practice are promising, particularly with regard to
collaboration (‘interoperability’), the connection of different domains (‘linking across
domains’) and logical deduction (‘logical inference and proofs’) (Pauwels et al., 2017).

This thesis investigates the use of Linked Data techniques to tackle some of the
challenges mentioned above, more specifically with regard to keeping track of
uncertainties. To a lesser extent, the integration of disciplines that are not directly
related to the construction industry, product classification and ways to deal with the
intensive modelling process are also part of the research. A modular method that
translates the principles of scan-to-BIM into a Linked Data context is introduced in
this research, based on existing Linked Data ontologies and on an ontology that was
composed specifically for this thesis. This method is referred to as ‘scan-to-graph’.
Outline

Chapter I comprises a general introduction to the thesis. A literature study on
remote sensing and Linked Data was undertaken at the start of the research. The
next two chapters are its written account, conceived as introductions for the reader
who is less familiar with the subject matter. Chapter II discusses the fundamentals
of remote sensing and closes with an exposition on scan-to-BIM. Chapter III introduces
the most important principles of Linked Data that apply to this thesis: RDF, RDFS,
ontologies and SPARQL. The chapter ends with a section on applications of Linked
Data in the construction industry and a case on the use of Linked Data for the digital
reconstruction of built heritage.

Chapter IV discusses the development of the scan-to-graph framework. First, some
existing Linked Data ontologies that served as a basis for the research are discussed:
BOT (Rasmussen et al., 2018) for the topology of the building, the Building Product
Ontology 1 (W3C Linked Building Data Community Group) for product classification
and GeoSPARQL (Open Geospatial Consortium) for geographic references and the
linking of geometries. In addition, a specific ontology was composed for this thesis for
keeping track of sources, modelling remarks and occlusions, complemented with a way
to integrate 3D geometry into the Linked Data graph. This ontology is discussed in
the second section of this chapter. It is by definition compatible with the existing
ontologies on which the scan-to-graph framework is based, because all these
ontologies are formulated in RDF. The decision which digital geometry format would
be integrated into the Linked Data graph as an example (STEP, Standard for the
Exchange of Product Model Data) was preceded by an analysis of various open
formats, which were compared on file compactness and the quality of the geometry
conversion (an open-source file format being a requirement). Chapter IV concludes by
bringing the previously discussed subjects together in a template for a Linked Data
graph relating to a scan-to-graph process.

1 https://github.com/pipauwel/product (accessed 25/05/2018)
Chapter V describes the application that was developed for a scan-to-graph process
in the context of a Linked Data-based BIM environment. Such functionality is
currently not included in existing BIM packages; the development of such an
application (conceived as a plugin for Rhino 6) was therefore considered part of the
research. The application first of all comprises a method for the semantic enrichment
of CAD geometry, bounded by the themes that were discussed in Chapter IV. In
addition, it offers the possibility to query the Linked Data graph by means of
SPARQL and to visualize geometric results. Since the domain of Linked Data is
unbounded, this section closes with a method to implement information that cannot
be linked through the user interface into the Linked Data graph nonetheless.

Chapter VI discusses a case study as an illustration of a scan-to-graph process, which
initially differs little from a classic scan-to-BIM process. The description of the case
study comprises the modelling process of an ‘as-is’ model of the presence chamber in
the Gravensteen in Ghent (on the basis of segmented point clouds), geometric
deviation analyses and, finally, the eventual semantic enrichment.

In the closing part, Chapter VII, a general evaluation and conclusion take place,
together with some prospects for future research.
Deliverables

The deliverables of this thesis are:
(1) A modular and extensible ontology (STG, ‘scan-to-graph’) that describes Linked Data
definitions for carrying out the proposed scan-to-graph process;
(2) An extensible template for a Linked Data graph of an ‘as-is’ 3D model;
(3) A plugin for Rhino 6 (IronPython 2.7 and Python 2.7) that can be used for:
- the semantic enrichment of 3D geometries by means of Linked Data;
- querying the Linked Data graph with SPARQL and visualizing
geometric results;
(4) A script to convert the geometry in the Linked Data graph back to a format
that is readable by 3D applications (STEP);
(5) The case study of the presence chamber in the Gravensteen, Ghent. This comprises
the final graph and the geometry in .3dm (Rhino) format.

Deliverables (1), (2), (3) and (4) are made available via the GitHub
repository of this work.2 The documents of the case study (5) can be obtained on
request, after permission by the city of Ghent.
Conclusion

The use of Linked Data for the semantic enrichment of the 3D geometry of existing
buildings seems a logical evolution. The model described in this text combines existing
ontologies for describing topology, product classification and geographic location. An
additional ontology describes definitions specifically for keeping track of sources,
modelling assumptions and occlusions. Implementing the geometric information in the
graph itself originates in the basic idea of scan-to-BIM (or scan-to-graph), whose very
reason for existence lies in documenting and semantically enriching the irregular 3D
geometry that is inherent to real-world objects.

The developed scan-to-graph framework and the plugin align with contemporary
research into a BIM that rests on Linked Data, based on multiple compact and
modular ontologies, each with its own functionality and speciality (Rasmussen
et al., 2018). The framework can easily be extended with definitions that describe
possibly related subjects (historical data, geographic information, structural
information, etc.). In this way, this work also hopes to form a reference for future
research into the semantic enrichment of existing buildings by means of Linked Data.
References
- Pauwels, P., Zhang, S., Lee, Y.-C., 2017. Semantic web technologies in AEC industry: A literature overview. Autom. Constr. 73, 145–165. https://doi.org/10.1016/j.autcon.2016.10.003
- Rasmussen, M., Pauwels, P., Hviid, C., Karlshøj, J., 2018. The BOT ontology: standards within a decentralized web-based AEC Industry. Under review.
- Volk, R., Stengel, J., Schultmann, F., 2014. Building Information Modeling (BIM) for existing buildings — Literature review and future needs. Autom. Constr. 38, 109–127. https://doi.org/10.1016/j.autcon.2013.10.023

2 https://github.com/JWerbrouck/scan-to-graph
Content
Acknowledgements ........................................................................................ v
Abstract ........................................................................................................ vi
Extended abstract (English) ........................................................................ vii
Extended abstract (Nederlands) .................................................................. xi
List of abbreviations ..................................................................................xviii
Chapter I: Introduction .................................................................................. 1
1.1 Main topics ......................................................................................................... 1
1.2 Research questions .............................................................................................. 3
1.2.1 Linked Data applied to scan-to-BIM ............................................................ 3
1.2.2 Development of a scan-to-Graph environment ............................................. 4
1.3 Proof-of-concept: Case study .............................................................................. 5
1.4 Thesis Structure ................................................................................................. 5
Chapter II: Remote Sensing ........................................................................... 6
2.1 Laser scanning .................................................................................................... 7
2.1.1 Principles of Laser Scanning ........................................................................ 7
2.1.2 Occlusions .................................................................................................... 8
2.1.3 Registration .................................................................................................. 9
2.2 Photogrammetry ................................................................................................. 9
2.3 Combining laser scanning and photogrammetry ............................................... 11
2.3.1 Resolution .................................................................................................. 11
2.3.2 Accessibility ............................................................................................... 12
2.3.3 Monoplotting .............................................................................................. 12
2.3.4 Edge detection ............................................................................................ 12
2.4 Scan-to-BIM ..................................................................................................... 13
2.4.1 As-planned/as-is BIM ................................................................................ 13
2.4.2 Challenges in creating an as-is model ......................................................... 14
2.5 Conclusion ........................................................................................................ 17
Chapter III: Introduction to Linked Data .................................................... 18
3.1 Note on Literature ............................................................................................ 18
3.2 Resource Description Framework ..................................................................... 19
3.2.1 Triples and Resources ................................................................................ 20
3.2.2 Uniform Resource Identifiers and Literals .................................................. 21
3.2.3 QNames ...................................................................................................... 21
3.2.4 Turtle ......................................................................................................... 22
3.3 RDF Schema and classes .................................................................................. 22
3.3.1 Classes ........................................................................................................ 22
3.3.2 Subclasses ................................................................................................... 23
3.3.3 rdfs:domain and rdfs:range ......................................................................... 23
3.3.4 Web Ontology Language ............................................................................ 24
3.4 Ontologies ......................................................................................................... 24
3.4.1 TBox and ABox ......................................................................................... 25
3.4.2 Inferencing.................................................................................................. 25
3.5 SPARQL .......................................................................................................... 26
3.6 Linked Data in the AEC industry .................................................................... 27
3.6.1 Interoperability .......................................................................................... 27
3.6.2 Linking across domains .............................................................................. 28
3.6.3 Logical inference and proof ........................................................................ 30
3.6.4 About detail ............................................................................................... 30
3.7 Linked Data in Cultural Heritage ..................................................................... 31
3.7.1 Interpretation and reconstruction .............................................................. 31
3.7.2 An event-centric model .............................................................................. 31
3.7.3 TYPE labelling .......................................................................................... 33
3.7.4 Semantic LoD ............................................................................................. 33
3.8 Conclusion ........................................................................................................ 34
Chapter IV: Methodology ............................................................................ 35
4.1 Modular Ontologies .......................................................................................... 36
4.1.1 Building Topology Ontology ...................................................................... 36
4.1.2 CHML and E-CRM .................................................................................... 37
4.1.3 Classification .............................................................................................. 38
4.1.4 GeoSPARQL .............................................................................................. 39
4.2 Geometry .......................................................................................................... 42
4.2.1 Geometry implementation in the graph ..................................................... 42
4.2.2 Geometry formats in an RDF graph .......................................................... 43
4.2.3 Rhino IDs ................................................................................................... 45
4.3 Scan-to-graph classes and properties ................................................................ 46
4.4 Graph Scheme .................................................................................................. 48
4.4.1 Topology Level ........................................................................................... 48
4.4.2 Building Element Level .............................................................................. 49
4.5 Conclusion ........................................................................................................ 50
Chapter V: Rhino Plugin ............................................................................. 51
5.1 Dependencies .................................................................................................... 51
5.2 General information .......................................................................................... 52
5.3 Project Info Tab ............................................................................................... 52
5.4 Point Clouds Tab ............................................................................................. 54
5.5 Element Tab ..................................................................................................... 55
5.6 SPARQL query tab .......................................................................................... 57
5.7 Additional content ............................................................................................ 58
5.8 Conclusion ........................................................................................................ 60
Chapter VI: Case Study ............................................................................... 61
6.1 Extraction of meaningful point clouds .............................................................. 62
6.1.1 Sources for reconstruction .......................................................................... 62
6.1.2 Extracting point clouds .............................................................................. 64
6.2 Modelling the Presence Chamber ..................................................................... 67
6.2.1 Preparation ................................................................................................ 68
6.2.2 Modelling ................................................................................................... 70
6.3 Plugin demonstration ....................................................................................... 85
6.3.1 Semantic definitions of the Eastern Wall ................................................... 85
6.3.2 Semantic definitions of the Fireplace .......................................................... 85
6.3.3 Semantic definitions of a column ................................................................ 86
6.4 Additional content ............................................................................................ 89
6.5 Conclusion ........................................................................................................ 91
Chapter VII: Discussion and conclusion ....................................................... 92
7.1 Metadata handling ........................................................................................... 92
7.2 General LD benefits applied to existing buildings ............................................ 93
7.2.1 ‘Interoperability’ ......................................................................................... 93
7.2.2 ‘Linking across domains’ ............................................................................. 93
7.3 Plugin ............................................................................................................... 94
7.4 CAD vs BIM for as-is modelling ....................................................................... 94
7.5 Conclusion ........................................................................................................ 95
Bibliography ................................................................................................. 96
Appendix A: Full-page figures ..................................................................... 99
Appendix B: Generate STEP geometry from an RDF graph .................... 106
Appendix C: STG ontology........................................................................ 107
Appendix D: STG products ....................................................................... 110
Appendix E: Presence chamber – without geometry ................................. 111
List of abbreviations
AEC Architecture, Engineering, Construction
ALS Airborne Laser Scanning
BIM Building Information Modelling
BOT Building Topology Ontology
BRep Boundary Representation
CAD Computer Aided Design
CHML Cultural Heritage Markup Language
CIDOC International Committee for Documentation
CIDOC CRM CIDOC Conceptual Reference Model
DSLR Digital Single-Lens Reflex camera
E-CRM Erlangen Conceptual Reference Model
EDM Europeana Data Model
FM Facility Management
GIS Geographic Information Systems
HBIM Heritage Building Information Modelling
HDS High Definition Surveying
IFC Industry Foundation Classes
LBD Linked Building Data
LD Linked Data
LOA Level of Accuracy
LOS Lines of sight
MEP Mechanical, Electrical, Plumbing
NURBS Non-uniform Rational B-Spline
OWL Web Ontology Language
RDF Resource Description Framework
RDFS Resource Description Framework Schema
SPARQL SPARQL Protocol and RDF Query Language
STEP Standard for the Exchange of Product Model Data
STG scan-to-Graph
TLS Terrestrial Laser Scanning
TS Total Station
.ttl Turtle file document
UI User Interface
URI Uniform Resource Identifier
URL Uniform Resource Locator
WKT Well-Known Text
Chapter I: Introduction
1.1 Main topics
The advent of Building Information Modelling (BIM) has revolutionised the way a
construction project is designed, analysed and managed. Since information exchange
between the various stakeholders involved in such a project is its main reason for
existence, BIM addresses the need for a ‘central hub’ that contains architectural,
engineering and ‘Mechanical, Electrical, Plumbing’ (MEP) information. The industry
that combines these topics is often called the AEC industry (Architecture,
Engineering, Construction). A BIM model (from here on: ‘a’ BIM) allows these
disciplines to communicate and interact in a much more streamlined way than before.
Less known than the usual use of BIM for planning a new building (‘as-planned’) is its
potential to manage existing buildings: for Facility Management (FM), renovation
projects, building demolition and Cultural Heritage. BIM for existing buildings
includes several categories:
- ‘As-constructed’ refers to a BIM for the construction phase of a building, typically
until completion;
- ‘As-built’ refers to an updated ‘as-constructed’ BIM based on what is actually built;
- ‘As-is’ refers to a BIM for the operational phase of an existing building. An as-is
BIM in the case of heritage is sometimes denoted as HBIM;
- (‘As-was’ refers to a BIM of a building that does not exist anymore).
This work primarily focuses on ‘as-is’ BIM. The results will be applicable to both
non-heritage and heritage buildings. As there is no agreed definition of what ‘BIM’
really is, a short explanation of how the term is used in this thesis is in order. In this
text, BIM refers to a semantically enriched model that essentially contains both the
topology and geometry of a building. This very broad definition allows the modular
approach used in this work to be denoted as BIM.
The procedure to create an as-is BIM differs from that for a ‘normal’ as-planned BIM.
Since a BIM for an existing building should reflect its real ‘out-of-plumb’ geometry,
conducting real-world surveys is the first step. The primary source for creating an
as-is model is a point cloud: a huge list of 3D points made by a laser scanner or
through photogrammetry. The process of making a BIM based on such point clouds is
therefore called ‘scan-to-BIM’ or ‘photogrammetry-to-BIM’ respectively (often both
denoted as ‘scan-to-BIM’) (Chapter II). When the point clouds have been processed,
the modelling takes place, alternating with a deviation analysis between the model and
the point cloud. For scan-to-BIM to become a frequently applied, mature process, it
first needs to overcome several obstacles.
The first obstacle is related to the nature of current BIM and CAD software packages:
they are optimised for planning new buildings, with an orthogonal bias and a strong
preference for ‘platonic’ bodies without irregularities. BIM (and, to a lesser extent,
CAD) workflows are not optimised for dealing with the ‘out-of-plumb’ geometries of
existing buildings. This relates to the high conversion effort from captured data to
semantic BIM objects outlined in (Volk et al., 2014): as no fully automatic algorithms
for converting point clouds to simple 3D geometry exist, the geometric conversion
nowadays largely happens in a manual or semi-automatic way. Furthermore, current
BIM packages provide only a limited number of product classes, which becomes a
problem when one needs to model rare elements that no longer have a place in newly
built structures (there is, for example, no such thing as ‘IfcCapital’ to classify a
column’s capital). Lastly, BIM’s focus on new buildings limits the implementation of
efficient ways to add metadata to the project: what was the modelling source? Which
assumptions were made when information was not available or uncertain?
The next challenge relates to the ecosystem of current BIM software. Because it is
optimized for the AEC industry, it is difficult for other disciplines to become part of
the BIM story. Geographical or historical data are only two examples that could
provide valuable attributes. Efforts are being made to integrate GIS (Geographic
Information Systems) and BIM with one another, which would, for example, enable
deeper insight into the building site, soil materials, underlying aquifers etc.
Compatibility with historical
information would be a relevant feature for as-is models specifically: for example, when
making a BIM from a building that is considered built heritage, implementing its
cultural information in the model is relevant (e.g. relating building elements to
historical events that shaped and influenced the building’s appearance). This way, a
BIM of the building could serve as a central model for a ‘virtual museum’. Such
integration of other disciplines would support BIM’s ambition to provide a central
information hub, even taking it beyond the typical topics of the AEC industry.
Meanwhile, independently of BIM, a global trend is emerging in which various
disciplines interlink their knowledge at the data level instead of the document level.
This interdisciplinary ‘web of linked data’, based on the principles of the ‘classic’
World Wide Web, is called the ‘Semantic Web’ and uses a universal standard ‘language’
for representing and connecting knowledge: the Resource Description Framework
(RDF), the basis for worldwide accessible, connected graphs of data (Chapter III).
Fig. 1.1 shows the ‘Linked Open Data Cloud’ as an illustration of different disciplines
interconnected by the Semantic Web. It is clear that some disciplines are already more
firmly rooted in the Semantic Web than others. Although still mainly in academic
environments, there is also a trend to translate the current BIM practices of the AEC
industry into the principles of Linked Data, thereby participating in the Semantic Web,
as it could provide various advantages compared with current BIM (Pauwels et al.,
2017b).
The use of Linked Data for semantic enrichment of existing buildings (as-is) will be the
core topic of this research project.
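As a foretaste of the RDF concepts introduced in Chapter III, the idea of a graph of interlinked statements can be sketched in plain Python. All identifiers below (the `ex:` names, the BOT-style terms) are illustrative assumptions in the spirit of the ontologies used later, not an actual published graph:

```python
# A tiny RDF-style graph as a set of (subject, predicate, object) triples.
# The identifiers are illustrative only; real Linked Data uses full URIs.
triples = {
    ("ex:presenceChamber", "rdf:type",            "bot:Space"),
    ("ex:easternWall",     "rdf:type",            "bot:Element"),
    ("ex:presenceChamber", "bot:adjacentElement", "ex:easternWall"),
    ("ex:easternWall",     "ex:surveyedWith",     "ex:scan2008"),
}

def objects_of(graph, subject, predicate):
    """Return every object linked to `subject` via `predicate`."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}

# Which elements bound the presence chamber?
bounding = objects_of(triples, "ex:presenceChamber", "bot:adjacentElement")
```

In a real Linked Data setting these strings become globally unique URIs and the graph is queried with SPARQL instead of a hand-written function.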
Fig. 1.1: The Linked Open Data Cloud (source: http://lod-cloud.net/. s.d. Last visited on 29/05/2018)
1.2 Research questions
1.2.1 Linked Data applied to scan-to-BIM
In this thesis, the concept of scan-to-BIM will be transferred into a Linked Data
context, to investigate whether Linked Data concepts can provide answers to the
research challenges outlined above for as-is BIM creation, specifically related to
interoperability, metadata implementation and product classification. A scan-to-BIM
process that is performed using Linked Data semantics rather than standard BIM data
structures could then be classified as ‘scan-to-graph’ (or ‘photogrammetry-to-graph’).
The main research question is how a minimal framework for such a scan-to-graph
process can be defined, based on existing, modular Linked Data ontologies,
complemented with definitions made in the context of this project (Chapter
IV). The focus will lie on the building topology, the building elements and a way to
handle modelling uncertainties. Although further extension of this framework with
historical or geographic data is not part of the research question, the different
perspectives of the AEC industry and Cultural Heritage on Linked Data will be
compared, in order to provide a critical background for developing the scan-to-graph
framework (Chapter III).
1.2.2 Development of a scan-to-Graph environment
Because an accessible user interface for performing such a ‘scan-to-graph’ process does
not yet exist, another ambition of this work is to develop a prototype of such a UI,
as a plugin for an existing 3D modelling environment (Chapter V). Since we
disconnect from existing BIM applications, a ‘classic’ 3D CAD package suffices for
geometry modelling. Such CAD software is even preferable, as it is optimised for
modelling 3D geometry, which benefits the manual conversion of point clouds to 3D
objects.1
One of the goals for the end products of this thesis is to make them as ‘open-source’ as
possible. The ‘5 ★ open data’ scheme2 is used (Fig. 1.2) as a guideline for the graphs
that are produced by the plugin and for the plugin code itself. In the best case, a 5★ ‘data
openness’ is reached: non-proprietary, URI-denoted and interlinked data. Note that in
the case of building data, general access to all data seldom occurs, because of property
rights, safety etc. Therefore, the reconstructed ‘as-is’ geometry of the case study on the
Gravensteen Castle in Ghent can only be made accessible on request. Yet this has no
influence on the data formats that are used in the proposed scan-to-graph framework:
whenever possible, open data standards will be used to further enhance interoperability
and independence from proprietary formats: E57 for point clouds, STEP (Standard for
the Exchange of Product Model Data) for geometry and RDF as the intrinsically open
framework that ‘glues’ everything together. An exception to this ambition is the
dependency on and compatibility with Rhinoceros: since the plugin runs
on Rhino 6, the creation of the graph is currently restricted to that particular
environment. Since the five stars of open data only refer to the availability of the data
and not to the way it is created, this ‘dependency’ is not really problematic. All in
all, querying the graph and extracting geometry can both happen independently
from Rhino and the plugin will be freely available at the GitHub repository of this
project.3
Fig. 1.2: Five Star open data scheme (source: http://5stardata.info/en/, accessed 3/5/2018, updated 31/08/2015)
1★ Make your stuff available on the Web
(whatever format) under an open license;
2★ Make it available as structured data (e.g.
Excel instead of image scan of a table);
3★ Make it available in a non-proprietary
open format (e.g. CSV instead of Excel);
4★ Use URIs to denote things, so that
people can point at your stuff;
5★ Link your data to other data to provide
context.
1 (Mezzino, 2017) reports several modelling constraints when using Revit for modelling geometry, especially when modelling complex shapes.
2 http://5stardata.info/en/
3 https://github.com/JWerbrouck/scan-to-Graph
1.3 Proof-of-concept: Case study
As an illustration of both the proposed scan-to-graph framework and the plugin, a case
study on the presence chamber of the Gravensteen Castle in Ghent will be carried out
(Chapter VI). The case study includes segmenting the available point clouds, as-is
modelling of the 3D model, geometric comparison (deviation analysis) with the point
cloud sources and finally creating a Linked Data graph of the model, by use of the
plugin.
1.4 Thesis Structure
Apart from the research itself, it is this thesis’ ambition to be comprehensible to anyone
with a background in construction and architecture, especially students who are writing
a master’s thesis on a similar subject and need to acquire a background in the topics
of remote sensing and Linked Data. Because both remote sensing and Linked Data are
typically not included in an architecture student’s curriculum, the next two chapters
offer an introduction to these topics and their applications and implications for the
AEC industry. The fourth chapter discusses the general approach and decisions that
led to the template that will be the basis for the data graphs that are generated by the
plugin. It includes ontologies, geometry and the general structure of the graphs. The
functionality of the plugin itself will be discussed in Chapter V, as well as a method to
overcome its current limitations. Chapter VI illustrates the proposed scan-to-graph
process and the plugin by a case study of the presence chamber of the Gravensteen
Castle in Ghent. As emphasis is laid on the methods used, this chapter sometimes takes
the form of a tutorial. It starts with an overview of the available point cloud sources
and the method that is used to ‘prepare’ the point clouds for modelling in Rhino. This
is followed by the modelling stage, geometric deviation analysis and finally, the
serialization into an RDF-based graph. Finally, Chapter VII combines retrospect and
prospect in a discussion about the project and some perspectives on possible future
work.
Chapter II: Remote Sensing
Whenever dealing with existing buildings (whether for heritage, renovation,
education…), access to reliable information is of the utmost importance. Although
original plans and sections mostly provide a decent idea of a building ‘as-planned’, they
do not provide information about its actual ‘as-is’ or ‘as-built’ state. To obtain the as-is
building geometry, real-world surveys are inevitable. There are a number of possible
techniques to acquire spatial information about a structure, which are depicted in Figure
2.1. These techniques can be grouped into two large groups: non-contact techniques and
contact techniques. Nowadays, the most efficient methods are non-contact techniques:
they are quick, reliable and do not require touching fragile or unreachable surfaces.
Non-contact techniques are often referred to as ‘remote sensing’ techniques.
A total station (TS) measures exact locations of discrete points on a surface, which
can be used as ‘control points’ to reconstruct the building’s geometry digitally. Laser
scanners and photogrammetric surveys, by contrast, return an immense amount of
data in the form of a ‘point cloud’, a digital spatial representation of the object
containing information about millions of points. Therefore, laser scanning and
photogrammetry are called High Definition Surveying (HDS) techniques. Point clouds
are de facto the most complete sources for digitalisation of real-world objects (buildings,
landscapes …) that are currently available. However, they are essentially just a
collection of vectors containing spatial coordinates, sometimes also incorporating RGB
(colour) values, normals and an intensity value. They do not provide any
information about the nature of the object being scanned (is it a door, a chair, a
column?), its boundaries or the membership of a larger surface: each point is just
‘hanging’ in the digital space, independent from the others. Although a point cloud
provides a lot of geometrical information, it is inherently semantically poor. This means
there is a huge amount of data (gigabyte file sizes being the rule rather than the
exception), with no return in terms of typological information. Therefore, in many
cases, point clouds serve as a basis for more storage-friendly and semantically rich
models. In the construction industry, this is mostly a Building Information Model
(BIM).

Fig. 2.1: Systematic overview of data capturing and surveying techniques to gather existing buildings' information (source: Volk et al., 2014)
This chapter starts with an introduction to both Terrestrial Laser Scanning (TLS)
(section 2.1) and Photogrammetry (section 2.2). Then, some situations are discussed
in which both techniques are complementary (section 2.3). Finally, the concept of ‘Scan-
to-BIM’ is introduced: the process of modelling a BIM with point clouds as a blueprint
(section 2.4).
2.1 Laser scanning
Within the field of laser scanning, several techniques for capturing geometrical data
exist. Airborne Laser Scanning (ALS) is used mostly when digitizing entire
topographies (e.g. for geographical or archaeological surveys), since the airplane to
which the scanner is attached can fly over the landscape without any obstructions.
When the scanning positions are located on the ground, the process is called Terrestrial
Laser Scanning (TLS). Apart from ‘static’ TLS (the scanner mounted on a tripod),
there are some subtechniques such as Mobile Laser Scanning (the scanner attached to
a vehicle) or Handheld Laser Scanning, both dealing with movements of the scanner
and therefore typically less accurate than static TLS. For building projects, static TLS
is used the most.
2.1.1 Principles of Laser Scanning
The basic principle of laser scanning is straightforward. As the scanner rotates around
its axes, thousands (up to a million or more) of laser pulses are generated each second;
these are reflected off whatever object they ‘collide’ with and recaptured by the
scanner’s receiving sensor. Then, several methods can serve to determine the distance
of the scanner to the point of reflection. One of them is the so-called direct ‘time-of-
flight’ method (Fig. 2.2): in short, to obtain the distance between scanner and measured
point, it multiplies the time t between sending and receiving the pulse with the velocity
of light c, then dividing it by two, because the pulse travels twice the scanner-point
distance: d = c * t/2. The other common method involves a phase difference: the
distance is calculated by emitting a continuous laser beam with a slightly changing
wave phase and measuring the phase difference between the emitted and received wave.
A scanner that uses phase difference typically has a shorter range, but a higher rate
than one that uses time-of-flight (Nuttens et al., 2014).
Fig. 2.2: principle of Time-of-Flight (source: van Genechten, 2008)
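Both distance principles can be sketched in a few lines of Python. The time-of-flight line is the d = c · t/2 formula above; the phase-difference line uses the textbook single-frequency form, where c/(2f) is the ambiguity interval. The echo time and modulation frequency below are illustrative assumptions, not values from a specific scanner:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(t: float) -> float:
    """Time-of-flight: the pulse covers the scanner-point distance
    twice, so d = c * t / 2."""
    return C * t / 2.0

def phase_distance(delta_phi: float, f_mod: float) -> float:
    """Single-frequency phase difference: the fraction of a full phase
    cycle times the ambiguity interval c / (2 * f_mod)."""
    return (C / (2.0 * f_mod)) * (delta_phi / (2.0 * math.pi))

# An echo received after 200 ns corresponds to roughly 30 m:
d_tof = tof_distance(200e-9)
# Half a cycle at a 10 MHz modulation frequency corresponds to ~7.5 m:
d_phase = phase_distance(math.pi, 10e6)
```

Real phase-difference scanners combine several modulation frequencies to resolve the ambiguity interval, which this sketch ignores.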
The resulting point cloud depends greatly on the scanner being used, even when using
the same technique. Other influencing factors are the operating settings, the calibration
of the scanner, incident angle, the object’s material and the registration process. Table
2.1 shows a comparison between typical orders of magnitude for time-of-flight scanners
and phase difference scanners.
                    Time-of-flight              Phase difference
Scan rate           10,000 – 300,000 points/s   ~1 million points/s
Minimum distance    1 – 5 m                     0.3 – 0.5 m
Maximum distance    300 – 6,000 m               80 – 180 m
Precision (length)  3 – 5 mm @ 50 m             2 – 3 mm @ 50 m
Precision (angle)   0.0002 – 0.01°              0.001 – 0.007°
Weight              10 – 20 kg                  5 – 15 kg
Table 2.1: comparison between time-of-flight and phase-difference scanners (source: Dubois et al., 2017)
2.1.2 Occlusions
A laser scanner relies on ‘Lines of Sight’ (LOS): it cannot ‘see’ points that lie behind
an opaque object. Zones with large ‘occlusions’ (or ‘shadows’) occur frequently within
one individual scan. Figure 2.3 shows a room scanned in a survey of the Gravensteen
Castle executed by the city of Ghent in 2008. The room was digitized with one single
scan, resulting in a point cloud with large occlusions. Taking multiple scans at different
locations reduces the occlusions because, most of the time, there will be at least one
position from where the scanner can ‘see’ the parts of the objects that were occluded in
other scans.
Fig. 2.3: Occlusions in a TLS scan of the Gravensteen by the City of Ghent, 2008. The circular occluded area indicates the scanner position. (software: Autodesk Recap Pro)
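The line-of-sight reasoning above can be sketched as a minimal 2D visibility test: a point is occluded when the straight segment from scanner to point crosses an opaque obstacle. The coordinates below are invented for illustration:

```python
def _ccw(a, b, c):
    """Signed area test: positive if a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _crosses(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    return (_ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0 and
            _ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0)

def visible(scanner, point, occluders):
    """A point is in the scanner's line of sight if no occluder
    segment blocks the straight path between them."""
    return not any(_crosses(scanner, point, a, b) for a, b in occluders)

wall = ((2.0, -1.0), (2.0, 1.0))                   # an opaque obstacle
blocked = visible((0.0, 0.0), (4.0, 0.0), [wall])  # wall in the way
seen = visible((0.0, 2.0), (4.0, 0.0), [wall])     # second scan position
```

The second scanner position ‘sees’ the point that the first one cannot, which is exactly why multiple scans reduce occlusions.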
2.1.3 Registration
By taking several scans within one room, a dense overlap between points is created,
which can be used for aligning the scans to each other, a process called ‘registration’.
Put simply: detecting zones that represent the same space in the different point
clouds allows one point cloud to be spatially transformed so that it aligns with the
other (using closest-point algorithms etc.).4 Although manual registration is still an
option in registration software packages, registration is nowadays mostly done
automatically.
After the alignment of the point clouds, the total cloud is typically georeferenced. This
means the aligned point clouds are linked to the location of the scanned object in the
world. In order to do this, the location has to be measured during the survey, e.g. by
use of a GPS.
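The core of such an alignment step can be sketched in pure Python for the 2D case with known point correspondences. This is only the least-squares fitting step; real registration software solves the harder problem of finding the correspondences (e.g. via closest-point iterations) and works in 3D. The example point sets are invented:

```python
import math

def align_2d(source, target):
    """Least-squares rotation + translation mapping `source` onto
    `target`, assuming the i-th points already correspond (the fitting
    step of a closest-point registration)."""
    n = len(source)
    sc = (sum(p[0] for p in source) / n, sum(p[1] for p in source) / n)
    tc = (sum(p[0] for p in target) / n, sum(p[1] for p in target) / n)
    num = den = 0.0
    for (sx, sy), (tx, ty) in zip(source, target):
        sx, sy, tx, ty = sx - sc[0], sy - sc[1], tx - tc[0], ty - tc[1]
        num += sx * ty - sy * tx   # "sine" part of the cross-covariance
        den += sx * tx + sy * ty   # "cosine" part
    return math.atan2(num, den), sc, tc

def transform(theta, sc, tc, p):
    """Rotate p about the source centroid, then shift to the target centroid."""
    x, y = p[0] - sc[0], p[1] - sc[1]
    return (tc[0] + x * math.cos(theta) - y * math.sin(theta),
            tc[1] + x * math.sin(theta) + y * math.cos(theta))

source = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
target = [(5.0, 5.0), (5.0, 6.0), (4.0, 5.0)]  # source rotated 90° and shifted
theta, sc, tc = align_2d(source, target)
```

The recovered angle is 90°, and applying the transform maps each source point onto its target counterpart.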
2.2 Photogrammetry
“Photogrammetry encompasses methods of image measurement and interpretation in
order to derive the shape and location of an object from one or more photographs of
that object” (Luhmann et al., 2006). The basic concepts of photogrammetry are not
new; they have existed almost as long as photography itself: in 1859, the Frenchman
Laussedat explained to a commission of the Paris Academy of Sciences how one could
find 3D coordinates of an object based on two photographs (Kraus, 2012). In 1885, the German
Meydenbauer established the first photogrammetric institute for documenting heritage
objects (Albertz, 2002). The principles they used were rooted in the same mathematical
equations that are used today in digital object reconstruction based on photographs.
The greatest advantage of photogrammetry over laser scanning is the cost of the
equipment, which is typically a lot lower than the cost of a TLS scanner (although
there exist specialized photogrammetry cameras for which the price tag approaches the
price tag of a TLS).
Although photogrammetric measurement is, given the right circumstances and to a
certain degree, possible with two images (“stereophotogrammetry”), a ‘complete’ 3D
model can only be accomplished when multiple overlapping images are combined and
oriented in a three-dimensional space. Detection of corresponding features between
pictures used to be done by hand. Now, computer vision algorithms are capable of
detecting these features both quickly and accurately.
When pixel zones with similar characteristics in different pictures are linked to each
other, their 2D coordinates in the picture systems can be used to resolve the
mathematical equations that define the photograph’s location relative to the other
photographs (‘Relative Orientation’) and in the object’s 3D system (‘Absolute
Orientation’) (Fig. 2.4). After the position of the photographs is calculated, a ray is set
4 This is the method used in a cloud-to-cloud based registration process.
out, defined by a pixel and the focal point of the camera objective. The (interpolated)
intersection of this ray with the ray coming from the equivalent pixel in another
photograph reconstructs the point spatially (Fig. 2.4). Further iterations and computer
vision algorithms can create an extremely dense network of points, resulting in a point
cloud comparable with those made by a laser scanner. Figure 2.5 shows an extremely
dense photogrammetry-based point cloud made as a test case on a part of the Ponttor
in Aachen, Germany. Note that for making an as-is geometrical model of an entire
building, such resolution is unnecessary (and might even be counterproductive by
slowing down the computer). Furthermore, the density of the points is no indication
of the (measured) accuracy of the point cloud (section 2.4.2.2).
Fig. 2.5: Dense point cloud reconstruction from a part of the Ponttor in Aachen, Germany (source: Author; camera: Pentax K-50; software: Agisoft Photoscan Professional (trial version))
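The ‘interpolated intersection’ of two pixel rays can be sketched as the midpoint of the shortest segment between two 3D rays. The camera positions and ray directions below are invented for illustration; real reconstruction software additionally estimates the camera orientations themselves:

```python
def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t*d1 and
    c2 + s*d2: the 'interpolated intersection' of two pixel rays."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = tuple(x - y for x, y in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = tuple(ci + t * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + s * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(p1, p2))

# Two slightly skew rays: the closest points on them are (5, 0, 0) and
# (5, 0, 1), so the reconstructed point is their midpoint (5, 0, 0.5).
point = triangulate((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                    (5.0, -5.0, 1.0), (0.0, 1.0, 0.0))
```

Because image measurements are noisy, corresponding rays rarely intersect exactly, which is why the midpoint (rather than a true intersection) is used.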
In the case of laser scanning, the equipment has a considerable influence on the resulting
point cloud. This is also the case when using photogrammetry (sensor size, objective),
but additionally, there is the impact of lighting and weather conditions, camera settings
and the quality of the feature detection and reconstruction algorithms. Additionally, a
photogrammetry-based point cloud has to be set to the correct 1:1 scale by additional
on-site reference measurements.5 Furthermore, the topics on occlusions, georeferencing
and registration remain valid for photogrammetry-based point clouds. Because this
thesis is not a work on photogrammetry, an elaborate discussion of the mathematical
and geometrical background of photogrammetry is not relevant here. The interested
reader can find more information in (Kraus, 2012; Luhmann et al., 2014).

Fig. 2.4: Picture association in a multi-image setting (source: Schwermann, R., lecture slides for the course ‘Photogrammetrie’ at RWTH Aachen, 2017-2018)
2.3 Combining laser scanning and photogrammetry
Although laser scanning and photogrammetry may seem competing surveying methods,
they can complement each other in many cases. The following section discusses some
differences and how they can be combined.
2.3.1 Resolution
Because the respective point clouds originate in a different way, they are not exactly
the same. This starts at the resolution level: photogrammetry typically performs better
in terms of resolution as a function of distance (Fig. 2.6b). On the other hand, the
accuracy of a photogrammetric process decreases quadratically with the distance, while
in the case of a laser scanner, this happens linearly (Fig. 2.6a). In theory, when point
clouds are available from both laser scanner and photogrammetry, they can be
combined to achieve a point cloud that has best of both: the more accurate laser-
scanned point clouds can serve as a basis to ‘correct’ the less accurate but denser (higher
resolution) photogrammetry-based point clouds. However, in practice, it rarely happens
that the same object is scanned multiple times with different methods. Additionally,
the quality of the resulting point cloud depends on the merging algorithm being used.
Fig. 2.6a-b: Comparing TLS and photogrammetry in point deviation (left) and resolution (right) as a function of the measuring distance (source: R. Schwermann,
lecture slides for the course ‘Photogrammetrie’ at RWTH Aachen, (2017-2018))
5 An indication may nevertheless be obtained through trigonometry if the sensor size and focal length are known. For the on-site measurements, for example, a TLS or TS can be used.
2.3.2 Accessibility
Because photogrammetry essentially only needs a camera on site, the equipment is
much lighter and more manageable than in the case of laser scanning.6 It can, for
example, be used for capturing information in very small rooms (e.g. winding staircases
or sanitary rooms). Further on, a camera is light enough to be mounted on a drone,
which can take aerial pictures, thereby collecting information about the building’s
roof or other places that would in no way be accessible with heavy TLS equipment (in
the case of Airborne Laser Scanning (e.g. for detailed geographic surveys), the scanner
is mostly mounted on a plane, although exceptions exist7).
2.3.3 Monoplotting
Another technique that combines laser scanning and photographs is called
‘monoplotting’. Admittedly, monoplotting does not involve ‘real’ photogrammetry,
since the pictures do not deliver any spatial information; as indicated in the previous
section, recovering 3D information is only possible when there are at least two
different, overlapping pictures. However, when the geometry of the object is already
known (e.g. because there is a laser-scanned point cloud or the surface is a simple
plane), a single image can provide additional information about the object. For
example, a point cloud originating from a laser scan only has x, y, z coordinates and
an intensity value I. ‘Plotting’ the image from the correct angle onto the point cloud
provides it with colouring information, valuable for visualizing, modelling or mapping
detailed multispectral information on the points for further analysis (Liao and Huang,
2012). Most contemporary laser scanners have a built-in high-resolution camera that
takes pictures immediately after scanning, ‘automatically’ adding an RGB dimension
to the point cloud. Figure 2.7 shows a ‘Riegl VZ-2000i’ laser scanner, which literally
has a DSLR camera mounted on top (mostly, it is integrated). Correction for the
different locations of scanner and camera is done afterwards.
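The core of monoplotting — projecting a known 3D point into a single image to look up its colour — can be sketched with an ideal pinhole camera model. The following is a minimal illustration only: the focal length, principal point and the toy points and image below are assumptions, not values from any real scanner or camera.

```python
def project_point(point, focal_px, cx, cy):
    """Project a 3D point (camera coordinates, z pointing forward)
    onto the image plane of an ideal pinhole camera."""
    x, y, z = point
    if z <= 0:
        return None  # point lies behind the camera
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

def colourize(points, image, focal_px, cx, cy):
    """Attach an RGB value from `image` (a dict pixel -> colour,
    standing in for a real raster) to every visible point."""
    coloured = []
    for p in points:
        uv = project_point(p, focal_px, cx, cy)
        if uv is None:
            continue
        pixel = (int(round(uv[0])), int(round(uv[1])))
        if pixel in image:
            coloured.append((p, image[pixel]))
    return coloured

# One point a metre in front of the camera, slightly to the right,
# and one behind the camera (which is skipped).
points = [(0.1, 0.0, 1.0), (0.0, 0.0, -1.0)]
image = {(600, 500): (200, 180, 160)}  # a single known pixel colour
result = colourize(points, image, focal_px=1000, cx=500, cy=500)
```

In practice the scanner-to-camera offset mentioned above means the points must first be transformed into the camera’s coordinate frame before such a projection is applied.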
2.3.4 Edge detection
Although laser scanning is one of the most complete surveying methods existing right
now, automatic edge detection remains an issue that is inherent to the technique.
Because the scanner captures individual points, an object’s edges can only be
approximated. On the other hand, automatic edge detection in photographs has improved
considerably, so that the positions of corners and edges can be estimated at
6 Note: as indicated before, to bring the resulting point cloud to the exact 1:1 scale, additional measurement on-site is nevertheless needed. For georeferencing the point cloud, additional on-site GPS equipment is necessary.
7 https://www.3dlasermapping.com/riegl-uav-laser-scanners/, accessed 25/05/2018
Fig. 2.7: Riegl VZ-400 (source: http://www.riegl.com, accessed 31/05/2018)
a sub-pixel level.8 Spatially situating the detected edges can help in segmenting the point
cloud into object regions, which in turn may be used for automatic object recognition.
Reliable object recognition for point clouds is currently still in the research phase, but
along with the progress made in the field of computer vision, this holds promising
prospects regarding the automatic conversion of point clouds to Building Information
Models.
2.4 Scan-to-BIM
As indicated in the introduction of this chapter, point clouds as such do not contain
any semantic information, as they are in fact just points distributed in space,
often provided with some extra dimensions (intensity, colour …). The ‘scan-to-BIM’
process involves the creation of a semantically rich Building Information Model from
an existing building, with point clouds as a basis. The creation of such an as-is BIM
opens a whole range of possibilities, regarding the management and planning of
renovation projects, cultural heritage conservation, analysis of the building life cycle,
facility management and more. Apart from serving as a blueprint for the creation of a
BIM model, the point cloud can also serve as an important instrument of control: e.g.
by regularly making scans of the project under construction, the construction process
can be monitored in real-time (Dubois et al., 2017). This section discusses the benefits
and the challenges of creating a BIM from an existing building, which may be based
on point clouds, plans, sections and other documentation that is available, such as an
as-planned BIM (which can show large differences from the actual building). In-depth
discussions regarding scan-to-BIM processes are documented by Thomson (2016) and
Mezzino (2017).
2.4.1 As-planned/as-is BIM
The benefits gained from having a BIM and its influence on a structure’s life cycle
depend on the moment it is created. When created at the early conception of a project,
a BIM helps to manage nearly all stages of the building construction, from inception
until far into the production phase. A BIM that is an updated version of an as-planned
BIM or made from an already existing building can additionally improve the building
maintenance, renovation, regulatory compliance and, finally, demolition
processes. Figure 2.8 shows the different ways a BIM model can be created, along with
their influence on the building life cycle. Note that renovations are included in
‘maintenance’.
8 source: R. Schwermann, notes for the course ‘Photogrammetrie’ at RWTH Aachen (2017-2018)
Fig. 2.8: BIM model creation processes in new or existing buildings depending on available, pre-existing BIM and LC stages with their related requirements (Source: Volk et al., 2014)
2.4.2 Challenges in creating an as-is model
Despite its usefulness, the use of BIM for existing buildings is currently limited.
According to Volk et al. (2014), this so far scarce implementation has several reasons
(see also Chapter I):
- The costly, time-intensive and error-prone process of reverse-engineering point clouds
to an as-is BIM, which happens nowadays mostly manually or semi-automatically;
- The handling and modelling of uncertain data, objects and relations due to the lack of
information, occlusion in point clouds or information which is simply not accessible with
remote-sensing techniques (e.g. the constituent materials of a wall);
- The lack of an incentive for continuous information maintenance and assessment in
BIM.
Other challenges include the interdisciplinarity often needed in such projects (heritage,
GIS, FM, etc.) and the need for an efficient classification system for non-typical
building elements. Since the number of different products is countless, such a
classification system would rather be an extendable framework than the limited set
of modern-product classes provided by current BIM packages (section 3.6.4).
However, the need for having a tool for efficiently managing (and adapting) the existing
building stock will only continue to grow as more and more governments are restricting
the unbridled expansion of urban development (e.g. the 2040 ‘Betonstop’ planned in
Flanders), making renovations of existing buildings more relevant. Furthermore, an
efficient and ecological use of resources in the building life cycle becomes more
important. Having a centralized model of the existing building thus becomes more
interesting, despite the problems outlined above. The amount of work that goes into
such a model depends on the building age: either the building has been built quite
recently, which means there may be an already existing BIM that could be updated
(‘as-built’), or it dates from the pre-BIM epoch, which means that, in the best case,
there are other documents available (plans, sections …). If the available documents are
not sufficient, surveying techniques such as laser scanning and photogrammetry can fill
the void, and a scan-to-BIM process is carried out.
2.4.2.1 Intensive modelling process
The intensive modelling process is the greatest obstacle to creating as-is and as-built BIMs.
A building is never built exactly as planned, and perfect orthogonality is seldom
observed in existing buildings, especially when it concerns heritage. “The challenge then
becomes how does one represent the real world out-of-plumb conditions? […] To
complicate matters further, many of today’s design software packages prefer to work
in an orthogonal fashion and are limited in their ability to accurately represent out-of-
plumb conditions” (U.S. Institute on Building Documentation, 2016). Orthogonal
modelling thus brings many errors into the model, while non-orthogonal modelling is
much more time-consuming.
An automatic approach that segments and labels the point cloud and efficiently and
accurately generates a BIM from it will not become reality in the near future,
although a lot of research is going on to enhance (semi-)automatic object
recognition and geometry reconstruction from segmented point clouds. The last part
of section 2.3 already indicated that there are currently no fully automatic solutions
yet. Advanced semi-automatic algorithms (e.g. ‘FARO PointSense’ (FARO), ‘Leica
CloudWorx’ (Leica), DURAARK Software Prototype (DURAARK, academic), etc.)
can already significantly reduce the time that is spent on modelling, but still require a
lot of user input. Progress made in the fields of deep learning and computer vision,
along with semantic nets, has already provided a basis for more semi-automated
algorithms (e.g. Bassier et al., 2017; Nan et al., 2012; Ochmann et al., 2016; Shi et al.,
2016) and is expected to further automate this process.
2.4.2.2 LOA specification
In order to specify the accuracy requirements of the model, the ‘USIBD Level of
Accuracy (LOA) Specification Guide’ (U.S. Institute on Building Documentation,
2016) defines five LoA levels (Table 2.2), ranging from LOA10 (low accuracy) to
LOA50 (high accuracy). The stated distance ranges correspond to a 2σ confidence
interval, i.e. approximately 95% of the measured deviations fall within the stated
range. A difference is also made between the real
object surface and the measurement representation (e.g. point clouds), since no
measuring method is exact. Therefore, the framework defines two types of accuracy:
Measured Accuracy and Represented Accuracy. Measured Accuracy is about the final
measurements, regardless of the method used to acquire those measurements.
Represented Accuracy stands for the deviation range once the measured data is
processed into some other form (e.g. a 2D drawing or 3D model). The difference
between the true object surface, measured points and represented surface is shown in
Figure 2.9.
The next challenge is choosing the method for analysing the model’s (or its constituent
parts’) LoA, since different methods exist and yield different results. The simplest method
is determining, for each point (possibly after subsampling first), the distance to the
closest point on the model. An extension proposed by Bonduel et al. (2017)
divides the determination process into two steps, with an analysis on macro- (the entire
building) and microscale (separate building elements). The microscale method takes
occlusions into account by segmenting the point cloud into occluded and non-occluded
parts. This allows determining the actual LoA of the building element in a quantitative
manner.
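The simplest method described above — a per-point distance to the nearest model point, summarized at the 95th percentile as in the 2σ ranges of the LOA specification — can be sketched as follows. This is an illustrative reduction: a real model surface would be meshed rather than reduced to reference points, and the coordinates are made up.

```python
import math

def closest_distance(point, model_points):
    """Distance from a measured point to its nearest model point."""
    return min(math.dist(point, m) for m in model_points)

def loa_deviation(scan_points, model_points, percentile=95):
    """Per-point deviations, plus the value below which `percentile`
    percent of them fall (cf. the 2σ ranges of the LOA guide)."""
    deviations = sorted(closest_distance(p, model_points) for p in scan_points)
    index = min(len(deviations) - 1,
                int(round(percentile / 100 * len(deviations))) - 1)
    return deviations, deviations[index]

# Toy example: a planar wall modelled at x = 0, scanned with small offsets (m)
model = [(0.0, float(y), float(z)) for y in range(3) for z in range(3)]
scan = [(0.002, 0.0, 0.0), (0.004, 1.0, 1.0), (0.015, 2.0, 2.0), (0.001, 1.0, 2.0)]
deviations, p95 = loa_deviation(scan, model)
```

The resulting value could then be compared against the LOA band the project specifies; the microscale variant would first split `scan` into occluded and non-occluded subsets and evaluate them separately.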
2.4.2.3 Documenting uncertainties
The uncertainties and assumptions that are made while making the digital model need
to be kept and linked to the BIM model, as well as the sources they are based on. It
would be very misleading if assumed object geometries (e.g. when the object is partly
occluded in the source point cloud) or attributes (e.g. the constituent layers of a wall)
were modelled as if they were known for certain, especially when working in a team.
Assuming the boundaries of an object that is occluded may be based on rational
thought, but it nevertheless remains an assumption. In the same way, point cloud
clutter around glazing or other reflecting elements may prevent the correct
modelling of the glass thickness. Such uncertainties may influence the overall outcome
of building performance analyses, structural analyses etc. Applications therefore also
need to know which data is uncertain (preferably in a quantified way), so they can take
this into account in the probability of their outcome. Keeping track of assumptions and
the sources the model is based on may also indicate the need for further surveys.
Table 2.2: LoA specification according to the USIBD Level of Accuracy (LOA) Specification Guide (source: U.S. Institute on Building Documentation, 2016)
Fig. 2.9: difference between the true object’s surface, the measured points and the modelled representation (source: U.S. Institute on Building Documentation, 2016)
No established standards to link such assumptions to a BIM were found, as current
BIM standards mainly focus on new buildings. However, Barbosa et al. (2016) argue
they could be extended to apply to existing ones, too, by incorporating guidelines (both
for software developers and end-users) for coping with the above outlined issues. For
example, to better handle uncertainties or to provide better integration of data
formats that are used in an as-is modelling process (e.g. point clouds). Keeping
metadata about the assumptions, deviations and more may be done using Linked Data
techniques, applied to a graph-based BIM rather than to a ‘regular’ BIM (Chapter II,
III).
2.4.2.4 Information maintenance and assessment
Information maintenance and keeping the model up to date hugely depend on the
facility manager and the amount of work needed to do this. Augmented Reality
applications that link the real building to a BIM or a semantic graph could lower the
threshold for active participation of facility managers in keeping the model up to date
and also keep track of changes that have been made earlier.9
2.5 Conclusion
This chapter introduced TLS and Photogrammetry, two remote sensing techniques
that generate a digital point cloud as output, which can be used for digital
reconstruction of buildings. Apart from the main working principles, the factors that
influence the outcome have been discussed. It has been shown that these methods can
complement each other to keep the best elements of both. Lastly, the principle of the
scan-to-BIM process was laid out, as well as the main challenges current scan-to-BIM
practice struggles with.
9 E.g.: https://www.bambouwentechniek.nl/nieuws/2017/11/augmented-reality-in-facilitair-management (accessed 10/04/2018, updated 10/11/2017)
Chapter III: Introduction to Linked Data
Together with scan-to-BIM, the focus of this thesis lies on Linked Data. This chapter
will give a layman’s introduction to the most important concepts of Linked Data. Key
concepts, such as RDF (3.2), RDFS (3.3), ontologies (3.4) and SPARQL (3.5) will be
discussed, followed by a review of some Linked Data applications in the Architecture,
Engineering and Construction (AEC) Industry, which are becoming more and more
important in current practice (3.6). The last section discusses the application of Linked
Data in built cultural heritage, using the Cultural Heritage Markup Language (CHML)
as the main example (3.7).
The basic concepts of the Semantic Web are very much the same as those for Linked
Data in general. Therefore, some of the concepts explained in this
chapter will be easier to grasp when explained in a Semantic Web context. There
are many opinions about the relationship between Linked Data and the Semantic
Web10. This thesis maintains that, on the one hand, Linked Data concepts provide
the structures for a semantic web to work, and on the other hand, the information that
can be retrieved by querying ‘the’ Semantic Web consists of linked data. Key
to the ‘semantic web concept’ is that the data is connected to outside data.11
3.1 Note on Literature
It is neither possible nor desirable within the context of this thesis to give a full
overview of all Linked Data concepts; this would lead too far from the actual topics.
For an elaborate introduction to the Semantic Web (and Linked Data), there are a
number of well-written and understandable books, with many examples and exercises.
For an extensive and excellent coverage of the topics in this chapter, the reader can
dive into (Allemang and Hendler, 2011; Segaran et al., 2009). The following
descriptions and definitions are firmly based on these books, which introduce
the concepts of Linked Data. The documentation on the website of the W3C
(www.w3.org) also gives a clear and comprehensive idea of some of the topics
that will be discussed. Where applicable, links to these web pages are provided as
footnotes.
10 http://linkeddata.org/faq (accessed 18/04/2018), (Pauwels et al., 2017b), …
11 The reader should note the difference between the use of ‘Linked Data’ and ‘linked data’ in this thesis: when the capitalized version is used, it refers to the framework to link data, by use of RDF. When written lowercase, it just means ‘data that is linked’, thus talking about the data itself rather than about the underlying structures. The same holds for ‘Semantic Web’: written lowercase, it just means an implementation of Linked Data concepts to organize and connect larger datasets into a connected ‘web’ that can be queried for information. When capitalized, ‘the’ Semantic Web refers to the worldwide version of the previously mentioned concept, the W3C’s vision of the Web of linked data as an extension to the World Wide Web.
3.2 Resource Description Framework
The advent of the internet meant an immense improvement in document and
data exchange and collaboration around the globe. One of the keys that enable this
global network to keep growing at an unbelievable speed is the ability, and the
‘permission’, for everyone to post anything they can think of.12 This encouragement
for everyone to contribute to the World Wide Web is responsible for the rapid
expansion of the current internet, since everyone can state whatever he or she wants
(relevant or not) and put it online (Wikipedia, Facebook, YouTube, TripAdvisor,
personal websites etc.). For efficient information exchange (in both the document web
and the Semantic Web), the way this information is structured needs to obey international
agreements and rules. Note that, in this context, information exchange methods are
meant rather than the actual content of the information that is described (agreements
cannot prevent misunderstandings of human interpretation). If one wants to develop or
use a Linked Data application, it is necessary to know the basic rules upon which the
application is going to be based.
As indicated on the W3C web page dedicated to the semantic web, “the ultimate goal
of the Web of data is to enable computers to do more useful work and to develop
systems that can support trusted interactions over the network.” 13 We tend to move
towards an ever more automated world, with information sources and stores
connected through a ‘smart web’, and more and more work is going to be done by
computers and algorithms. Information exchange is the core purpose of the Internet;
therefore, having a communication framework with which all actors can ‘speak’ to
one another is essential. Sadly, computers are not as efficient as humans at decoding
natural language into semantic meaning. While we do not perceive any difficulty in
understanding the meaning of highly complex sentences, this is an extremely
challenging, if not nearly impossible, task for machines. So, for a network that is
‘understandable’ for both human beings and computer algorithms, information has to
be represented in two ways: a human-readable presentation and a machine-readable
description (Allemang and Hendler, 2011). This machine-readable description forms the
framework for the semantic web, and is standardized in the ‘Resource Description
Framework’ (RDF).14 Currently, RDF is widely used around the world for representing,
managing and connecting information that is available on the web, either augmenting
existing web pages to be part of the Semantic Web or constructing new applications
that need information located somewhere on a web server, with which the application
must be able to communicate unambiguously.
12 Sometimes referred to as the AAA-slogan: “Anyone can state Anything about Any topic”
13 https://www.w3.org/standards/semanticweb (accessed 20/04/2018)
14 https://www.w3.org/TR/rdf11-primer (accessed 15/05/2018)
3.2.1 Triples and Resources
Anything that is described in a Linked Data context is called a ‘resource’. This means
that not only ‘things’ or ‘objects’ (instances such as columns or doors), but also
more abstract concepts (classes, values, disciplines …) and even relationships are
defined as resources. As the name already indicates, RDF provides a way to make
statements about these ‘things’ and their relationships with others.
In RDF, all information is represented by means of triples. Triples can be thought of
as very basic sentences, containing only a ‘subject’, an ‘object’ and something that
establishes a relationship between these two, referred to as the ‘predicate’. Just as in
many spoken languages around the world (including English and Dutch), one could
define RDF as a subject-verb-object (SVO) language, therefore, the word order of any
triple has a subject-predicate-object syntax (e.g. Jeroen (subject) writes (predicate)
Thesis (object)). The more triples that are defined, the more semantic meaning the
database gets: each resource becomes related to multiple others, sometimes being the
‘subject’, other times being the ‘object’. A short statement could look like this:
Subject    Predicate    Object
Jeroen     writes       Thesis
Jeroen     a            Student
Thesis     about        Linked Data
Thesis     about        Scan-to-Graph
…
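The triple model above is easy to mirror in code. The following illustrative sketch (the resource names are the toy ones from the table, not real URIs) stores triples as 3-tuples and looks up everything stated about a subject:

```python
# Each statement is a (subject, predicate, object) 3-tuple.
triples = [
    ("Jeroen", "writes", "Thesis"),
    ("Jeroen", "a", "Student"),
    ("Thesis", "about", "Linked Data"),
    ("Thesis", "about", "Scan-to-Graph"),
]

def statements_about(subject, graph):
    """All (predicate, object) pairs for a given subject."""
    return [(p, o) for s, p, o in graph if s == subject]

about_thesis = statements_about("Thesis", triples)
# The same resource can appear as subject in one triple and object in another:
objects = {o for _, _, o in triples}
```

Note how ‘Thesis’ occurs both as an object (of ‘writes’) and as a subject (of ‘about’), which is exactly what gives a growing triple set its web-like character.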
Another way of representing RDF information is to visualize it as a graph. Each node
of the graph represents a resource that relates to other resources through edges that
define their specific relationship. These graphs are ‘directed’, which means that the
edges are like arrows pointing from one node to the other, making clear which one is
the object and which one the subject. By using a graph visualisation, the denotation
as a ‘web’ of linked data becomes obvious. Figure 3.1 displays such a directed graph
that describes a window that is located in an opening (as defined by the Industry
Foundation Classes (IFC) and its Linked Data equivalent ifcOWL).
Fig. 3.1: example of a graph describing a window in an opening (source: Pauwels et al., 2011)
3.2.2 Uniform Resource Identifiers and Literals
When dealing with small databases, with few persons involved, this way of defining
information through triples is fine. But when scaling this up to World Wide Web
format, some issues arise. When different people are talking, it can occur that
they use different words when, in fact, they mean the same thing. This is a rather small
issue that can easily be overcome, since their equivalence could simply be stated with
another statement. It becomes much more difficult when people use the same word to
describe things that are not the same, either with subtle nuances or huge differences.
Using the example above: there might be another ‘Jeroen’ that writes a ‘Thesis’; which
‘Jeroen’ and which ‘Thesis’ is meant? This issue is handled through the use of ‘Uniform
Resource Identifiers’, URIs in short.15 A URI provides a unique, global identifier for a
specific resource (entity or relationship). When two people (websites) want to make
sure they are referring to exactly the same topic, they refer to the unique URI of that
topic. In fact, the URLs (Uniform Resource Locators) used to refer to websites are a
specific case of URIs that, alongside identifying a resource, also provide the
information to locate and access that resource.16 Apart from URIs, there also exist
so-called literals: these contain information that is only included in the graph,
without being located somewhere else on the web. Literals can be usual datatypes such
as strings or integers, but also very specific datatypes that refer to a certain serialization
format or other datatypes. While the subject and predicate of a triple always refer to
a URI, the object may be either a URI or a literal.
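The URI-versus-literal distinction can be made concrete with a small sketch. Here a literal is a plain value, while URIs are written as strings in angle brackets — a convention used purely for illustration, with made-up example.org URIs:

```python
# Two triples: one whose object is a URI, one whose object is a literal.
triples = [
    ("<http://example.org/Jeroen>", "<http://example.org/writes>",
     "<http://example.org/Thesis>"),
    ("<http://example.org/Thesis>", "<http://example.org/pageCount>",
     120),  # an integer literal: data stored in the graph itself
]

def is_literal(term):
    """URIs are written as '<...>' strings here; everything else is a literal."""
    return not (isinstance(term, str)
                and term.startswith("<") and term.endswith(">"))

literal_objects = [o for _, _, o in triples if is_literal(o)]
```

Only the object position may hold a literal; a real RDF library would enforce this for the subject and predicate positions as well.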
3.2.3 QNames
The use of URIs counters the problem of ambiguously talking about different things
using the same words. However, it makes the triples much less human-readable. In
itself, this causes no problems, since most of the time the data is going to be read by
machines. However, when writing the triples down, ‘shortening’ the resources into a
more easy-to-read form can be useful. The abbreviation scheme that is often used for
this is called QNames (‘Qualified Names’).17 A QName has two parts: a namespace
and an identifier; the namespace is essentially just a reference to a URI that is fully
specified elsewhere in the file, while the identifier serves to find the exact resource
within that namespace. For example, the URI hosting the definitions that are going to be
constructed in the context of this thesis is: ‘https://github.com/JWerbrouck/scan-to-
graph/blob/master/stg.ttl’. Now that the full URI is specified (in fact this could be
anything, as long as it is unique), we could state that when referring to concepts with
their QNames, the namespace ‘stg’ (‘scan-to-graph’) refers to this full URI. An example
of a relationship that will be defined in chapter III is the predicate
‘hasModellingRemark’. In full, the URI would be:
‘https://github.com/JWerbrouck/scan-to-graph/blob/master/stg.ttl/hasModellingRemark’,
but using QNames, one could just write stg:hasModellingRemark.
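Expanding a QName back to its full URI is a simple string operation, as the following sketch shows. It uses the ‘stg’ prefix from the example above; the second prefix is a made-up illustration:

```python
# Prefix declarations: namespace abbreviation -> full URI.
prefixes = {
    "stg": "https://github.com/JWerbrouck/scan-to-graph/blob/master/stg.ttl/",
    "ex": "http://example.org/vocab#",  # hypothetical second namespace
}

def expand(qname, prefixes):
    """Turn a QName like 'stg:hasModellingRemark' into a full URI."""
    namespace, identifier = qname.split(":", 1)
    return prefixes[namespace] + identifier

uri = expand("stg:hasModellingRemark", prefixes)
```

Any serious RDF toolkit performs this expansion internally; the point is merely that a QName carries exactly the same information as the full URI, in a shorter form.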
15 https://www.w3.org/wiki/URI (accessed 20/04/2018)
16 https://tools.ietf.org/html/rfc3986 (accessed 21/04/2018)
17 https://www.w3.org/2001/tag/doc/qnameids-2004-01-14.html (accessed 21/04/2018)
3.2.4 Turtle
There are many ways to serialize RDF data (i.e. to ‘store’ sets of triples in a file). One of
these serialization formats is called ‘Turtle’.18 Turtle is “a textual syntax for RDF that
allows an RDF graph to be completely written in a compact and natural text form,
with abbreviations for common usage patterns and datatypes.” For the description of
resources, Turtle uses the same abbreviation method as QNames: a namespace and an
identifier, with a colon in between. Once the namespaces and their abbreviations are
defined (as ‘prefixes’) in a Turtle file (*.ttl), little effort is needed to determine what the
triples are actually stating (provided, at least, that the used ontologies are known and
the resources are clear about their meaning).
Note that there are other formats that can be parsed much more quickly than Turtle
(‘parsing’ is the inverse of serializing: reading these triple-based files and converting
them to the RDF data model), but Turtle has the advantage of being more human-
readable, which is why it is used to represent RDF triples throughout this text.
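As an illustration of the prefix mechanism, the following sketch serializes a few QName-abbreviated triples into Turtle text. The ‘product’ and ‘inst’ namespace URIs are made-up examples; the ‘rdf’ one is the real W3C namespace:

```python
# Prefix declarations map abbreviations to full namespace URIs.
prefixes = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "product": "https://example.org/product#",   # hypothetical namespace
    "inst": "https://example.org/instances#",    # hypothetical namespace
}
triples = [("inst:FrontDoor1", "rdf:type", "product:Door")]

def to_turtle(prefixes, triples):
    """Emit @prefix declarations followed by one triple statement per line."""
    lines = [f"@prefix {p}: <{uri}> ." for p, uri in prefixes.items()]
    lines += [f"{s} {p} {o} ." for s, p, o in triples]
    return "\n".join(lines)

ttl = to_turtle(prefixes, triples)
```

This covers only the simplest Turtle statements; the real syntax also offers abbreviations such as ‘;’ for repeating a subject and ‘a’ for rdf:type, which a hand-rolled serializer like this one omits.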
3.3 RDF Schema and classes
The RDF Schema (RDFS) is the schema language of RDF. 19 This means “it provides
a way to talk about the vocabulary that will be used in the RDF graph. […] It provides
information about the ways in which we describe our data.” (Allemang and Hendler,
2011) What is special about RDFS (and other RDF vocabulary languages, such as the
Web Ontology Language (OWL) (3.3.4)) is that all statements and resources in RDFS
are written using the rules of RDF. Rather than extending RDF by introducing new
syntax rules, it standardizes certain specific resources, identifying them as a kind of
‘special keyword’. These ‘special’ resources can then be used to determine
characteristics of other resources.
3.3.1 Classes
To bring structure in the resources that are used in a graph, they can be grouped into
‘classes’. Classes are a way to state something about the nature of the resource, ‘what’
it is (e.g. dog, human, mammal, door, building, …). In order to do this, at least two
triples have to be defined:
Subject Predicate Object
product:Door rdf:type rdfs:Class ;
inst:FrontDoor1 rdf:type product:Door.
18 https://www.w3.org/TR/turtle/ (accessed 21/04/2018)
19 https://www.w3.org/TR/rdf-schema/ (accessed 22/04/2018)
One states that ‘product:Door’ is an rdfs:Class, and then it has to be defined that the
particular instance ‘inst:FrontDoor1’ is a member of the class ‘product:Door’.20 The
statement that something is a class is defined in the ontology (section 3.4) and does
not have to be defined again in each graph. A resource can be an instance of multiple
classes. Note that the predicate used to say that some resource is an instance of a class,
rdf:type, is often also written as ‘a’ (as in ‘is a’, to improve human readability).
Besides, the overall class of all resources is rdfs:Resource (all resources are
automatically instances of the class rdfs:Resource).
3.3.2 Subclasses
Using classes, every resource is defined to be part of a set (or multiple sets) of instances
that all belong to the same class. The classes they belong to can range from very general
(e.g. Animal) to very specific (e.g. German Shepherd). All German Shepherds are, of
course, also animals. To implement this statement, one introduces a triple that states
that a German Shepherd is a ‘subclass’ of Animal. Applied to building elements, one can
state that a trapdoor is also a kind of door:
Subject Predicate Object
product:Door-TRAPDOOR rdfs:subClassOf product:Door
Going further, ‘door’ can be defined as a subclass of ‘building element’, which is in
turn a ‘man-made object’ etc. A similar concept exists for properties, called
rdfs:subPropertyOf. It indicates that “If a property P is a subproperty of property Q,
then all pairs of resources which are related by P are also related by Q.” 21 For example,
if ‘hasTrapDoor’ were a subproperty of ‘hasDoor’, anything that relates to something
else with ‘hasTrapDoor’, would also relate to it by ‘hasDoor’.
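The effect of rdfs:subClassOf can be sketched as a simple inference over triples. The sketch below is an illustrative reimplementation of the subclass rule, not a real RDFS reasoner, and ‘product:BuildingElement’ and ‘inst:Hatch1’ are made-up names following the chapter’s examples:

```python
# Toy RDFS-style triples, stored as a set of 3-tuples.
triples = {
    ("product:Door-TRAPDOOR", "rdfs:subClassOf", "product:Door"),
    ("product:Door", "rdfs:subClassOf", "product:BuildingElement"),
    ("inst:Hatch1", "rdf:type", "product:Door-TRAPDOOR"),
}

def infer_types(triples):
    """Repeatedly apply the RDFS subclass rule until nothing new appears:
    if x rdf:type C and C rdfs:subClassOf D, then x rdf:type D."""
    inferred = set(triples)
    while True:
        new = {(x, "rdf:type", d)
               for (x, p1, c) in inferred if p1 == "rdf:type"
               for (c2, p2, d) in inferred
               if p2 == "rdfs:subClassOf" and c2 == c}
        if new <= inferred:
            return inferred
        inferred |= new

graph = infer_types(triples)
```

After two rounds of the rule, the trapdoor instance is also typed as a door and as a building element, exactly as the ‘German Shepherd is also an animal’ example suggests; the analogous rdfs:subPropertyOf rule would work the same way on predicates.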
3.3.3 rdfs:domain and rdfs:range
Two final important concepts are rdfs:domain and rdfs:range. Both are used to make
statements about a specific property that relates specific resources. The first one,
rdfs:domain, makes a statement about the class of the subject of any triple in which
the property appears as predicate. rdfs:range does the same, but for the object of such
a triple. For example, in the Building Product Ontology (PRODUCT) only one object
property is defined: product:aggregates. In the ontology description22, only the
following statements are made about product:aggregates:
Subject Predicate Object
product:aggregates a ( = ‘rdf:type’) owl:ObjectProperty ;
rdfs:domain product:Product ;
rdfs:range product:Product .
20 rdfs:Class is itself also a class: it groups the resources that are an rdfs:Class.
21 https://www.w3.org/TR/rdf-schema/ (accessed 25/04/2018)
22 https://github.com/pipauwel/product/blob/master/prod.ttl (accessed 25/05/2018)
This means that, whenever two resources are related by product:aggregates, one can
derive (‘infer’) that both resources are instances of the class product:Product.
It is worth mentioning that domain and range do not restrict the classes of an
instance; on the contrary, they give implicit information about it. This is
important for ‘inferencing’, discussed later in this chapter (section 3.4.2).
3.3.4 Web Ontology Language
When the basic elements for ontology description from RDFS do not suffice, the Web
Ontology Language (OWL) provides more expressivity and a means to make logical
rules or restrictions about some resources. These rules (and proofs) can serve a
reasoning process, which deduces new relations or recognises impossible relationships.
“In short, OWL further enhances the RDFS concepts to allow making more complex
RDF statements, such as cardinality restrictions, type restrictions, and complex class
expressions. The RDF graphs constructed with OWL concepts are called OWL
ontologies.” (Pauwels et al., 2017b)
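As an illustration of this extra expressivity, the following Turtle sketch (with hypothetical ex: terms) shows a type restriction and a cardinality restriction of the kind OWL adds on top of RDFS:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/ns#> .

# type restriction: nothing can be both a Door and a Window
ex:Door owl:disjointWith ex:Window .

# cardinality restriction: every Door has at least one handle
ex:Door rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty ex:hasHandle ;
    owl:minCardinality "1"^^xsd:nonNegativeInteger
] .
```

A reasoner confronted with a resource typed as both ex:Door and ex:Window could now flag the graph as inconsistent.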
3.4 Ontologies
The Semantic Web covers an endless number of areas and topics, each with
its own jargon, subjects and relationships. Definitions that make clear how
to talk about a discipline’s topics in a Linked Data context are therefore no superfluous luxury. Such definitions
are described in ontologies (sometimes: ‘vocabularies’). On the one hand, ontologies
define a set of concepts (classes as well as properties) that can be used within a
particular (sub)discipline. They enable stating implicit information, e.g. by setting
domain and range of a property. On the other hand, using an OWL ontology, it is also
possible to restrict the possibilities for using these concepts (disjoint classes,
cardinalities etc.).
As indicated on the webpage of the W3C about ontologies/vocabularies, “the role of
vocabularies on the Semantic Web are to help data integration when, for example,
ambiguities may exist on the terms used in the different data sets, or when a bit of
extra knowledge may lead to the discovery of new relationships. […] Another type of
example is to use vocabularies to organize knowledge.”23 Thus, ontologies do not only
define concepts ‘for internal use’, they also help in negotiating between often
contradictory information on the web, in ‘communicating’ with other ontologies.
Because of this, there is no problem in combining several ontologies in one graph: after
all, the Semantic Web is like one large graph containing all kinds of ontologies.
Therefore, smaller, modular ontologies with a specific application area are often more
manageable than large ontologies that try to be all-encompassing (and thus more
applicable in practice (Rasmussen et al., 2018)). Small ontologies might be combined to
serve a specific purpose, which will be illustrated further in Chapter IV.
23 https://www.w3.org/standards/semanticweb/ontology (accessed 28/05/2018)
Just as in the case of RDFS and OWL, ontologies consist entirely of triples
themselves; they follow the same rules as any other RDF graph and are not to be
considered an extension but rather a set of statements about certain resources. In fact,
most ontologies largely rely on RDFS and OWL: the functional parts of the definitions
often use little more than rdfs:subClassOf, rdfs:subPropertyOf, rdfs:domain
and rdfs:range, referring to classes that were previously defined in the
ontology, or elsewhere.
3.4.1 TBox and ABox
When talking about ontologies and graphs, there is a firm distinction between resources
that are instances and resources that are concepts (classes and properties). The terms
TBox and ABox refer to this dichotomy: TBox (‘Terminological’) refers to the
concepts, while ABox (‘Assertion’) refers to the particular instances that are linked to those
concepts in a specific graph.
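The dichotomy can be made concrete in a few lines of Turtle (the ex: terms are hypothetical):

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ns#> .

# TBox: terminological statements about concepts
ex:Door     rdf:type rdfs:Class .
ex:TrapDoor rdfs:subClassOf ex:Door .

# ABox: an assertion about one particular instance
ex:door_42 rdf:type ex:TrapDoor .
```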
3.4.2 Inferencing
‘Inferencing’ is a powerful ‘reasoning’ process typical of Semantic Web applications.24
It is based on a chain analysis of triples that enables the discovery of new relationships
between resources, based on the information present in the graph and the rules
specified in an ontology (or a set of ontologies). These new relationships and
information can further enrich the graph (or the query (section 3.5)), making it
‘more complete’ by making implicit information explicit. As indicated in the example
above, given the meanings of rdfs:domain and rdfs:range, one can derive the classes
of subjects and objects, which can in turn be used for further inferencing. A fictitious
property could be prop:OpensTo, with domain product:Door and range bot:Space. A
single triple ‘inst:resource1 prop:OpensTo inst:resource2’ then suffices to determine
that ‘inst:resource1’ is a product:Door and ‘inst:resource2’ a bot:Space.25 This class
inferencing can then serve to make more information explicit.
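The fictitious example can be written out as follows (the prefix URIs below are placeholders; only the local names follow the text):

```turtle
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prop:    <http://example.org/prop#> .
@prefix product: <http://example.org/product#> .
@prefix bot:     <http://example.org/bot#> .
@prefix inst:    <http://example.org/inst#> .

# TBox: the fictitious property with its domain and range
prop:OpensTo rdfs:domain product:Door ;
             rdfs:range  bot:Space .

# ABox: a single asserted triple
inst:resource1 prop:OpensTo inst:resource2 .

# an RDFS reasoner can now infer:
#   inst:resource1 a product:Door .
#   inst:resource2 a bot:Space .
```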
Logical inferencing is a powerful mechanism for improving the quality of the data,
analysis and management, tracing inconsistencies in the datasets and much more.
When provided with a set of rules, semantic reasoning applications use inferencing for
rule checking, e.g. for building regulations etc. It can also be used while querying a
dataset for specific information, which is done using SPARQL queries.
24 https://www.w3.org/standards/semanticweb/inference (accessed 05/05/2018, updated 24/06/2014)
25 Note that a class inference based on domain and range is by no means a restriction on the classes a resource can be an instance of: an instance can belong to several classes. The only way this can cause information collisions is when a ‘higher’ semantic language such as OWL is used, in which it can e.g. be stated that two classes are mutually
exclusive (disjoint). Such collisions can be detected using ‘reasoning engines’, in a variant of inferencing.
3.5 SPARQL
A last, very important Linked Data concept is SPARQL (“SPARQL Protocol And RDF
Query Language”).26 SPARQL is the query language used to query RDF databases
using triple patterns. In fact, a variant of Turtle is used to define SPARQL queries.
Roughly, a query consists of three parts: first, the prefixes that refer to full URIs
are defined (cf. Turtle); then follows the command (‘SELECT’, ‘INSERT’, ‘CONSTRUCT’, etc.). A
SELECT query is followed by the variables that are to be ‘extracted’ (always preceded
by a ‘?’) and, finally, the definition of the triple patterns with which the variables have
to comply (‘WHERE’). The simplest example returns all subjects, predicates and objects in
the graph:
SELECT ?s ?p ?o
WHERE {?s ?p ?o}
Another, very simple example that queries a graph for all instances that are an
‘IfcWallStandardCase’, limiting the amount of results to maximum 10:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ifcowl: <http://www.buildingsmart-tech.org/ifcOWL/IFC2X3_TC1#>
SELECT ?w
WHERE {?w rdf:type ifcowl:IfcWallStandardCase}
LIMIT 10
Of course, SPARQL queries can consist of several triple patterns to specify exactly
what one is looking for. The result of a SPARQL SELECT query takes the form of a
table. In contrast, a SPARQL CONSTRUCT query returns a complete RDF graph,
which can be visualized, exchanged, queried again and so forth. Other query types
insert triples into an existing database (INSERT), filter out duplicates (SELECT
DISTINCT) or ask yes-or-no questions (ASK).
In a building context, the results of a SPARQL query can play an important role in
visualizing specific elements (see Chapter IV): for example, for a viewer to visualize
only the ‘IfcWallStandardCase’ elements returned by the previous query, or, very
specifically, to visualize the windows from a specific manufacturer with a specific
U-value that face the northern side of a building.
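The latter query could be sketched as follows; note that the prop: properties, the manufacturer URI and the U-value threshold are all hypothetical, since the actual predicate names depend on the ontologies in use:

```sparql
PREFIX product: <http://example.org/product#>
PREFIX prop:    <http://example.org/prop#>

SELECT ?window
WHERE {
  ?window a product:Window ;
          prop:hasManufacturer <http://example.org/manufacturer/acme> ;
          prop:hasUValue ?u ;
          prop:hasOrientation "north" .
  FILTER (?u < 1.1)
}
```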
26 https://www.w3.org/TR/rdf-sparql-query (accessed 30/04/2018, updated 26/03/2013)
3.6 Linked Data in the AEC industry
Some basic principles of Linked Data in general have been discussed in this chapter.
This part focuses on the implementation of Linked Data technologies within the AEC
industry, where they are gradually finding their way into practice. Implementation of
Semantic Web technologies will improve collaboration in the AEC industry, which is,
in fact, one of the very core aims of BIM. To encourage this evolution towards
data-based BIM, Pauwels et al. (2017b) motivate their use with three main arguments:
- Interoperability
- Linking across domains
- Logical inference and proofs
3.6.1 Interoperability
To enhance collaboration between the multitude of stakeholders that is typically
involved in a construction project, the Industry Foundation Classes (IFC) form the
main standard for describing, exchanging and sharing information.27 IFC is maintained
by BuildingSMART28 and provides an open and readable description of a BIM that is
(at least partly) supported by virtually all BIM-related software products. However,
IFC is not synonymous with BIM; it is a standard, which means it more or less reflects
the current state of BIM as implemented in practice. This current state can be situated in
the ‘BIM Levels of Maturity’ diagram (Fig. 3.2). Four Levels of Maturity are
distinguished: the lowest level (Level 0) represents the mere use of CAD software, using
drawings as the main exchange objects; the highest level (Level 3) represents BIM as
a highly interoperable framework based on linked data, integrated web services etc.
Today, the majority of the AEC market is still situated at Level 0, 1, or 2 (Rasmussen
et al., 2018).
Because the AEC industry is usually slow to adopt novelties, taking the step towards
a web-of-data-based BIM needs some encouragement and a clear outline of its added
value. Pauwels et al. (2017b) argue that the adoption of semantic web technologies
“might be the ideal technical means to provide interoperability while also allowing to
flexibly handle new semantic structures” that could extend or adapt existing structures
such as ifcOWL. The ifcOWL ontology (Pauwels and Terkaj, 2016) is the literal
translation to RDF of the ‘classic’ IFC framework (which is modelled in EXPRESS).
This is because RDF serves as the unifying framework for representing any kind of
information and because semantic web technologies focus intensively on “linking diverse
graphs of information together in a web-like fashion”. As an example, Abdul-Ghafour
et al. (2007) focus on the exchange of neutral CAD data by using semantic web
technologies. They propose an ontology (Common Design-Feature Ontology) that maps
the concepts of different CAD applications and is thus able to “serve as an interlingua
27 http://www.ifcwiki.org/index.php?title=IFC_Wiki (accessed 28/04/2018)
28 http://www.buildingsmart-tech.org/ (accessed 28/04/2018)
to enable semantic interoperability.” Rhinoceros (McNeel), which is used as the main
CAD application for this thesis, was unfortunately not included.
The ability to combine different representations of information is of great value to this
thesis, as the resulting graph will include several geometries mapped to single instances
(e.g. point clouds and STEP-files). (See Chapter IV)
Fig. 3.2: BIM Levels of Maturity (data-based BIM on the right) as defined by BSI Standards Limited, 2013 (source: Rasmussen et al., 2018)
3.6.2 Linking across domains
Currently, Building Information Modelling is the AEC industry’s main tool for
integrating architecture, construction, MEP, time and cost estimations and so forth.
However, there are still many relevant topics that are not (or insufficiently) covered by
most BIM applications, such as Geographical Information Systems (GIS), Facility
Management (FM), or heritage conservation. Due to their inherent capability to
connect information from various fields, Linked Data technologies hold promise for
applications that integrate information coming from various disciplines, located on
various servers.
Closely related to this is the ability to search product catalogues for specific products.
For some software packages, such catalogue databases already exist and are either the
initiative of the product manufacturers themselves or third parties that provide
databases with products of different companies. An example for BIM in general is
BIMobject.com29, but there are also databases for very specific applications (e.g. for
Lighting Analysis), such as LumSearch30 for DIALux. The main disadvantage of these
databases is that the information they contain is mostly very software-specific. It would
save a lot of work and money if manufacturers or third parties could publish
products (and the corresponding (meta)data) in a standardized procedure
that can be used by all BIM packages. Such a standardized procedure can be achieved
through Linked Data.
Some initiatives for providing such general building product databases have been taken.
An important one is the bsDD31 (BuildingSMART Data Dictionary), which serves as
a foundation for instance libraries and is being converted to RDF by the DURAARK
working group.32 The bsDD defines general terms that can, for example, be used by
product manufacturers to specify information about their products. Furthermore, it aims
to be language independent and thus usable worldwide. Another, smaller (and thus
more manageable) initiative for classifying building elements is the Building Product
Ontology33, which has a subontology about building elements, which are also often linked
to the bsDD or IFC classes. In this thesis, the product ontology is used as a basis for
classifying elements, because of its compactness in comparison with the bsDD and
others. A product catalogue that bundles specific products from different European
manufacturers has been constructed in context of BauDataWeb34.
According to Figure 3.3, in 2014, the state of the industry regarding product support
lay at the very beginning of ‘Level 2’, which means vendor- or market-specific product
libraries with downloadable components, but without standardisation. Nowadays
(2018), this state has probably shifted towards somewhere between Level 2 and the
beginning of Level 3 (this is the author’s assumption, since no up-to-date schemes were
found). As indicated in Figure 3.3, Level 3 is just the beginning of open, online product
libraries with bsDD support. While in academic environments lots of research is being
carried out to deepen the implementation of Semantic Web technologies for the
AEC industry, everyday practice reaching ‘beyond Level 3’ is not something that
will be achieved in the next couple of years. Note that Level 1 in Figure 3.3 corresponds
with Level 0 in Figure 3.2.
29 http://bimobject.com/en-us (BIMobject is the successor of Autodesk SEEK, which combines different kinds of BIM-families from different manufacturers) (accessed 29/05/2018)
30 http://lumsearch.com/en-US#0 (accessed 29/04/2018)
31 http://bsdd.buildingsmart.org/ (accessed 29/04/2018)
32 http://duraark.eu/tag/standardization/ (accessed 29/04/2018)
33 https://github.com/pipauwel/product (accessed 25/05/2018)
34 http://semantic.eurobau.com/ (accessed 29/04/2018)
Figure 3.3: Technical roadmap Product Libraries - state of the industry in 2014 (source: https://www.buildingsmart.org/standards/technical-vision/technical-roadmaps/, accessed 28/05/2018)
3.6.3 Logical inference and proof
The ability to deduce additional information from original data and the concept of
‘semantic reasoning’ can be of considerable value for querying the graph for specific
resources or rule checking environments (Pauwels et al., 2011). In the context of this thesis,
inference is a more relevant feature than rule checking, since it allows the construction of
powerful queries that propagate through the graph and are able to find implicit results.
An example related to this thesis: a user defines a SPARQL query that
searches the graph for all windows that have a point cloud representation
mapped to them, in order to visualize them. Inferencing that an instance of a subclass is
also an instance of the superclass will make sure that not only the instances of the
class ‘product:Window’ are found, but also the instances that belong to its
subclasses, e.g. ‘product:Window-LIGHTDOME’ or ‘product:Window-SKYLIGHT’.
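Where the triple store does not perform RDFS inferencing itself, the same effect can be approximated in SPARQL 1.1 with a property path. A sketch, in which prop:hasPointCloud is a hypothetical mapping property and the product: namespace URI is a placeholder:

```sparql
PREFIX rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs:    <http://www.w3.org/2000/01/rdf-schema#>
PREFIX product: <http://example.org/product#>
PREFIX prop:    <http://example.org/prop#>

SELECT ?window ?pointcloud
WHERE {
  # matches instances of product:Window and of any of its subclasses
  ?window rdf:type/rdfs:subClassOf* product:Window ;
          prop:hasPointCloud ?pointcloud .
}
```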
3.6.4 About detail
Apart from the previous arguments, there is also the fact that an ontology-based BIM
approach allows a virtually unlimited level of detail in describing a subject.
This is in contrast to standard classification systems, which are limited in their classes
(for example, there is a class ‘ifcDoor’, but nothing like ‘ifcDoorHandle’ or
‘ifcDoorHinges’, which, in contrast, could be defined very easily using Linked Data).
Apart from providing advantages for as-planned BIM, this can be valuable for as-is
BIM object specification (e.g. in a heritage context: specifying a type of medieval door
knob that is, of course, not present in any manufacturer database).
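Such finer classes are cheap to add. A sketch with hypothetical ex: terms:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/ns#> .

# classes that standard IFC does not offer, defined in a few triples
ex:DoorHandle a rdfs:Class ;
    rdfs:comment "A handle mounted on a door." .

ex:MedievalDoorKnob rdfs:subClassOf ex:DoorHandle .
```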
3.7 Linked Data in Cultural Heritage
Another discipline that strongly benefits from the interconnection of data in the
semantic web is the cultural heritage sector: enhanced research collaboration and a
more interactive dissemination towards an interested public (e.g. in a Virtual Museum)
are just two examples. Worth mentioning is the Europeana Data Model (EDM), an
immense database that contains information about European cultural heritage,
available, among others, via a SPARQL endpoint. Another initiative is E-CRM35, an ontology
based on the CIDOC Conceptual Reference Model (CIDOC CRM)36 by the University of
Erlangen-Nuremberg. CIDOC CRM is an international ISO standard (ISO 21127) for
dealing with cultural heritage documentation in general, defining the main classes and
properties of cultural assets and the events that shaped them. Related to this is the
conversion from the XML-based “Cultural Heritage Markup Language” (CHML) to an
OWL-based Linked Data version that relies on E-CRM. CHML is a data model that
captures the process of documenting and creating a 3D reconstruction of objects that
no longer exist or never existed at all. Since this thesis relates to 3D modelling of
existing buildings, an introduction to CHML will be the main topic of this section.
3.7.1 Interpretation and reconstruction
In its focus on heritage that no longer exists, CHML distinguishes between
‘3D preservation’ and ‘3D reconstruction’. ‘3D preservation’ relates to an automatic,
algorithm-driven reconstruction of faces and solids based on point clouds (for still
existing objects, which is not the topic of CHML). As we have seen in the section about
scan-to-BIM, such a fully automatic, algorithmic conversion to a semantically rich,
segmented object is yet to be developed (at least in the case of buildings). Therefore,
the ‘3D preservation’ mentioned in (Kuroczyński et al., 2016) may probably be seen
as just a computer-generated 3D model without semantically rich objects, meant for
preservation and not for querying (as this is only shortly mentioned in (Kuroczyński et al.,
2016), this is an assumption made by the author of this thesis). ‘3D reconstruction’,
the main topic of CHML, is the process in which a human modeller takes an essential,
creative role in modelling something that is currently non-existent, in a scientifically
approved process based on “a broad data acquisition (primary and secondary resources),
the evaluation and interpretation of various sources, and finally digital molding, leading
to the digital 3D object” (Kuroczyński et al., 2016).
3.7.2 An event-centric model
Since CHML is based on CIDOC CRM, it takes the same event-centric approach, which
means that the focus lies on documenting the events that influenced (or were
influenced by) the object, rather than on the object itself with direct links
between the described object and its features (the ‘object-centric’ model, e.g. EDM)
35 http://erlangen-crm.org/ (accessed 28/05/2018) 36 http://www.cidoc-crm.org/ (accessed 28/05/2018)
(Kuroczynski et al., 2016). Specific to CHML, the most important ‘event’ of the
ontology is the 3D reconstruction process itself (Fig. 3.4).
‘Activities’ create and describe sources that are
used by the modeller to make the 3D
reconstruction. In CHML, the “reconstruction
activity” serves as the core event, the ‘glue of
sources, semantic objects and actors, set in
time and space’. Both the physical object and the 3D reconstruction refer to the ‘Semantic
Object’, the abstract idea of the object, but in the relational schema they are only connected
indirectly, through sources and reconstruction activities (Fig. 3.4-3.5) (Hauck and
Kuroczyński, 2014). Such a focus on sources and the modelling activity is essential when
the object of description is non-existent at the moment of research: it allows modelling even
when no actual object is present and there are only 2D or textual sources (Svenshon
and Grellert, 2010). Note that such documented use of sources is also very useful for
an as-is reconstruction of an existing building. The interpretation of the sources is not
(only) the task of the modeller, but mainly of professional (art) historians or
archaeologists. The modeller, often an architect, can contribute with his expertise in
construction logic or statics (Svenshon and Grellert, 2010).37
Figure 3.5: the CHML workflow (source: Kuroczyński et al., 2016)
37 This methodology is a general approach to 3D reconstruction of (non-existing) heritage and has no direct relationship with Linked Data
Figure 3.4: The reconstruction process as the central event in CHML. The semantic object serves as a reference to both the physical object and its 3D reconstruction.
(source: Hauck and Kuroczyński, 2014)
3.7.3 TYPE labelling
As the main classification system, CHML uses definitions provided by thesauri,
Wikipedia, publications etc. These definitions of an element are then bundled in a
subclass of H6_CHML_Type, which is itself an owl:Thing (class). An H6_CHML_Type
is characterized by 4 capital letters and can be assigned to any object, source, activity,
actor, place or event, which makes its area of use very broad (e.g. ‘FIRE’ stands for
the object class ‘Fireplace’, ‘ARIL’ for ‘Archeological Illustration’ and ‘CARN’ for
‘Computer Aided Reconstruction’)38 (Hauck and Kuroczyński, 2014; Kuroczyński et al.,
2016). Some types are more specific than others and can be linked to a certain level
of detail: a division into ‘capital-shaft-base’ is more detailed than just ‘column’. This
‘detail’ of classification is project-specific and relates to the project’s ‘Semantic LoD’ as
defined at the beginning of a project.
3.7.4 Semantic LoD
Hauck and Kuroczyński (2014) propose the so-called ‘Semantic LoD’ as a Level of
Detail that links the precision of classification with the building parts of a model and
an equivalent architectural scale: just as a larger scale depicts more details, a higher
Semantic LoD relates to a finer classification. A classification level for a given Semantic
LoD is explicitly not given, only the recommendation to base the decision on
the project’s requirements. The combination with the ‘Level of Information’ (LoI, a
quantification of the reliability of the source) leads them to define another level, one
that copes with modelling assumptions: the ‘Level of Hypothesis’, defined as
the difference between the LoD and the Level of Information of the sources (Table 3.1).
Table 3.1: Relation between 'Level of Hypothesis', 'Level of Information'
and ‘Level of Detail', connected with a traditional architectural scale.
(Source: Hauck and Kuroczyński, 2014)
38 http://www.patrimonium.net/find/67/result/17db7047-5cec-e404-90c2-ce32c099f3bb (accessed 10/05/2018)
In a digitalization process of existing buildings, point clouds are the sources that are
most often used. The use of the schema and definitions in Table 3.1 is thus limited in
this context, also because the Level of Information does not take structural
assumptions, occlusions or other invisible parts into account.
3.8 Conclusion
This chapter introduced the RDF framework and related topics forming the basis of
Linked Data and its application in the Semantic Web. It has been shown that, within
RDF, all information can be described by use of triple patterns, provided with the right
definitions (resources). To make sure every definition (both classes and properties) is
unique, it is assigned a globally unique string, a URI. The frameworks that describe
the specific use of these resources are called ontologies or vocabularies. To distinguish
between ontology concepts and instances in graphs, the terms TBox (ontology) and
ABox (instances) were introduced. As a final concept, the basic
RDF query language SPARQL has been laid out briefly. In Chapter I, it was already
shown that many disciplines are engaging in the Semantic Web and
use Linked Data for their knowledge bases. Building on this, this chapter
concluded with a discussion of applications of Linked Data in the AEC industry and
in Cultural Heritage.
Chapter IV: Methodology
The basic concepts of point clouds and scan-to-BIM were explained in Chapter II.
Although scan-to-BIM has the potential to serve as a database for the management of existing
buildings, renovation projects and more, it keeps struggling with some problems
inherent to current-day BIM, which have already been outlined in the introduction and
the section about scan-to-BIM.
As introduced in Chapter III, the framework provided by RDF can be a unifying and
highly adaptable method for connecting disciplines regardless of their topic or internal
regulations. Taking the starting points of scan-to-BIM but implementing them in a
‘scan-to-Graph’ process is a novel and interesting way to tackle some of the previously
outlined issues. As with BIM, the basic topology of the building serves as the backbone
of the graph. Relevant attributes can be included, such as geographic location or
element classification, digital representations of the real-world building elements (and
their level of accuracy), links to the point cloud sources, and possible observations
about the state of the building (element) or modelling remarks/uncertainties. A scan-
to-graph approach could thus provide an answer to some inherent needs of as-is
modelling (ontologies as alternatives to slowly adapting existing BIM standards), while
at the same time being much more focussed on the topic of interest and less cumbersome than
an entire BIM.
In this chapter, a methodology to construct a graph that incorporates these data
will be outlined. This methodology will lead to a ‘template’ for
the graphs as generated by the plugin. The template provides a basis for further
modular extensions in the future, e.g. for adding historical or technical information or
materials. First, the modular ontologies that will serve as a basis for the graph will be
discussed: their purpose as well as the classes and properties that are going to be used
(section 4.1). Ontologies that relate to the AEC industry will form the basic ontologies,
complemented with the GIS ontology GeoSPARQL. This section will also include a
short discussion of the practical use of CHML and E-CRM. Once all is clear about
the existing ontologies, the next section discusses how geometry will be handled in the
graphs: the way of storing geometry as well as the serialization format that will be used
(section 4.2). Then, some specific classes and properties for the scan-to-graph process
will be defined in a ‘scan-to-graph’ (STG) vocabulary (section 4.3). These definitions
will include (1) source linking, (2) geometry handling and (3) keeping track of
metadata. Finally, the structure of the template (the graph as serialized by the plugin)
is discussed and illustrated (section 4.4).
4.1 Modular Ontologies
4.1.1 Building Topology Ontology
A whole set of ontologies already exists to formally describe building topology,
elements, properties etc. An example is ifcOWL (Pauwels and Terkaj, 2016), which
literally translates the EXPRESS-based IFC data schema into OWL. It is therefore
very suitable for converting IFC models to a linked data equivalent. However, because of
this dependency on IFC, it is also a very heavy ontology and less appropriate for dealing
with smaller-scale projects that do not really need such an elaborate schema. An ontology
that fits better into the spirit of scan-to-Graph is the Building Topology Ontology
(BOT) (Rasmussen et al., 2017; Rasmussen et al., 2018), which is meant as a simple,
modular and extensible ontology to define “relationships between the sub-components
of a building” (Rasmussen et al., 2018). Because of this minimalistic approach and its
usability for existing buildings, BOT is chosen as the main ontology for defining the
building’s spatial relationships in the graph.
In BOT, a building consists of (hierarchical) zones and building elements. Subclasses
of zone are ‘sites’, which can contain ‘buildings’, ‘storeys’ and ‘spaces’. Apart from
containing other zones, a zone can also be adjacent to another zone. Each zone can be
bounded by physical building elements and can also contain them. Building elements
can also host other building elements (e.g. a wall can host a window). Interfaces
between zones, elements or ‘zone and element’ are quantifiable. Figure 4.1 depicts the
spatial relationships between zones, Table 4.1 is a short list of the classes and properties
that will be used in the final graph (bot:interfaces are omitted in this thesis). The first
column shows the domain of the property in the middle column, the third column its
range. Because the names of the classes and properties make their meaning clear,
descriptions are not included in Table 4.1 (they are fully described in the BOT GitHub
repository).39
Fig. 4.1: Topological relationships in BOT (source: Rasmussen et al., 2017)
39 https://github.com/w3c-lbd-cg/bot (accessed 28/04/2018)
CLASSES (domain)   PROPERTIES            CLASSES (range)
bot:Zone           bot:containsZone      bot:Zone
                   bot:adjacentZone      bot:Zone
bot:Site¹          bot:hasBuilding       bot:Building
bot:Building¹      bot:hasStorey         bot:Storey
bot:Storey¹        bot:hasSpace          bot:Space
bot:Space¹         bot:containsElement   bot:Element
                   bot:adjacentElement   bot:Element
bot:Element        bot:hostsElement      bot:Element

¹ bot:Site, bot:Building, bot:Storey and bot:Space are all subclasses of bot:Zone

Table 4.1: classes and properties of BOT that will be used throughout the thesis
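Using these terms, a minimal topology graph might look as follows (the inst: instances and their namespace are hypothetical, and the bot: namespace URI is assumed to be the w3id one):

```turtle
@prefix bot:  <https://w3id.org/bot#> .
@prefix inst: <http://example.org/inst#> .

inst:site a bot:Site ;
    bot:hasBuilding inst:building .
inst:building a bot:Building ;
    bot:hasStorey inst:storey1 .
inst:storey1 a bot:Storey ;
    bot:hasSpace inst:room1 .
inst:room1 a bot:Space ;
    bot:adjacentElement inst:wall1 .
inst:wall1 a bot:Element ;
    bot:hostsElement inst:window1 .
```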
4.1.2 CHML and E-CRM
CHML and E-CRM form an interesting reference point when coping with 3D modelling
of heritage in a Linked Data context. However, some important remarks on their
usefulness for this thesis should be made. The first remark concerns the fact that the
core topic of the thesis is ‘as-is’ modelling of existing buildings, in an AEC-related
perspective. In contrast, E-CRM takes the historical point of view and CHML’s core
business is documenting the modelling process of buildings that do not exist (anymore).
This has important consequences, beginning with the interpretation of the sources.
When modelling a non-existent building, one has to rely on textual sources and images.
In contrast, existing buildings allow the creation of 3D point clouds, which are
indicated both in Table 3.1 (section 3.7.4) and in Chapter II as the most reliable sources for
reconstruction. As a consequence, since complete point clouds are available for this
thesis, the CHML core activity of interpreting sources about the nature of the
heritage object becomes redundant. The total ‘Level of Information’ as defined in
section 3.7.4 will be 9 (all sources are point clouds); hence, according to the definition,
the ‘Level of Hypothesis’ will be 0. Yet a ‘Level of Hypothesis’ as defined in (Hauck and
Kuroczyński, 2014) takes into account neither the internal structure nor the fact that
there are nearly always occlusions in a point cloud. The usefulness of the ‘Semantic
LoD’ scheme for this research is thus limited. CHML provides a source-labelling method
(chml:H106_Digital_Source_Type), but the labels are explicitly specified according to
http://cv.iptc.org/newscodes/digitalsourcetype, which includes only images and not
point clouds. A custom property for referring to point cloud sources will be defined
later in this chapter.
The way CHML 3D Objects never relate directly to the object itself, but
always via the reconstruction activity, might be a valuable concept in a CH context, but
seems somewhat strange from a BIM perspective: in architectural practice, it is not
the modelling activity that is central, but the direct link between a virtual object and its
real-world counterpart. The virtual object is a very useful tool for managing, planning
and designing, but nevertheless still a ‘tool’. In this context, the ‘semantic object concept’
stays applicable to BIM, but the hierarchy between the real-world object and its virtual
representation is much more outspoken.
CHML and especially E-CRM provide an extensive vocabulary for modelling historical
objects, their purposes and events in which they were created. As enrichment with
historical information is outside the scope of the thesis, E-CRM and CHML definitions
are not included in the plugin’s UI. However, since such information can be very
valuable, a method to implement it without using the plugin will nevertheless be
illustrated later in the thesis (Chapter V), though it only scratches the surface of E-CRM
and CHML. A future research project could build on this thesis by providing a
compatible UI for historical data enrichment, eventually making it possible to present a
project both as a BIM and as a virtual museum.
4.1.3 Classification
CHML_TYPE is an interesting classification system: each type refers
to multiple online descriptions (such as the Getty Art & Architecture
Thesaurus (AAT)40, DBpedia41, etc.), and it handles a system that defines part-whole
relationships and ‘broader terms’. An AEC counterpart of such an extensive product
vocabulary would be the bsDD, which has been discussed previously (section 3.6.2), or
the Building Product Ontology42.
However, the scope of CHML_TYPE is very broad and is not limited to physical
building elements: it also encompasses events and activities, places, materials and
persons. Furthermore, a structured overview of defined CHML_TYPEs is currently
lacking, which makes it difficult to use. On the other hand, the bsDD is (like BIM in
its entirety) focussed on modern-day building practice and does not contain old-
fashioned building elements such as, for example, the acanthus leaves of a column.
Furthermore, as we want to keep a modular ontology approach for describing data
about buildings, as formulated in (Rasmussen et al., 2018), and this project is mainly
a proof of concept, both CHML_TYPE and the bsDD are considered too heavy. For
these reasons, the modular Building Product Ontology43 was chosen.
4.1.3.1 The Building Product Ontology
The Building Product Ontology has been mentioned briefly in the context of Linked Data
building databases (section 3.6.2). It is compatible with other modular W3C Linked
40 http://www.getty.edu/research/tools/vocabularies/aat/ (accessed 25/05/2018)
41 DBpedia is the Linked Data equivalent of Wikipedia
42 https://github.com/pipauwel/product (accessed 25/05/2018)
43 Note that in a real project, RDF allows one to implement both CHML_TYPE and the bsDD: multiple classification systems can be used to make statements about an element.
Building Data ontologies (LBD)44, such as BOT. It extends BOT in that it defines classes
(and subclasses) for different building elements, as well as for furniture and MEP. The
overall class of the Product Ontology is product:Product; specifically for building
elements there exists the subclass product:BuildingElement. Subclasses include
product:Beam, product:Column, product:Window etc. These are in turn the superclasses
of product:Beam-BEAM, product:Beam-Hollowcore etc. Adding information with the
Product Ontology gives the bot:Elements in the graph a richer semantic meaning,
distinguishing them as different types of elements. The Product Ontology defines only
one single property, product:aggregates (domain and range both
product:Product). Therefore, the specific class of an instance should be stated explicitly,
since only the overall class product:Product can be inferred. Since one can always add
further detail without any restriction, it is obvious that not all building elements are
specified in the Product Ontology. For example, there exists a class product:Column,
with subclasses like product:Column-COLUMN and product:Column-PILASTER, but
there are no classes for a column’s capital, shaft or base. Consequently, there are, for
example, no classes for the ‘acanthus leaves’ in a capital either.
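As a sketch of how these classes could be combined with product:aggregates (instance names are illustrative; prefix declarations are omitted; stgp is the custom vocabulary introduced below):

```turtle
inst:Column1  a bot:Element , product:Column ;
    product:aggregates inst:Base1 , inst:Shaft1 , inst:Capital1 .

# No Product Ontology class exists for a capital, so a custom
# subclass of product:BuildingElement (stgp:Capital) is used:
inst:Capital1 a stgp:Capital .
```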
If there is no Product Ontology class for an element of the Case Study, a subclass of
product:BuildingElement will be created in a custom vocabulary ‘stgp’ (scan-to-graph
products).45 Following the example of CHML_TYPE, a definition from the Getty AAT
(Art and Architecture Thesaurus) will be provided (rdfs:seeAlso). The Getty AAT
provides RDF definitions of different architectural elements and is interesting
because it is not limited to modern building practice. As an illustration, an object
definition (Turtle) that relates to the Product Ontology and provides extra
information via the Getty AAT could be:
stgp:Capital a owl:Class ;
    rdfs:subClassOf product:BuildingElement ;
    rdfs:label "Capital"@en ,
               "Kapiteel"@nl ;
    rdfs:seeAlso <http://vocab.getty.edu/aat/300001662> .
4.1.4 GeoSPARQL46
GeoSPARQL is an initiative of the Open Geospatial Consortium (OGC) and provides
a vocabulary for representing geospatial data in RDF and an extension to SPARQL to
query these data. Since GIS already makes quite extensive use of the Semantic Web,
GeoSPARQL is a well-maintained ontology that can be used to connect building
information with geographical information. Of particular interest to this thesis is
44 https://www.w3.org/community/lbd/ (accessed on 08/05/2018)
45 A .ttl document can be found at https://github.com/JWerbrouck/scan-to-graph/blob/master/stgp.ttl (also added as Appendix D)
46 http://www.opengeospatial.org/standards/geosparql (accessed 05/05/2018)
‘geo:hasGeometry’, a property that links an instance with its geometrical
representation. Also interesting is the fact that GeoSPARQL makes it possible to
integrate georeferenced data into the graph, using any geographic reference system.
4.1.4.1 geo:hasGeometry
As an object can have different representations mapped to it, geo:hasGeometry
will be used as the main connection between an instance and its geometric
representation (its domain is geo:Feature, its range geo:Geometry). It does
not, however, provide any information about the format of the geometry representation;
GeoSPARQL itself serializes its geometry into WKT Literals (‘Well Known
Text’)47. The property that links the WKT serialization with the geo:Geometry instance is
defined as ‘geo:asWKT’. This property will serve as a model to define the custom
property ‘stg:asSTEP’ that links the geo:Geometry instance to its representation in
STEP format (sections 4.2-4.3).
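The intended pattern could be sketched as follows (instance names are illustrative; the STEP content is abbreviated; a STEP Part 21 file starts with ISO-10303-21; and ends with END-ISO-10303-21;):

```turtle
inst:Capital1     geo:hasGeometry inst:CapitalGeom1 .
inst:CapitalGeom1 a geo:Geometry ;
    stg:asSTEP "ISO-10303-21; ... END-ISO-10303-21;" .  # STEP file stored as a text Literal
```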
4.1.4.2 Georeferencing
Since every existing building has a location in the world, to be complete, its virtual
model should contain a reference to this location. Multiple Geographic Coordinate
Systems (GCS) exist, both globally used ones, such as UTM (Universal Transverse
Mercator) or WGS84 (World Geodetic System 1984), and local ones (e.g. in Belgium there
is Belgian Lambert 1972). All these systems differ in origin and in their way of referring
to a geolocation. Conversion systems exist, but are seldom exact, e.g. because different
GCSs use different transformation systems. A graph containing a georeferenced
building should thus contain the geocoordinates in its initial system, but may
additionally contain georeferences using other GCSs as well. GeoSPARQL provides a
way to georeference geometry in a system other than its default (WGS84), which is
interesting because the point clouds for this thesis (and thus the whole resulting digital
reconstruction) use Belgian Lambert 1972. In GIS, often an individual object (point, polygon,
spatial object …) is georeferenced rather than an entire project (e.g. locations of
electricity towers are represented by a simple georeferenced point, since a more detailed
geometry is not necessary to represent them on a (digital) map). In this thesis, the building
project as a whole is referenced with one project origin that could be mapped to the
project Site instead of to each constituent object. As indicated in section 4.1.4.1,
GeoSPARQL uses WKT to link geo:Geometries to their actual representing data. The
path from the bot:Site to the WKT representing the origin in a defined GCS would
look like this:
Subject        Predicate         Object
inst:Site1     geo:hasGeometry   inst:Origin
inst:Origin    geo:asWKT         <WKTLiteral>
47 http://giswiki.org/wiki/Well_Known_Text (accessed 26/05/2018)
41
GeoSPARQL currently does not define geometry descriptors (point, line, polygon …) as
subclasses of geo:Geometry.48 To specify that inst:Origin is a
project origin and not a general geo:Geometry, it was chosen to define a subproperty of
geo:hasGeometry in the context of this thesis (stg:hasOrigin) (see section 4.3). The
relationship then becomes:
Subject        Predicate       Object
inst:Site1     stg:hasOrigin   inst:Origin
inst:Origin    geo:asWKT       <WKTLiteral>
For referencing the project using a system other than WGS84, the written-out
WKTLiteral consists of a URI that refers to the GCS, the geometry type (Point) and
its coordinates between brackets49. The datatype of the Literal is, as usual, stated after
the content of the Literal itself. All GCSs that are currently in use are specified at
http://www.opengis.net/def/crs/EPSG/0. In the case of the Gravensteen,
georeferenced in Lambert72, the WKTLiteral would look like this (104500 being the
x-coordinate (easting), 194300 the y-coordinate (northing)):
"<http://www.opengis.net/def/crs/EPSG/0/31370>
Point(104500 194300)"^^geo:wktLiteral
Using the same method, coordinates in other GCSs can be mapped to the same
origin.
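To illustrate, a second WKT literal in the default CRS could be attached to the same origin. The WGS84 coordinates below are approximate and purely illustrative; when no CRS URI is given, GeoSPARQL assumes the default CRS84 (WGS84 with longitude-latitude axis order):

```turtle
inst:Origin geo:asWKT
    "<http://www.opengis.net/def/crs/EPSG/0/31370> Point(104500 194300)"^^geo:wktLiteral ,
    "Point(3.72 51.06)"^^geo:wktLiteral .  # approximate WGS84 equivalent
```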
48 https://www.w3.org/2015/spatial/wiki/Further_development_of_GeoSPARQL (accessed 28/04/2018, updated 26/09/2016)
49 As defined in (Perry and Herring, 2012)
4.2 Geometry
4.2.1 Geometry implementation in the graph
The previous sections discussed some ontologies that provide definitions with which to
construct an RDF graph of an as-is building. As outlined in (Pauwels et al., 2017a), one
of the challenges of Linked Data is the inclusion of geometric data. To reduce the size
and complexity of an RDF graph, two options are outlined:
- remove the geometry from the RDF representation
- improve the RDF representation of geometry
In (Pauwels et al., 2017a), the first option is dismissed because the context
of the paper is the ifcOWL ontology, which is aimed at being a reference standard for
synchronization with EXPRESS IFC (which does include the geometry). This thesis
does not aim to provide a reference standard, but removing the geometry is not
recommended either: a key reason for performing scan-to-BIM (scan-to-graph) is to
geometrically represent the as-is circumstances, precisely because they show the real-world
imperfections that are included neither in the building’s topology nor in an as-planned BIM.
Leaving all geometry out would thus undermine the concept of making an as-is model
and seriously impoverish the graph itself. On the other hand, as such a
reconstruction project can include a lot of different geometrical representations (due to
its stack of sources: plans, point clouds, 3D models), including them all directly in
the graph would extremely slow down querying and loading the graph and bring the
file size to multiple gigabytes, also because the file size of some of this geometry cannot
really be reduced further. For example, the point clouds that are used as a modelling
blueprint are in fact already stripped of all information that is not strictly necessary,
encompassing only coordinates, intensity and RGB values. Their file size is inherent to
the format.
A middle ground between the two above problems could be to include URLs that link
to a web location containing the geometrical data as a document; this way, a connection
to the internet provides access to the data without overloading the graph (note the
difference between URI and URL, see section 3.2.2). This is the way very large files,
such as point clouds, are handled in this thesis: large documents that are available on
the web can be downloaded to a project folder C:/STG/Project, which forms a ‘local’
URL that is used in the graph. The graph references both the local and the global URL. It
seems logical to also include some more compact geometric representation in the graph
itself: this way, one of the most important features of scan-to-BIM/graph is kept in the
graph and quick to retrieve, also making access to the geometry independent of access
to the internet (which is often not as self-evident as it seems). Such a compact
representation could be BRep geometry (Boundary Representation), which represents
the modelled geometry by defining its boundaries, and is the way most CAD
applications handle their native geometries.
4.2.2 Geometry formats in an RDF graph
The previous section argued why BRep geometry will be included directly in the graph
and point clouds will not. However, BRep is just a way of representing geometry and
does not prescribe a single file format. Pauwels et al. (2017a) discuss several methods to
include geometrical data in an RDF graph. To preserve compatibility with
EXPRESS IFC and to maintain a maximum of semantic richness, the methods that
are discussed all rely entirely on RDF: single, simple geometric elements are separate
resources that are linked to Literals with their values and to each other by specific
properties. A point such as the project origin that was defined earlier using GeoSPARQL
in a WKT serialization is a very simple example of such a basic geometry type. Linked
together, these resources then form more complex geometries.
However, since the use of WKT for the AEC industry was only recently proposed in
(Pauwels et al., 2017a), using WKT as a way to store all geometric
information of a building project is not (yet?) optimized to cope with the complex
geometries in a detailed building model, let alone the out-of-plumb conditions inherent
to as-is modelling. Although a total decomposition of a project’s geometry is
semantically the most stable strategy, it is also very intensive and cumbersome and
would need a conversion application to be compatible with most CAD packages.
Furthermore, as indicated, there is absolutely no aim to be a full IFC equivalent (which
is the purpose of ifcOWL), which allows more freedom in coping with geometry. Taking
this into account and guided by the arguments listed below, a more pragmatic
approach is used for storing the modelled geometry in the graph:
- The definition of classes for geometry types is outside the scope of this thesis; therefore,
Literals are used for storing geometric information. This also provides more flexibility
in terms of describing data and using different formats;
- To keep a reasonable level of semantic richness, the breakdown of the geometric structure
is limited to element level (mostly a surface or a volume (‘closed polysurface’)) rather
than to absolute geometric basic types such as points and lines;
- This allows complex forms with multiple double-curved surfaces to be described without
each of these elements becoming a graph on its own;
- Although this approach limits the ‘total’ semantic connection that would allow complex
reasoning processes, it suffices for a compact representation of an existing building for
heritage and FM. In the end, all geometric information will still be present, albeit in a
document way instead of a Linked Data way.
It is essential that the geometry format used is an open standard, so that it can be
imported into almost any CAD software. Ideally, the geometric information would also
be stored without information loss. Table 4.2 compares different open
export formats (.obj, .stl, .ply, .stp) according to file size and precision. Both BRep NURBS
(Non-Uniform Rational B-Spline) and Mesh geometry are included in the
table for comparison. Collada DAE, also an open format, is not included in
the comparison because Rhino does not allow importing it (exporting is fine). As a
reference, the point cloud that is used for the column has a file size of 38 146 kB.
OBJ (Mesh export)
- File size: 2853 kB (max. polygons); 1258 kB (min. polygons)
- Precision: the maximum number of polygons means maximum precision. This also means that the precision depends on the size of the object: smaller objects thus have higher precision. Less linear places have a higher polygon density.

OBJ (NURBS export)
- File size: 294 kB
- Precision: no polysurfaces; each surface is imported separately. Edges do not match (see Fig. 4.2).

STL and PLY (Mesh)
- File size: 10855 kB (ASCII STL); 1582 kB (binary STL); 3260 kB (ASCII PLY); 1429 kB (binary PLY)
- Precision: same remark as for the .obj file. PLY meshes have reduced visibility in Rhino.

STP (NURBS)
- File size: 515 kB
- Precision: volumetric elements are kept as a whole, with seamless connections between the curves.

Table 4.2: Comparing different export geometries by size and precision (Rhino 6)

NURBS exports seem to take significantly less storage space than Mesh exports, which
is obvious given their different approach (storing vectors versus storing sampled
vertices). Although the file size of the OBJ-NURBS export is only half that of the STEP
file, its quality is visibly inferior. Fig. 4.2 depicts a close-up of a column’s base: the
edges of the different surfaces clearly do not match. Therefore, the approach in this
thesis will be to export geometries to the STEP (Standard for the Exchange of Product
Model Data, ISO 10303)50 format, and then store each resulting file (which comprises a
single (poly)surface) as a Literal (datatype stg:asSTEP). This Literal will be linked to
the corresponding geo:Geometry instance that is part of a (sub)object. STEP is a format
that was originally developed for the manufacturing industry, but currently it serves a
broad application area (e.g. EXPRESS-based IFC documents follow the STEP syntax).
An interesting remark is that the information in a STEP file is also ‘linked’: each entity
is assigned a number and relates itself to other entities by referring to their numbers at
specific places. A disadvantage of formulating the geometry in text Literals instead of
in triples is indeed that these data cannot ‘reach out’ to other geometry data; two
objects may for example connect to each other at a certain boundary curve, but in the
serialization this information may get lost. An example of a surface definition is given
below, to give an indication of the structure of a STEP file. Each comma-separated
value has a certain meaning51; for instance, the tuples with the #-values refer to control
points of the surface that are defined further on in the file, grouped per curve:

#882=B_SPLINE_SURFACE_WITH_KNOTS('',3,3,((#2051,#2052,#2053,#2054),
  (#2055,#2056,#2057,#2058),(#2059,#2060,#2061,#2062),
  (#2063,#2064,#2065,#2066)),.UNSPECIFIED.,.F.,.F.,.F.,(4,4),(4,4),
  (0.2684411,0.278963),(1.968001,2.094889),.UNSPECIFIED.);
[…]
#2051=CARTESIAN_POINT('',(46.22439,18.77258,18.49334));
#2052=CARTESIAN_POINT('',(46.22472,18.740040,18.4926));
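To illustrate this internally ‘linked’ structure, a short script (illustrative only, not part of the thesis plugin) could resolve such entity references by first collecting the CARTESIAN_POINT entities of a STEP fragment:

```python
import re

# Illustrative helper (not part of the thesis plugin): collect the
# CARTESIAN_POINT entities of a STEP fragment, mapping each entity
# number to its (x, y, z) coordinates. Other entities, such as
# B_SPLINE_SURFACE_WITH_KNOTS, refer to points by these numbers.
def parse_cartesian_points(step_text):
    points = {}
    pattern = re.compile(r"#(\d+)=CARTESIAN_POINT\('',\(([^)]+)\)\);")
    for match in pattern.finditer(step_text):
        entity_id = int(match.group(1))
        coords = tuple(float(v) for v in match.group(2).split(","))
        points[entity_id] = coords
    return points

sample = (
    "#2051=CARTESIAN_POINT('',(46.22439,18.77258,18.49334));\n"
    "#2052=CARTESIAN_POINT('',(46.22472,18.740040,18.4926));\n"
)
pts = parse_cartesian_points(sample)
print(pts[2051])  # (46.22439, 18.77258, 18.49334)
```

A full resolver would apply the same principle to the #-references inside the control-point tuples of the surface entities.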
Note that the files added to a graph are not limited to the topics discussed in this
thesis: other geometry types and other data, such as CAD plans, could also be included
using a variant of the ‘stg:asSTEP’ or ‘geo:asWKT’ properties used here. The same holds
for single pictures, although adding and relating multiple pictures will also quickly
increase the file size. In the end, it is of course the modeller who decides what to include
directly and what to refer to.
4.2.3 Rhino IDs
A last note on geometry is that it was decided to also include, for each geometry, a
Literal that contains a unique Rhino ID. This is open to discussion, since a Rhino ID is
not persistent and very file-specific. Nevertheless, this feature is included because it
means a serious computational gain for visualizing SPARQL query results in Rhino,
using the original Rhino geometry file that is accessible as an external reference.
Otherwise, the query would have to go through each STEP file to find the corresponding
Rhino ID (which is the ‘name’ of the object in STEP); now it is immediately given by
the property ‘stg:hasRhinoID’. Note that, since a Rhino ID is unique, it is in theory
possible to link multiple Rhino files in one graph.
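A query along these lines might look as follows (the stg: namespace URI is a placeholder; the property names follow the vocabulary outlined in section 4.3):

```sparql
PREFIX stg: <https://example.org/stg#>   # placeholder namespace

# Retrieve the Rhino IDs of all geometries that carry a modelling remark,
# so the corresponding objects can be highlighted in the Rhino file directly.
SELECT ?geometry ?rhinoID
WHERE {
    ?geometry stg:hasModellingRemark ?remark ;
              stg:hasRhinoID         ?rhinoID .
}
```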
50 https://www.steptools.com/stds/step/ (accessed 26/05/2018)
51 In the example, the meaning of each place in the tuple is given at:
https://www.steptools.com/stds/stp_aim/html/t_b_spline_surface_with_knots.html (accessed 26/05/2018)
Fig. 4.2: Edge matching in .obj-NURBS (Rhino 6)
4.3 Scan-to-graph classes and properties
The previously discussed ontologies (section 4.1) provide a broad range of classes and
properties that can be used throughout this thesis. Some fit in very well; others are
interesting but do not exactly represent the semantics we are looking for. As indicated
previously, RDF allows one either to refine some of these concepts or to define some
completely new ones. Therefore, a small ontology has been constructed to keep track
of sources and metadata and to link to geometry (section 4.2). For quick reference, the
main purpose of its terms can be found in Table 4.2.52 The prefix that will be used for
the main vocabulary is ‘stg’, an acronym for ‘scan-to-graph’.
Class (Property): Comment

RepresentingFile (hasRepresentingFile): Reference to a document URL that contains a geometric representation of the whole project or a part of it;
SourceFile: Subclass of RepresentingFile, specifically for representing sources rather than reconstruction deliverables;
PointCloudFile (hasPointCloudFile): Subclass of SourceFile; refers to a point cloud that serves as a source;
RhinoFile (hasRhinoFile): Subclass of RepresentingFile; since Rhino 6 is used for the modelling in this thesis, this URL refers to the original deliverable;
(hasLocalVersion): A Literal string referring to the local URL on the computer where a larger document is located. Linked to a RepresentingFile;
RhinoID (hasRhinoID): A Rhino ID (Literal string) can be used for quick visualisation and is also used as a tool for storing the geometric information correctly in the graph;
STEPRepresentation (asSTEP): A Literal string containing the geometric information of (a part of) an object, serialized in .stp format;
ProjectOrigin (hasOrigin): Subclass of geo:Geometry that refers specifically to a project’s origin. Linked to a bot:Site and to the project itself;
ModellingRemark (hasModellingRemark): A property that refers to a remark about modelling assumptions or metadata (domain: geo:Geometry, range: rdfs:Literal);
(denotesRemark): Links the inst:ModellingRemark to the Literal string that contains the textual remark;
OccludedGeometry (hasOcclusion): A subclass of ModellingRemark, indicating that a geometry is (partly) occluded in the source file;
LevelOfAccuracy (hasLOA): A class that relates to remarks concerning the (represented) LOA (section 2.4.2.2) (see: hasLOAvalue and usedDeviationAnalysis);
(hasLOAvalue): A Literal string containing the represented LOA (LOA10, LOA20, etc.). Linked to a LevelOfAccuracy instance;
(usedDeviationAnalysis): A Literal string containing the method that was used for deviation analysis (MICROSCALE or MACROSCALE). Linked to a LevelOfAccuracy instance;
InternalGeometry (via rdf:type): Denotes whether a geometric 3D object is invisible because it is located inside another object (e.g. part of a console is considered InternalGeometry because it is located in a wall);
(usedEquipment): Links to the equipment that was used to create the source.

Table 4.2: Outline of the classes and properties that are defined as part of the dissertation
52 The main classes and properties that have been constructed for this thesis can be consulted at https://github.com/JWerbrouck/Thesis/blob/master/stg.ttl (also added as Appendix C in Turtle format).
The first category of classes that are defined relates to the geometries that represent the
building. These are not expected to be embedded in the graph because of their typically
large file size (see section 4.3.1). Therefore, they are referenced by a URL (a URI that
also provides access to the resource, section 3.2.2) that serves to store the source file
online. As indicated in section 4.2.1, this URL can be connected to a local location on
disk (C:/STG/Project), which is done by use of a Literal (string).
The second category of classes is related to the geometry challenges in section 4.2.1
and to the definition of the project origin (see section 4.1.4.2). All three are related to
specific datatypes.
Lastly, the STG ontology defines properties to keep track of difficulties,
uncertainties and other metadata. stg:OccludedGeometry and stg:LevelOfAccuracy are
both examples of subclasses of stg:ModellingRemark; many other subclasses could be
defined for different types of metadata. The equipment that
was used for the survey during which the source was created is also considered
metadata. Currently, this is linked to a Literal which denotes the equipment. In the
future, this could be changed to a class provided by the manufacturer, containing a lot
more details about the TLS or DSLR camera that was used for the point cloud. Finally,
an extra property, which is not considered metadata, was constructed to denote whether
a geometric 3D element is invisibly part of another element, because this occurred quite
frequently during the modelling of the case study (Chapter VI).
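As a sketch of how such metadata could be attached (instance names are illustrative), an occluded geometry that also carries an accuracy remark might be annotated as:

```turtle
inst:WallGeom1 stg:hasModellingRemark inst:Remark1 ;
               stg:hasLOA             inst:LOA1 .

inst:Remark1 a stg:OccludedGeometry ;
    stg:denotesRemark "Lower part occluded in the point cloud." .

inst:LOA1 a stg:LevelOfAccuracy ;
    stg:hasLOAvalue           "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
```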
4.4 Graph Scheme
Now that all the ontologies and the geometry handling have been discussed, the coming
section takes a more graphical approach to lay out the structure of the graph as
generated by the plugin. This will be done for a generic building, using the ontologies
outlined in sections 4.1 and 4.3. Classes will not be included in the graph, unless they
cannot be inferred from the used properties. This is the case when using
product:aggregates (from which one can only infer that the subject and object are
instances of the generic class product:Product). Product classes will be depicted in the
template graphs with a red class (TBox) node labelled “product:…*”. Such nodes occur
several times in the graph because each “product:…*” may refer to a different product
class. Full-page versions of Figs. 4.3-4.5 are also added in Appendix A.
4.4.1 Topology Level
Fig. 4.3 depicts a graph that shows only the abstract building components, structured
by the BOT classes, with the elements classified using the Product Ontology (whether
with custom-defined classes or not). The project origin as defined in section 4.1.4.2 is linked
to the bot:Site. The URL of the Rhino document is linked to the bot:Building instance.
In theory, Rhino documents can be linked to any place in the hierarchy, so their use is
not restricted to one. This does not cause a problem: because the IDs are unique by
definition, the chances of having an ambiguous object-ID relationship across different
Rhino documents are virtually non-existent.
Fig. 4.3: Topology of a graph constructed with the plugin (software: yED v3.18.0.2) (Appendix A)
4.4.2 Building Element Level
The structure at Building Element level is depicted in Fig. 4.4 and
Fig. 4.5. Fig. 4.4 is a general illustration of how metadata is linked to a geometrical
object. An stg:ModellingRemark is linked to a geo:Geometry instance, and the same
holds for its subclasses stg:OccludedGeometry and stg:LevelOfAccuracy. In the case of
stg:InternalGeometry, the principle is different. Since this is not considered metadata
(section 4.3) but rather a kind of ‘state’ of the object, the geometry is identified as an
stg:InternalGeometry by rdf:type (alongside the geo:Geometry class, which can be
inferred from geo:hasGeometry but is explicitly stated here for the example).
Fig. 4.5 zooms out to the level of the object that contains the geometry. It depicts the
graph of a column, to clarify the system of ‘sub-elements of sub-elements’ (which can,
in theory, continue infinitely). How far this classification detailing goes is up to the
commissioner and the modeller (as was the case with the ‘Semantic LoD’ in CHML).
This graph of a column can be generalized as a model for how other elements are
included in a graph generated by the plugin.
Every sub-object will be assigned a class of the Product Ontology or a custom extension
from the stgp vocabulary. They will also be identified generically as bot:Elements.
geo:hasGeometry is used to connect the instances with their geometrical
representation. It serves as a hub to connect different representations to a single
(sub)object. Note that a sub-object can have different geometries, and every geometry
represents a surface or an (open or closed) polysurface. It is left to the modeller to decide
how far this breakdown structure goes.53 Closed polysurfaces also contain volumetric
information, so it is recommended to use them whenever possible.54 However, it can be
necessary to extract a single surface when a modelling remark applies only to that single
geometry.
Fig. 4.4: Linking metadata to a geometric instance (Appendix A)
53 This relates to the ‘sub-object of sub-object’ detailing as defined above: the number of geometries will always be greater than (or equal to) the number of sub-objects, unless sub-objects are defined that have no geometric representation and are purely conceptual instances in the graph.
54 Volumetric information is, at the time of writing, not stored as an individual Linked Data property.
Fig. 4.5: Linking building elements with geometry and modelling remarks (yED v3.18.0.2) (Appendix A)
4.5 Conclusion
This chapter illustrated the methodology that is proposed for use in a scan-to-graph
process. Small, modular ontologies (BOT, PRODUCT, GeoSPARQL) were chosen as
the backbone of the process, complemented with a custom-defined vocabulary (STG)
that contains classes and properties specific to the purpose of a scan-to-graph process.
It has been shown that product classification systems can be extended for particular
projects. A way of implementing geometry in the graph, or of providing (local) URL
references to heavier documents, has been discussed. Finally, a structure that combines
these topics in an RDF graph has been laid out.
Chapter V: Rhino Plugin
During the course of the thesis, a plugin for Rhino 6 (McNeel) was developed to support
the creation of a graph according to the schema described in Chapter IV.55 The main
purpose of the plugin is to aid the semantic enrichment of 3D geometries and the export
of this information to an RDF graph, a feature not supported by current BIM
packages. Furthermore, the plugin simplifies the import of point clouds (and their
relocation to the project origin), provides subsampling support (via CloudCompare) and
uses them as a basis for both the 3D creation and the ‘semantic objects’ in the graph. It
also provides an interface to visualize the geometrical results of a SPARQL query.
Although the plugin supports the creation of the graph in Rhino, the resulting graph
encompasses only non-proprietary formats. This chapter describes the main
functionality of the plugin; a demonstration is given at the end of Chapter V.
5.1 Dependencies
The plugin was made using IronPython 2.7 (built into Rhino), Python 2.7 and Eto, a
cross-platform framework implemented in Rhino for making user interfaces in .NET
(and thus also usable from IronPython). Only rdflib (v4.2.2)56, a Python package for
working with RDF, has to be installed manually as a Python 2.7 package. Because rdflib is
not fully compatible with IronPython, the user must also have a regular Python 2.7
installation.57 Furthermore, for subsampling point clouds, the open-source application
CloudCompare v2.8.1 Stereo58 is used ‘under the hood’; when using the subsampling
function before importing, the user should first install CloudCompare.59 Lastly,
SPARQL queries are carried out using Stardog60, a triplestore or database for Linked
Data. Stardog has a free Community Edition for non-commercial use. The user should
make sure that (1) Stardog is installed, (2) the folder ‘stardog-5.x.x\bin’ is added to
the PATH (set as an environment variable in Windows), (3) the graph that corresponds
to the opened model is stored as a Stardog database (by default, it is hosted at
localhost:5820) and (4) the Stardog server is running in the background at the moment
of querying. To be able to import E57 point clouds in Rhino, the extension E57 FILE
IMPORT61 (by Dale Fugier) should be installed.
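As a quick sanity check before using the plugin, the presence of these external dependencies can be probed from plain Python. This is an illustrative sketch, not part of the plugin itself; the paths below are the defaults hard-coded in the plugin source (footnotes 57 and 59) and may differ on other machines.

```python
import os

# Default install locations as hard-coded in the plugin source (footnotes 57
# and 59); adjust these if your installations differ.
PYTHON27 = "C:/Python27/python.exe"
CLOUDCOMPARE = "C:/Program Files/CloudCompareStereo/CloudCompare.exe"

def on_path(executable):
    """Check whether an executable (e.g. 'stardog-admin') is findable via PATH."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        for ext in ("", ".exe", ".bat"):
            if os.path.isfile(os.path.join(d, executable + ext)):
                return True
    return False

def check_dependencies():
    """Map each external dependency to its availability on this machine."""
    return {
        "python27": os.path.isfile(PYTHON27),
        "cloudcompare": os.path.isfile(CLOUDCOMPARE),
        "stardog": on_path("stardog-admin"),  # 'stardog-5.x.x\\bin' on PATH
    }
```

Running `check_dependencies()` before starting a session quickly reveals which of the four prerequisites above is missing.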
Fig. 5.1: General information settings at the top of the plugin
55 The commented files from the source code are provided at https://github.com/JWerbrouck/scan-to-graph
56 https://github.com/RDFLib/rdflib (accessed 28/05/2018)
57 default location set in the source code: “C:/Python27/python.exe”
58 http://www.danielgm.net/cc (accessed 4/4/2018)
59 default location set in the source code: “C:/Program Files/CloudCompareStereo/CloudCompare.exe"
60 https://www.stardog.com (accessed 2/05/2018)
61 http://www.food4rhino.com/app/e57-file-import (accessed 12/10/2018)
5.2 General information
The plugin allows the user to define a building site, a building, storeys and spaces, which
contain objects or are adjacent to them. An object is primarily characterized by a layer that
contains its representations, such as the point clouds or the reconstructed 3D geometry.
It is possible to further specify sub-objects, with either a main object (which has a layer) or
another sub-object as a parent. URIs are constructed primarily from the name of the
instance in Rhino. In the current version of the plugin, objects and storeys must have
unique names to avoid errors in the graph. The URI of a space is based on both its own
name and the name of the storey it belongs to, so different storeys can each have a space
with the same name. Sub-objects only need a unique name within the ‘range’
of their parent object and may share their name with a sub-object of another parent
object; e.g. two columns can each have a sub-object ‘Capital’.
At the top of the plugin, the name of the graph is defined (Fig. 5.1). A local
version of the graph is stored in a project folder (C:/STG/ProjectName) and serialized
in Turtle. Previously created graphs can be loaded with the ‘…’ button; loaded graphs
will be serialized again, possibly overwriting the original file. The textbox below it takes
an overall URI, which serves as the basis for giving all instances their own URI (it is
used as the prefix ‘inst:’ in the Turtle files). In the case study, this URI is
‘https://github.com/JWerbrouck/scan-to-graph/casestudy/’, but in fact any URI would
be valid, as long as it is unambiguous.
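The naming rules above amount to a simple URI scheme. The following is a hypothetical sketch of that scheme; the function names are illustrative and not taken from the plugin source.

```python
# Overall URI stated in the plugin's textbox (the case study value)
BASE = "https://github.com/JWerbrouck/scan-to-graph/casestudy/"

def object_uri(name):
    # Objects and storeys must have a project-wide unique name.
    return BASE + name

def space_uri(storey_name, space_name):
    # A space URI combines its own name with that of its storey, so the
    # same space name may recur on different storeys.
    return BASE + storey_name + "_" + space_name

def sub_object_uri(parent_name, sub_name):
    # Sub-object names only need to be unique within their parent object.
    return BASE + parent_name + "_" + sub_name
```

For example, `space_uri("Ground_Floor", "Presence_Chamber")` yields a distinct URI from a space with the same name on another storey, and two columns can each carry a sub-object ‘Capital’ without clashing.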
5.3 Project Info Tab
In this tab (Fig. 5.2), the overall topology of the building is laid out using bot:Zone
instances and a coordinate system. Although a bot:Site can in theory contain multiple
bot:Building elements, the plugin currently restricts this to one. If it is nevertheless
necessary to define multiple buildings sharing one site, this can be done by either
storing them in the same database or merging the graphs outside the plugin
environment. The building’s storeys and their spaces are also part of the building
topology and are defined at the bottom of the tab.
Since the point clouds of the case study are referenced using Lambert72, this is
currently the default option for georeferencing. Another GCS can be used by
copying its OpenGIS URI (hyperlink next to “Coordinate System”) into the coordinate
system widget. Georeferenced point clouds imported with the plugin are
translated to the origin using the project coordinates; large geocoordinates often make
the project canvas too big and unfit for modelling (see further). Note
that this only works with Cartesian coordinates, since the Rhino canvas uses a Cartesian
system. The current plugin version supports only one georeferencing system, although
it is in theory possible to add multiple (section 4.1.4.2).
The Project Info tab also includes a checkbox for including STEP representations in
the graph as Literal strings. The checkbox is checked by default.
Fig. 5.2: Plugin tab to define the topology and geographic location of the building (additionally: the option to include geometric information directly in the graph)
5.4 Point Clouds Tab
When the basic project topology is defined, one can start modelling the as-is geometry
based on point clouds (or other sources). The plugin streamlines this import process
for as-is modelling in several steps (Fig. 5.3). First of all, the large file size of highly
detailed point clouds can cause performance problems. Therefore, subsampling the point
clouds by calling CloudCompare before importing them is offered as an option. When
using this option, the user selects a folder that should contain only the files
to be subsampled (when it contains other files or folders, a problem-free import is not
guaranteed; results may vary). The subsampled point
clouds are then stored in a separate subfolder and imported. The subsampling options
are octree, spatial and random, as implemented by CloudCompare. Each point cloud
is imported as a separate layer, which takes the name of the point cloud and serves as
the representation of the ‘semantic object’. Apart from the point cloud, the layer also
contains the modelled representation of the element. If well chosen, the name of the
point cloud can give an indication for later object classification, i.e. if it contains a
Product Ontology class (e.g. column-COLUMN_Column1_PresenceChamber.e57 will
yield a classification suggestion for product:Column-COLUMN).
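The classification suggestion described above can be sketched as a small name parser. The class list here is a simplified stand-in for the Building Product Ontology classes the plugin actually loads, and the function name is illustrative.

```python
# Simplified stand-in for the loaded Building Product Ontology class names
PRODUCT_CLASSES = ["Column-COLUMN", "Door-DOOR", "Window-WINDOW"]

def suggest_type(filename):
    """Return the matching product class for a point cloud file, or None."""
    stem = filename.rsplit(".", 1)[0]     # strip the .e57 extension
    type_part = stem.split("_", 1)[0]     # 'Type_Name_Room' naming convention
    for cls in PRODUCT_CLASSES:
        if type_part.lower() == cls.lower():
            return cls
    return None

print(suggest_type("column-COLUMN_Column1_PresenceChamber.e57"))
# -> 'Column-COLUMN'
```

When the file name does not start with a known class, no suggestion is made and the user picks the type manually.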
Fig. 5.3: Plugin tab for importing and subsampling of point clouds
5.5 Element Tab
This tab contains the core of the plugin (Fig. 5.4). The ‘Main Objects’, which may have
a point cloud representation, refer to a layer of the document (and take its name as
the element’s name). An object is located in a space, which is part of a storey, both
defined in the ‘Project Info’ tab. The default relationship between a zone and an object
is bot:containsElement. The checkbox ‘Adjacent’ overrides this default and sets the
relationship to bot:adjacentElement; in that case, the zone on the other side can also
be defined (in the plugin limited to a space). A ‘Main Object’ may
be hosted by another element (bot:hostsElement), e.g. a door can be hosted by a wall.
As indicated in the section about the Point Clouds tab, the element’s type is guessed
from the name of the point cloud (strictly speaking, from the name of the layer), but
can be changed without consequences. A widget is provided to link the geometries to the
represented LoA after a deviation analysis. Although this widget is
mapped to an entire object in the user interface, in the graph the LoA is
linked to each individual geometry, except those denoted as occlusions.
An object may have sub-objects, either hosted or aggregated. Apart from this parent-
child relationship, a sub-object has a name and a type. An option is provided to set a
sub-object as an aggregated/hosted element of another sub-object, thereby enabling the
core Linked Data property of virtually unlimited detail. Note that, from a certain point,
the object types will have to be custom-defined, since they will no longer be included in the
Building Product Ontology. In the current version, this can only be done manually, by
adding the URIs of these objects to the .csv file ‘custom_types’, located in the
plugin’s installation folder.
Each object intrinsically aggregates the 3D geometry that its corresponding layer contains.
This is denoted by the item ‘self’ in the ‘sub-objects’ list. These geometries are visible
in a list below the sub-object’s properties (“Sub-Object has Geometries:”). When an
element ID is selected in this list, the corresponding object is also selected in the
viewer. Furthermore, selecting an element makes it possible to add one or more
written notes to it, in the form of a modelling remark (a string linked via
stg:ModellingRemark) or an occlusion (stg:OccludedGeometry), both listed in the
lowermost list of the tab. When ‘self’ is selected in the sub-object list, the
geometry list also contains the point clouds stored in the layer, which, unlike
3D objects, are not serialized as a geo:Geometry but as an stg:PointCloudFile. When
another sub-object is selected, there is an option to pick the 3D objects (surfaces or
polysurfaces/volumes) in the viewer that assemble this sub-object (restricted to the
geometries in the object layer and default layer), again displayed in the listbox.
To check whether the information has been stored correctly, a button at the bottom of the
tab prints all currently stored information on the command line, in a schematic way.
This includes the project topology, the geolocation, the assumptions mapped to object
IDs and all Main Object attributes (‘point cloud’, ‘type’, hosted or not …), their
sub-objects and the attributes of these sub-objects.
Fig. 5.4: Plugin tab for assigning object attributes, sub-objects and geometries, as well as labeling 3D objects with modelling remarks
5.6 SPARQL query tab
The last tab (Fig. 5.7) serves for performing SPARQL queries and visualizing the
geometry that matches a query. The graph should be loaded into an active Stardog
database before running a query. To enable reasoning, the corresponding ontologies
should also be loaded into this database.62 This can be done by going to the
local server (default at localhost:5820, username: admin, password: admin) and opening
Database > Browse > Data > +Add. A local graph file (ttl, rdf, owl …) containing the
ontology definitions can then be uploaded. To prevent the ontologies from also being
queried when inferencing is disabled, they can be added as ‘named
graphs’. In our case, this is done for BOT, PRODUCT, GeoSPARQL and the custom
ontology STG.
Fig. 5.5: uploading ontologies as named graphs
The Query tab itself contains an input box where the name of the Stardog database
is stated. Below it is the SPARQL query input box63, with the query button
and an option to enable or disable reasoning.64 Results are displayed in the table “Query
Results”, with one column per variable. When the query results contain Rhino IDs, the overall
display mode changes to ‘Wireframe’ and the objects matching those IDs are
individually set to a ‘Rendered’ mode, which makes them clearly visible in the viewport
(Fig. 5.6). Selecting a row that contains a Rhino ID also selects the
corresponding item in the viewport.
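Outside the plugin, the same Stardog database can also be queried over HTTP following the SPARQL 1.1 protocol, which Stardog implements; each database exposes its query endpoint at /{db}/query. The sketch below only builds the request; the database name in the usage comment is illustrative and a running server is needed to actually send it.

```python
import base64

try:
    from urllib.request import Request, urlopen   # Python 3
except ImportError:
    from urllib2 import Request, urlopen          # Python 2.7 (plugin environment)

STARDOG_BASE = "http://localhost:5820"            # Stardog's default port

def build_request(db_name, sparql, user="admin", password="admin"):
    """Prepare an HTTP POST against Stardog's /{db}/query SPARQL endpoint."""
    req = Request(STARDOG_BASE + "/" + db_name + "/query",
                  data=sparql.encode("utf-8"))
    req.add_header("Content-Type", "application/sparql-query")
    req.add_header("Accept", "application/sparql-results+json")
    auth = base64.b64encode((user + ":" + password).encode()).decode()
    req.add_header("Authorization", "Basic " + auth)
    return req

# With a running Stardog server, the request could then be sent with e.g.:
# response = urlopen(build_request("gravensteen", "SELECT * WHERE { ?s ?p ?o }"))
```

The default credentials (admin/admin) match the local-server defaults mentioned above and should of course be changed in any non-local setup.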
Fig. 5.6: highlighting and selecting objects via a SPARQL query
62 […]>stardog-admin db create -o spatial.enabled=true -n “DB-name” “Path-to-DB”
63 Apparently, when querying a Stardog database via the command line with “SELECT ?s ?p ?o WHERE {?s ?p ?o}”, Literals of a certain length (e.g. the Literal strings containing STEP geometry) are not interpreted correctly and are split at random places. Results may vary.
64 Make sure, when entering a query in the plugin interface, that a space is added between the command (SELECT, INSERT) and WHERE, also when working with multiple lines.
Fig. 5.7: Plugin tab for querying with SPARQL; visualization of the results as a table and in the active viewport.
5.7 Additional content
The plugin provides a UI for constructing graphs with the structure outlined in Chapter
IV. However, it is likely that some additional content needs to be added to the
initial graph. The plugin does not provide a UI for adding such ‘additional content’, since this
is too generic: a UI that uses Linked Data but does not expect Linked Data expertise
from its users should focus on documenting only part of the information, related to
specific ontologies. This UI thus deals with providing the basic topology and geometry
of an existing building. A future research project could focus on the development of an
interface for historical information (e.g. using the structure of E-CRM and
CHML as a basis). As long as a UI for the type of information to be included in
the graph (historical, technical, geographical …) is yet to be developed, this can
be solved by performing SPARQL INSERT queries, stating the information in
SPARQL. Stardog (or another RDF store) can be used for this. Suppose, for example, that someone
wants to add some historical information about the purpose of the Presence Chamber
in the Gravensteen during the Middle Ages. The integration of such information in the
graph (using E-CRM) takes the following steps:
(1) E-CRM has a property ‘P103_was_intended_for’ (domain: E71_Man-Made_Thing;
range: E55_Type)65. The subject of the relationship will be
‘inst:Ground_Floor_Presence_Chamber’, which is already present in the graph as a
bot:Space, but is hereby also classified as an ‘E71_Man-Made_Thing’. The object has
to be newly defined, as ‘inst:giving_audience’, now implicitly classified as an E-CRM
‘E55_Type’. The INSERT query is thus as follows:
PREFIX ecrm: <http://erlangen-crm.org/140617/>
INSERT {inst:Ground_Floor_Presence_Chamber ecrm:P103_was_intended_for inst:giving_audience}
WHERE {}
The prefix is set because the E-CRM namespace is not yet included as a database
namespace (which can be changed in the main settings). The WHERE brackets
can stay empty, since no variables are used here (it would be different if we
wanted to state that all bot:Spaces had the purpose of giving audience). After
adding ecrm as a database prefix (in the Stardog Web Console), the success of the above
INSERT can be verified with a quick SPARQL ASK query:
ASK WHERE {?s ecrm:P103_was_intended_for inst:giving_audience}
This returns ‘true’: such a triple is present in the database. Note that to enable
reasoning, the ontology has to be loaded into the Stardog database. With
larger ontologies, such as E-CRM or CHML, this can slow down querying.
65 http://erlangen-crm.org/docs/ecrm/current/index.html#anchor-2008326450 (accessed 13/05/2018)
5.8 Conclusion
In this chapter, it was shown that the plugin provides an interface for importing point
clouds for a scan-to-graph process; for defining the topology, geometry and
semantics of a project; and for querying a graph and visualizing the resulting
geometries. The scope of this thesis is limited to this functionality, but a way to
nevertheless add extra information has been explained. In Chapter VI, this functionality
will be used in a case study: performing a scan-to-graph process for the Presence
Chamber in the Gravensteen Castle, Ghent.
Chapter VI: Case Study
In this chapter, the scan-to-graph approach is demonstrated by a case study on the
Presence Chamber of the Gravensteen Castle in Ghent. First, the point clouds that
served as a blueprint for the geometric reconstruction are introduced. After a short
outline of the available point cloud sources (section 6.1.1), it is discussed how these are
segmented into meaningful parts that are automatically connected to individual
Linked Data objects by use of the plugin (section 6.1.2). Cutting the extremely large
point cloud into smaller subclouds avoids importing entire point clouds into Rhino 6,
which is not only very demanding for the computer hardware but also very impractical
for modelling. The extraction of such smaller point clouds is done in Autodesk
Recap Pro v4.2.2.15 (student license). A similar procedure can be followed using
CloudCompare, but some problems may occur when importing very large point
clouds. The results have the e57 extension, an open format for point clouds.
The second part of the chapter covers the creation of an as-is model in Rhino 6 (section
6.2). It starts with a discussion of the modelling techniques used for ‘reverse
engineering’ the Presence Chamber in the House of the Count, part of the
Gravensteen Castle (section 6.2.1). As commercial reverse engineering software is often
costly, part of this thesis consisted of finding efficient techniques for (manual) modelling
based on point clouds without such applications. BIM tools such as Revit have limited
built-in functionality for reverse engineering, although several semi-automatic plugin
applications exist. However, Revit is not a reverse engineering tool, for which a more
intuitive environment for 3D drawing and sculpting is needed. As explained in the
introduction, the number of possible modelling environments grows once we are no
longer limited to regular BIM applications; Rhinoceros 6 was used as the main modelling tool,
for its modelling versatility as well as for its support for small macros that accelerate
the modelling process and for the development of custom plug-ins.66
When an object’s geometry has been modelled, it is compared with the original point cloud
to estimate the modelling precision by means of a deviation analysis (see Chapter II).
This analysis is done using CloudCompare. A micro-scale analysis is
performed on all objects except the doors and windows; these are included in the deviation
analysis of the wall. If the represented Level of Accuracy is insufficient, the 3D
geometry is corrected; otherwise, the next object can be modelled.
The third part of the chapter illustrates how the plugin can be used for semantic
enrichment of the geometries (section 6.3). The objects are modelled first and the
graph created afterwards, because the graph creation can then happen more
quickly, without interruption. However, this working sequence is not a prerequisite:
starting with the graph creation is also possible. Finally, once the graph has been
generated by the plugin, some additional information is added using SPARQL INSERT
queries, following the method explained in section 5.7 (section 6.4).
66 Rhino is not a dedicated reverse engineering tool either; examples of specific reverse engineering software for Rhino are RhinoResurf (http://www.resurf3d.com/products.htm (accessed 27/05/2018)) and Rhinoreverse (https://rhinoreverse.icapp.ch/english/ (accessed 27/05/2018)).
6.1 Extraction of meaningful point clouds
6.1.1 Sources for reconstruction
In recent years, several scans of the Gravensteen have been made, four of which were
made available for this thesis; three of them were used. The point clouds were already
registered and unified before the start of this thesis. As seen in Chapter II, the
registration method, the equipment, the weather (and lighting) conditions and the
processing all have an impact on the final point cloud. Therefore, some of the point
clouds were more useful than others (for the purpose of this thesis), either because of
their very high accuracy or because of their high coverage. Of course, the advantage of
such extreme resolution comes at the cost of very high storage volumes.
6.1.1.1 KU Leuven – 2017: TLS
The TLS scans made by the Geomatics research group of the Technology Campus
Ghent (KU Leuven) in 2017 were captured using a Leica ScanStation P30 and processed
in Leica Cyclone67, producing an extremely dense point cloud (Fig. 6.2), which makes
it very interesting for both modelling and documenting the graph. This is the main
point cloud used for the thesis. Because of this density, however, the scan has
a very large file size, which makes it nearly impossible to load the point cloud entirely
into a non-dedicated application such as Rhinoceros. Options to nevertheless make use
of the point cloud are subsampling, or importing only small parts of the point cloud at
once. A subsampling process makes the point cloud much more manageable, but at
the same time the density of the point cloud drops. Therefore, whenever possible, the
second option was chosen. The process for segmenting the point clouds is outlined in
the next part of this chapter. Because scanning was done using a TLS, parts of the
building that are invisible from the ground, such as the roof, were not scanned (Fig.
6.2).
Fig. 6.2: TLS scan by KU Leuven, 2017 – exterior view (Autodesk Recap Pro)
67 http://hds.leica-geosystems.com/en/Leica-Cyclone_6515.htm (accessed 19/05/2018)
6.1.1.2 KU Leuven – 2017: TLS and photogrammetry
Because of these occlusions, the previous point cloud (6.1.1.1) was complemented using
photogrammetric tools (Fig. 6.3). A drone with a DJI Phantom 4 Pro camera was used
for this survey. Terrestrial photogrammetry was performed with a Canon EOS 5D,
using a Sigma 24-70 mm lens at an aperture of f/2.8. With these surveys,
a photogrammetric reconstruction of occluded areas (roof, windows, parts occluded by
vegetation, etc.) could be carried out. The ‘merging’ of the TLS data and the photographs was
done in the RealityCapture68 software. The algorithm combined the aligned and
georeferenced laser scans with the individual images, but also caused a slight movement
of some points in the laser-scanned point clouds. For the sake of accuracy, this point
cloud is only used where the parts to be modelled are occluded in the dense TLS scan
(6.1.1.1).
Fig. 6.3: TLS and photogrammetry scan by KU Leuven,
2017 – exterior view (Autodesk Recap Pro)
6.1.1.3 KU Leuven – 2018: TLS and photogrammetry (2)
Another TLS and photogrammetry scan was made by the same research group in 2018,
completing the missing parts of the building, using a Leica BLK360 and a Canon EOS
5D (cf. 6.1.1.2) (Fig. 6.4). The aligned and georeferenced point cloud was also made
available. However, since the modelling part of this thesis is restricted to the Presence
Chamber, which was already scanned during the first surveying campaign, the use of
this point cloud is limited. It was only used to determine the height of
the vault: since the room on the first storey (the count’s bedroom) was not included in
the other scans, its floor (which bounds the vault) was extracted from this scan.
Fig. 6.4: TLS and photogrammetry scan by KU Leuven,
2018 – Section view (Autodesk Recap Pro)
68 https://www.capturingreality.com/ (accessed 19/05/2018)
6.1.2 Extracting point clouds
To make the import easier and to provide the Rhino plugin with a basic structure
to work with, the elements to be modelled are first extracted from the entire point
cloud using Autodesk Recap Pro v4.2.2.15 (student license). As indicated before, the
2017 TLS point cloud from KU Leuven (6.1.1.1) is by far the most precise. The ‘TLS
and photogrammetry’ scan (6.1.1.2) has the fewest occlusions, since the photogrammetric
approach complements the laser-scanned point cloud with information about
locations that are not ‘visible’ to the TLS. However, as outlined in section 6.1.1.2, this
point cloud is less accurate. Point cloud 6.1.1.1 is used to model single objects
(separate as well as hosted elements). Because it contains some occlusions in the walls
(mainly at the southern façade and window sills, Fig. 6.7-6.8), the modelled walls are
the only geometry based on point cloud 6.1.1.2.
Because the modelling will be restricted to the Presence Chamber, the first step is to
identify which distinct elements are present in this room and its boundaries. Figures 6.5
and 6.6 give an overview of the elements to model (apart from walls, vault and floor).
Furniture, heraldic shields etc. will not be modelled.
Fig. 6.5-6.6: selection of the elements to model (Autodesk Recap Pro): 1-2: columns; 3: Front door; 4-5: upper windows front; 6: lower window front; 7: fireplace; 8a-b-c: side windows; 9: internal door
Fig. 6.7 (left): southern (front) façade partly occluded by a bush (Autodesk Recap Pro). Fig. 6.8 (right): additional photogrammetric point clouds reduce the occlusion zone (Autodesk Recap Pro)
6.1.2.1 Selecting the points
The next step involves selecting the points that belong to the column. For visibility,
the Limit Box in Autodesk Recap is set to encapsulate only the necessary
points. To make sure the entire column is exported, the Limit Box should contain
a little of the element’s surroundings (in this case the floor and a part
of the vault) (Fig. 6.9). Then, either the visible parts that do not belong to the column
are selected and dismissed from the view, or the parts that do belong to it are selected
and serve as the basis for a new view, as was done with the part of the vault that
remained visible and the separation between floor and column. Because this step
relies mainly on visually determining the boundaries of the element, these boundaries
can be made more explicit by changing the colour of the points (e.g. Elevation instead
of RGB) (Fig. 6.10). Nevertheless, the selection of the points that belong to a specific
element is not exact, since it is a manual process. Because such a process is very
difficult to quantify, only one general remark on the boundary determination can be
given: a slight overlap between the elements is preferable to selecting too few points.
This way, no points are ‘forgotten’; a slight overlap in the point cloud segments
causes no harm at all in the modelling process.
Fig. 6.9 (left): Limit Box around Column 1 (Autodesk Recap Pro)
Fig. 6.10 (right): Changing point colour as an aid for selecting the element’s boundaries (Autodesk Recap Pro)
6.1.2.2 Creating a Region
The definition of the points that belong to an element can be done in several stages.
When (part of) the column is selected (e.g. the base), the user clicks ‘REGION’ and
defines the name of the region, in this case the column. In this step of defining regions
within Autodesk Recap, we can already anticipate the graph construction. The
Rhino plugin will try to match classes of the loaded ontology to the element by
comparing the items in the list of available classes with the name of the point cloud.
Since we know this element is a column in the Presence Chamber of the castle, and the
Building Product Ontology we are going to use for classifying the elements contains
the class ‘Column’ (and, more specifically, a subclass ‘column-COLUMN’), we assign this
region the following name: ‘column-COLUMN_Column1_PresenceChamber’. Although
the position of the type in the region’s name does not really matter, in this thesis the
structure ‘Type_Name_Room’ is maintained. Since there is only one room to
be modelled, the room does not strictly have to be included in the name, but it remains
good practice when dealing with multiple rooms. The name of the corresponding instance in
the graph can be derived from the point cloud’s name. This is just a practical step to
reduce the time spent on naming and type assignment; the plugin allows for
changes to type and instance name and does not require the name to follow this syntax
(section 5.4). Figure 6.11 shows the resulting region when selected in the overview panel
at the bottom right corner.
Fig. 6.11: Column 1 selected in the overview panel (Autodesk Recap Pro)
6.1.2.3 Exporting the point clouds to .e57-format
When the desired regions are defined with corresponding names, the last step in this
section is exporting them to the e57 format. This can be done in Recap itself, in
the overview panel at the bottom right, under ‘Scan Regions’ (“Export Regions”), after
choosing the file format and the folder.
The other elements displayed in Figures 6.5-6.6 were also exported, to both serve as a
blueprint for the Rhino geometry and be the point cloud representation of the elements
in the graph. Walls were exported with mitred corners, using point cloud 6.1.1.2 (TLS
and photogrammetry 1). Hosted elements such as doors and windows were nevertheless
extracted from point cloud 6.1.1.1, as was the vault, which is the largest point cloud
segment of the project. The height of the vault was determined by extracting the floor
of the upper storey, contained in point cloud 6.1.1.3.
6.2 Modelling the Presence Chamber
This section discusses the 3D modelling process itself, including the deviation analysis.
After some general differences with ‘regular’ scan-to-BIM environments are outlined, some
general options in Rhino that ease the modelling process are introduced.
Then, the modelling of the objects themselves is discussed: vault, column, hosted
elements (window, door) and walls. Details that are considered ‘mobile’ (e.g. furniture) or
not relevant for the overall geometry (e.g. lighting fixtures) are not
included in the modelling process. When an object has been modelled, its deviation
from the point cloud is calculated and serves as an indication of the deviation from
the real-world object. The creation of a ‘perfect’ model is not the core goal of the thesis,
and is not possible anyway. As the model mainly serves as an illustration of the scan-
to-graph process, the ambition is to achieve at least LOA20, i.e. 95% (2σ) of the
deviations should be lower than or equal to 50 mm.
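The LOA20 criterion used here can be sketched as a simple check on the deviation values exported from a CloudCompare analysis. The function name and the sample values below are illustrative.

```python
# Illustrative check of the LOA20 criterion used here: 95 % (2 sigma) of the
# model-to-cloud deviations must be lower than or equal to 50 mm.
def meets_loa20(deviations_mm, tolerance_mm=50.0, fraction=0.95):
    """deviations_mm: absolute model-to-point-cloud distances in millimetres."""
    within = sum(1 for d in deviations_mm if abs(d) <= tolerance_mm)
    return within / float(len(deviations_mm)) >= fraction

# 20 hypothetical deviation values; exactly one exceeds 50 mm (19/20 = 95 %)
sample = [3.1, 12.4, 48.0, 7.7, 2.0, 21.3, 9.9, 14.6, 4.2, 33.0,
          18.5, 6.6, 27.1, 41.9, 10.0, 5.5, 16.2, 38.4, 22.7, 55.2]
print(meets_loa20(sample))  # True
```

The fraction-within-tolerance check is a simplification of the 2σ formulation; for a normally distributed error, the two coincide.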
Despite its modelling versatility, Rhino is not an automatic reverse engineering tool,
and professional reverse engineering tools are often too costly and still require a
considerable amount of user input. The built-in tools that can serve reverse
engineering (‘ExtractConnectedMeshFaces’, ‘Contour/Section’ etc.) are useful for
modelling small objects based on meshes with a limited number of faces; applying them
to large point clouds often causes a software crash, or the results are not satisfactory.
Therefore, only manual, regular modelling methods were applied.
Typical scan-to-BIM modelling either happens directly in BIM packages (such as Revit)
or indirectly, e.g. by creating orthophotos, tracing them in a CAD package (such as
AutoCAD) and then importing the vector drawings. The latter method is in general less
accurate, because a lot of information is lost. Alternatively, 2D profiles are traced in a
CAD environment, brought into the BIM software, converted to a 3D volume (e.g. by
extrusion) and turned into a Family (Mezzino, 2017). By working in only one modelling
environment, such constant switching of applications is avoided.
The approach taken here differs from a BIM modelling approach in that, in a
regular BIM workflow, object classification typically happens before modelling
(i.e. in most cases you first choose what to model, then you model it). In this approach,
classification only takes place at the end, in the graph creation process. Detaching
modelling from classification is at the same time an advantage and a disadvantage.
Not being bound to ‘rules’ about building elements gives absolute modelling freedom,
which is a considerable plus considering as-is irregularities and non-modern, project-
specific building elements. On the other hand, it also means a lack of geometrical
constraints (e.g. ‘watertight’ element connections or closed volumes are not self-
evident), control mechanisms and modelling suggestions.69 However, in practice one
69 It also means, unlike in classic BIM, that all semantics will have to be added later on, but since this is one of the basic assumptions of the thesis (and of non-geometric nature), this is not considered an obstacle in this context.
also sees that in as-is modelling in Revit, custom Families have to be developed to
overcome such class-bound restrictions; furthermore, due to the limited number of classes
(Categories in Revit), classification can be inaccurate (e.g. vaults being classified as
‘roof’). Lastly, the advantages of parametric modelling and Families are less useful for as-is
modelling, because (especially for pre-industrial buildings)70 each element has unique
distortions.
6.2.1 Preparation
6.2.1.1 View settings
Some preparation was needed to make the modelling process smoother.
Modelling on a point cloud heavily relies on visual analysis and less on mathematical
objects or primitive modelling. This is one of the reasons why a scan-to-BIM process is
typically considered very labour-intensive (section 2.4.2.1). Often, the modelling
technique is based on so-called ‘wireframe modelling’: boundary points are identified,
constructed and connected with control point curves or (poly)lines. Because of this ‘3D
drawing’, it is recommended to change some visibility options before starting to model.
Object surfaces, together forming the solid object, will in most cases be based on
NURBS curves that are defined by control points. Using the standard settings, curves
and their control points are often very difficult to trace, hidden behind the point cloud;
changing their visibility significantly enhances the process. This is done in the options
dialog: ‘options – view – display modes’. A copy of the display style ‘Shaded’ and
‘Ghosted’ was made and then adapted to change the curve options (colour to red and
‘width’ to 3) and control point options (size to 8) (Fig 6.13).
6.2.1.2 Modelling aids
To display only specific parts of the working
space, Rhino provides so-called ‘Clipping
Planes’. Their purpose is similar to Revit’s
‘Section Box’ or Recap’s ‘Limit Box’, but in
Rhino, the sections are made by an infinite
plane instead of by a box, which renders
more orientation freedom but also makes
them more difficult to handle (Fig. 6.12).
Combined with the _MPlane command, which lets the user pick a planar object and set
it as a working plane that moves along with the object, a Clipping Plane provides a
practical way to model non-orthogonal objects. MPlane also proves very useful when
applied to objects other than clipping planes: it allows the user to ‘draw’ on any planar surface.
It may seem obvious, but the flexible use of layers (“clipping planes”, “construction
planes” …) further smoothens the modelling process, as does enabling the ‘Record
History’ button, which keeps the relationship between derived objects and their parent
object.
70 A large difference exists between the modelling of pre-industrial and industrial/modern buildings. For the case study of an ancient temple in (Mezzino, 2017), a different approach was used than for the modelling of an office building in (Thomson, 2016). Modelling techniques of course also differ with the person who performs the activity.
Fig. 6.12: Clipping Plane (Rhino 6)
6.2.1.3 Importing Point Clouds
As mentioned earlier, importing the most precise point cloud (6.1.1.1) in its entirety
is not possible due to software restrictions. The previous part of this chapter
discussed the segmentation of the point cloud into multiple ‘meaningful’ point clouds
that represent the objects to be modelled. One of the tabs of the thesis plug-in provides
an interface to import .e57-point clouds in different layers. These layers can be thought
of as a ‘semantic container’ that contains all geometric representations of the object, in
this case: the point cloud and the NURBS geometry. Although subsampling was
initially assumed to be a necessary step, importing all object point clouds separately
without subsampling also succeeded. This is probably because the separate point
clouds are not that large individually, and because point clouds imported by the
plugin are immediately disabled until needed for modelling.
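The ‘semantic container’ idea can be sketched with a plain mapping from an object layer to its geometric representations. The names and record fields below are illustrative only, not the plugin's actual data model:

```python
# Hypothetical sketch: each object layer collects every geometric
# representation of one object (point cloud, NURBS geometry, ...).
layers = {}

def add_representation(layers, object_name, kind, ref, enabled=True):
    """Register one geometric representation under an object layer."""
    layers.setdefault(object_name, []).append(
        {"kind": kind, "ref": ref, "enabled": enabled})

# Point clouds imported by the plugin start out disabled until needed:
add_representation(layers, "Eastern_Wall", "pointcloud", "wall.e57", enabled=False)
add_representation(layers, "Eastern_Wall", "nurbs", "wall_solid")
```

The layer thus groups all representations of one object, which is exactly what the graph creation step later exploits.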
Fig. 6.13: setting a visibility schema for reverse engineering in Rhino 6
6.2.2 Modelling
6.2.2.1 Ceiling
The ceiling of the Presence Chamber is the largest object of the project. It consists of
6 smaller vaults that are supported by the walls (via consoles) and the two central
columns. Each vault is created separately and then connected to the others. Since
the vault is highly irregular, creating intersecting extrusions is not an option; each
surface and rib is modelled individually.
Tracing the constituting lines in an orthogonal top view is the starting point of each
vault (Fig. 6.14). These lines are then extruded to planes (Fig. 6.15). The MPlane
proved most useful here: setting the extrusion planes one by one as the main working
plane made it possible to draw planar ‘control point curves’ very quickly, with the
opaque (or semi-transparent) planes clearly displaying the ‘intersection’ between point
cloud and plane. The control points are first set out very roughly, using 3-4 control
points per curve. Then, the control points are moved in the plane to align curve and
object edge as exactly as possible. This method primarily involves visual alignment,
but proved quite intuitive and exact. This procedure is followed for all edges of the
vault: the ribs as well as the surface edges. Surfaces are then constructed between
these curves: for the ribs, an ‘edge surface’ or ‘planar surface’ (defined by 4 curves) is
used. Each vault surface was divided in two parts for better precision (Fig. 6.16).
Different options were tested for the vault surfaces (Table 6.1). The ‘Sweep2’
command, which sweeps a cross-section curve along 2 rail curves, gives the most
satisfying results and will be used for constructing the vault surfaces. Enabling the
‘Record History’ button connects the control points of the edge curves to the surface,
allowing to refit them and see the result immediately (Fig. 6.16).
The total ceiling surface is depicted in Figure 6.17.
Fig. 6.14: orthogonal projection of the defining curves of a vault (Rhino 6)
Fig. 6.15: modelling the curves by changing control point location in the working plane (Rhino 6)
Fig. 6.16: Record History enables edge curve manipulation after construction of the surface (Rhino 6)
Table 6.1 compares four surface generation techniques; each one was inspected both as bare geometry and overlaid with the point cloud.

1) Patch surface
- A UV surface that divides its curves into segments: no exact match between the original curves and the surface edges. Connection with other surfaces will therefore be an issue.
- At first sight quite close to the point cloud, but generally below the points.

2) Edge Surface
- The curves seem to fit quite well.
- An unwanted convexity causes a clearly visible deviation from the point cloud.

3) Network Surface
- Better edge matching options (tolerance) than a patch surface, but still a UV surface that does not guarantee a watertight connection with other surfaces.
- At first sight quite close to the point cloud, but generally above the points. Also slightly convex close to the front curve.

4) Sweep2
- The edges match perfectly.
- At first sight close to the point cloud, with a majority of the points above the surface close to the edges, and the opposite in the centre of the surface (this depends mainly on the edge curves).

Table 6.1: Different surface generation techniques (Rhino 6)
Fig. 6.17: total surface of the ceiling (Rhino 6)
The height of the ceiling mass is acquired by importing the floor of the storey above,
extracted from point cloud 6.1.1.3. The vault’s outermost edges are extruded in the
z-direction to connect the ceiling surface with the 1st floor. The result is depicted in
figure 6.18.
Fig. 6.18: total ‘volume’ of the vault (Rhino 6)
Since only the scanned surface of the vaulted ceiling is relevant for deviation analysis,
this part will be imported into CloudCompare. In a separate Rhino document, the vault
volume is first exploded and the parts that are covered by the point cloud are exported
to an .obj mesh. In general, the point cloud covers the ceiling quite well: small
occlusions will hardly influence the overall LoA of the ceiling. Therefore, the total
ceiling surface is exported to an .obj file and imported into CloudCompare, along with
the original point cloud of the ceiling.
Two comparison methods exist for such a deviation analysis (Bonduel et al., 2017).
The first method involves subsampling the mesh into a ‘model point cloud’ (including
occluded areas) and performing a nearest neighbour calculation with the original point
cloud (which is set as the reference point cloud). Occluded areas will become clearly
visible, because the distances of the subsampled points in these areas will typically be
significantly larger than in non-occluded areas. These zones can then be extracted
from the original mesh and a second analysis (without occlusions) can be performed.
Alternatively, occlusions are removed before the first deviation analysis, after a visual
analysis of the point cloud, which is the method that will be used for the deviation
analysis of the walls (section 6.2.2.2). The other method involves measuring the
distance from the point cloud to the mesh directly. Building elements in the point cloud
that are not modelled (e.g. light fixtures at the consoles) are included in the
measurement, which makes the analysis slightly overestimate the standard deviation.
Therefore, the first option is preferred. Nevertheless, since this is the first section on
modelling, both methods are illustrated here.
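The first method, a nearest-neighbour cloud-to-cloud comparison, can be sketched in a few lines. This is a hypothetical brute-force illustration on toy data; CloudCompare itself uses spatial acceleration structures (octrees) to handle clouds of the size used here:

```python
# Brute-force cloud-to-cloud distance: for each sampled model point, find the
# distance to its nearest neighbour in the reference (measured) point cloud,
# then summarise with mean and standard deviation.
import math
import statistics

def nn_distances(sampled, reference):
    """For each sampled point, the distance to its nearest reference point."""
    return [min(math.dist(p, q) for q in reference) for p in sampled]

# Toy data: a flat reference 'scan' at z = 0 and model samples hovering above it
# (coordinates in metres, so 0.012 corresponds to a 12 mm deviation).
reference = [(x, y, 0.0) for x in range(10) for y in range(10)]
sampled = [(2.0, 3.0, 0.012), (5.0, 5.0, 0.020), (7.0, 1.0, 0.015)]

d = nn_distances(sampled, reference)
mean_d = statistics.mean(d)    # mean distance
sigma = statistics.pstdev(d)   # standard deviation of the distances
```

On the real data, these two statistics are exactly what the LoA verdicts below are based on.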
In the first method, 85 000 000 points were sampled on the mesh, comparable to the
number of points in the measured point cloud (± 84 800 000). Since the vault is a quite
large object, a first, visual check is performed to determine subzones in which the cloud-
to-cloud distance exceeds 5 cm but which are not occluded in the original point cloud. A
visual scale has been set up according to the LoA specification. The red zones indicate
either occlusions or zones which were not modelled accurately enough. Note that, on the
colour scale, the red part is the largest because of some extreme values, and that this is
in no way related to the number of points in the zone, as the distribution curve to the
right of the colour scale indicates. Alternatively, the upper boundary of the colour scale
could be set to 50 mm, which would display the now red parts in grey. Fig. 6.19 shows
the first analysis. There are clearly some red zones which are known not to be occluded
in the original point cloud. Therefore, the geometry in these zones is refined in the
Rhino model, exported to an .obj mesh, subsampled, and a second analysis is carried
out (Fig. 6.20). A full-page version of Fig. 6.20 is added to Appendix A.
Fig. 6.19: First Cloud-to-Cloud deviation analysis, red zones mark either occlusions or deviations in the model that exceed 50 mm (CloudCompare Stereo v2.8.1)
Fig. 6.20: Second Cloud-to-Cloud deviation analysis (CloudCompare Stereo v2.8.1) (Appendix A)
As can be seen in Figure 6.20, the remodelling step clearly reduced the number of zones
that exceed the 50 mm limit. The mean distance is 13.8 mm, the standard deviation σ
14.6 mm. According to the Level of Accuracy specification, we can state that the
represented LoA of the vault is LOA20 (2σ < 50 mm), which corresponds with the goal
stated at the beginning of section 6.2.
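The LoA verdicts in this chapter all follow one rule: an element achieves a level when twice the standard deviation of its deviation analysis stays below that level's upper tolerance. A small sketch of that check, with the 50 mm (LOA20) and 15 mm (LOA30) bounds taken from this chapter; the remaining bounds are assumed from the pattern of the USIBD specification:

```python
# Assumed upper tolerances (mm) per LOA level; 50 mm (LOA20) and 15 mm
# (LOA30) follow the text, the others are assumptions from the USIBD spec.
LOA_UPPER_MM = {10: 150.0, 20: 50.0, 30: 15.0, 40: 5.0, 50: 1.0}

def represented_loa(sigma_mm):
    """Highest LOAx0 level for which 2*sigma stays below the upper tolerance,
    or None if even the loosest band is missed."""
    achieved = None
    for level in sorted(LOA_UPPER_MM):      # loosest to tightest band
        if 2 * sigma_mm < LOA_UPPER_MM[level]:
            achieved = level
    return achieved

# e.g. the vault's second analysis: sigma = 14.6 mm, so 2*sigma = 29.2 mm
vault_level = represented_loa(14.6)
```

With these thresholds, the vault's σ of 14.6 mm indeed lands in LOA20, matching the verdict above.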
The second method renders a similar image to the first, because of the large coverage
of the reference point cloud (Fig. 6.21). The refined model was used directly for this
analysis. Using this method, a mean distance of -5 mm is reached and the standard
deviation σ is 24.0 mm. This again corresponds to LOA20, albeit closer to the 50 mm
threshold (2σ = 48.0 mm). As said, the difference in standard deviation with the
previous method is due to the fact that non-modelled elements are included in the
analysis. The lower mean distance arises because this method includes negative
distances, depending on the normal of the mesh triangle that is measured.
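The sign convention of the cloud-to-mesh method can be made concrete with a minimal sketch: the distance of a point to a triangle's plane, signed by the triangle's normal. This illustrates the principle only; real cloud-to-mesh tools also handle points whose closest mesh feature is an edge or a vertex.

```python
# Signed point-to-plane distance for one mesh triangle: points on the side the
# normal points to get positive distances, points behind the face negative
# ones, which is why the cloud-to-mesh mean can drop towards zero.
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def signed_plane_distance(p, v0, v1, v2):
    """Distance from p to the plane of triangle (v0, v1, v2), signed by its normal."""
    n = cross(sub(v1, v0), sub(v2, v0))
    return dot(sub(p, v0), n) / dot(n, n) ** 0.5

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))                # normal points in +z
d_front = signed_plane_distance((0.2, 0.2, 0.03), *tri)    # positive: 0.03
d_behind = signed_plane_distance((0.2, 0.2, -0.05), *tri)  # negative: -0.05
```

Averaging d_front and d_behind partially cancels, exactly the effect observed in the -5 mm mean above.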
Fig. 6.21: Cloud-to-Mesh deviation analysis (CloudCompare Stereo v2.8.1)
6.2.2.2 Walls, consoles and hosted elements
The modelling of the walls relied on point cloud 6.1.1.2 (TLS and Photogrammetry).
For each wall, at least three sections were made: one at ground level (Fig. 6.22), one
at the lowest part of the vault (following the outermost edges of the ceiling to make a
‘watertight’ connection) (Fig. 6.23) and one at the first floor, marking the end of the
storey. To make the section as clear as possible, two opposite clipping planes are
brought as close together as possible. One of the clipping planes is set as a movable
working plane (MPlane), so when reorienting the clipping planes to another section
level, the working plane changes accordingly. The first step is to model the walls as
closed solids; voids for doors and windows will be cut out later. For now, if a void or
an occlusion (e.g. because of furniture) is present at section level, the existing lines are
extrapolated. Since the front façade (south) contains some extra irregularities (Fig.
6.24), these are modelled by tracing the edges on a working plane, similar to the
working planes used for the ceiling, at the two corners of the wall. The corresponding
point at the other corner is then connected by a line, forming an edge surface. This is
done until the overall form of the modelled façade matches the point cloud (Fig. 6.25).
A point can be corrected three-dimensionally by holding Shift (to enable Ortho mode)
and moving it closer to its corresponding point cloud vertex.
Fig. 6.22: section at ground level (Rhino 6)
Fig. 6.23: MPlane for section at the beginning of the ceiling. The edges of the ceiling are traced for the interior of the walls (Rhino 6)
Fig. 6.24-26: Irregularities in the front and eastern façades (Rhino 6)
Then, the openings in the wall are created as solid
volumes. A Boolean Difference will later subtract
these volumes from the wall, creating the openings.
To ensure a working Boolean Difference, the
volumes stick out a little from the wall (Fig. 6.27). For
most openings, it suffices to construct parallel
working planes based on the point cloud, drawing
the section with control point curves (or if possible,
arcs) and then lofting these sections one by one.
Finally, the ‘cap’ command converts the loft to a
closed volume that can be subtracted (Fig. 6.27).
After the openings in the wall are made, the hosted elements (windows and doors) can
be modelled (based on separate point cloud segments extracted from point cloud
6.1.1.1). The geometry of the wall opening serves as the basis for constructing them.
Details like hinges and window patterns will not be included in this model. Again, the
boundaries of the elements are drawn and serve as the edges for the volumes. The doors
are modelled very modestly: their contour on the inside and outside is traced and lofted.
For the windows, the frame is traced on the inner side of the wall hole and then
extruded, the depth based on a top view of the point cloud. Since the walls are massive
stone walls, it is assumed that the frames start where the wall ends, i.e. that there are
no parts of the frame hidden inside the wall. The lower part of the window frames in
the western façade is located on the outside (Fig. 6.28a-b).
Fig. 6.27: Closed ‘void’ volumes will be subtracted from the wall solid (Rhino 6)
Fig. 6.28a-b: window in the western wall: frame (red), glass (yellow) (Rhino 6)
Fig. 6.29: bay window with separate panes (Rhino 6)
The thickness of the glass panels is impossible to determine
from the point cloud, because glass and other transparent
materials reflect and scatter the laser beams from the scanner
(Fig. 6.30). Therefore, the glass is given an assumed thickness
of 3 mm, which is a usual thickness for stained glass.71 All
panes are modelled as one volume, except the bay window
at the front (southern) façade, in which individual panes are
separated by a wooden frame (Fig. 6.29).
Lastly, the consoles that transfer the weight of the vault to the walls are modelled
(point cloud 6.1.1.1). Since all consoles in the room are different and highly irregular,
modelling them relies mostly on using MPlanes and visual alignment with their
respective point clouds in a wireframe modelling process (Fig. 6.31). With these curves
as a basis, ‘edge surfaces’ (surfaces defined by three or four nonplanar curves) can be
constructed (Fig. 6.31). Sometimes, when a console surface was quite orthogonal, an
extrusion was made instead of an edge surface. These surfaces are joined until they
form a closed polysurface. It is made sure that the solid model penetrates the wall,
so a Boolean Difference can later make a perfect alignment with the wall surface (Fig.
6.31-32).
71 http://www.madehow.com/Volume-2/Stained-Glass.html (accessed 18/05/2018)
Fig. 6.30: scattered points due to reflection (Rhino 6)
Fig. 6.31: manual “Wireframe modelling” (Rhino 6)
Fig. 6.32: Aligning the consoles to the walls by performing a Boolean difference (Rhino 6)
To transfer the weight of the ceiling via the wall to the ground, a console needs to be
anchored. The problem is that there is no information about how deep the console
penetrates the wall. A reasoned assumption has to be made. For this assumption, a
rough guess was made that a console has an invisible part in the wall with the
approximate dimensions of a bounding box around its outer part (i.e. the console
sticks as deep into the wall as it sticks out). This meets the structural requirements:
the inner compression is certainly lower than or equal to the outer weight, and there
will be no high eccentricity of the resultant force (as the weight of the wall and the
storeys above compensate the weight of the ceiling). A total console is depicted in
Fig. 6.33.
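The console assumption can be sketched numerically: the hidden part gets the axis-aligned bounding box of the visible part, mirrored across the wall plane. All coordinates are hypothetical, and the wall face is assumed to lie at x = 0 with the wall towards negative x:

```python
# Mirror the bounding box of a console's visible part into the wall, so the
# hidden part is 'as deep in the wall as the console sticks out'.
def bounding_box(points):
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def inner_box(outer_points, wall_x=0.0):
    """Mirror the outer bounding box across the plane x = wall_x."""
    (x0, y0, z0), (x1, y1, z1) = bounding_box(outer_points)
    return (2 * wall_x - x1, y0, z0), (2 * wall_x - x0, y1, z1)

# A console sticking 0.3 m out of the wall (illustrative sample points):
outer = [(0.05, 0.0, 0.0), (0.3, 0.4, 0.25), (0.1, 0.2, 0.1)]
inner = inner_box(outer)   # box extending 0.3 m into the wall
```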
The southern (front) façade wall will serve as the example input for the deviation
analysis of the walls. Since this is the geometrically most pronounced wall in the model,
it is assumed to be the most critical one, i.e. the one that contains the largest
deviations. In this case study, we assume that the represented LoA of the other walls
(and their hosted elements and consoles) will not be larger than the LoA of this one,
although only a deviation analysis of all elements can give a final answer.
The wall solid contains many surfaces that are not present in the point cloud, either
because of occlusions or simply because the elements are modelled as solids. The wall,
hosted elements and consoles are extracted from the model and adapted to the point
cloud: ‘false’ surfaces and occluded areas are removed entirely or split (Fig. 6.34a-d).
To match the reference point cloud, which consists of the wall surfaces, hosted elements
and 3 consoles, the adapted model is exported to an .obj mesh. 1 100 000 points are
sampled on the mesh (CloudCompare), slightly more than the ± 1 050 000 points of the
wall’s point cloud.
Fig. 6.33: inner part (yellow) and outer part (red) of a console in the middle of a wall (Rhino 6)
Fig. 6.34a-d: Removing occlusions and ‘false’ surfaces (Rhino6)
Fig. 6.35a-b show the results of a cloud-to-cloud comparison, which yielded a mean
distance of 16.8 mm and a standard deviation σ of 13.3 mm. This corresponds with
LOA20 (2σ < 50 mm).
Fig. 6.35a-b: Cloud/Mesh deviation analysis for the southern (front) wall (CloudCompare Stereo v2.8.1)
6.2.2.3 Columns
The Presence Chamber contains 2 columns in the middle of the room. Both columns
are severely damaged at the base, and to a lesser extent at the capital (Fig. 6.37a). It
was chosen to base the model on their assumed ‘as-was’ geometry rather than to
include all the current distortions. Note that this does not correspond with a ‘perfect’
column: for example, the column’s real eccentricity is still present in the model. The
acanthus leaves at the capital are abstracted away. For both base and capital, a
section that is considered typical for how the column might have been is drawn on an
intersecting working plane (Fig. 6.36). This section is then swept with 2 rail curves
(‘_Sweep2’). The curves that define the geometry of the modelled column are depicted
in figure 6.37d.
Fig. 6.37a-d: Visualizations of the column
a) ‘as-is’ column (Autodesk Recap Pro)
b) Assumed ‘as-was’ column (Rhino 6)
c) Overlay Point cloud – 3D model (Rhino 6) d) Model-defining curves (Rhino 6)
Fig. 6.36: drawing an assumed representative section for the column's base (Rhino 6)
Only the accuracy of the capital and the shaft is checked, because it is already known
that the modelled base and the base in the point cloud do not match. After the ‘false’
surfaces due to solid modelling have been removed, the columns are imported into
CloudCompare.
On each one, 2 500 000 points are sampled, which are compared to the original point
cloud (± 2 400 000 points, including the base). The first column (Fig. 6.38) yields a
mean distance of 3.9 mm and a standard deviation of 5.1 mm. This corresponds with
LOA30 (2σ = 10.2 mm < 15 mm). For the second column (Fig. 6.39), a mean
distance of 3.9 mm is calculated as well. The standard deviation σ is 4.2 mm, also
corresponding with LOA30.
Fig. 6.38: Cloud-to-cloud analysis of the first column (shaft and capital) (CloudCompare Stereo v2.8.1)
Fig. 6.39: Cloud-to-Cloud analysis of the second column (shaft and capital) (CloudCompare Stereo v2.8.1)
6.2.2.5 Fireplace
The last part to be modelled is the fireplace, which in the scan appears as a throne
corner. Again, mostly a wireframe modelling technique was used. The cavity is
modelled similarly to the hosted element voids: a solid is modelled and subtracted
from the wall. The 3D model of the fireplace consists of four main elements: two
pilasters, a lintel and a roof. The following assumptions are made:
- The height of the cavity (occluded area) is set to the start of the ‘roof’.
- Part of the base of the right pilaster was occluded by furniture at the time of the
surveys. It is modelled by interpolating the visible edges.
- The ‘roof’ penetrates the wall as deep as the ‘lintel’.
To be able to later calculate e.g. volumes that stick out, the inside parts of the
elements are separated from the outside parts, forming different solids. Figures 6.40a-b
depict the different parts of the fireplace and their location in the wall. Figures 6.41a-b
visually compare the geometry with the point cloud.
Fig. 6.40a-b: different parts of the fireplace. Assumed height of the inner cavity.
Fig. 6.41a-b: resulting geometry with and without point cloud
Again, occlusions and ‘false’ surfaces are removed before sampling the 3D model in
CloudCompare (2 100 000 points). The void at the back of the model is where the
throne, which is not modelled, occludes the wall behind it (Fig. 6.42). The calculated
mean distance is 6.8 mm, the standard deviation σ is 10.6 mm. The 3D model of the
fireplace thus also has LOA20 (2σ < 50 mm).
Fig. 6.42: Cloud-to-Cloud deviation analysis of the Fireplace (CloudCompare Stereo v2.8.1)
6.3 Plugin demonstration
In this section, the operation of the plugin is illustrated by performing the semantic
enrichment of three elements: the Eastern Wall, the fireplace (hosted) and a column.
These are considered exemplary for the rest of the objects. The entire enrichment
step takes place in the ‘Semantics’ tab, ideally after the Project Info has been set,
although this is not a prerequisite. At least one Object layer (i.e. not the Default
Layer) has to be present in the document.
6.3.1 Semantic definitions of the Eastern Wall
When selecting an object in the object dropdown menu, the first step is stating the
storey and the space it belongs to, in this case: ‘Ground_Floor’ and ‘Presence_Chamber’
(Fig. 6.44 -(1)). Next, as the element is a wall, it will be set as a ‘bot:adjacentElement’
(an element that defines the boundaries of a zone) instead of the default setting
‘bot:containsElement’. Checking the ‘Adjacent’ checkbox will enable setting a second
zone to be adjacent to the element. In the case of the Eastern Wall, this is
‘EXTERIOR’, hypothetically defined as a space (Fig. 6.44 -(2)). As the wall itself is
not hosted, the checkbox ‘Hosted’ is left unchecked. After a geometric deviation analysis
as performed in section 6.2, the LoA can be set in the numeric counter (Fig. 6.44 -(3)).
The object type (product:Wall-SOLIDWALL) is guessed based on the name of the
point cloud, but can be changed without consequences (Fig. 6.44 -(4)). As the hosted
elements of the Eastern Wall are based on separate point clouds, they are not set as
dependent sub-objects here; instead they will be modelled as individual elements and
set to ‘Hosted’ (see 6.3.2). The sub-object ‘self’ represents the Wall itself and contains
all geometries in the Object Layer, including point clouds (Fig. 6.44 -(5)). Selecting an
item in the ‘Sub-Object has Geometries’-list selects the corresponding geometry in the
viewer, mainly for checking purposes (Fig. 6.44 -(5)).
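What the plugin derives from these settings can be approximated as a handful of RDF triples. The sketch below assembles illustrative Turtle lines in plain Python; bot:adjacentElement is the actual BOT property, but the other prefixes and property names (e.g. stg:hasLoA) are assumptions for illustration, not necessarily the plugin's exact output:

```python
# Hedged sketch: turn the UI choices for a wall (adjacent zones, LoA, product
# type) into Turtle-style triple strings.
def wall_triples(element, spaces, loa, element_type):
    lines = [f"inst:{s} bot:adjacentElement inst:{element} ." for s in spaces]
    lines.append(f"inst:{element} a {element_type} .")
    lines.append(f'inst:{element} stg:hasLoA "{loa}" .')  # property name assumed
    return "\n".join(lines)

# The Eastern Wall bounds both the Presence Chamber and the EXTERIOR zone:
ttl = wall_triples("Eastern_Wall",
                   ["Presence_Chamber", "EXTERIOR"],
                   20, "product:Wall-SOLIDWALL")
```

In the real output, each instance name would of course expand to a full URI under the graph's base namespace.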
6.3.2 Semantic definitions of the Fireplace
The storey and space of the Fireplace are identical to those of the wall: ‘Ground_Floor’
and ‘Presence_Chamber’. As the Fireplace is hosted by the Eastern Wall, the ‘Hosted’
checkbox is checked and the hosting element is chosen in the dropdown menu (Fig.
6.45 -(1)). The element type is guessed correctly as an stgp:Fireplace. As indicated in
section 6.2.2.5, the Fireplace consists of four elements, each one with an internal and
an external (visible) part. These elements are constructed as (aggregated) sub-elements
(Fig. 6.45 -(2)):
- Fireplace_Pilaster1 (product:Column-PILASTER),
- Fireplace_Pilaster2 (idem),
- Fireplace_Lintel (product:Beam-LINTEL)
- Fireplace_Hood (stgp:Fireplace_Hood)
The “Remark” button serves to annotate elements with a modelling remark (as a
Unicode string linked to an stg:ModellingRemark) (Fig. 6.45 -(3)). This can be a
modelling assumption or a general remark; an element can have multiple remarks. In
this example, internal parts are labelled with ‘INNERPART’, which the plugin software
interprets as a ‘command’ to label them as stg:InternalGeometry. The “Occlusion”
button internally links the geometry instance to an stg:OccludedGeometry instance,
denoted in the UI as a remark “OCCLUDED_AREA” listed below (Fig. 6.45 -(4)).
In theory, an entire element can be labelled as an occlusion, but a more detailed
approach is to extract the element’s subsurfaces that are occluded in the point cloud,
and then denote these separate surfaces as occluded areas. Such individual subsurfaces
can only be copied by selecting them with Ctrl+Shift pressed and performing a regular
‘_Copy’ command, with starting point and end point the same. They will automatically
be listed as object geometries after refreshing the object in the object dropdown menu,
and can be labelled as ‘OCCLUDED_AREA’ in the regular way. Note that, at the time
of writing, they are linked to the (sub-)object just like the other geometries
(‘geo:hasGeometry’). Fig. 6.45 shows the surfaces that were set as occlusions.
6.3.3 Semantic definitions of a column
In the previous section, three elements were modelled for each column: a capital, a
shaft and a base. These elements each represent a sub-object and are denoted by the
custom stgp classes stgp:Capital, stgp:Shaft and stgp:Base. To illustrate the possibility
of refining the graph ever further, a fourth stgp class has been defined for this example:
stgp:Acanthus. Although no acanthus leaves were modelled, they can be set as entities
in the graph anyway. An illustrative sub-object ‘Column1_Acanthus’ is constructed,
without any aggregated geometries. The optional dropdown menu ‘part of subobject’ is
used to state its parent object. Like the other Sub-Objects, either an aggregated or a
hosted relationship with the parent object can be defined (in this case with another
Sub-Object) (Fig. 6.43).
Fig. 6.43: Defining a Sub-Object as a child of another Sub-Object
Figure 6.44: Setting the semantics of the Eastern Wall (Rhino 6 / STG Plugin)
1) Object, Storey, Space
2) Type
3) LoA
4) Product Types
5) Setting/Selecting Objects in the viewer
Figure 6.45: Setting the semantics of the Fireplace, occluded elements are selected in the viewer (Rhino 6 / STG Plugin)
1) Setting the hosting parent object
2) Defining the Fireplace’s Sub-Objects
3) ‘Remark’ and ‘Occlusion’
4) List with remarks on the selected geometry instance
6.4 Additional content
A method to add relevant information that cannot (yet) be implemented via the
plugin’s user interface was outlined in section 5.7. This method will now be applied to
the case study. As the main example, information about the survey methods is added
to the point cloud URIs.
All objects, except the walls and the point cloud that sets the height of the vault, are
based on point cloud 6.1.1.1. To create this point cloud, a Leica Scan Station P30 was
used during the first surveying campaign. After loading the graph in Stardog, this
information can be added:
INSERT {?pointcloud stg:usedEquipment "Leica Scan Station P30"}
WHERE {
  ?object stg:hasPointCloudFile ?pointcloud .
  {?object a product:Door-DOOR}
  UNION {?object a stgp:FirePlace}
  UNION {?object a product:Window-WINDOW}
  UNION {?object a stgp:Vault}
  UNION {?object a product:Column-COLUMN}
  UNION {?object a product:DiscreteAccessory-BRACKET}
}
This can be done either in the Stardog Web Console (localhost:5820) or using the
SPARQL tab of the plugin. Fig. 6.46 shows that the SPARQL INSERT was successful
(Appendix E, line 737-822).
Fig. 6.46: Performing a check for the SPARQL INSERT (STG Plugin)
We can do the same for the walls. For the point clouds that were used as a source for
the walls (6.1.1.2), the same Leica Scan Station P30 was used for the TLS part, as this
point cloud is based on point cloud 6.1.1.1 (this step could thus have been included in
the previous query as well). For the terrestrial photogrammetry, a Canon EOS 5D was
used, with a SIGMA 24-70mm lens at aperture F2.8. The drone used for aerial
photogrammetry had a built-in 4K camera (DJI Phantom 4 Pro). These are all added
to the graph. Note that they are added as strings here, which are less stable than
URIs. As indicated in section 4.2, the best situation would be to link to the
manufacturer’s RDF database.
INSERT {?pointcloud stg:usedEquipment "Leica Scan Station P30", "Canon EOS 5D (lens: SIGMA 24-70mm;
aperture F2.8)", "DJI Phantom 4 Pro"}
WHERE {?object stg:hasPointCloudFile ?pointcloud . ?object a product:Wall-SOLIDWALL}
The last point cloud for which the survey equipment has to be included is the floor of
the count’s bedroom, which serves as the limit for the vault’s height. This survey
campaign was done in the winter of 2018 with a Leica BLK360 TLS. A problem arises
in this case, because the vault is based on two point clouds, and in the above query
both have been set as made by the Leica Scan Station P30. To solve this, we can
either explicitly state the name of the point cloud in the SPARQL query, or change
this manually in the .ttl file. As a demonstration, the second option is chosen,
although such an approach is not recommended because it is very error-prone. After
exporting the RDF graph from the Stardog Web Console, it can be opened in a simple
text editor and changed manually. Somewhere in the file, it is stated that:
inst:PC_Slab-Floor_Floor_BedroomOfTheCount_subsampled_OCTREE_LEVEL_10_SUBSAMPLED
stg:usedEquipment "Leica Scan Station P30" .
Fig. 6.47: Manually changing information in a Turtle file (Notepad ++)
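Such a manual edit can also be scripted, which is repeatable and less error-prone. A small sketch in Python; the subject identifier is abbreviated here (the real point cloud URI in the graph is longer):

```python
# Replace the equipment literal only on lines mentioning the given subject,
# leaving all other stg:usedEquipment triples untouched.
def patch_equipment(ttl_text, subject, old, new):
    out = []
    for line in ttl_text.splitlines():
        if subject in line and old in line:
            line = line.replace(old, new)
        out.append(line)
    return "\n".join(out)

# Abbreviated, illustrative triple for the bedroom floor point cloud:
ttl = 'inst:PC_Slab-Floor_Bedroom stg:usedEquipment "Leica Scan Station P30" .'
patched = patch_equipment(ttl, "PC_Slab-Floor_Bedroom",
                          "Leica Scan Station P30", "Leica BLK360")
```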
As a finish for the project, two more triples are inserted, one to link the Rhino document
to the graph and one to relate the building to the DBpedia page of the Gravensteen
Castle (Appendix E, line 16-19):
inst:Gravensteen_Ghent stg:hasRhinoFile inst:RF_Casestudy_Gravensteen_PresenceChamber .
inst:RF_Casestudy_Gravensteen_PresenceChamber stg:hasLocalVersion
“C:/STG/CaseStudy_PresenceChamber_nogeom/RhinoFiles/Casestudy_Gravensteen_PresenceChamber.3dm” .
inst:Gravensteen_Ghent rdfs:seeAlso <http://uk.dbpedia.org/page/Gravensteen_(Gent)> .
This concludes the section on graph creation. Further information could be
added similarly, either with SPARQL queries, manually, or by coding another plugin
(extension) to perform such enrichment in a more user-friendly environment.
6.5 Conclusion
This chapter illustrated the workflow of the proposed scan-to-graph process with a case
study. The first part did not differ substantially from a regular scan-to-BIM process:
the available point clouds were analysed and segmented as a preparation for modelling.
3D modelling was done in Rhinoceros 6, a CAD rather than a BIM environment, which
probably helped to cope with irregular geometries. Comparing the 3D models of the
elements with the point clouds is an essential feedback step for any as-is geometry
reconstruction method: for scan-to-graph as well as for scan-to-BIM and regular reverse
engineering processes. The last step involved semantic enrichment of the model, using
the plugin and beyond, resulting in an RDF graph that includes general semantic
information, metadata about occlusions, modelling remarks and the used point clouds,
and (optionally) also the 3D geometries. Only the graph without STEP representation
of the geometries is added as Appendix E. Due to property rights, the graph that
contains STEP representations, the .3dm Rhino file and the segmented point cloud
objects are not openly accessible. Images of the total model are added to Appendix A.
Chapter VII: Discussion and conclusion
An alternative approach for scan-to-BIM processes was developed in this master thesis.
It was investigated how Linked Data techniques might be used to cope with several
challenges that prevent the more frequent creation of as-is BIMs of existing buildings.
The research resulted in a proposed variant of scan-to-BIM, called
‘scan-to-graph’. The scan-to-graph process is embedded in the Resource Description
Framework (RDF), the basic ‘language’ to connect information in a Linked Data
context. This final chapter evaluates the research regarding several criteria.
It can be concluded that a main advantage of using a Linked Data approach instead of
a 'classic' scan-to-BIM process is that it enables more efficient metadata handling
(section 7.1). Furthermore, it has been shown implicitly that the general advantages of
Linked Data in the AEC industry as outlined by Pauwels et al. (2017b) also apply to
the digitalisation of existing buildings. In this thesis, this was mainly related to the
benefits defined there as 'Interoperability' and 'Linking across domains' (section 7.2).
To be able to perform such a scan-to-graph process, a basic plugin for Rhino 6 was
developed to connect semantics with geometry and serialize the result into an RDF
graph (reviewed in section 7.3). As a consequence of detaching from 'conventional' BIM,
there was greater freedom to choose the main 3D modelling environment. In the
modelling phase of the case study, the use of Rhino 6 for as-is modelling has been tested
(reviewed in section 7.4).
7.1 Metadata handling
Point clouds, made using TLS or photogrammetry, currently serve as the main basis
for the creation of a 3D model of an existing building. Although point clouds cover a
great part of the building geometry, remote sensing techniques will never reveal all
information about a building. Even a total coverage of the visible geometry does not
contain any information about the inner structure of walls, floors, etc. Moreover, in
practice such total coverage is seldom achieved. Therefore, a 'scan-to-graph' vocabulary was
defined as a main deliverable of this research. It contains specific Linked Data classes
and properties for keeping track of sources, modelling remarks and occlusions. These
modelling remarks are currently bundled in a very general class. In the case study, their
use was restricted to linking comments about geometric assumptions (plus stating that
a geometry is 'internal'). One subclass has already been defined, for making statements
about occlusions. In future research, subclasses specific for assumptions on inner
structure or other uncertainties could be developed.
Another metadata-related definition included in the scan-to-graph vocabulary keeps
track of the (represented) Level of Accuracy of the 3D reconstructed objects, after a
geometric deviation analysis has been carried out. This way, an easy check on the
accuracy of the model could be carried out by a simple SPARQL query, e.g. for better
monitoring of the 3D modelling process by the client or when a modelling team is
working together on one project.
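Such a check could, as a sketch, take the following form; the properties are those defined in the STG vocabulary (Appendix C), while the exact shape of the LOA literal values (e.g. "LOA20") is an assumption:

```sparql
# List every geometry together with its represented LOA value and,
# where recorded, the deviation analysis that produced it
SELECT ?geometry ?loaValue ?analysis
WHERE {
  ?geometry stg:hasLOA ?loa .
  ?loa stg:hasLOAvalue ?loaValue .
  OPTIONAL { ?loa stg:usedDeviationAnalysis ?analysis }
}
ORDER BY ?loaValue
```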
7.2 General LD benefits applied to existing buildings
7.2.1 ‘Interoperability’
In Pauwels et al. (2017b), ‘Interoperability’ relates to the use of Linked Data for
developing a common data model that can use the same content in different
applications, while preserving the actual information. In the context of this research,
this ambition was interpreted as the implementation of parallel representations of the
same geometry, using open formats for use in various software packages. The geometry
formats currently linked are E57 for the source point clouds and STEP for the 3D
representation: E57 is linked as a URL reference (a URI that also represents the
location of the document), both locally and on the web, while STEP geometries are
made explicit by embedding them in the graph as Literal strings. As mentioned, other
geometry types (e.g. meshes) might be included in a similar way.
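As an illustration, such parallel representations of a single element might look as follows in Turtle; the instance names are hypothetical, while the properties and datatypes are those of the STG vocabulary (Appendix C):

```turtle
inst:Wall_01 a bot:Element ;
    stg:hasPointCloudFile inst:PC_Wall_01 ;  # source point cloud (E57)
    geo:hasGeometry inst:Geom_Wall_01 .      # reconstructed 3D geometry

# The E57 file is linked as a URL reference; a local copy is stated separately
inst:PC_Wall_01 a stg:PointCloudFile ;
    stg:hasLocalVersion "C:/STG/PointClouds/Wall_01.e57" .

# The STEP geometry is made explicit by embedding the raw serialization
inst:Geom_Wall_01 a geo:Geometry ;
    stg:asSTEP "ISO-10303-21; HEADER; ... END-ISO-10303-21;"^^stg:STEPRepresentation .
```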
7.2.2 ‘Linking across domains’
Another promising Linked Data consequence outlined in Pauwels et al. (2017b) is
‘Linking across domains’. Examples are the combination of ontologies, combining
product data with building data and connecting BIM with other disciplines. The
proposed scan-to-graph vocabulary adheres to the principles of a Semantic Web-based
AEC industry, which rely on the combination of simple, modular and extensible
ontologies (as outlined in Rasmussen et al., 2018) rather than on large ontologies that
try to be all-encompassing. Describing information with ontologies allows it to be
structured in a centralized model, with modularity keeping that model at a manageable
size.
The combination of product data with building data is mentioned in Pauwels et al.
(2017b) in the case of product manufacturer data. In the context of as-is buildings, this
can be broadened to also include historical products. As an illustration, the product
classes that were developed for specific elements in the case study are an extension of
the Building Product Ontology, but also link to RDF-based definitions of these
elements in the Art and Architecture Thesaurus (AAT). This way, extra 'product
information' is implicitly added to the product classes.
Connecting BIM with other disciplines is an important concept in the case of as-built
semantic 3D models. Cultural Heritage was mentioned in the text as such a discipline
(e.g. for extending a BIM's usage to a virtual museum); other examples could be
Facility Management or methods to map energy analysis results of existing buildings
to the model. The core idea of the scan-to-graph process 'as proposed' is to serve as a
general basis for such existing-building-related disciplines, continuing this trend of
modular ontologies that are small and easy to connect.
7.3 Plugin
Since no software exists for performing a scan-to-graph process (nor, in general, any
3D modelling software that uses Linked Data), a large part of the thesis consisted of
coding such an application as a plugin for Rhino 6. The functionality of the plugin is limited to
building topology, product classification and the core domain of the scan-to-graph
vocabulary: mapping sources and modelling uncertainties to geometry. Building on
section 7.2, future research might extend the plugin with methods for enriching the
geometry with information related to heritage, structure, building physics etc.,
ultimately providing a Linked Data-based BIM environment.
Although the graphs generated by the plugin consist only of open formats, the
plugin itself has a major dependency on the commercial Rhino software environment.
This was considered a necessary trade-off, because the dependency was outweighed by
the benefits: a well-supported interface for writing custom plugins and the versatility
of Rhino for modelling distorted geometries.
7.4 CAD vs BIM for as-is modelling
Apart from the plugin development, the modelling phase of the case study was the
most time-intensive part of the research. As indicated by Volk et al. (2014), the
demanding process of modelling out-of-plumb geometries is one of the core reasons
preventing a more frequent application of BIM for existing buildings. A consequence of
programming the plugin in Rhino was that a CAD environment was used for modelling
instead of a BIM environment. As only one environment was used, and an in-detail
comparison between the reverse engineering functionality of different packages was out
of the scope of this thesis (and would in any case be influenced by the experience of the
modeller), only some general remarks on the use of Rhino for as-is modelling can be
made, without comparing it to other environments.
Generally, the Rhino interface provided an easy-to-use workflow, with quick
perspective changes in 3D as well as orthogonal views. 2D drawing on reference planes
in a 3D perspective proved a valuable technique for precisely pinpointing the
curvature of different surfaces. A flexible use of orientable section planes and layers
significantly improved visibility and alignment to the point cloud underlay.
The main remark is that every object was modelled by drawing curves and
constructing surfaces between these curves, thus literally defining an object by its
boundaries and then converting it to a closed polysurface. This was necessary because
solid primitives are too platonic to represent irregular as-is geometries. This was
probably the greatest strength as well as the greatest issue: using edges as the basic
components of each object significantly eased the implementation of geometric
irregularities. On the other hand, every object had to be converted to a closed solid by
hand, and its edges had to be completely 'watertight'. This manual control step is very
intensive and prone to errors. Checking for invalid geometries and 'capping' openings,
which were often invisible mathematical inconsistencies, cancelled out much of the
time that was gained by Rhino's advanced modelling workflow.
7.5 Conclusion
It may be clear that the possibilities of using Linked Data for semantic enrichment of
existing buildings are not limited to the topics of this dissertation. Eventually, Linked
Data can serve as a unifying framework to combine as-is BIM, as-planned BIM,
Cultural Heritage, GIS and so on, in a truly centralized semantic model that links all
the available information about a building: its geometry, its structure, its history and
all the other disciplines that are concerned with existing buildings. With such a
prospect in mind, it is hoped that this research project can indeed serve as a
background for further research on the implementation of these topics.
Bibliography
Abdul-Ghafour, S., Ghodous, P., Shariat, B., Perna, E., 2007. A common design-features ontology for product data semantics interoperability, in: Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence. IEEE
Computer Society, pp. 443–446.
Albertz, J., 2002. Albrecht Meydenbauer - Pioneer of photogrammetric documentation of the cultural heritage. Int. Arch. Photogramm. Remote Sens.
Spat. Inf. Sci. 34, 19–25.
Allemang, D., Hendler, J., 2011. Semantic web for the working ontologist: effective modeling in RDFS and OWL. Elsevier. USA.
Barbosa, M.J., Pauwels, P., Ferreira, V., Mateus, L., 2016. Towards increased BIM
usage for existing building interventions. Struct. Surv. 34, 168–190.
Bassier, M., Vergauwen, M., Van Genechten, B., 2017. Automated classification of heritage buildings for as-built BIM using machine learning techniques. https://doi.org/10.5194/isprs-annals-IV-2-W2-25-2017
Bonduel, M., Bassier, M., Vergauwen, M., Pauwels, P., Klein, R., 2017. Scan-to-BIM output validation: towards a standardized geometric quality assessment of building information models based on point clouds. ISPRS - Int. Arch.
Photogramm. Remote Sens. Spat. Inf. Sci. XLII-2/W8, 45–52. https://doi.org/10.5194/isprs-archives-XLII-2-W8-45-2017
Dubois, S., Vanhellemont, Y., de Bouw, M., 2017. WTCB Innovation Paper: Geometrische opmeting in hoge resolutie - 3D digitalisering in het BIM-tijdperk. WTCB.
Hauck, O., Kuroczyński, P., 2014. Cultural Heritage Markup Language, in: Proceedings of the 20th International Conference on Cultural Heritage and New Technologies 2015, Museen der Stadt Wien, Vienna.
Kraus, K., 2012. Photogrammetrie: Geometrische Informationen aus Photographien und Laserscanneraufnahmen. Walter de Gruyter, Germany.
Kuroczyński, P., Hauck, O., Dworak, D., 2016. 3D Models on Triple Paths-New Pathways for Documenting and Visualizing Virtual Reconstructions, in: 3D
Research Challenges in Cultural Heritage II. Springer, pp. 149–172.
Liao, C.-T., Huang, H.-H., 2012. Classification by Using Multispectral Point
Cloud Data. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 137– 141.
Luhmann, T., Robson, S., Kyle, S.A., Boehm, J., 2014. Close range photogrammetry: principles, techniques and applications. Walter de Gruyter, Germany.
Mezzino, D., 2017. Cultural built heritage’s tangible & intangible dimensions and digitalization challenges (PhD Thesis). Politecnico di Torino-Carleton University.
Nan, L., Xie, K., Sharf, A., 2012. A Search-Classify Approach for Cluttered Indoor Scene Understanding. https://doi.org/10.1145/2366145.2366156
Nuttens, T., Stal, C., Wisbecq, J., Deruyter, G., De Wulf, A., 2014. Field comparison of pulse-based and phase-based laser scanners for civil engineering applications, in: 14th International Multidisciplinary Scientific Geoconference (SGEM). pp.
169–176.
Ochmann, S., Vock, R., Wessel, R., Klein, R., 2016. Automatic reconstruction of parametric building models from indoor point clouds. Spec. Issue
CADGraphics 2015 54, 94–103. https://doi.org/10.1016/j.cag.2015.07.008
Perry, M., Herring, J., 2012. OGC GeoSPARQL - A Geographic Query Language for RDF Data. (OGC 11-052r4). Open Geospatial Consortium.
Pauwels, P., Krijnen, T., Terkaj, W., Beetz, J., 2017a. Enhancing the ifcOWL ontology with an alternative representation for geometric data. Autom. Constr.
80, 77–94. https://doi.org/10.1016/j.autcon.2017.03.001
Pauwels, P., Terkaj, W., 2016. EXPRESS to OWL for construction industry: Towards a recommendable and usable ifcOWL ontology. Autom. Constr. 63,
100–133. https://doi.org/10.1016/j.autcon.2015.12.003
Pauwels, P., Van Deursen, D., Verstraeten, R., De Roo, J., De Meyer, R., Van de Walle, R., Van Campenhout, J., 2011. A semantic rule checking environment
for building performance checking. Autom. Constr. 20, 506–518.
Pauwels, P., Zhang, S., Lee, Y.-C., 2017b. Semantic web technologies in AEC
industry: A literature overview. Autom. Constr. 73, 145–165. https://doi.org/10.1016/j.autcon.2016.10.003
Rasmussen, M., Pauwels, P., Lefrançois, M., Schneider, G.F., Hviid, C., Karlshøj, J., 2017. Recent changes in the building topology ontology, in: LDAC2017-5th Linked Data in Architecture and Construction Workshop.
Rasmussen, M., Pauwels, P., Hviid, C., Karlshøj, J., 2018. The BOT ontology: standards within a decentralized web-based AEC Industry. Under review.
Schwermann, R., 2017. Lecture slides for the course ‘Photogrammetrie’ at RWTH Aachen. RWTH Aachen, Aachen.
Segaran, T., Evans, C., Taylor, J., 2009. Programming the Semantic Web: Build
Flexible Applications with Graph Data. O’Reilly Media, Inc. USA.
Shi, Y., Long, P., Xu, K., Huang, H., Xiong, Y., 2016. Data-driven contextual
modeling for 3D scene understanding. Comput. Graph. 55, 55–67. https://doi.org/10.1016/j.cag.2015.11.003
Svenshon, H., Grellert, M., 2010. Rekonstruktion ohne Befund? in: Befund und
Rekonstruktion. Mitteilungen der Deutschen Gesellschaft für Archäologie des Mittelalters und der Neuzeit, 22. Paderborn.
Thomson, C.P.H., 2016. From Point Cloud to Building Information Model: Capturing and Processing Survey Data Towards Automation for High Quality 3D Models to Aid a BIM Process (Doctoral). UCL (University College London).
U.S. Institute of Building Documentation, 2016. Level of Accuracy (LOA) for Building Documentation. Document C120 Version 2.0.
van Genechten, B., 2008. Theory and practice on Terrestrial Laser Scanning. Universidad Politecnica de Valencia Editorial. Valencia, Spain.
Volk, R., Stengel, J., Schultmann, F., 2014. Building Information Modeling (BIM) for
existing buildings - Literature review and future needs. Autom. Constr. 38,
109–127. https://doi.org/10.1016/j.autcon.2013.10.023
Referenced websites:
- Hausenblas, M., http://5stardata.info/en/, accessed 23/05/2018, updated 31/08/2015
- Advamag, Inc., http://www.madehow.com/Volume-2/Stained-Glass.html, accessed 18/05/2018
- BAM Advies & Engineering, https://www.bambouwentechniek.nl/nieuws/2017/11/augmented-reality-in-
facilitair-management, accessed 10/04/2018, updated 10/11/2017
- BauDataWeb, http://semantic.eurobau.com/, accessed 29/04/2018
- Berners-Lee, T., https://tools.ietf.org/html/rfc3986, accessed 21/04/2018, last updated 01/2005
- BIMobject, http://bimobject.com/en-us, accessed 25/05/2018
- BuildingSMART, http://bsdd.buildingsmart.org/, accessed 29/04/2018
- BuildingSMART, http://www.ifcwiki.org/index.php?title=IFC_Wiki, accessed 28/04/2018, updated
26/04/2016
- BuildingSMART, http://www.buildingsmart-tech.org/, accessed 28/04/2018
- Capturing Reality s.r.o, https://www.capturingreality.com/ , accessed 19/05/2018
- CloudCompare, http://www.danielgm.net/cc, accessed 04/04/2018
- DURAARK, http://duraark.eu/tag/standardization/, accessed 29/04/2018
- Fugier, D., http://www.food4rhino.com/app/e57-file-import, accessed 12/10/2017
- Getty, http://www.getty.edu/research/tools/vocabularies/aat/, accessed 25/05/2018, updated 07/03/2017
- Heath, T., http://linkeddata.org/faq, accessed 18/04/2018
- Herder Institut, http://www.patrimonium.net/find/67/result/17db7047-5cec-e404-90c2-ce32c099f3bb,
accessed 10/05/2018
- Holten, M., Pauwels, P., Schneider, G., Verhelst, L., https://github.com/w3c-lbd-cg/bot, accessed
28/04/2018, updated 21/03/2018
- iCapp GmbH, https://rhinoreverse.icapp.ch/english/, accessed 27/05/2018
- International Committee for Documentation, http://www.cidoc-crm.org/, accessed 10/05/2018
- Leica Geosystems, http://hds.leica-geosystems.com/en/Leica-Cyclone_6515.htm, accessed 19/05/2018
- LUMsearch, https://lumsearch.com/en-US#0, accessed 29/04/2018
- OGC, http://www.opengeospatial.org/standards/geosparql, accessed 05/05/2018
- Pauwels, P., Holten, M., Terkaj, W., Schneider, G., https://github.com/pipauwel/product, accessed
25/05/2018, updated 18/04/2018
- RDFLib Team, https://github.com/RDFLib/rdflib, accessed 28/05/2018, last updated 14/05/2018
- RESURF, http://www.resurf3d.com/products.htm, accessed 27/05/2018
- Riegl, http://www.riegl.com/nc/products/terrestrial-scanning/produktdetail/product/scanner/58/,
accessed 31/05/2018
- Riegl, https://www.3dlasermapping.com/riegl-uav-laser-scanners/, accessed 25/05/2018
- Schiemann, B., Oischinger, M., Görz, G., http://erlangen-crm.org/, accessed 10/05/2018
- Schiemann, B., Oischinger, M., Görz, G., http://erlangen-crm.org/docs/ecrm/current/index.html#anchor-
2008326450, accessed 13/05/2018
- Stardog Union, https://www.stardog.com, accessed 2/05/2018
- STEP Tools, Inc., https://www.steptools.com/stds/stp_aim/html/t_b_spline_surface_with_knots.html,
accessed 26/05/2018
- W3C, https://www.w3.org/2001/tag/doc/qnameids-2004-01-14.html, accessed 21/04/2018, updated
14/01/2004
- W3C, https://www.w3.org/2015/spatial/wiki/Further_development_of_GeoSPARQL, accessed 28/04/2018,
updated 26/09/2016
- W3C, https://www.w3.org/community/lbd/, accessed 08/05/2018
- W3C, https://www.w3.org/standards/semanticweb/, accessed 20/04/2018
- W3C, https://www.w3.org/standards/semanticweb/inference, accessed 05/05/2018, updated 24/06/2014
- W3C, https://www.w3.org/standards/semanticweb/ontology, accessed 28/05/2018
- W3C, https://www.w3.org/TR/rdf11-primer/, accessed 15/05/2018, updated 24/06/2014
- W3C, https://www.w3.org/TR/rdf-schema/, accessed 22/04/2018, updated 25/02/2014
- W3C, https://www.w3.org/TR/rdf-sparql-query, accessed 30/04/2018, updated 26/03/2013
- W3C, https://www.w3.org/TR/turtle/, accessed 21/04/2018, updated 25/02/2014
- W3C, https://www.w3.org/wiki/URI, accessed 20/04/2018, updated 01/02/2005
Appendix A
Full-page figures
Fig. 4.3: Topology of a graph constructed with the plugin (p48)
Fig. 4.4: Linking metadata to a geometric instance (p49)
Fig. 4.5: Linking building elements with geometry and modelling remarks (p50)
Fig. 6.20: Second Cloud-to-Cloud deviation analysis of the vault (p76)
Fig. 6. x: Case study model
Fig. 4.3: Topology of a graph constructed with the plugin (p48)
(yEd v3.18.0.2 )
Fig. 4.4: Linking metadata to a geometric instance (p49)
(yEd v3.18.0.2 )
Fig. 4.5: Linking building elements with geometry and modelling remarks (p50)
(yEd v3.18.0.2 )
Fig. 6.20: Second Cloud-to-Cloud deviation analysis
(CloudCompare Stereo v2.8.1) (p76)
Fig. 6. x: Case study model
(Rhino 6)
Appendix B
Generate STEP geometry from an RDF graph
from rdflib import Namespace 1
from rdflib.plugins.sparql import prepareQuery 2
import rdflib 3
import os 4
5
#replace this path by the path to your .tll graph 6
graph_to_parse = str(r"C:\Users\jeroe\Desktop\RhinoTests\Columns.ttl") 7
8
#SPARQL query for retrieving all STEP geometries in the graph 9
SPARQLquery = str('SELECT ?STEP ?RhinoID WHERE {?entity stg:asSTEP ?STEP . ?entity 10
stg:hasRhinoID ?RhinoID}') 11
12
#main function 13
def STEPquery(graph,query): 14
g=rdflib.Graph() 15
16
#binding the STG namespace to the graph 17
STG = Namespace("https://raw.githubusercontent.com/JWerbrouck/Thesis/master/stg.ttl#") 18
g.namespace_manager.bind("stg", STG, override=False) 19
20
if os.path.exists(graph): 21
try: 22
file_format = rdflib.util.guess_format(graph) 23
g.parse(graph, format=str(file_format)) 24
except: 25
pass 26
27
#preparing the query in rdflib 28
STEPQuery = prepareQuery(query,initNs={"stg" : STG}) 29
30
#performing the query 31
STEPS = g.query(STEPQuery) 32
33
#each result represents a STEP geometry, which is stored in a folder ‘GEOM’ in the same folder as the 34
#graph. The filename corresponds with the RhinoID of the geometry, the second variable in the 35
#prepared SPARQL query 36
for row in STEPS: 37
storeFolder = os.path.splitext(graph)[0] + "\\GEOM\\reconstruction\\" 38
print storeFolder 39
if not os.path.exists(storeFolder): 40
os.makedirs(storeFolder) 41
reconstructionFile = storeFolder + row[1] + ".stp" 42
reconstruction = open(reconstructionFile,'w') 43
for line in row[0]: 44
reconstruction.write(line) 45
return 46
47
#the function is executed with the arguments stated in line 7 (name of the graph) and 10 (query) 48
STEPquery(graph_to_parse,SPARQLquery)49
Appendix C
STG ontology (*.ttl)
@prefix : <https://raw.githubusercontent.com/JWerbrouck/Thesis/master/stg.ttl#> . 1
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . 2
@prefix owl: <http://www.w3.org/2002/07/owl#> . 3
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . 4
@prefix bot: <https://w3id.org/bot#> . 5
@prefix geo: <http://www.opengis.net/ont/geosparql#> . 6
@prefix stg: <https://raw.githubusercontent.com/JWerbrouck/Thesis/master/stg.ttl#> . 7
8
<https://github.com/JWerbrouck/Thesis/blob/master/stg.ttl> rdf:type owl:Ontology . 9
10
################################################################# 11
# Classes 12
################################################################# 13
14
stg:RepresentingFile a owl:Class ; 15
rdfs:label "File with geometrical representation"@en , 16
"Bestand met geometrie"@nl ; 17
18
rdfs:comment "A file that contains a geometrical representation of the whole or a part of the whole."@en , 19
"Een bestand dat een geometrische representatie van het geheel of van een deel ervan bevat."@nl . 20
21
stg:SourceFile a owl:Class ; 22
rdfs:label "Source"@en , 23
"Bron"@nl ; 24
rdfs:comment "File that contains information about the real-world object and can serve as a modelling source."@en , 25
"Bestand met informatie over het reële object, dat kan dienen als bronmateriaal bij het modelleren."@nl ; 26
27
rdfs:subClassOf stg:RepresentingFile. 28
29
stg:PointCloudFile a owl:Class ; 30
rdfs:subClassOf stg:SourceFile ; 31
rdfs:label "Pointcloudfile"@en , 32
"Puntwolkdocument"@nl ; 33
34
rdfs:comment "Refers to an external point cloud file that represents the instance geometrically (URI)."@en , 35
"Verwijst naar een extern puntwolkbestand dat het voorwerp geometrisch representeert (URI)."@nl . 36
37
stg:RhinoFile a owl:Class ; 38
rdfs:subClassOf stg:RepresentingFile ; 39
40
rdfs:label ".3dm file"@en , 41
".3dm bestand"@nl ; 42
43
rdfs:comment "Refers to an external Rhinoceros (McNeel) document (.3dm) that contains a geometrical 44
representation of the whole model or parts of it."@en , 45
"Verwijst naar een extern Rhinoceros (McNeel) document (.3dm) dat een geometrische representatie 46
van het gehele model of delen ervan bevat."@nl . 47
48
stg:RhinoID a rdfs:Datatype ; 49
rdfs:label "Literal containing Rhino-ID"@en , 50
"Literal met Rhino-ID"@nl ; 51
52
rdfs:comment "A Literal (as datatype) that encapsulates a Rhino-ID that identifies an object representation in an 53
stg:RhinoDocument."@en , 54
"Een Literal (as datatype) die een Rhino-ID bevat dat een objectrepresentatie in een 55
stg:RhinoDocument identificeert."@nl. 56
57
stg:STEPRepresentation a rdfs:Datatype ; 58
rdfs:label "STEP Literal"@en , 59
"STEP Literal"@nl ; 60
61
rdfs:comment "A Literal that contains the raw STEP representation of the instance"@en , 62
"Een Literal die de STEP representatie van het object als ruwe data beschrijft"@nl . 63
stg:InternalGeometry a owl:Class ; 64
rdfs:label "Internal Geometry"@en ; 65
rdfs:comment "Indicates whether a geometry is the part of an object contained in another object (e.g. part of a 66
console is 'internal' in a wall)"@en . 67
68
stg:ProjectOrigin a owl:Class ; 69
rdfs:label "Project Origin"@en ; 70
rdfs:comment "A geometry referring to the Project's Origin"@en ; 71
rdfs:subClassOf geo:Geometry . 72
73
stg:ModellingRemark a owl:Class ; 74
rdfs:label "Modellingremark"@en ; 75
rdfs:comment "Something that contains information about a geometry"@en . 76
77
stg:Assumption rdf:type owl:Class ; 78
rdfs:label "Assumption"@en ; 79
rdfs:comment "An assumption that contains information about a geometry"@en ; 80
rdfs:subClassOf stg:ModellingRemark . 81
82
stg:LevelOfAccuracy a owl:Class ; 83
rdfs:label "USIBD LOA"@en ; 84
rdfs:comment "States the (represented) LOA of a geometry after deviation analysis (definition according to USIBD 85
LOA specification)"@en ; 86
rdfs:subClassOf stg:ModellingRemark . 87
88
stg:OccludedGeometry a owl:Class ; 89
rdfs:label "Occluded Geometry"@en ; 90
rdfs:comment "Indicates whether a geometry contains an occluded area"@en ; 91
rdfs:subClassOf stg:ModellingRemark . 92
93
################################# 94
# OBJECT PROPERTIES 95
################################# 96
97
stg:hasRepresentingFile a owl:ObjectProperty ; 98
rdfs:label "Document with geometry"@en , 99
"Document met geometrie"@nl ; 100
rdfs:comment "Relates an instance to a document that is its geometrical representation"@en , 101
"Relateert een objectinstantie aan een document dat dit geometrisch representeert"@nl ; 102
rdfs:range stg:RepresentingFile . 103
104
stg:hasRhinoFile a owl:ObjectProperty ; 105
rdfs:label "has Rhino File"@en , 106
"heeft Rhino bestand"@nl ; 107
rdfs:comment "Connects an instance with an external Rhino (.3dm) file that contains its geometry."@en , 108
"Verbindt een instantie met een extern Rhino (.3dm) document dat de geometrie van dit element 109
bevat"@nl ; 110
rdfs:subPropertyOf stg:hasRepresentingFile ; 111
rdfs:range stg:RhinoFile . 112
113
stg:hasPointCloudFile a owl:ObjectProperty ; 114
rdfs:label "has Point Cloud File"@en , 115
"heeft Puntwolkbestand"@nl ; 116
rdfs:comment "Connects an instance with an external Point Cloud file that contains its geometry."@en , 117
"Verbindt een instantie met een externe puntwolk die de geometrie van dit element bevat"@nl ; 118
rdfs:subPropertyOf stg:hasRepresentingFile ; 119
rdfs:range stg:PointCloudFile . 120
121
stg:hasLocalVersion a owl:ObjectProperty ; 122
rdfs:label "has Local Version"@en , 123
"heeft lokale versie"@nl ; 124
rdfs:comment "Connects an instance with an external (local) file that contains its geometry."@en , 125
"Verbindt een instantie met een extern (lokaal) bestand dat de geometrie van dit element bevat"@nl ; 126
rdfs:subPropertyOf stg:hasRepresentingFile ; 127
rdfs:domain stg:PointCloudFile ; 128
rdfs:range rdfs:Literal . 129
stg:asSTEP a owl:DatatypeProperty; 130
rdfs:label "as STEP"@en , 131
"als STEP"@nl ; 132
rdfs:comment "connects an instance with a Literal containing its raw STEP serialization"@en , 133
"Verbindt een instantie met een Literal die de ruwe STEP-serializatie ervan behelst"@nl ; 134
rdfs:subPropertyOf geo:hasSerialization ; 135
rdfs:domain geo:Geometry ; 136
rdfs:range stg:STEPRepresentation . 137
138
stg:hasRhinoID a owl:DatatypeProperty ; 139
rdfs:label "reference to RhinoID"@en , 140
"refereert naar RhinoID"@nl ; 141
rdfs:comment "connects an instance with a Literal containing a RhinoID as a reference to the object representation 142
in the stg:RhinoFile"@en , 143
"Verbindt een instantie met een Literal die een RhinoID bevat als referentie naar de objectrepresentatie in het stg:RhinoFile"@nl ; 144
rdfs:domain geo:Geometry ; 145
rdfs:range stg:RhinoID . 146
147
stg:hasOrigin a owl:ObjectProperty ; 148
rdfs:subPropertyOf geo:hasGeometry ; 149
rdfs:label "links to Project Origin"@en ; 150
rdfs:domain geo:Feature ; 151
rdfs:range stg:ProjectOrigin . 152
153
stg:hasModellingRemark a owl:ObjectProperty ; 154
rdfs:label "Modelling remark"@en , 155
"Modelleeropmerking"@nl ; 156
rdfs:comment "Statement about the geometry, such as metadata or an assumption that was made while modelling 157
as-built geometry"@en , 158
"Opmerking over de geometrie, zoals metadata of een modelleerveronderstelling"@nl ; 159
rdfs:domain geo:Geometry ; 160
rdfs:range stg:ModellingRemark . 161
162
stg:hasOcclusion a owl:ObjectProperty ; 163
rdfs:label "Occlusion"@en , 164
"Occlusion"@nl ; 165
rdfs:subPropertyOf stg:hasModellingRemark ; 166
rdfs:comment "Defines a geometry as an occluded area"@en , 167
"Definieert een geometrie als een occluded area."@nl ; 168
rdfs:domain geo:Geometry ; 169
rdfs:range stg:OccludedGeometry . 170
171
stg:hasLOA a owl:ObjectProperty ; 172
rdfs:label "Level of Accuracy"@en , 173
"Level of Accuracy"@nl ; 174
rdfs:subPropertyOf stg:hasModellingRemark ; 175
rdfs:comment "States that a geometry has a certain (represented) LOA (USIBD definition)"@en , 176
"Linkt een geometrie aan een bepaalde (represented) LOA (zoals gedefinieerd door USIBD)."@nl ; 177
rdfs:domain geo:Geometry ; 178
rdfs:range stg:LevelOfAccuracy . 179
180
stg:usedDeviationAnalysis a owl:DatatypeProperty ; 181
rdfs:label "Deviation Analysis"@en ; 182
rdfs:comment "States which deviation analysis was used to determine the (represented) LOA"@en ; 183
rdfs:domain stg:LevelOfAccuracy ; 184
rdfs:range rdfs:Literal . 185
186
stg:hasLOAvalue a owl:DatatypeProperty ; 187
rdfs:label "LOA value"@en ; 188
rdfs:comment "States the represented LOA value of a geometry after a geometric deviation analysis"@en ; 189
rdfs:domain stg:LevelOfAccuracy ; 190
rdfs:range rdfs:Literal . 191
192
stg:denotesRemark a owl:DatatypeProperty ; 193
rdfs:label "reference to modelling remark"@en , 194
"refereert naar modelleeropmerking"@nl ; 195
rdfs:comment "connects a modelling remark with a Literal (string) that contains information about the modelling 196
process"@en , 197
"Verbindt een modelleeropmerking met een Literal (string) die informatie over het modelleerproces bevat"@nl ; 198
rdfs:domain stg:ModellingRemark ; 199
rdfs:range rdfs:Literal . 200
Appendix D
STG products (*.ttl)

@prefix product: <https://w3id.org/product/BuildingElements#> . 1
@prefix stgp: <https://raw.githubusercontent.com/JWerbrouck/Thesis/master/stg.ttl#> . 2
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . 3
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . 4
@prefix owl: <http://www.w3.org/2002/07/owl#> . 5
6
stgp:Fireplace a owl:Class ; 7
rdfs:subClassOf product:BuildingElement ; 8
rdfs:label "Fireplace"@en , 9
"Open haard"@nl ; 10
rdfs:seeAlso <http://vocab.getty.edu/aat/300052267> . 11
12
stgp:Fireplace_hood a owl:Class ; 13
rdfs:subClassOf product:BuildingElement ; 14
rdfs:label "Fireplace hood"@en , 15
"Overkapping haard"@nl ; 16
rdfs:seeAlso <http://vocab.getty.edu/page/aat/300069730> . 17
18
stgp:Capital a owl:Class ; 19
rdfs:subClassOf product:BuildingElement ; 20
rdfs:label "Capital"@en , 21
"Kapiteel"@nl ; 22
rdfs:seeAlso <http://vocab.getty.edu/aat/300001662> . 23
24
stgp:Shaft a owl:Class ; 25
rdfs:subClassOf product:BuildingElement ; 26
rdfs:label "Shaft"@en , 27
"Schacht"@nl ; 28
rdfs:seeAlso <http://vocab.getty.edu/aat/300001754> . 29
30
31
stgp:Base a owl:Class ; 32
rdfs:subClassOf product:BuildingElement ; 33
rdfs:label "Base"@en , 34
"Voet"@nl ; 35
rdfs:seeAlso <http://vocab.getty.edu/aat/300233843> . 36
37
stgp:WindowFrame a owl:Class ; 38
rdfs:subClassOf product:BuildingElement ; 39
rdfs:label "Window frame"@en , 40
"Raamkader"@nl ; 41
rdfs:seeAlso <http://vocab.getty.edu/page/aat/300003118> . 42
43
stgp:WindowPane a owl:Class ; 44
rdfs:subClassOf product:BuildingElement ; 45
rdfs:label "Window pane"@en , 46
"Glaspaneel"@nl ; 47
rdfs:seeAlso <http://vocab.getty.edu/page/aat/300209746> . 48
49
stgp:Vault a owl:Class ; 50
rdfs:subClassOf product:BuildingElement ; 51
rdfs:label "Vault"@en , 52
"Gewelf"@nl ; 53
rdfs:seeAlso <http://vocab.getty.edu/page/aat/300076608> . 54
55
56
stgp:Acanthus a owl:Class ; 57
rdfs:subClassOf product:BuildingElement ; 58
rdfs:label "Acanthus leaf"@en , 59
"Acanthusblad"@nl ; 60
rdfs:seeAlso <http://vocab.getty.edu/aat/300164902> . 61
Appendix E
Presence chamber – without geometry (*.ttl)

@prefix : <http://api.stardog.com/> . 1
@prefix owl: <http://www.w3.org/2002/07/owl#> . 2
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . 3
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . 4
@prefix stardog: <tag:stardog:api:> . 5
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> . 6
@prefix bot: <https://w3id.org/bot#> . 7
@prefix geo: <http://www.opengis.net/ont/geosparql#> . 8
@prefix inst: <https://github.com/JWerbrouck/scan-to-graph/casestudy/CaseStudy_PresenceChamber_nogeom/> . 9
@prefix product: <https://w3id.org/product#> . 10
@prefix stg: <https://raw.githubusercontent.com/JWerbrouck/Thesis/master/stg.ttl#> . 11
@prefix stgp: <https://raw.githubusercontent.com/JWerbrouck/Thesis/master/stgp.ttl#> . 12
@prefix xml: <http://www.w3.org/XML/1998/namespace> . 13
14
inst:Gravensteen_Ghent geo:hasGeometry inst:ProjectOrigin ; 15
stg:hasRhinoFile inst:RF_Casestudy_Gravensteen_PresenceChamber ; 16
rdfs:seeAlso <http://uk.dbpedia.org/page/Gravensteen_(Gent)> . 17
inst:RF_Casestudy_Gravensteen_PresenceChamber stg:hasLocalVersion 18
"C:/STG/CaseStudy_PresenceChamber_nogeom/RhinoFiles/Casestudy_Gravensteen_PresenceChamber.3dm" . 19
inst:ProjectOrigin geo:asWKT """<http://www.opengis.net/def/crs/EPSG/0/31370> 20
Point(104500 194300)"""^^geo:wktLiteral . 21
inst:Gravensteen_Ghent bot:hasBuilding inst:House_of_the_count . 22
inst:ASSUMPTION_c8a0c72b-d53b-4cf5-b55a-6fdab424c109 a stg:Assumption ; 23
stg:denotesRemark "DUNGEON_not-present-in-model" . 24
inst:First_Floor bot:hasSpace inst:First_Floor_Bedroom_of_the_count . 25
inst:First_Floor_Bedroom_of_the_count
bot:adjacentElement inst:Vault_Vault1_PresenceChamber . 26
inst:GEOM-0653cb6e-d857-4c67-b2b8-6737e5ca1d23 stg:hasLOA inst:LOA-0653cb6e-d857-4c67-b2b8-6737e5ca1d23 ; 27
stg:hasRhinoID "0653cb6e-d857-4c67-b2b8-6737e5ca1d23"^^stg:RhinoID . 28
inst:LOA-0653cb6e-d857-4c67-b2b8-6737e5ca1d23 stg:hasLOAvalue "LOA20" ; 29
stg:usedDeviationAnalysis "MICROSCALE" . 30
inst:GEOM-06db8367-1bd5-49d0-893c-71cb4b016f38 stg:hasRhinoID "06db8367-1bd5-49d0-893c-71cb4b016f38"^^stg:RhinoID ; 31
stg:hasOcclusion inst:OCCLUSION_c19ff85e-6256-4f1f-ad53-4213281d08be . 32
inst:OCCLUSION_c19ff85e-6256-4f1f-ad53-4213281d08be a stg:OccludedGeometry . 33
inst:GEOM-0f5dcd5b-9551-4153-bb15-eeaf5bd67f3f stg:hasLOA inst:LOA-0f5dcd5b-9551-4153-bb15-eeaf5bd67f3f ; 34
stg:hasRhinoID "0f5dcd5b-9551-4153-bb15-eeaf5bd67f3f"^^stg:RhinoID . 35
inst:LOA-0f5dcd5b-9551-4153-bb15-eeaf5bd67f3f stg:hasLOAvalue "LOA20" ; 36
stg:usedDeviationAnalysis "MICROSCALE" . 37
inst:GEOM-11b2cc78-d37c-473d-8845-45fb468ff265 stg:hasLOA inst:LOA-11b2cc78-d37c-473d-8845-45fb468ff265 ; 38
stg:hasRhinoID "11b2cc78-d37c-473d-8845-45fb468ff265"^^stg:RhinoID . 39
inst:LOA-11b2cc78-d37c-473d-8845-45fb468ff265 stg:hasLOAvalue "LOA20" ; 40
stg:usedDeviationAnalysis "MICROSCALE" . 41
inst:GEOM-1290af6e-b10c-40d6-9fb8-f4952c3f63dc stg:hasLOA inst:LOA-1290af6e-b10c-40d6-9fb8-f4952c3f63dc ; 42
stg:hasRhinoID "1290af6e-b10c-40d6-9fb8-f4952c3f63dc"^^stg:RhinoID . 43
inst:LOA-1290af6e-b10c-40d6-9fb8-f4952c3f63dc stg:hasLOAvalue "LOA20" ; 44
stg:usedDeviationAnalysis "MICROSCALE" . 45
inst:GEOM-146ec253-dd39-4eaa-a35e-36a5e9935ecf a stg:InternalGeometry ; 46
stg:hasLOA inst:LOA-146ec253-dd39-4eaa-a35e-36a5e9935ecf ; 47
stg:hasRhinoID "146ec253-dd39-4eaa-a35e-36a5e9935ecf"^^stg:RhinoID . 48
inst:LOA-146ec253-dd39-4eaa-a35e-36a5e9935ecf stg:hasLOAvalue "LOA20" ; 49
stg:usedDeviationAnalysis "MICROSCALE" . 50
inst:GEOM-1b379e00-ccd4-4662-bf08-b24ccd69907b stg:hasLOA inst:LOA-1b379e00-ccd4-4662-bf08-b24ccd69907b ; 51
stg:hasRhinoID "1b379e00-ccd4-4662-bf08-b24ccd69907b"^^stg:RhinoID . 52
inst:LOA-1b379e00-ccd4-4662-bf08-b24ccd69907b stg:hasLOAvalue "LOA20" ; 53
stg:usedDeviationAnalysis "MICROSCALE" . 54
inst:GEOM-1e6a47a0-b5ec-4f15-af20-32dd664adc79 stg:hasRhinoID "1e6a47a0-b5ec-4f15-af20-32dd664adc79"^^stg:RhinoID ; 55
stg:hasOcclusion inst:OCCLUSION_d950b6ba-f24e-4e58-b664-d4d84c931630 . 56
inst:OCCLUSION_d950b6ba-f24e-4e58-b664-d4d84c931630 a stg:OccludedGeometry . 57
inst:GEOM-1e857f13-9455-450c-81fc-1e1c87271dda stg:hasLOA inst:LOA-1e857f13-9455-450c-81fc-1e1c87271dda ; 58
stg:hasRhinoID "1e857f13-9455-450c-81fc-1e1c87271dda"^^stg:RhinoID . 59
inst:LOA-1e857f13-9455-450c-81fc-1e1c87271dda stg:hasLOAvalue "LOA20" ; 60
stg:usedDeviationAnalysis "MICROSCALE" . 61
inst:GEOM-22e22f9f-a4ed-48d7-9451-d1a5b5c88dde stg:hasLOA inst:LOA-22e22f9f-a4ed-48d7-9451-d1a5b5c88dde ; 62
stg:hasRhinoID "22e22f9f-a4ed-48d7-9451-d1a5b5c88dde"^^stg:RhinoID . 63
inst:LOA-22e22f9f-a4ed-48d7-9451-d1a5b5c88dde stg:hasLOAvalue "LOA20" ; 64
stg:usedDeviationAnalysis "MICROSCALE" . 65
inst:GEOM-263ad3f7-5adb-4c11-af19-cd6fb9d294a4 stg:hasLOA inst:LOA-263ad3f7-5adb-4c11-af19-cd6fb9d294a4 ; 66
stg:hasRhinoID "263ad3f7-5adb-4c11-af19-cd6fb9d294a4"^^stg:RhinoID . 67
inst:LOA-263ad3f7-5adb-4c11-af19-cd6fb9d294a4 stg:hasLOAvalue "LOA20" ; 68
stg:usedDeviationAnalysis "MICROSCALE" . 69
inst:GEOM-2b837b8b-eaaf-4120-90ba-cded3694c199 stg:hasRhinoID "2b837b8b-eaaf-4120-90ba-cded3694c199"^^stg:RhinoID ; 70
stg:hasOcclusion inst:OCCLUSION_d694efe9-27dd-48c2-892a-77d363048964 . 71
inst:OCCLUSION_d694efe9-27dd-48c2-892a-77d363048964 a stg:OccludedGeometry . 72
inst:GEOM-3220279d-fbc4-4706-a843-f28312cffccf stg:hasRhinoID "3220279d-fbc4-4706-a843-f28312cffccf"^^stg:RhinoID ; 73
stg:hasOcclusion inst:OCCLUSION_d489aa27-1159-4b85-89b6-1340f9fbcf66 . 74
inst:OCCLUSION_d489aa27-1159-4b85-89b6-1340f9fbcf66 a stg:OccludedGeometry . 75
inst:GEOM-35601207-c06d-4243-ac3b-c46418356ad4 stg:hasLOA inst:LOA-35601207-c06d-4243-ac3b-c46418356ad4 ; 76
stg:hasRhinoID "35601207-c06d-4243-ac3b-c46418356ad4"^^stg:RhinoID . 77
inst:LOA-35601207-c06d-4243-ac3b-c46418356ad4 stg:hasLOAvalue "LOA20" ; 78
stg:usedDeviationAnalysis "MACROSCALE" . 79
inst:GEOM-391fe24c-47d6-430d-94f1-e44145420a56 stg:hasRhinoID "391fe24c-47d6-430d-94f1-e44145420a56"^^stg:RhinoID ; 80
stg:hasOcclusion inst:OCCLUSION_05b369f9-4446-4761-b485-120e236de725 . 81
inst:OCCLUSION_05b369f9-4446-4761-b485-120e236de725 a stg:OccludedGeometry . 82
inst:GEOM-3afba0cb-87be-4032-ad12-540233ecdba3 stg:hasLOA inst:LOA-3afba0cb-87be-4032-ad12-540233ecdba3 ; 83
stg:hasRhinoID "3afba0cb-87be-4032-ad12-540233ecdba3"^^stg:RhinoID . 84
inst:LOA-3afba0cb-87be-4032-ad12-540233ecdba3 stg:hasLOAvalue "LOA20" ; 85
stg:usedDeviationAnalysis "MICROSCALE" . 86
inst:GEOM-3dfb6a4d-654d-4b6b-9d65-5bdde8a15f91 stg:hasLOA inst:LOA-3dfb6a4d-654d-4b6b-9d65-5bdde8a15f91 ; 87
stg:hasRhinoID "3dfb6a4d-654d-4b6b-9d65-5bdde8a15f91"^^stg:RhinoID . 88
inst:LOA-3dfb6a4d-654d-4b6b-9d65-5bdde8a15f91 stg:hasLOAvalue "LOA20" ; 89
stg:usedDeviationAnalysis "MICROSCALE" . 90
inst:GEOM-3e5b5265-08e8-45a4-bd52-ee9c43e38753 a stg:InternalGeometry ; 91
stg:hasLOA inst:LOA-3e5b5265-08e8-45a4-bd52-ee9c43e38753 ; 92
stg:hasRhinoID "3e5b5265-08e8-45a4-bd52-ee9c43e38753"^^stg:RhinoID . 93
inst:LOA-3e5b5265-08e8-45a4-bd52-ee9c43e38753 stg:hasLOAvalue "LOA20" ; 94
stg:usedDeviationAnalysis "MICROSCALE" . 95
inst:GEOM-3fd1d60b-5074-47e6-9160-ae969a8d1bca stg:hasRhinoID "3fd1d60b-5074-47e6-9160-ae969a8d1bca"^^stg:RhinoID ; 96
stg:hasOcclusion inst:OCCLUSION_ddd7d000-611f-44f2-9bed-89d7d476e381 . 97
inst:OCCLUSION_ddd7d000-611f-44f2-9bed-89d7d476e381 a stg:OccludedGeometry . 98
inst:GEOM-4121dd99-f57e-4b75-95f0-256a5c8903b8 stg:hasLOA inst:LOA-4121dd99-f57e-4b75-95f0-256a5c8903b8 ; 99
stg:hasRhinoID "4121dd99-f57e-4b75-95f0-256a5c8903b8"^^stg:RhinoID . 100
inst:LOA-4121dd99-f57e-4b75-95f0-256a5c8903b8 stg:hasLOAvalue "LOA20" ; 101
stg:usedDeviationAnalysis "MICROSCALE" . 102
inst:GEOM-47a8d95e-7d49-4ca5-b187-d381c2e665fc stg:hasRhinoID "47a8d95e-7d49-4ca5-b187-d381c2e665fc"^^stg:RhinoID ; 103
stg:hasOcclusion inst:OCCLUSION_262acc42-09c0-4ca1-ae37-8509b0f78675 . 104
inst:OCCLUSION_262acc42-09c0-4ca1-ae37-8509b0f78675 a stg:OccludedGeometry . 105
inst:GEOM-4a48c36d-8ea9-470e-902b-ddb4aa549352 a stg:InternalGeometry ; 106
stg:hasLOA inst:LOA-4a48c36d-8ea9-470e-902b-ddb4aa549352 ; 107
stg:hasRhinoID "4a48c36d-8ea9-470e-902b-ddb4aa549352"^^stg:RhinoID . 108
inst:LOA-4a48c36d-8ea9-470e-902b-ddb4aa549352 stg:hasLOAvalue "LOA20" ; 109
stg:usedDeviationAnalysis "MICROSCALE" . 110
inst:GEOM-4a8a4f93-e679-40b0-868c-9bcadc7aa446 stg:hasRhinoID "4a8a4f93-e679-40b0-868c-9bcadc7aa446"^^stg:RhinoID ; 111
stg:hasOcclusion inst:OCCLUSION_2dd91f04-1a30-4cba-b45f-33789b6e8555 . 112
inst:OCCLUSION_2dd91f04-1a30-4cba-b45f-33789b6e8555 a stg:OccludedGeometry . 113
inst:GEOM-4b013ebf-ce18-4d13-93b3-fc12fb7db11d a stg:InternalGeometry ; 114
stg:hasLOA inst:LOA-4b013ebf-ce18-4d13-93b3-fc12fb7db11d ; 115
stg:hasRhinoID "4b013ebf-ce18-4d13-93b3-fc12fb7db11d"^^stg:RhinoID . 116
inst:LOA-4b013ebf-ce18-4d13-93b3-fc12fb7db11d stg:hasLOAvalue "LOA20" ; 117
stg:usedDeviationAnalysis "MICROSCALE" . 118
inst:GEOM-4bb93f39-c77b-47ae-80fc-783fba897158 a stg:InternalGeometry ; 119
stg:hasLOA inst:LOA-4bb93f39-c77b-47ae-80fc-783fba897158 ; 120
stg:hasRhinoID "4bb93f39-c77b-47ae-80fc-783fba897158"^^stg:RhinoID . 121
inst:LOA-4bb93f39-c77b-47ae-80fc-783fba897158 stg:hasLOAvalue "LOA20" ; 122
stg:usedDeviationAnalysis "MICROSCALE" . 123
inst:GEOM-504f6903-26ca-4267-a0ec-33d72fd4e61f stg:hasLOA inst:LOA-504f6903-26ca-4267-a0ec-33d72fd4e61f ; 124
stg:hasRhinoID "504f6903-26ca-4267-a0ec-33d72fd4e61f"^^stg:RhinoID . 125
inst:LOA-504f6903-26ca-4267-a0ec-33d72fd4e61f stg:hasLOAvalue "LOA20" ; 126
stg:usedDeviationAnalysis "MICROSCALE" . 127
inst:GEOM-55ea9f56-d5be-42e8-97d1-4cc6b1ec1b76 stg:hasLOA inst:LOA-55ea9f56-d5be-42e8-97d1-4cc6b1ec1b76 ; 128
stg:hasRhinoID "55ea9f56-d5be-42e8-97d1-4cc6b1ec1b76"^^stg:RhinoID . 129
inst:LOA-55ea9f56-d5be-42e8-97d1-4cc6b1ec1b76 stg:hasLOAvalue "LOA20" ; 130
stg:usedDeviationAnalysis "MICROSCALE" . 131
inst:GEOM-57402d12-78d4-45ef-8f2c-3c73bb2df31b stg:hasLOA inst:LOA-57402d12-78d4-45ef-8f2c-3c73bb2df31b ; 132
stg:hasRhinoID "57402d12-78d4-45ef-8f2c-3c73bb2df31b"^^stg:RhinoID . 133
inst:LOA-57402d12-78d4-45ef-8f2c-3c73bb2df31b stg:hasLOAvalue "LOA20" ; 134
stg:usedDeviationAnalysis "MICROSCALE" . 135
inst:GEOM-57ad9eb0-683c-4396-89c5-13a3252b245c stg:hasLOA inst:LOA-57ad9eb0-683c-4396-89c5-13a3252b245c ; 136
stg:hasRhinoID "57ad9eb0-683c-4396-89c5-13a3252b245c"^^stg:RhinoID . 137
inst:LOA-57ad9eb0-683c-4396-89c5-13a3252b245c stg:hasLOAvalue "LOA20" ; 138
stg:usedDeviationAnalysis "MICROSCALE" . 139
inst:GEOM-5cbd2d80-00c3-45ac-86f1-5c651ec759c8 stg:hasLOA inst:LOA-5cbd2d80-00c3-45ac-86f1-5c651ec759c8 ; 140
stg:hasRhinoID "5cbd2d80-00c3-45ac-86f1-5c651ec759c8"^^stg:RhinoID . 141
inst:LOA-5cbd2d80-00c3-45ac-86f1-5c651ec759c8 stg:hasLOAvalue "LOA20" ; 142
stg:usedDeviationAnalysis "MICROSCALE" . 143
inst:GEOM-60ada2c4-07d3-4ab0-ab96-6e81227dc4bf stg:hasLOA inst:LOA-60ada2c4-07d3-4ab0-ab96-6e81227dc4bf ; 144
stg:hasRhinoID "60ada2c4-07d3-4ab0-ab96-6e81227dc4bf"^^stg:RhinoID . 145
inst:LOA-60ada2c4-07d3-4ab0-ab96-6e81227dc4bf stg:hasLOAvalue "LOA20" ; 146
stg:usedDeviationAnalysis "MICROSCALE" . 147
inst:GEOM-61adbd9d-e0fe-4360-9f29-1cd9cf160d31 stg:hasLOA inst:LOA-61adbd9d-e0fe-4360-9f29-1cd9cf160d31 ; 148
stg:hasRhinoID "61adbd9d-e0fe-4360-9f29-1cd9cf160d31"^^stg:RhinoID . 149
inst:LOA-61adbd9d-e0fe-4360-9f29-1cd9cf160d31 stg:hasLOAvalue "LOA20" ; 150
stg:usedDeviationAnalysis "MICROSCALE" . 151
inst:GEOM-627f4a0a-8886-4ce9-9f0a-8185128bbefa stg:hasRhinoID "627f4a0a-8886-4ce9-9f0a-8185128bbefa"^^stg:RhinoID ; 152
stg:hasOcclusion inst:OCCLUSION_830e3dbc-3c4b-4f2d-9f3d-197bb176bf51 . 153
inst:OCCLUSION_830e3dbc-3c4b-4f2d-9f3d-197bb176bf51 a stg:OccludedGeometry . 154
inst:GEOM-62eca92f-c9dc-4c8c-a9fa-4e42747bb5ed stg:hasLOA inst:LOA-62eca92f-c9dc-4c8c-a9fa-4e42747bb5ed ; 155
stg:hasRhinoID "62eca92f-c9dc-4c8c-a9fa-4e42747bb5ed"^^stg:RhinoID . 156
inst:LOA-62eca92f-c9dc-4c8c-a9fa-4e42747bb5ed stg:hasLOAvalue "LOA20" ; 157
stg:usedDeviationAnalysis "MICROSCALE" . 158
inst:GEOM-636d2d7a-33f8-4b57-83ed-6c6a04ec5383 stg:hasRhinoID "636d2d7a-33f8-4b57-83ed-6c6a04ec5383"^^stg:RhinoID ; 159
stg:hasOcclusion inst:OCCLUSION_ac1686c0-5370-465e-861d-be22f8875c56 . 160
inst:OCCLUSION_ac1686c0-5370-465e-861d-be22f8875c56 a stg:OccludedGeometry . 161
inst:GEOM-69cc9fc5-4e93-4ba0-89a8-8273dd54a523 stg:hasLOA inst:LOA-69cc9fc5-4e93-4ba0-89a8-8273dd54a523 ; 162
stg:hasRhinoID "69cc9fc5-4e93-4ba0-89a8-8273dd54a523"^^stg:RhinoID . 163
inst:LOA-69cc9fc5-4e93-4ba0-89a8-8273dd54a523 stg:hasLOAvalue "LOA20" ; 164
stg:usedDeviationAnalysis "MICROSCALE" . 165
inst:GEOM-6b10ba3c-7156-42d8-8cc4-8e2af224318e a stg:InternalGeometry ; 166
stg:hasLOA inst:LOA-6b10ba3c-7156-42d8-8cc4-8e2af224318e ; 167
stg:hasRhinoID "6b10ba3c-7156-42d8-8cc4-8e2af224318e"^^stg:RhinoID . 168
inst:LOA-6b10ba3c-7156-42d8-8cc4-8e2af224318e stg:hasLOAvalue "LOA20" ; 169
stg:usedDeviationAnalysis "MICROSCALE" . 170
inst:GEOM-6be805c7-9151-47ed-8404-0573c2e51d94 stg:hasLOA inst:LOA-6be805c7-9151-47ed-8404-0573c2e51d94 ; 171
stg:hasRhinoID "6be805c7-9151-47ed-8404-0573c2e51d94"^^stg:RhinoID . 172
inst:LOA-6be805c7-9151-47ed-8404-0573c2e51d94 stg:hasLOAvalue "LOA20" ; 173
stg:usedDeviationAnalysis "MICROSCALE" . 174
inst:GEOM-6c0a7b83-9b0c-4101-982d-bb6ff33c3f37 stg:hasLOA inst:LOA-6c0a7b83-9b0c-4101-982d-bb6ff33c3f37 ; 175
stg:hasRhinoID "6c0a7b83-9b0c-4101-982d-bb6ff33c3f37"^^stg:RhinoID . 176
inst:LOA-6c0a7b83-9b0c-4101-982d-bb6ff33c3f37 stg:hasLOAvalue "LOA20" ; 177
stg:usedDeviationAnalysis "MICROSCALE" . 178
inst:GEOM-7299434a-f80a-4617-a9a9-99eb0d134ab0 stg:hasLOA inst:LOA-7299434a-f80a-4617-a9a9-99eb0d134ab0 ; 179
stg:hasRhinoID "7299434a-f80a-4617-a9a9-99eb0d134ab0"^^stg:RhinoID . 180
inst:LOA-7299434a-f80a-4617-a9a9-99eb0d134ab0 stg:hasLOAvalue "LOA20" ; 181
stg:usedDeviationAnalysis "MICROSCALE" . 182
inst:GEOM-74345ce9-b538-4fd3-99cf-7029d3a0537a stg:hasLOA inst:LOA-74345ce9-b538-4fd3-99cf-7029d3a0537a ; 183
stg:hasRhinoID "74345ce9-b538-4fd3-99cf-7029d3a0537a"^^stg:RhinoID . 184
inst:LOA-74345ce9-b538-4fd3-99cf-7029d3a0537a stg:hasLOAvalue "LOA20" ; 185
stg:usedDeviationAnalysis "MICROSCALE" . 186
inst:GEOM-74e53f81-aa30-4d39-90a6-4ecdc8786d21 stg:hasLOA inst:LOA-74e53f81-aa30-4d39-90a6-4ecdc8786d21 ; 187
stg:hasRhinoID "74e53f81-aa30-4d39-90a6-4ecdc8786d21"^^stg:RhinoID . 188
inst:LOA-74e53f81-aa30-4d39-90a6-4ecdc8786d21 stg:hasLOAvalue "LOA20" ; 189
stg:usedDeviationAnalysis "MICROSCALE" . 190
inst:GEOM-7510a140-2c78-4350-bd43-b0998bf883ac stg:hasLOA inst:LOA-7510a140-2c78-4350-bd43-b0998bf883ac ; 191
stg:hasRhinoID "7510a140-2c78-4350-bd43-b0998bf883ac"^^stg:RhinoID . 192
inst:LOA-7510a140-2c78-4350-bd43-b0998bf883ac stg:hasLOAvalue "LOA20" ; 193
stg:usedDeviationAnalysis "MICROSCALE" . 194
inst:GEOM-795b31a4-5924-43b2-9f52-050ce1f90408 stg:hasLOA inst:LOA-795b31a4-5924-43b2-9f52-050ce1f90408 ; 195
stg:hasRhinoID "795b31a4-5924-43b2-9f52-050ce1f90408"^^stg:RhinoID . 196
inst:LOA-795b31a4-5924-43b2-9f52-050ce1f90408 stg:hasLOAvalue "LOA20" ; 197
stg:usedDeviationAnalysis "MACROSCALE" . 198
inst:GEOM-7afb8778-6d52-482d-94a2-5a147a45872a a stg:InternalGeometry ; 199
stg:hasLOA inst:LOA-7afb8778-6d52-482d-94a2-5a147a45872a ; 200
stg:hasRhinoID "7afb8778-6d52-482d-94a2-5a147a45872a"^^stg:RhinoID . 201
inst:LOA-7afb8778-6d52-482d-94a2-5a147a45872a stg:hasLOAvalue "LOA20" ; 202
stg:usedDeviationAnalysis "MICROSCALE" . 203
inst:GEOM-7b72caee-29b4-4f78-a4c8-a5196dbe7985 a stg:InternalGeometry ; 204
stg:hasLOA inst:LOA-7b72caee-29b4-4f78-a4c8-a5196dbe7985 ; 205
stg:hasRhinoID "7b72caee-29b4-4f78-a4c8-a5196dbe7985"^^stg:RhinoID . 206
inst:LOA-7b72caee-29b4-4f78-a4c8-a5196dbe7985 stg:hasLOAvalue "LOA20" ; 207
stg:usedDeviationAnalysis "MICROSCALE" . 208
inst:GEOM-7c0ed26d-5176-4360-998b-cad866209290 stg:hasLOA inst:LOA-7c0ed26d-5176-4360-998b-cad866209290 ; 209
stg:hasRhinoID "7c0ed26d-5176-4360-998b-cad866209290"^^stg:RhinoID . 210
inst:LOA-7c0ed26d-5176-4360-998b-cad866209290 stg:hasLOAvalue "LOA20" ; 211
stg:usedDeviationAnalysis "MICROSCALE" . 212
inst:GEOM-81cf12d2-46a0-4af2-93da-49b821602ce7 stg:hasRhinoID "81cf12d2-46a0-4af2-93da-49b821602ce7"^^stg:RhinoID ; 213
stg:hasOcclusion inst:OCCLUSION_858de8c5-747e-4ea2-9e6c-fe8c026b924a . 214
inst:OCCLUSION_858de8c5-747e-4ea2-9e6c-fe8c026b924a a stg:OccludedGeometry . 215
inst:GEOM-82f3f5e9-a393-4231-a2b3-00999c74798a stg:hasLOA inst:LOA-82f3f5e9-a393-4231-a2b3-00999c74798a ; 216
stg:hasRhinoID "82f3f5e9-a393-4231-a2b3-00999c74798a"^^stg:RhinoID . 217
inst:LOA-82f3f5e9-a393-4231-a2b3-00999c74798a stg:hasLOAvalue "LOA20" ; 218
stg:usedDeviationAnalysis "MICROSCALE" . 219
inst:GEOM-8ce7df41-7e69-402f-b96a-d30c37e3c578 stg:hasRhinoID "8ce7df41-7e69-402f-b96a-d30c37e3c578"^^stg:RhinoID ; 220
stg:hasOcclusion inst:OCCLUSION_4156e7d5-5877-43f8-a774-a91b4d20eec8 . 221
inst:OCCLUSION_4156e7d5-5877-43f8-a774-a91b4d20eec8 a stg:OccludedGeometry . 222
inst:GEOM-8d3222ef-6d94-40df-836c-252841591c14 stg:hasLOA inst:LOA-8d3222ef-6d94-40df-836c-252841591c14 ; 223
stg:hasRhinoID "8d3222ef-6d94-40df-836c-252841591c14"^^stg:RhinoID . 224
inst:LOA-8d3222ef-6d94-40df-836c-252841591c14 stg:hasLOAvalue "LOA20" ; 225
stg:usedDeviationAnalysis "MICROSCALE" . 226
inst:GEOM-8d54ec2f-f568-4ecd-acae-aabae90dcad1 stg:hasLOA inst:LOA-8d54ec2f-f568-4ecd-acae-aabae90dcad1 ; 227
stg:hasRhinoID "8d54ec2f-f568-4ecd-acae-aabae90dcad1"^^stg:RhinoID . 228
inst:LOA-8d54ec2f-f568-4ecd-acae-aabae90dcad1 stg:hasLOAvalue "LOA20" ; 229
stg:usedDeviationAnalysis "MICROSCALE" . 230
inst:GEOM-8d60dfe9-0bc0-44b4-a2d5-0f2c57bb2dd9 stg:hasLOA inst:LOA-8d60dfe9-0bc0-44b4-a2d5-0f2c57bb2dd9 ; 231
stg:hasRhinoID "8d60dfe9-0bc0-44b4-a2d5-0f2c57bb2dd9"^^stg:RhinoID . 232
inst:LOA-8d60dfe9-0bc0-44b4-a2d5-0f2c57bb2dd9 stg:hasLOAvalue "LOA20" ; 233
stg:usedDeviationAnalysis "MICROSCALE" . 234
inst:GEOM-901acbc0-f3fe-4e82-b666-bd53a012c35b stg:hasRhinoID "901acbc0-f3fe-4e82-b666-bd53a012c35b"^^stg:RhinoID ; 235
stg:hasOcclusion inst:OCCLUSION_5372fb0d-d1d4-458d-8b40-30266dec5321 . 236
inst:OCCLUSION_5372fb0d-d1d4-458d-8b40-30266dec5321 a stg:OccludedGeometry . 237
inst:GEOM-9343d353-239f-4c9d-a7b7-34dfb836671c stg:hasRhinoID "9343d353-239f-4c9d-a7b7-34dfb836671c"^^stg:RhinoID ; 238
stg:hasOcclusion inst:OCCLUSION_6f7f7463-75f0-425b-9559-120a0b6e7ee1 . 239
inst:OCCLUSION_6f7f7463-75f0-425b-9559-120a0b6e7ee1 a stg:OccludedGeometry . 240
inst:GEOM-9675ac49-3d7a-467a-8ebf-cc27cd59e062 stg:hasLOA inst:LOA-9675ac49-3d7a-467a-8ebf-cc27cd59e062 ; 241
stg:hasRhinoID "9675ac49-3d7a-467a-8ebf-cc27cd59e062"^^stg:RhinoID . 242
inst:LOA-9675ac49-3d7a-467a-8ebf-cc27cd59e062 stg:hasLOAvalue "LOA20" ; 243
stg:usedDeviationAnalysis "MICROSCALE" . 244
inst:GEOM-978b4f19-2f1f-4de7-a9c8-12703a19ee2c a stg:InternalGeometry ; 245
stg:hasLOA inst:LOA-978b4f19-2f1f-4de7-a9c8-12703a19ee2c ; 246
stg:hasRhinoID "978b4f19-2f1f-4de7-a9c8-12703a19ee2c"^^stg:RhinoID . 247
inst:LOA-978b4f19-2f1f-4de7-a9c8-12703a19ee2c stg:hasLOAvalue "LOA20" ; 248
stg:usedDeviationAnalysis "MICROSCALE" . 249
inst:GEOM-99d2871b-fbaa-47eb-b621-8a292bd440c0 stg:hasLOA inst:LOA-99d2871b-fbaa-47eb-b621-8a292bd440c0 ; 250
stg:hasRhinoID "99d2871b-fbaa-47eb-b621-8a292bd440c0"^^stg:RhinoID . 251
inst:LOA-99d2871b-fbaa-47eb-b621-8a292bd440c0 stg:hasLOAvalue "LOA20" ; 252
stg:usedDeviationAnalysis "MICROSCALE" . 253
inst:GEOM-9b3e8da2-71a9-4b75-8e12-74fd7fca292e stg:hasRhinoID "9b3e8da2-71a9-4b75-8e12-74fd7fca292e"^^stg:RhinoID ; 254
stg:hasOcclusion inst:OCCLUSION_0c00ecaa-87d6-4e24-8d73-8fb14b70b3b9 . 255
inst:OCCLUSION_0c00ecaa-87d6-4e24-8d73-8fb14b70b3b9 a stg:OccludedGeometry . 256
inst:GEOM-9dc07124-d318-4221-80c3-07c73565a9f3 stg:hasLOA inst:LOA-9dc07124-d318-4221-80c3-07c73565a9f3 ; 257
stg:hasRhinoID "9dc07124-d318-4221-80c3-07c73565a9f3"^^stg:RhinoID . 258
inst:LOA-9dc07124-d318-4221-80c3-07c73565a9f3 stg:hasLOAvalue "LOA20" ; 259
stg:usedDeviationAnalysis "MICROSCALE" . 260
inst:GEOM-a0a5a585-9180-4afb-8805-e14c568fee64 stg:hasLOA inst:LOA-a0a5a585-9180-4afb-8805-e14c568fee64 ; 261
stg:hasRhinoID "a0a5a585-9180-4afb-8805-e14c568fee64"^^stg:RhinoID . 262
inst:LOA-a0a5a585-9180-4afb-8805-e14c568fee64 stg:hasLOAvalue "LOA20" ; 263
stg:usedDeviationAnalysis "MICROSCALE" . 264
inst:GEOM-a5b02310-6def-48a8-a0df-d16b2e91604c stg:hasLOA inst:LOA-a5b02310-6def-48a8-a0df-d16b2e91604c ; 265
stg:hasRhinoID "a5b02310-6def-48a8-a0df-d16b2e91604c"^^stg:RhinoID . 266
inst:LOA-a5b02310-6def-48a8-a0df-d16b2e91604c stg:hasLOAvalue "LOA20" ; 267
stg:usedDeviationAnalysis "MICROSCALE" . 268
inst:GEOM-a74e192d-4dba-4e5a-aea7-2511d79f9c55 stg:hasLOA inst:LOA-a74e192d-4dba-4e5a-aea7-2511d79f9c55 ; 269
stg:hasRhinoID "a74e192d-4dba-4e5a-aea7-2511d79f9c55"^^stg:RhinoID . 270
inst:LOA-a74e192d-4dba-4e5a-aea7-2511d79f9c55 stg:hasLOAvalue "LOA20" ; 271
stg:usedDeviationAnalysis "MICROSCALE" . 272
inst:GEOM-a7ce8db2-e505-4ce1-9d76-5d55ca0569ce stg:hasRhinoID "a7ce8db2-e505-4ce1-9d76-5d55ca0569ce"^^stg:RhinoID ; 273
stg:hasOcclusion inst:OCCLUSION_ffe14a1e-f305-4460-82dc-5919ecf2f128 . 274
inst:OCCLUSION_ffe14a1e-f305-4460-82dc-5919ecf2f128 a stg:OccludedGeometry . 275
inst:GEOM-aaeebed5-0b3f-40ac-b0da-b4222e654ed1 stg:hasRhinoID "aaeebed5-0b3f-40ac-b0da-b4222e654ed1"^^stg:RhinoID ; 276
stg:hasOcclusion inst:OCCLUSION_ab9bf6e6-a653-4063-8c02-4c0f73e27fde . 277
inst:OCCLUSION_ab9bf6e6-a653-4063-8c02-4c0f73e27fde a stg:OccludedGeometry . 278
inst:GEOM-abdfe35b-3528-44dd-8d1d-4b9bfa1f832d stg:hasRhinoID "abdfe35b-3528-44dd-8d1d-4b9bfa1f832d"^^stg:RhinoID ; 279
stg:hasOcclusion inst:OCCLUSION_a105e1af-3d30-435b-a9b6-7cd9a0a1c0bc . 280
inst:OCCLUSION_a105e1af-3d30-435b-a9b6-7cd9a0a1c0bc a stg:OccludedGeometry . 281
inst:GEOM-abe44844-9d4d-4dba-968d-41c367259f5b stg:hasLOA inst:LOA-abe44844-9d4d-4dba-968d-41c367259f5b ; 282
stg:hasRhinoID "abe44844-9d4d-4dba-968d-41c367259f5b"^^stg:RhinoID . 283
inst:LOA-abe44844-9d4d-4dba-968d-41c367259f5b stg:hasLOAvalue "LOA20" ; 284
stg:usedDeviationAnalysis "MICROSCALE" . 285
inst:GEOM-af8c4282-04aa-410d-8ee6-6937e46dd2d7 stg:hasLOA inst:LOA-af8c4282-04aa-410d-8ee6-6937e46dd2d7 ; 286
stg:hasRhinoID "af8c4282-04aa-410d-8ee6-6937e46dd2d7"^^stg:RhinoID . 287
inst:LOA-af8c4282-04aa-410d-8ee6-6937e46dd2d7 stg:hasLOAvalue "LOA20" ; 288
stg:usedDeviationAnalysis "MICROSCALE" . 289
inst:GEOM-c10d7a35-aafd-462d-9b18-49eea47ce23a stg:hasLOA inst:LOA-c10d7a35-aafd-462d-9b18-49eea47ce23a ; 290
stg:hasRhinoID "c10d7a35-aafd-462d-9b18-49eea47ce23a"^^stg:RhinoID ; 291
stg:hasModellingRemark inst:ASSUMPTION_c8a0c72b-d53b-4cf5-b55a-6fdab424c109 . 292
inst:LOA-c10d7a35-aafd-462d-9b18-49eea47ce23a stg:hasLOAvalue "LOA20" ; 293
stg:usedDeviationAnalysis "MICROSCALE" . 294
inst:GEOM-c267fc35-e59d-4666-85f3-2ffea3fb9f37 stg:hasLOA inst:LOA-c267fc35-e59d-4666-85f3-2ffea3fb9f37 ; 295
stg:hasRhinoID "c267fc35-e59d-4666-85f3-2ffea3fb9f37"^^stg:RhinoID . 296
inst:LOA-c267fc35-e59d-4666-85f3-2ffea3fb9f37 stg:hasLOAvalue "LOA20" ; 297
stg:usedDeviationAnalysis "MICROSCALE" . 298
inst:GEOM-c86a84fe-eb3e-4082-9533-25f359ddb3c5 a stg:InternalGeometry ; 299
stg:hasLOA inst:LOA-c86a84fe-eb3e-4082-9533-25f359ddb3c5 ; 300
stg:hasRhinoID "c86a84fe-eb3e-4082-9533-25f359ddb3c5"^^stg:RhinoID . 301
inst:LOA-c86a84fe-eb3e-4082-9533-25f359ddb3c5 stg:hasLOAvalue "LOA20" ; 302
stg:usedDeviationAnalysis "MICROSCALE" . 303
inst:GEOM-cadaed32-8a2d-4670-8be3-54b4cee9291a stg:hasLOA inst:LOA-cadaed32-8a2d-4670-8be3-54b4cee9291a ; 304
stg:hasRhinoID "cadaed32-8a2d-4670-8be3-54b4cee9291a"^^stg:RhinoID . 305
inst:LOA-cadaed32-8a2d-4670-8be3-54b4cee9291a stg:hasLOAvalue "LOA20" ; 306
stg:usedDeviationAnalysis "MACROSCALE" . 307
inst:GEOM-cea9c82d-82f6-47e0-b822-8a9b139f575c stg:hasLOA inst:LOA-cea9c82d-82f6-47e0-b822-8a9b139f575c ; 308
stg:hasRhinoID "cea9c82d-82f6-47e0-b822-8a9b139f575c"^^stg:RhinoID . 309
inst:LOA-cea9c82d-82f6-47e0-b822-8a9b139f575c stg:hasLOAvalue "LOA20" ; 310
stg:usedDeviationAnalysis "MICROSCALE" . 311
inst:GEOM-d2f9fe94-feae-417a-9130-ba56a6b35fd8 stg:hasLOA inst:LOA-d2f9fe94-feae-417a-9130-ba56a6b35fd8 ; 312
stg:hasRhinoID "d2f9fe94-feae-417a-9130-ba56a6b35fd8"^^stg:RhinoID . 313
inst:LOA-d2f9fe94-feae-417a-9130-ba56a6b35fd8 stg:hasLOAvalue "LOA20" ; 314
stg:usedDeviationAnalysis "MICROSCALE" . 315
inst:GEOM-d6e96ceb-1c7a-4138-85ff-10798cef8901 stg:hasLOA inst:LOA-d6e96ceb-1c7a-4138-85ff-10798cef8901 ; 316
stg:hasRhinoID "d6e96ceb-1c7a-4138-85ff-10798cef8901"^^stg:RhinoID . 317
inst:LOA-d6e96ceb-1c7a-4138-85ff-10798cef8901 stg:hasLOAvalue "LOA20" ; 318
stg:usedDeviationAnalysis "MICROSCALE" . 319
inst:GEOM-da808752-4f31-4e22-b55d-ce5458862952 stg:hasRhinoID "da808752-4f31-4e22-b55d-ce5458862952"^^stg:RhinoID ; 320
stg:hasOcclusion inst:OCCLUSION_17d2aed0-fdc4-4a26-8de1-b2ca662d0d99 . 321
inst:OCCLUSION_17d2aed0-fdc4-4a26-8de1-b2ca662d0d99 a stg:OccludedGeometry . 322
inst:GEOM-dce4833f-5da8-4975-9c93-6abb4445e105 stg:hasRhinoID "dce4833f-5da8-4975-9c93-6abb4445e105"^^stg:RhinoID ; 323
stg:hasOcclusion inst:OCCLUSION_95f28813-3816-4d9e-98f9-0d89dd8c3fe9 . 324
inst:OCCLUSION_95f28813-3816-4d9e-98f9-0d89dd8c3fe9 a stg:OccludedGeometry . 325
inst:GEOM-def17de8-de10-4233-9b95-94e2824b38e2 stg:hasLOA inst:LOA-def17de8-de10-4233-9b95-94e2824b38e2 ; 326
stg:hasRhinoID "def17de8-de10-4233-9b95-94e2824b38e2"^^stg:RhinoID . 327
inst:LOA-def17de8-de10-4233-9b95-94e2824b38e2 stg:hasLOAvalue "LOA20" ; 328
stg:usedDeviationAnalysis "MICROSCALE" . 329
inst:GEOM-e1682e7f-c3db-4aaa-b07c-0c38230e20fc stg:hasRhinoID "e1682e7f-c3db-4aaa-b07c-0c38230e20fc"^^stg:RhinoID ; 330
stg:hasOcclusion inst:OCCLUSION_2e14626d-fba4-45b6-9eac-b8793f17c138 . 331
inst:OCCLUSION_2e14626d-fba4-45b6-9eac-b8793f17c138 a stg:OccludedGeometry . 332
inst:GEOM-e3a27e90-8039-4be5-9062-67174dd40350 stg:hasLOA inst:LOA-e3a27e90-8039-4be5-9062-67174dd40350 ; 333
stg:hasRhinoID "e3a27e90-8039-4be5-9062-67174dd40350"^^stg:RhinoID . 334
inst:LOA-e3a27e90-8039-4be5-9062-67174dd40350 stg:hasLOAvalue "LOA20" ; 335
stg:usedDeviationAnalysis "MICROSCALE" . 336
inst:GEOM-eacf5b69-e569-4d82-9291-2d03752984f2 stg:hasRhinoID "eacf5b69-e569-4d82-9291-2d03752984f2"^^stg:RhinoID ; 337
stg:hasOcclusion inst:OCCLUSION_f080b8e0-85a7-48be-bfbf-f72b3b7c3600 . 338
inst:OCCLUSION_f080b8e0-85a7-48be-bfbf-f72b3b7c3600 a stg:OccludedGeometry . 339
inst:GEOM-f5a14160-0dce-4674-b7be-0a9529824f74 stg:hasLOA inst:LOA-f5a14160-0dce-4674-b7be-0a9529824f74 ; 340
stg:hasRhinoID "f5a14160-0dce-4674-b7be-0a9529824f74"^^stg:RhinoID . 341
inst:LOA-f5a14160-0dce-4674-b7be-0a9529824f74 stg:hasLOAvalue "LOA20" ; 342
stg:usedDeviationAnalysis "MICROSCALE" . 343
inst:GEOM-fa704821-89ee-4f10-99ef-036e0abf83f3 stg:hasRhinoID "fa704821-89ee-4f10-99ef-036e0abf83f3"^^stg:RhinoID ; 344
stg:hasOcclusion inst:OCCLUSION_6f8a99e2-b279-475d-9f15-092bafc9c5c7 . 345
inst:OCCLUSION_6f8a99e2-b279-475d-9f15-092bafc9c5c7 a stg:OccludedGeometry . 346
inst:GEOM-fced63bb-3d53-4e03-b9e1-0eefa2dd8299 stg:hasLOA inst:LOA-fced63bb-3d53-4e03-b9e1-0eefa2dd8299 ; 347
stg:hasRhinoID "fced63bb-3d53-4e03-b9e1-0eefa2dd8299"^^stg:RhinoID . 348
inst:LOA-fced63bb-3d53-4e03-b9e1-0eefa2dd8299 stg:hasLOAvalue "LOA20" ; 349
stg:usedDeviationAnalysis "MICROSCALE" . 350
inst:GEOM-fd1e2273-36bc-4769-a7b6-53c17a245d32 stg:hasLOA inst:LOA-fd1e2273-36bc-4769-a7b6-53c17a245d32 ; 351
stg:hasRhinoID "fd1e2273-36bc-4769-a7b6-53c17a245d32"^^stg:RhinoID . 352
inst:LOA-fd1e2273-36bc-4769-a7b6-53c17a245d32 stg:hasLOAvalue "LOA20" ; 353
stg:usedDeviationAnalysis "MICROSCALE" . 354
inst:GEOM-fdc0f919-e006-4c0a-8d40-ae462a9e7430 stg:hasLOA inst:LOA-fdc0f919-e006-4c0a-8d40-ae462a9e7430 ; 355
stg:hasRhinoID "fdc0f919-e006-4c0a-8d40-ae462a9e7430"^^stg:RhinoID . 356
inst:LOA-fdc0f919-e006-4c0a-8d40-ae462a9e7430 stg:hasLOAvalue "LOA20" ; 357
stg:usedDeviationAnalysis "MICROSCALE" . 358
inst:House_of_the_count bot:hasStorey inst:First_Floor , inst:Ground_Floor . 359
inst:Ground_Floor bot:hasSpace inst:Ground_Floor_Dungeon , inst:Ground_Floor_EXTERIOR , 360
inst:Ground_Floor_Presence_Chamber . 361
inst:Ground_Floor_Dungeon bot:adjacentElement inst:Door-DOOR_InternalDoor1_PresenceChamber-Dungeon , 362
inst:wall-SOLIDWALL_InternalWall1_PresenceChamber . 363
inst:Ground_Floor_EXTERIOR bot:adjacentElement inst:Slab-FLOOR_Floor_PresenceChamber , 364
inst:Window-WINDOW_FrontWindow-low_PresenceChamber , inst:door-DOOR_FrontDoor_PresenceChamber , 365
inst:wall-SOLIDWALL_EasternWall1_PresenceChamber , inst:wall-SOLIDWALL_WesternWall1_PresenceChamber . 366
inst:Ground_Floor_Presence_Chamber bot:adjacentElement inst:Door-DOOR_InternalDoor1_PresenceChamber-Dungeon , 367
inst:Slab-FLOOR_Floor_PresenceChamber , inst:Vault_Vault1_PresenceChamber , inst:Window-WINDOW_FrontWindow-low_PresenceChamber , 368
inst:door-DOOR_FrontDoor_PresenceChamber , inst:wall-SOLIDWALL_EasternWall1_PresenceChamber , 369
inst:wall-SOLIDWALL_InternalWall1_PresenceChamber , inst:wall-SOLIDWALL_WesternWall1_PresenceChamber ; 370
bot:adjacentZone inst:First_Floor_Bedroom_of_the_count , inst:Ground_Floor_Dungeon , 371
inst:Ground_Floor_EXTERIOR ; 372
bot:containsElement inst:Console_console10_PresenceChamber , inst:Console_console1_PresenceChamber , 373
inst:Console_console2_PresenceChamber , inst:Console_console3_PresenceChamber , inst:Console_console4_PresenceChamber , 374
inst:Console_console5_PresenceChamber , inst:Console_console6_PresenceChamber , inst:Console_console7_PresenceChamber , 375
inst:Console_console8_PresenceChamber , inst:Console_console9_PresenceChamber , inst:FirePlace_FirePlace_PresenceChamber , 376
inst:Window-WINDOW_FrontWindow-upper1_PresenceChamber , inst:Window-WINDOW_FrontWindow-upper2_PresenceChamber , 377
inst:Window-WINDOW_SideWindow1_PresenceChamber , inst:Window-WINDOW_SideWindow2_PresenceChamber , 378
inst:Window-WINDOW_SideWindow3_PresenceChamber , inst:column-COLUMN_Column1_PresenceChamber , 379
inst:column-COLUMN_Column2PresenceChamber , 380
inst:wall-SOLIDWALL_SouthernWall1_PresenceChamber ; 381
stg:hasRhinoFile inst:RF_Casestudy_Gravensteen_PresenceChamber ; 382
<http://erlangen-crm.org/140617/P103_was_intended_for> inst:giving_audience . 383
inst:Door-DOOR_InternalDoor1_PresenceChamber-Dungeon geo:hasGeometry inst:GEOM-35601207-c06d-4243-ac3b-c46418356ad4 , inst:GEOM-a0a5a585-9180-4afb-8805-e14c568fee64 ;
    a bot:Element , product:Door-DOOR ;
    stg:hasPointCloudFile inst:PC_Door-DOOR_InternalDoor1_PresenceChamber-Dungeon .
inst:Slab-FLOOR_Floor_PresenceChamber geo:hasGeometry inst:GEOM-8858aa45-d7ac-4980-8056-84623318b98b , inst:GEOM-ce8af5c9-7a22-473f-ae0c-80eeb0d6197f , inst:GEOM-d0f891ee-54bb-469e-9c97-fd35a36c08f0 , inst:GEOM-de47a3d6-426d-4579-acc2-d6af08f640a0 , inst:GEOM-7b3a6f90-f037-4357-8902-bc047ffd97d3 , inst:GEOM-e23bb912-d4f9-4941-802c-6411a972a315 ;
    a bot:Element , product:Slab-FLOOR .
inst:Vault_Vault1_PresenceChamber geo:hasGeometry inst:GEOM-47a8d95e-7d49-4ca5-b187-d381c2e665fc , inst:GEOM-60ada2c4-07d3-4ab0-ab96-6e81227dc4bf , inst:GEOM-81cf12d2-46a0-4af2-93da-49b821602ce7 , inst:GEOM-9b3e8da2-71a9-4b75-8e12-74fd7fca292e , inst:GEOM-abdfe35b-3528-44dd-8d1d-4b9bfa1f832d , inst:GEOM-abe44844-9d4d-4dba-968d-41c367259f5b , inst:GEOM-d2f9fe94-feae-417a-9130-ba56a6b35fd8 ;
    a bot:Element , stgp:Vault ;
    stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Slab-Floor_Floor_BedroomOfTheCount.subsampled_OCTREE_LEVEL_10_SUBSAMPLED.e57" , "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Vault_Vault1_PresenceChamber.e57" ;
    stg:hasPointCloudFile inst:PC_Slab-Floor_Floor_BedroomOfTheCount_subsampled_OCTREE_LEVEL_10_SUBSAMPLED , inst:PC_Vault_Vault1_PresenceChamber .
inst:Window-WINDOW_FrontWindow-low_PresenceChamber geo:hasGeometry inst:GEOM-8858aa45-d7ac-4980-8056-84623318b98b , inst:GEOM-ce8af5c9-7a22-473f-ae0c-80eeb0d6197f , inst:GEOM-d0f891ee-54bb-469e-9c97-fd35a36c08f0 , inst:GEOM-de47a3d6-426d-4579-acc2-d6af08f640a0 , inst:GEOM-7b3a6f90-f037-4357-8902-bc047ffd97d3 , inst:GEOM-e23bb912-d4f9-4941-802c-6411a972a315 ;
    a bot:Element , product:Window-WINDOW ;
    product:aggregates inst:_2_FrontWindow-low_frame , inst:_2_FrontWindow-low_pane ;
    stg:hasPointCloudFile inst:PC_Window-WINDOW_FrontWindow-low_PresenceChamber .
inst:door-DOOR_FrontDoor_PresenceChamber geo:hasGeometry inst:GEOM-1e857f13-9455-450c-81fc-1e1c87271dda , inst:GEOM-795b31a4-5924-43b2-9f52-050ce1f90408 ;
    a bot:Element , product:Door-DOOR ;
    stg:hasPointCloudFile inst:PC_door-DOOR_FrontDoor_PresenceChamber .
inst:wall-SOLIDWALL_EasternWall1_PresenceChamber geo:hasGeometry inst:GEOM-6be805c7-9151-47ed-8404-0573c2e51d94 , inst:GEOM-cea9c82d-82f6-47e0-b822-8a9b139f575c , inst:GEOM-dce4833f-5da8-4975-9c93-6abb4445e105 ;
    a bot:Element , product:Wall-SOLIDWALL ;
    stg:hasPointCloudFile inst:PC_wall-SOLIDWALL_EasternWall1_PresenceChamber ;
    bot:hostsElement inst:Console_console7_PresenceChamber , inst:Console_console8_PresenceChamber , inst:FirePlace_FirePlace_PresenceChamber .
inst:wall-SOLIDWALL_InternalWall1_PresenceChamber geo:hasGeometry inst:GEOM-62eca92f-c9dc-4c8c-a9fa-4e42747bb5ed , inst:GEOM-c10d7a35-aafd-462d-9b18-49eea47ce23a , inst:GEOM-fd1e2273-36bc-4769-a7b6-53c17a245d32 ;
    a bot:Element , product:Wall-SOLIDWALL ;
    stg:hasPointCloudFile inst:PC_wall-SOLIDWALL_InternalWall1_PresenceChamber ;
    bot:hostsElement inst:Door-DOOR_InternalDoor1_PresenceChamber-Dungeon , inst:Console_console4_PresenceChamber , inst:Console_console5_PresenceChamber , inst:Console_console6_PresenceChamber .
inst:wall-SOLIDWALL_WesternWall1_PresenceChamber geo:hasGeometry inst:GEOM-263ad3f7-5adb-4c11-af19-cd6fb9d294a4 , inst:GEOM-4a8a4f93-e679-40b0-868c-9bcadc7aa446 , inst:GEOM-504f6903-26ca-4267-a0ec-33d72fd4e61f , inst:GEOM-fa704821-89ee-4f10-99ef-036e0abf83f3 ;
    a bot:Element , product:Wall-SOLIDWALL ;
    stg:hasPointCloudFile inst:PC_wall-SOLIDWALL_WesternWall1_PresenceChamber ;
    bot:hostsElement inst:Console_console2_PresenceChamber , inst:Console_console3_PresenceChamber , inst:Window-WINDOW_SideWindow1_PresenceChamber , inst:Window-WINDOW_SideWindow2_PresenceChamber , inst:Window-WINDOW_SideWindow3_PresenceChamber .
inst:Console_console10_PresenceChamber geo:hasGeometry inst:GEOM-0f5dcd5b-9551-4153-bb15-eeaf5bd67f3f , inst:GEOM-4bb93f39-c77b-47ae-80fc-783fba897158 , inst:GEOM-d6e96ceb-1c7a-4138-85ff-10798cef8901 , inst:GEOM-da808752-4f31-4e22-b55d-ce5458862952 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console10_PresenceChamber .
inst:Console_console1_PresenceChamber geo:hasGeometry inst:GEOM-11b2cc78-d37c-473d-8845-45fb468ff265 , inst:GEOM-7afb8778-6d52-482d-94a2-5a147a45872a , inst:GEOM-8ce7df41-7e69-402f-b96a-d30c37e3c578 , inst:GEOM-8d3222ef-6d94-40df-836c-252841591c14 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console2_PresenceChamber .
inst:Console_console2_PresenceChamber geo:hasGeometry inst:GEOM-901acbc0-f3fe-4e82-b666-bd53a012c35b , inst:GEOM-9675ac49-3d7a-467a-8ebf-cc27cd59e062 , inst:GEOM-978b4f19-2f1f-4de7-a9c8-12703a19ee2c , inst:GEOM-9dc07124-d318-4221-80c3-07c73565a9f3 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console2_PresenceChamber .
inst:Console_console3_PresenceChamber geo:hasGeometry inst:GEOM-1b379e00-ccd4-4662-bf08-b24ccd69907b , inst:GEOM-2b837b8b-eaaf-4120-90ba-cded3694c199 , inst:GEOM-3dfb6a4d-654d-4b6b-9d65-5bdde8a15f91 , inst:GEOM-3e5b5265-08e8-45a4-bd52-ee9c43e38753 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console3_PresenceChamber .
inst:Console_console4_PresenceChamber geo:hasGeometry inst:GEOM-3fd1d60b-5074-47e6-9160-ae969a8d1bca , inst:GEOM-4b013ebf-ce18-4d13-93b3-fc12fb7db11d , inst:GEOM-a74e192d-4dba-4e5a-aea7-2511d79f9c55 , inst:GEOM-fdc0f919-e006-4c0a-8d40-ae462a9e7430 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console4_PresenceChamber .
inst:Console_console5_PresenceChamber geo:hasGeometry inst:GEOM-6b10ba3c-7156-42d8-8cc4-8e2af224318e , inst:GEOM-7299434a-f80a-4617-a9a9-99eb0d134ab0 , inst:GEOM-a5b02310-6def-48a8-a0df-d16b2e91604c , inst:GEOM-e1682e7f-c3db-4aaa-b07c-0c38230e20fc ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console5_PresenceChamber .
inst:Console_console6_PresenceChamber geo:hasGeometry inst:GEOM-3afba0cb-87be-4032-ad12-540233ecdba3 , inst:GEOM-4121dd99-f57e-4b75-95f0-256a5c8903b8 , inst:GEOM-4a48c36d-8ea9-470e-902b-ddb4aa549352 , inst:GEOM-eacf5b69-e569-4d82-9291-2d03752984f2 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console6_PresenceChamber .
inst:Console_console7_PresenceChamber geo:hasGeometry inst:GEOM-61adbd9d-e0fe-4360-9f29-1cd9cf160d31 , inst:GEOM-a7ce8db2-e505-4ce1-9d76-5d55ca0569ce , inst:GEOM-c86a84fe-eb3e-4082-9533-25f359ddb3c5 , inst:GEOM-f5a14160-0dce-4674-b7be-0a9529824f74 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console7_PresenceChamber .
inst:Console_console8_PresenceChamber geo:hasGeometry inst:GEOM-627f4a0a-8886-4ce9-9f0a-8185128bbefa , inst:GEOM-7b72caee-29b4-4f78-a4c8-a5196dbe7985 , inst:GEOM-7c0ed26d-5176-4360-998b-cad866209290 , inst:GEOM-8d54ec2f-f568-4ecd-acae-aabae90dcad1 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console8_PresenceChamber .
inst:Console_console9_PresenceChamber geo:hasGeometry inst:GEOM-146ec253-dd39-4eaa-a35e-36a5e9935ecf , inst:GEOM-391fe24c-47d6-430d-94f1-e44145420a56 , inst:GEOM-cadaed32-8a2d-4670-8be3-54b4cee9291a , inst:GEOM-e3a27e90-8039-4be5-9062-67174dd40350 ;
    a bot:Element , product:DiscreteAccessory-BRACKET ;
    stg:hasPointCloudFile inst:PC_DiscreteAccessory-BRACKET_console9_PresenceChamber .
inst:FirePlace_FirePlace_PresenceChamber geo:hasGeometry inst:GEOM-06db8367-1bd5-49d0-893c-71cb4b016f38 , inst:GEOM-55ea9f56-d5be-42e8-97d1-4cc6b1ec1b76 , inst:GEOM-636d2d7a-33f8-4b57-83ed-6c6a04ec5383 , inst:GEOM-9343d353-239f-4c9d-a7b7-34dfb836671c , inst:GEOM-aaeebed5-0b3f-40ac-b0da-b4222e654ed1 , inst:GEOM-598b7463-b909-4945-8c4f-51453f1d098c , inst:GEOM-bddcf7f1-82c8-4ac8-8c78-e4ae427e09bb , inst:GEOM-c9774c35-a877-4892-95e2-667639debbf7 , inst:GEOM-066126c1-16c2-4e1a-9dbe-0614c8143020 , inst:GEOM-26736f49-2921-46c0-b6c6-3776ec842df9 , inst:GEOM-661fd183-552b-40bd-a177-2ea4b41150dd , inst:GEOM-b4dc5efd-c04e-4cb9-b8c8-182c5985f339 , inst:GEOM-6002f9e7-bdfa-4c6c-b490-a716d29e0acf , inst:GEOM-dc11ca4e-b887-4de6-bdf9-34ba10cdfd88 , inst:GEOM-65437a28-3046-4a09-8ab1-975649ceb456 , inst:GEOM-c436d5e6-f8b1-4caa-8844-6463e79e52a7 , inst:GEOM-d04e8aea-d5ec-4614-babf-5e689af1200f ;
    a bot:Element , stgp:Fireplace ;
    product:aggregates inst:_20_Fireplace_Hood , inst:_20_Fireplace_Lintel , inst:_20_Fireplace_Pilaster1 , inst:_20_Fireplace_Pilaster2 ;
    stg:hasPointCloudFile inst:PC_ThronePlace_ThronePlace_PresenceChamber .
inst:Window-WINDOW_FrontWindow-upper1_PresenceChamber geo:hasGeometry inst:GEOM-82f3f5e9-a393-4231-a2b3-00999c74798a , inst:GEOM-8d60dfe9-0bc0-44b4-a2d5-0f2c57bb2dd9 , inst:GEOM-fa4d4e7c-c452-4887-b5a8-8147a2a7765c , inst:GEOM-dddf6781-d2bb-4810-aab5-c7c6c27f1eec ;
    a bot:Element , product:Window-WINDOW ;
    product:aggregates inst:_3_FrontWindow-upper1_frame , inst:_3_FrontWindow-upper1_pane ;
    stg:hasPointCloudFile inst:PC_Window-WINDOW_FrontWindow-upper1_PresenceChamber .
inst:Window-WINDOW_FrontWindow-upper2_PresenceChamber geo:hasGeometry inst:GEOM-57ad9eb0-683c-4396-89c5-13a3252b245c , inst:GEOM-def17de8-de10-4233-9b95-94e2824b38e2 , inst:GEOM-05c24c87-6807-48b0-b584-8789edacb700 , inst:GEOM-f2659e80-4ea2-406d-badb-799b253077e1 ;
    a bot:Element , product:Window-WINDOW ;
    product:aggregates inst:_4_FrontWindow-upper2_frame , inst:_4_FrontWindow-upper2_pane ;
    stg:hasPointCloudFile inst:PC_Window-WINDOW_FrontWindow-upper2_PresenceChamber .
inst:Window-WINDOW_SideWindow1_PresenceChamber geo:hasGeometry inst:GEOM-74345ce9-b538-4fd3-99cf-7029d3a0537a , inst:GEOM-74e53f81-aa30-4d39-90a6-4ecdc8786d21 , inst:GEOM-2fc1aa0b-a6e1-471e-bb10-f120697464fa , inst:GEOM-c241d074-1b10-474b-9a51-e64f3996f114 ;
    a bot:Element , product:Window-WINDOW ;
    product:aggregates inst:_5_SideWindow1_frame , inst:_5_SideWindow1_pane ;
    stg:hasPointCloudFile inst:PC_Window-WINDOW_SideWindow1_PresenceChamber .
inst:Window-WINDOW_SideWindow2_PresenceChamber geo:hasGeometry inst:GEOM-0653cb6e-d857-4c67-b2b8-6737e5ca1d23 , inst:GEOM-69cc9fc5-4e93-4ba0-89a8-8273dd54a523 , inst:GEOM-080570ee-839a-4172-8355-e559d7a31003 , inst:GEOM-32a2c04f-4c63-4934-a0f9-f5bbf5dc00a9 ;
    a bot:Element , product:Window-WINDOW ;
    product:aggregates inst:_6_SideWindow2_frame , inst:_6_SideWindow2_pane ;
    stg:hasPointCloudFile inst:PC_Window-WINDOW_SideWindow2_PresenceChamber .
inst:Window-WINDOW_SideWindow3_PresenceChamber geo:hasGeometry inst:GEOM-99d2871b-fbaa-47eb-b621-8a292bd440c0 , inst:GEOM-c267fc35-e59d-4666-85f3-2ffea3fb9f37 , inst:GEOM-87a9e3fa-7500-46c9-807e-03727f4f3373 , inst:GEOM-c178e09a-5f71-4de8-ae17-92ab85bb0f68 ;
    a bot:Element , product:Window-WINDOW ;
    product:aggregates inst:_7_SideWindow3_frame , inst:_7_SideWindow3_pane ;
    stg:hasPointCloudFile inst:PC_Window-WINDOW_SideWindow3_PresenceChamber .
inst:column-COLUMN_Column1_PresenceChamber geo:hasGeometry inst:GEOM-57402d12-78d4-45ef-8f2c-3c73bb2df31b , inst:GEOM-604cd4cb-9b68-4c26-b572-f838b2796017 , inst:GEOM-6b645576-18a9-4ed9-a945-faca6f36da85 , inst:GEOM-b6835a59-4a8f-4816-a9ce-a656f2216f23 ;
    a bot:Element , product:Column-COLUMN ;
    product:aggregates inst:_8_Column1_Base , inst:_8_Column1_Capital , inst:_8_Column1_Shaft ;
    stg:hasPointCloudFile inst:PC_column_COLUMN_Column1_PresenceChamber .
inst:column-COLUMN_Column2PresenceChamber geo:hasGeometry inst:GEOM-6c0a7b83-9b0c-4101-982d-bb6ff33c3f37 , inst:GEOM-f5281de0-e6f4-483b-b995-f24d4ef2d46f , inst:GEOM-84bf7526-d97b-4341-9446-4ea65a7324ee , inst:GEOM-2073b362-3817-499b-8120-36329a678354 ;
    a bot:Element , product:Column-COLUMN ;
    product:aggregates inst:_9_Column2_Acanthus , inst:_9_Column2_Base , inst:_9_Column2_Capital , inst:_9_Column2_Shaft ;
    stg:hasPointCloudFile inst:PC_column_COLUMN_Column2PresenceChamber .
inst:wall-SOLIDWALL_SouthernWall1_PresenceChamber geo:hasGeometry inst:GEOM-1290af6e-b10c-40d6-9fb8-f4952c3f63dc , inst:GEOM-1e6a47a0-b5ec-4f15-af20-32dd664adc79 , inst:GEOM-3220279d-fbc4-4706-a843-f28312cffccf , inst:GEOM-5cbd2d80-00c3-45ac-86f1-5c651ec759c8 ;
    a bot:Element , product:Wall-SOLIDWALL ;
    stg:hasPointCloudFile inst:PC_wall-SOLIDWALL_SouthernWall1_PresenceChamber ;
    bot:hostsElement inst:Window-WINDOW_FrontWindow-low_PresenceChamber , inst:door-DOOR_FrontDoor_PresenceChamber , inst:Console_console10_PresenceChamber , inst:Console_console1_PresenceChamber , inst:Console_console9_PresenceChamber , inst:Window-WINDOW_FrontWindow-upper1_PresenceChamber , inst:Window-WINDOW_FrontWindow-upper2_PresenceChamber .
inst:RF_Casestudy_Gravensteen_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/Casestudy_Gravensteen_PresenceChamber.3dm" .
inst:LOA-05c24c87-6807-48b0-b584-8789edacb700 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-066126c1-16c2-4e1a-9dbe-0614c8143020 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-080570ee-839a-4172-8355-e559d7a31003 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-2073b362-3817-499b-8120-36329a678354 stg:hasLOAvalue "LOA30" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-26736f49-2921-46c0-b6c6-3776ec842df9 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-2fc1aa0b-a6e1-471e-bb10-f120697464fa stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-32a2c04f-4c63-4934-a0f9-f5bbf5dc00a9 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-598b7463-b909-4945-8c4f-51453f1d098c stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-6002f9e7-bdfa-4c6c-b490-a716d29e0acf stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-604cd4cb-9b68-4c26-b572-f838b2796017 stg:hasLOAvalue "LOA30" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-65437a28-3046-4a09-8ab1-975649ceb456 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-661fd183-552b-40bd-a177-2ea4b41150dd stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-6b645576-18a9-4ed9-a945-faca6f36da85 stg:hasLOAvalue "LOA30" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-7b3a6f90-f037-4357-8902-bc047ffd97d3 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-84bf7526-d97b-4341-9446-4ea65a7324ee stg:hasLOAvalue "LOA30" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-87a9e3fa-7500-46c9-807e-03727f4f3373 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-8858aa45-d7ac-4980-8056-84623318b98b stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-b6835a59-4a8f-4816-a9ce-a656f2216f23 stg:hasLOAvalue "LOA30" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-c178e09a-5f71-4de8-ae17-92ab85bb0f68 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-c241d074-1b10-474b-9a51-e64f3996f114 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-c436d5e6-f8b1-4caa-8844-6463e79e52a7 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-c9774c35-a877-4892-95e2-667639debbf7 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-ce8af5c9-7a22-473f-ae0c-80eeb0d6197f stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-d0f891ee-54bb-469e-9c97-fd35a36c08f0 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-dc11ca4e-b887-4de6-bdf9-34ba10cdfd88 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-dddf6781-d2bb-4810-aab5-c7c6c27f1eec stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-de47a3d6-426d-4579-acc2-d6af08f640a0 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-f2659e80-4ea2-406d-badb-799b253077e1 stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:LOA-f5281de0-e6f4-483b-b995-f24d4ef2d46f stg:hasLOAvalue "LOA30" ;
    stg:usedDeviationAnalysis "MICROSCALE" .
inst:LOA-fa4d4e7c-c452-4887-b5a8-8147a2a7765c stg:hasLOAvalue "LOA20" ;
    stg:usedDeviationAnalysis "MACROSCALE" .
inst:OCCLUSION_43c7b808-feed-4c33-b0fc-9bfc056c7fb7 a stg:OccludedGeometry .
inst:OCCLUSION_553ee962-abb7-4f20-87f7-84adc5bfcacd a stg:OccludedGeometry .
inst:OCCLUSION_c42d04bc-206d-4a3f-b556-beb24c08541f a stg:OccludedGeometry .
inst:OCCLUSION_db2a054e-3ce0-4d55-b2e2-312568486564 a stg:OccludedGeometry .
inst:_20_Fireplace_Hood geo:hasGeometry inst:GEOM-598b7463-b909-4945-8c4f-51453f1d098c , inst:GEOM-bddcf7f1-82c8-4ac8-8c78-e4ae427e09bb , inst:GEOM-c9774c35-a877-4892-95e2-667639debbf7 ;
    a stgp:Fireplace_Hood , bot:Element .
inst:GEOM-598b7463-b909-4945-8c4f-51453f1d098c a stg:InternalGeometry ;
    stg:hasLOA inst:LOA-598b7463-b909-4945-8c4f-51453f1d098c ;
    stg:hasRhinoID "598b7463-b909-4945-8c4f-51453f1d098c"^^stg:RhinoID .
inst:GEOM-bddcf7f1-82c8-4ac8-8c78-e4ae427e09bb stg:hasRhinoID "bddcf7f1-82c8-4ac8-8c78-e4ae427e09bb"^^stg:RhinoID ;
    stg:hasOcclusion inst:OCCLUSION_43c7b808-feed-4c33-b0fc-9bfc056c7fb7 .
inst:GEOM-c9774c35-a877-4892-95e2-667639debbf7 stg:hasLOA inst:LOA-c9774c35-a877-4892-95e2-667639debbf7 ;
    stg:hasRhinoID "c9774c35-a877-4892-95e2-667639debbf7"^^stg:RhinoID .
inst:_20_Fireplace_Lintel geo:hasGeometry inst:GEOM-22e22f9f-a4ed-48d7-9451-d1a5b5c88dde , inst:GEOM-066126c1-16c2-4e1a-9dbe-0614c8143020 , inst:GEOM-26736f49-2921-46c0-b6c6-3776ec842df9 , inst:GEOM-661fd183-552b-40bd-a177-2ea4b41150dd , inst:GEOM-b4dc5efd-c04e-4cb9-b8c8-182c5985f339 ;
    a bot:Element , product:Beam-LINTEL .
inst:GEOM-066126c1-16c2-4e1a-9dbe-0614c8143020 a stg:InternalGeometry ;
    stg:hasLOA inst:LOA-066126c1-16c2-4e1a-9dbe-0614c8143020 ;
    stg:hasRhinoID "066126c1-16c2-4e1a-9dbe-0614c8143020"^^stg:RhinoID .
inst:GEOM-26736f49-2921-46c0-b6c6-3776ec842df9 a stg:InternalGeometry ;
    stg:hasLOA inst:LOA-26736f49-2921-46c0-b6c6-3776ec842df9 ;
    stg:hasRhinoID "26736f49-2921-46c0-b6c6-3776ec842df9"^^stg:RhinoID .
inst:GEOM-661fd183-552b-40bd-a177-2ea4b41150dd stg:hasLOA inst:LOA-661fd183-552b-40bd-a177-2ea4b41150dd ;
    stg:hasRhinoID "661fd183-552b-40bd-a177-2ea4b41150dd"^^stg:RhinoID .
inst:GEOM-b4dc5efd-c04e-4cb9-b8c8-182c5985f339 stg:hasRhinoID "b4dc5efd-c04e-4cb9-b8c8-182c5985f339"^^stg:RhinoID ;
    stg:hasOcclusion inst:OCCLUSION_db2a054e-3ce0-4d55-b2e2-312568486564 .
inst:_20_Fireplace_Pilaster1 geo:hasGeometry inst:GEOM-7510a140-2c78-4350-bd43-b0998bf883ac , inst:GEOM-af8c4282-04aa-410d-8ee6-6937e46dd2d7 , inst:GEOM-6002f9e7-bdfa-4c6c-b490-a716d29e0acf , inst:GEOM-dc11ca4e-b887-4de6-bdf9-34ba10cdfd88 ;
    a bot:Element , product:Column-PILASTER .
inst:GEOM-6002f9e7-bdfa-4c6c-b490-a716d29e0acf stg:hasLOA inst:LOA-6002f9e7-bdfa-4c6c-b490-a716d29e0acf ;
    stg:hasRhinoID "6002f9e7-bdfa-4c6c-b490-a716d29e0acf"^^stg:RhinoID .
inst:GEOM-dc11ca4e-b887-4de6-bdf9-34ba10cdfd88 a stg:InternalGeometry ;
    stg:hasLOA inst:LOA-dc11ca4e-b887-4de6-bdf9-34ba10cdfd88 ;
    stg:hasRhinoID "dc11ca4e-b887-4de6-bdf9-34ba10cdfd88"^^stg:RhinoID .
inst:_20_Fireplace_Pilaster2 geo:hasGeometry inst:GEOM-fced63bb-3d53-4e03-b9e1-0eefa2dd8299 , inst:GEOM-65437a28-3046-4a09-8ab1-975649ceb456 , inst:GEOM-c436d5e6-f8b1-4caa-8844-6463e79e52a7 , inst:GEOM-d04e8aea-d5ec-4614-babf-5e689af1200f ;
    a bot:Element , product:Column-PILASTER .
inst:GEOM-65437a28-3046-4a09-8ab1-975649ceb456 stg:hasLOA inst:LOA-65437a28-3046-4a09-8ab1-975649ceb456 ;
    stg:hasRhinoID "65437a28-3046-4a09-8ab1-975649ceb456"^^stg:RhinoID .
inst:GEOM-c436d5e6-f8b1-4caa-8844-6463e79e52a7 a stg:InternalGeometry ;
    stg:hasLOA inst:LOA-c436d5e6-f8b1-4caa-8844-6463e79e52a7 ;
    stg:hasRhinoID "c436d5e6-f8b1-4caa-8844-6463e79e52a7"^^stg:RhinoID .
inst:GEOM-d04e8aea-d5ec-4614-babf-5e689af1200f stg:hasRhinoID "d04e8aea-d5ec-4614-babf-5e689af1200f"^^stg:RhinoID ;
    stg:hasOcclusion inst:OCCLUSION_c42d04bc-206d-4a3f-b556-beb24c08541f .
inst:_2_FrontWindow-low_frame geo:hasGeometry inst:GEOM-8858aa45-d7ac-4980-8056-84623318b98b ;
    a bot:Element , stgp:WindowFrame .
inst:GEOM-8858aa45-d7ac-4980-8056-84623318b98b stg:hasLOA inst:LOA-8858aa45-d7ac-4980-8056-84623318b98b ;
    stg:hasRhinoID "8858aa45-d7ac-4980-8056-84623318b98b"^^stg:RhinoID .
inst:_2_FrontWindow-low_pane geo:hasGeometry inst:GEOM-ce8af5c9-7a22-473f-ae0c-80eeb0d6197f , inst:GEOM-d0f891ee-54bb-469e-9c97-fd35a36c08f0 , inst:GEOM-de47a3d6-426d-4579-acc2-d6af08f640a0 ;
    a bot:Element , stgp:WindowPane .
inst:GEOM-ce8af5c9-7a22-473f-ae0c-80eeb0d6197f stg:hasLOA inst:LOA-ce8af5c9-7a22-473f-ae0c-80eeb0d6197f ;
    stg:hasRhinoID "ce8af5c9-7a22-473f-ae0c-80eeb0d6197f"^^stg:RhinoID .
inst:GEOM-d0f891ee-54bb-469e-9c97-fd35a36c08f0 stg:hasLOA inst:LOA-d0f891ee-54bb-469e-9c97-fd35a36c08f0 ;
    stg:hasRhinoID "d0f891ee-54bb-469e-9c97-fd35a36c08f0"^^stg:RhinoID .
inst:GEOM-de47a3d6-426d-4579-acc2-d6af08f640a0 stg:hasLOA inst:LOA-de47a3d6-426d-4579-acc2-d6af08f640a0 ;
    stg:hasRhinoID "de47a3d6-426d-4579-acc2-d6af08f640a0"^^stg:RhinoID .
inst:_3_FrontWindow-upper1_frame geo:hasGeometry inst:GEOM-fa4d4e7c-c452-4887-b5a8-8147a2a7765c ;
    a bot:Element , stgp:WindowFrame .
inst:GEOM-fa4d4e7c-c452-4887-b5a8-8147a2a7765c stg:hasLOA inst:LOA-fa4d4e7c-c452-4887-b5a8-8147a2a7765c ;
    stg:hasRhinoID "fa4d4e7c-c452-4887-b5a8-8147a2a7765c"^^stg:RhinoID .
inst:_3_FrontWindow-upper1_pane geo:hasGeometry inst:GEOM-dddf6781-d2bb-4810-aab5-c7c6c27f1eec ;
    a bot:Element , stgp:WindowPane .
inst:GEOM-dddf6781-d2bb-4810-aab5-c7c6c27f1eec stg:hasLOA inst:LOA-dddf6781-d2bb-4810-aab5-c7c6c27f1eec ;
    stg:hasRhinoID "dddf6781-d2bb-4810-aab5-c7c6c27f1eec"^^stg:RhinoID .
inst:_4_FrontWindow-upper2_frame geo:hasGeometry inst:GEOM-05c24c87-6807-48b0-b584-8789edacb700 ;
    a bot:Element , stgp:WindowFrame .
inst:GEOM-05c24c87-6807-48b0-b584-8789edacb700 stg:hasLOA inst:LOA-05c24c87-6807-48b0-b584-8789edacb700 ;
    stg:hasRhinoID "05c24c87-6807-48b0-b584-8789edacb700"^^stg:RhinoID .
inst:_4_FrontWindow-upper2_pane geo:hasGeometry inst:GEOM-f2659e80-4ea2-406d-badb-799b253077e1 ;
    a bot:Element , stgp:WindowPane .
inst:GEOM-f2659e80-4ea2-406d-badb-799b253077e1 stg:hasLOA inst:LOA-f2659e80-4ea2-406d-badb-799b253077e1 ;
    stg:hasRhinoID "f2659e80-4ea2-406d-badb-799b253077e1"^^stg:RhinoID .
inst:_5_SideWindow1_frame geo:hasGeometry inst:GEOM-2fc1aa0b-a6e1-471e-bb10-f120697464fa ;
    a bot:Element , stgp:WindowFrame .
inst:GEOM-2fc1aa0b-a6e1-471e-bb10-f120697464fa stg:hasLOA inst:LOA-2fc1aa0b-a6e1-471e-bb10-f120697464fa ;
    stg:hasRhinoID "2fc1aa0b-a6e1-471e-bb10-f120697464fa"^^stg:RhinoID .
inst:_5_SideWindow1_pane geo:hasGeometry inst:GEOM-c241d074-1b10-474b-9a51-e64f3996f114 ;
    a bot:Element , stgp:WindowPane .
inst:GEOM-c241d074-1b10-474b-9a51-e64f3996f114 stg:hasLOA inst:LOA-c241d074-1b10-474b-9a51-e64f3996f114 ;
    stg:hasRhinoID "c241d074-1b10-474b-9a51-e64f3996f114"^^stg:RhinoID .
inst:_6_SideWindow2_frame geo:hasGeometry inst:GEOM-080570ee-839a-4172-8355-e559d7a31003 ;
    a bot:Element , stgp:WindowFrame .
inst:GEOM-080570ee-839a-4172-8355-e559d7a31003 stg:hasLOA inst:LOA-080570ee-839a-4172-8355-e559d7a31003 ;
    stg:hasRhinoID "080570ee-839a-4172-8355-e559d7a31003"^^stg:RhinoID .
inst:_6_SideWindow2_pane geo:hasGeometry inst:GEOM-32a2c04f-4c63-4934-a0f9-f5bbf5dc00a9 ;
    a bot:Element , stgp:WindowPane .
inst:GEOM-32a2c04f-4c63-4934-a0f9-f5bbf5dc00a9 stg:hasLOA inst:LOA-32a2c04f-4c63-4934-a0f9-f5bbf5dc00a9 ;
    stg:hasRhinoID "32a2c04f-4c63-4934-a0f9-f5bbf5dc00a9"^^stg:RhinoID .
inst:_7_SideWindow3_frame geo:hasGeometry inst:GEOM-87a9e3fa-7500-46c9-807e-03727f4f3373 ;
    a bot:Element , stgp:WindowFrame .
inst:GEOM-87a9e3fa-7500-46c9-807e-03727f4f3373 stg:hasLOA inst:LOA-87a9e3fa-7500-46c9-807e-03727f4f3373 ;
    stg:hasRhinoID "87a9e3fa-7500-46c9-807e-03727f4f3373"^^stg:RhinoID .
inst:_7_SideWindow3_pane geo:hasGeometry inst:GEOM-c178e09a-5f71-4de8-ae17-92ab85bb0f68 ;
    a bot:Element , stgp:WindowPane .
inst:GEOM-c178e09a-5f71-4de8-ae17-92ab85bb0f68 stg:hasLOA inst:LOA-c178e09a-5f71-4de8-ae17-92ab85bb0f68 ;
    stg:hasRhinoID "c178e09a-5f71-4de8-ae17-92ab85bb0f68"^^stg:RhinoID .
inst:_8_Column1_Acanthus a bot:Element , stgp:Acanthus .
inst:_8_Column1_Base geo:hasGeometry inst:GEOM-604cd4cb-9b68-4c26-b572-f838b2796017 ;
    a bot:Element , stgp:Base .
inst:GEOM-604cd4cb-9b68-4c26-b572-f838b2796017 stg:hasLOA inst:LOA-604cd4cb-9b68-4c26-b572-f838b2796017 ;
    stg:hasRhinoID "604cd4cb-9b68-4c26-b572-f838b2796017"^^stg:RhinoID .
inst:_8_Column1_Capital geo:hasGeometry inst:GEOM-6b645576-18a9-4ed9-a945-faca6f36da85 ;
    a bot:Element , stgp:Capital ;
    product:aggregates inst:_8_Column1_Acanthus .
inst:GEOM-6b645576-18a9-4ed9-a945-faca6f36da85 stg:hasLOA inst:LOA-6b645576-18a9-4ed9-a945-faca6f36da85 ;
    stg:hasRhinoID "6b645576-18a9-4ed9-a945-faca6f36da85"^^stg:RhinoID .
inst:_8_Column1_Shaft geo:hasGeometry inst:GEOM-b6835a59-4a8f-4816-a9ce-a656f2216f23 ;
    a bot:Element , stgp:Shaft .
inst:GEOM-b6835a59-4a8f-4816-a9ce-a656f2216f23 stg:hasLOA inst:LOA-b6835a59-4a8f-4816-a9ce-a656f2216f23 ;
    stg:hasRhinoID "b6835a59-4a8f-4816-a9ce-a656f2216f23"^^stg:RhinoID .
inst:_9_Column2_Acanthus a bot:Element , stgp:Acanthus .
inst:_9_Column2_Base geo:hasGeometry inst:GEOM-f5281de0-e6f4-483b-b995-f24d4ef2d46f ;
    a bot:Element , stgp:Base .
inst:GEOM-f5281de0-e6f4-483b-b995-f24d4ef2d46f stg:hasLOA inst:LOA-f5281de0-e6f4-483b-b995-f24d4ef2d46f ;
    stg:hasRhinoID "f5281de0-e6f4-483b-b995-f24d4ef2d46f"^^stg:RhinoID .
inst:_9_Column2_Capital geo:hasGeometry inst:GEOM-84bf7526-d97b-4341-9446-4ea65a7324ee ;
    a bot:Element , stgp:Capital .
inst:GEOM-84bf7526-d97b-4341-9446-4ea65a7324ee stg:hasLOA inst:LOA-84bf7526-d97b-4341-9446-4ea65a7324ee ;
    stg:hasRhinoID "84bf7526-d97b-4341-9446-4ea65a7324ee"^^stg:RhinoID .
inst:_9_Column2_Shaft geo:hasGeometry inst:GEOM-2073b362-3817-499b-8120-36329a678354 ;
    a bot:Element , stgp:Shaft .
inst:GEOM-2073b362-3817-499b-8120-36329a678354 stg:hasLOA inst:LOA-2073b362-3817-499b-8120-36329a678354 ;
    stg:hasRhinoID "2073b362-3817-499b-8120-36329a678354"^^stg:RhinoID .
inst:PC_column_COLUMN_Column1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/column_COLUMN_Column1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_column_COLUMN_Column2PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/column_COLUMN_Column2PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_wall-SOLIDWALL_SouthernWall1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/wall-SOLIDWALL_SouthernWall1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" , "DJI Phantom 4 Pro" , "Canon EOS 5D (lens: SIGMA 24-70mm; aperture F2.8" .
inst:PC_DiscreteAccessory-BRACKET_console10_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console10_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console2_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console1_PresenceChamber.e57" , "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console2_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console3_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console3_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console4_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console4_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console5_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console5_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console6_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console6_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console7_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console7_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console8_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console8_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_DiscreteAccessory-BRACKET_console9_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/DiscreteAccessory-BRACKET_console9_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_ThronePlace_ThronePlace_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/ThronePlace_ThronePlace_PresenceChamber.e57" .
inst:GEOM-7b3a6f90-f037-4357-8902-bc047ffd97d3 stg:hasLOA inst:LOA-7b3a6f90-f037-4357-8902-bc047ffd97d3 ;
    stg:hasRhinoID "7b3a6f90-f037-4357-8902-bc047ffd97d3"^^stg:RhinoID .
inst:GEOM-e23bb912-d4f9-4941-802c-6411a972a315 stg:hasRhinoID "e23bb912-d4f9-4941-802c-6411a972a315"^^stg:RhinoID ;
    stg:hasOcclusion inst:OCCLUSION_553ee962-abb7-4f20-87f7-84adc5bfcacd .
inst:PC_Slab-Floor_Floor_BedroomOfTheCount_subsampled_OCTREE_LEVEL_10_SUBSAMPLED stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Slab-Floor_Floor_BedroomOfTheCount.subsampled_OCTREE_LEVEL_10_SUBSAMPLED.e57" ;
    stg:usedEquipment "Leica BLK360" .
inst:PC_Vault_Vault1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Vault_Vault1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_Window-WINDOW_FrontWindow-upper1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Window-WINDOW_FrontWindow-upper1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_Window-WINDOW_FrontWindow-upper2_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Window-WINDOW_FrontWindow-upper2_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_Window-WINDOW_SideWindow1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Window-WINDOW_SideWindow1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_Window-WINDOW_SideWindow2_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Window-WINDOW_SideWindow2_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_Window-WINDOW_SideWindow3_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Window-WINDOW_SideWindow3_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_wall-SOLIDWALL_EasternWall1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/wall-SOLIDWALL_EasternWall1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" , "DJI Phantom 4 Pro" , "Canon EOS 5D (lens: SIGMA 24-70mm; aperture F2.8" .
inst:PC_wall-SOLIDWALL_InternalWall1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/wall-SOLIDWALL_InternalWall1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" , "DJI Phantom 4 Pro" , "Canon EOS 5D (lens: SIGMA 24-70mm; aperture F2.8" .
inst:PC_wall-SOLIDWALL_WesternWall1_PresenceChamber stg:hasLocalVersion "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/wall-SOLIDWALL_WesternWall1_PresenceChamber.e57" ;
stg:usedEquipment "Leica Scan Station P30" , "DJI Phantom 4 Pro" , "Canon EOS 5D (lens: SIGMA 24-70mm; aperture 813
F2.8" . 814
inst:PC_Door-DOOR_InternalDoor1_PresenceChamber-Dungeon stg:hasLocalVersion 815
"C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Door-DOOR_InternalDoor1_PresenceChamber-Dungeon.e57" ; 816
stg:usedEquipment "Leica Scan Station P30" . 817
inst:PC_Window-WINDOW_FrontWindow-low_PresenceChamber stg:hasLocalVersion 818
"C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Window-WINDOW_FrontWindow-low_PresenceChamber.e57" ; 819
stg:usedEquipment "Leica Scan Station P30" . 820
inst:PC_door-DOOR_FrontDoor_PresenceChamber stg:hasLocalVersion 821
"C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/door-DOOR_FrontDoor_PresenceChamber.e57" ; 822
stg:usedEquipment "Leica Scan Station P30" . 823
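The listing above follows a regular pattern: each inst:PC_* subject links a point cloud to its file on disk via stg:hasLocalVersion and to the capture devices via stg:usedEquipment. As a minimal sketch of how that pattern can be consumed, the following standard-library Python snippet extracts the equipment literals per point cloud from a small fragment of the listing. The function name equipment_per_pointcloud is hypothetical and the fragment omits the @prefix declarations, as the appendix does; a real pipeline would parse the full graph with an RDF library such as rdflib and query it with SPARQL rather than with regular expressions.

```python
import re

# Fragment of the Turtle listing above (hypothetical excerpt; @prefix
# declarations for inst: and stg: are omitted, as in the appendix).
TURTLE = '''
inst:PC_Vault_Vault1_PresenceChamber stg:hasLocalVersion
    "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/Vault_Vault1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" .
inst:PC_wall-SOLIDWALL_EasternWall1_PresenceChamber stg:hasLocalVersion
    "C:/STG/CaseStudy_PresenceChamber_nogeom/PointClouds/wall-SOLIDWALL_EasternWall1_PresenceChamber.e57" ;
    stg:usedEquipment "Leica Scan Station P30" , "DJI Phantom 4 Pro" .
'''

def equipment_per_pointcloud(turtle: str) -> dict:
    """Map each inst:PC_* subject to the device literals it lists
    via stg:usedEquipment (regex sketch, not a full Turtle parser)."""
    result = {}
    # Each Turtle statement in the listing ends with a whitespace-delimited dot.
    for stmt in re.split(r'\s\.\s', turtle):
        subject = re.match(r'\s*(inst:\S+)', stmt)
        if not subject:
            continue
        # Everything after stg:usedEquipment is a comma-separated literal list.
        parts = stmt.split('stg:usedEquipment')
        if len(parts) == 2:
            result[subject.group(1)] = re.findall(r'"([^"]+)"', parts[1])
    return result

print(equipment_per_pointcloud(TURTLE))
```

This is only meant to make the shape of the data explicit: one subject per scanned element, with the laser scanner and, for the photogrammetry-enriched walls, additional drone and DSLR entries as multiple objects of the same predicate.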