Asphalt concrete reconstruction and
computational mechanical modelling using 3D
grid division
By:
Simón Espinosa León
Thesis advisor:
Silvia Caro Spinel, PhD.
Thesis Co-advisor:
Daniel Castillo, PhD.
Faculty of Engineering
Department of Civil Engineering
Bogotá, Colombia
August 2020
Table of Contents

Figure list
Table list
Abstract
Resumen
1. Introduction
1.1. Past work and motivation
1.2. AC specimen description
1.3. General Objective
1.4. Specific Objectives
2. Methodology
2.1. X-Ray CT scans reconstruction
2.2. Grid generation and volume fraction
2.3. Cutting planes
2.4. FE model partitions and material assignment from interpolation rule
2.5. FE simulation under axial compression
2.6. Results comparison across grid divisions and material assignation rules
3. Results and analysis
3.1. Stress-Strain curve
3.2. Force and elastic modulus at the end of simulation
3.2.1. Force at the end of simulation
3.2.2. Elastic modulus at the end of simulation
3.3. Computational cost
4. Conclusions
5. Acknowledgements
6. References
Appendix 1. Partition plane generator (.m)
Appendix 2. Specimen reconstruction and volume fraction calculation (.m)
Appendix 3. Abaqus simulation script (.py)
Appendix 4. Abaqus results processor (.py)
Figure list

Figure 1. X-ray CT scans a) Img 75, b) Img 90, c) Img 110
Figure 2. Average volume fraction distribution
Figure 3. 2D image grey-level histogram
Figure 4. Image reconstruction, a) Original, b) Color fix, c) Phase division
Figure 5. Sample grid division, n = 2
Figure 6. Group of cells reconstruction view with different numbers of divisions a) n=2, b) n=4, c) n=6, d) n=8, e) n=15, f) n=30, g) n=40, h) n=70
Figure 7. 3 points for cutting plane
Figure 8. FE model a) Partitions, b) Material assignment
Figure 9. Volume fraction for a specific image (1 means aggregate, 0 means asphalt mortar)
Figure 10. Interpolation rules to assign the mechanical properties of a cell a) 50/50 rule, b) 75/25 rule
Figure 11. FE model constraints a) n=2, b) n=10, c) n=40
Figure 12. Stress-strain curve for the specimen a) 50/50 rule, b) 75/25 rule
Figure 13. Stress-strain comparison between rules
Figure 14. Force at the end of the simulation a) 50/50 rule, b) 75/25 rule
Figure 15. Force at the end of the simulation, combined
Figure 16. Young's modulus at the end of the simulation
Figure 17. Computational cost to create the models a) Normal scale, b) Log scale
Figure 18. Computational cost to run the models
Table list

Table 1. Asphalt mixture design data
Table 2. Coarse aggregate angularity and texture properties
Table 3. Aggregate physical properties
Table 4. Cube dimensions with respect to n
Table 5. Elements of the Prony series (normalized values): mortar viscoelastic mechanical properties
Abstract

This study introduces a three-dimensional (3D) grid division approach to account for the internal composition of asphalt concrete (AC) when performing finite element (FE) computational mechanics simulations. The methodology, an extension of an existing two-dimensional (2D) technique, starts by processing Computed Tomography (CT) scan data to create a simplified reconstruction of an AC specimen. Once the sample has been reconstructed, it is divided into two main phases: i) the coarse aggregate, and ii) the asphalt mortar. The specimen is then partitioned into homogenized cubic cells to approximate the mechanical response of the sample. In this study, 70 consecutive images were used to reconstruct a 70x70x70 mm AC sample. A grid was used to divide the sample into cubic cells and to approximate the aggregate fraction within each cell. Using two binary interpolation rules (50/50 and 75/25), each cell was assigned a phase, either aggregate or asphalt mortar, depending on the limit value of each rule. This procedure was carried out for 10 different numbers of partitions (2, 4, 6, 8, 10, 15, 20, 25, 30, 40); the number of cells is the number of partitions cubed, giving a maximum of 64,000 cells of 1.75x1.75x1.75 mm each. To estimate the mechanical response of the simplified reconstructed specimen, the AC model was subjected to a constant displacement using the FE software Abaqus®. It was found that the 50/50 rule converges at 8 divisions, and the overall stress reported in the specimen tends to decrease as the number of divisions increases. The opposite was found for the 75/25 rule: as the number of divisions increases, the recorded overall stress increases; moreover, the results for this rule did not converge, and the final measured stresses were one order of magnitude smaller than those of the 50/50 rule. Finally, the computational cost of building the finite element models grows exponentially in time and requires a very significant amount of RAM, while the computational cost of running the AC models is not significant.
Resumen

This research develops a simple three-dimensional (3D) partitioning strategy to describe the internal composition of an asphalt mixture for computational finite element simulations. The methodology stems from a two-dimensional study. It begins with the reconstruction of computed tomography (CT) scans to obtain an asphalt mixture specimen. After the reconstruction, the specimen is divided into two main phases: i) the coarse aggregate and ii) the asphalt matrix. The specimen is divided into homogenized cubic cells to approximate the mechanical response of the sample. For this, 70 consecutive images are used to recreate a 70x70x70 mm asphalt mixture specimen. This cube is passed through a cubic grid, and the aggregate fraction within each cell is calculated. To assign a phase to each cell, two binary interpolation rules are available: the 50/50 rule and the 75/25 rule. Based on the selected binary interpolation rule, each cell is assigned the properties of a material representing one of the two phases. This procedure is carried out for 10 numbers of partitions (2, 4, 6, 8, 10, 15, 20, 25, 30, 40), and the number of generated cells is the number of partitions cubed, for a maximum of 64,000 cells of 1.75x1.75x1.75 mm each. The mechanical response is computed by simulating the sample under a constant displacement in the finite element program Abaqus®. It was found that the 50/50 interpolation rule converges quickly: beyond 8 divisions the changes are not significant. Additionally, as the number of divisions increases, the stress decreases. The opposite happens with the 75/25 rule: it does not converge, and as the number of divisions increases, the stress increases as well. Finally, the computational cost associated with creating the models grows exponentially in time and requires a very large amount of RAM, while the computational cost of running the models is not significant.
1. Introduction

Asphalt concrete (AC) is a highly complex heterogeneous material composed of three main phases: the coarse aggregate, the asphalt mortar or asphalt matrix (i.e., the blend of asphalt binder and the fine portion of the aggregates), and air voids. For this study, only the first two phases are considered, to reduce the complexity of the multiphase material. The properties of AC depend on the proportion of each phase and on the arrangement of the microstructure. Analyzing an asphalt sample at the microstructural level in three dimensions therefore carries an extremely high computational cost, mainly due to the amount of information it contains. This study seeks a binary interpolation method between the two phases that reduces the computational cost while efficiently capturing the overall response of a three-dimensional asphalt sample.
For some years, specialized software has been available for the computational representation of composite materials. An example is the work by Castillo et al. (2015), which investigated the effect of particle size and air voids on the initiation and propagation of fracture. In another application, Castillo and Al-Qadi (2019) generated probable microstructures to build a computational model of a two-dimensional asphalt sample, exploring interpolations of the properties of the asphalt matrix and the aggregates through grid divisions. Their goal was an acceptable mechanical response of the mix at a lower computational cost than individually modeling the aggregate particles and the asphalt matrix. Although this model did not include air voids, the results compared very well with the expected response. Furthermore, since the samples are generated digitally, the approach allows many simulations to be run at very low cost.
Another work in this direction is that of Chen et al. (2017). The authors generated random aggregate structures in two and three dimensions that accounted for a fine matrix, the particle size distribution of the aggregates, and the air voids, among other properties. Being of digital origin, this method offers high flexibility in numerical simulation. In addition, it allows relationships to be identified by varying one component of the mixture at a time. Chen et al. (2017) investigated the dynamic modulus of the asphalt sample, while Castillo and Al-Qadi (2019) investigated the mechanical response under a progressive load.
However, although the studies mentioned above are based on digitally generated AC samples, the idea in this project is to study a real sample, as Dai (2011) did. The benefit of using an actual three-dimensional (3D) sample is the possibility of characterizing the real internal structure of an asphalt mix rather than random predictions, as in the cases mentioned. A disadvantage of a real sample is that, unless the work includes multiple different specimens, the model is limited to the response of the analyzed sample.
Another disadvantage of using an actual sample is the reliance on the 3D reconstruction process, based on individual two-dimensional (2D) images obtained through X-Ray Computed Tomography (CT) techniques, because the final quality of the computational simulations depends on the quality of the reconstruction. Once the sample has been reconstructed, the model is assumed to be the best available representation of the scanned AC specimen. Everything after the reconstruction operates on the reconstructed information, so any missing or incorrect data propagates its error through the rest of the study.
1.1. Past work and motivation
This project seeks to improve and complement the research carried out by Castillo and Al-Qadi (2019) by generating 3D models (not 2D, as in the reference) with binary interpolated properties, based on CT scans of a real AC sample. The objective is to find a reasonable trade-off between a more accurate representation of the internal phase distribution of the AC sample and a lower computational cost. Throughout the study, the computational cost is considered proportional to the number of grid divisions (n) used to represent the internal phases of the AC.
Only two phases are taken into account: i) the coarse aggregate, and ii) the asphalt mortar, also called the fine aggregate matrix, i.e., the combination of asphalt binder and the aggregates passing sieve No. 16 (1.18 mm). The phases are differentiated using a pixel threshold. By dividing the model using a 3D grid and estimating a single property within every cell, the complexity of the model is reduced. Finite Element (FE) methods are used to process the reconstructed specimen with the assigned properties; the commercial FE software Abaqus® was used to build the model and run the simulations. The simulation applies a controlled displacement, and the mesh size was set equal to the grid size, a compromise between accuracy and computational cost that Castillo and Al-Qadi (2019) did not need to make: working with a 2D sample, they did not have to trade accuracy for computational cost, whereas here a finer mesh was judged to be computationally too expensive.
1.2. AC specimen description
The original specimen used for this study was a fine-graded, soft limestone, Superpave Type C mixture. The asphalt mixture design data are shown in Table 1. From the percent passing, the maximum aggregate size is 1 in (25 mm). Table 2 shows the angularity and texture properties of the coarse aggregate. Finally, Table 3 lists the physical properties of the aggregates.
Table 1. Asphalt mixture design data

Mixture parameters: Pb = 5.2%, Va = 6.7%, VMA = 13.7%, VFA = 70.9%, Gmm = 2.515, Gsb = 2.653, binder grade PG 76-22, binder specific gravity 1.02.

Gradation (percent passing):
1 in (25 mm): 100
3/4 in (19 mm): 99
1/2 in (12.5 mm): 95
3/8 in (9.5 mm): 92.5
No. 4 (4.75 mm): 77.5
No. 8 (2.36 mm): 43
No. 16 (1.18 mm): 30
No. 30 (0.600 mm): -
No. 50 (0.300 mm): -
No. 200 (0.075 mm): 6
Table 2. Coarse aggregate angularity and texture properties

Aggregate type: Soft Limestone (SL); Angularity index: 2195; Texture index: 80; Remark: sub-rounded and polished aggregate.

Angularity index ranges: Rounded (0-2100), Sub-rounded (2100-4000), Sub-angular (4000-5400), Angular (5400-10,000).
Texture index ranges: Polished (0-165), Smooth (165-275), Low Roughness (275-350), Moderate Roughness (350-460), High Roughness (460-1,000).
Table 3. Aggregate physical properties

Aggregate test                              Soft Limestone (SL)    Test method
Los Angeles % wt. loss, bituminous          34                     Tex-410-A
Mg soundness, bituminous(1)                 41                     Tex-411-A
Mg soundness, stone(2)                      29                     Tex-411-A
Polish value                                25                     Tex-438-A
Micro-Deval % wt. loss, bituminous          19.7                   Tex-461-A
Fine aggregate acid insolubility            2                      Tex-612-J
Micro-Deval % wt. loss                      20.4                   Tex-461-A

(1) Using HMAC application sample size fractions. (2) Using other applications sample size fractions.
The actual dimensions of the scanned specimen are 148 mm in height and 100 mm in diameter. It contains 7% air voids, 25% mastic, and 68% aggregate. The CT images have a resolution of 512 x 512 pixels, equivalent to a 100 x 100 mm field, with a 1 mm interval between images. Figure 1 shows examples of the CT scans at three different locations, illustrating how the actual specimen looks when digitally sliced.
Figure 1. X-ray CT scans a) Img 75 b) Img 90 c) Img 110
1.3. General Objective
Explore the use of grid divisions on a 3D AC specimen reconstructed from a real sample using X-ray CT
scans and obtain a reasonable trade-off between accuracy and computational cost.
1.4. Specific Objectives
1. Find the most appropriate pixel threshold to reconstruct the asphalt concrete specimen using
a grid division technique.
2. Understand the behavior of the mechanical response of the computational models when
using a binary interpolation rule to assign the type of material within each cell of the
specimen.
3. Find the most appropriate range of divisions (n) where the trade-off between result accuracy
and computational cost is most favorable.
2. Methodology

For this study, six steps were followed to obtain the mechanical response of the AC specimen. The steps are the following:
1. The sample is reconstructed from 2D images obtained through CT scans. Its individual pixels
are analyzed and a phase is assigned depending on the pixel value. This step was done in
MatLab with in-house code, which can be found in Appendix 2.
2. A grid is superimposed on the reconstructed sample. The sample is divided into cubic cells and
the volume fraction between phases is calculated. This was done in the same MatLab code
explained in the previous step. This algorithm generates an output text file with a coordinate
for each cell and the respective volume fraction.
3. Cutting planes are generated to form the cell partitions for the FE model prior to the material
assignation. Each cutting plane is defined by three points, this is done with a MatLab algorithm
that can be found in Appendix 1. A text file is generated with all three coordinates for each
plane.
4. Create an FE model, generate the partitions using the cutting planes, and assign a material to
each cell depending on the volume fraction and the chosen material interpolation rule.
This can be seen in the Python code in Appendix 3.
5. The FE AC model is subjected to axial compression loading, and the reaction force on top of
the specimen is calculated. This is within the same code as the previous step.
6. The results of the response force of the AC specimen are analyzed and compared to those
obtained using the different grid divisions and material assignation rules.
The mechanical properties of each cell are assigned using a binary interpolation rule in Step 4. A binary rule was chosen to reduce the computational cost and to simplify the material property to one of the two selected phases (i.e., a cell is considered aggregate or mortar, depending on the aggregate fraction and the assignation rule). Castillo and Al-Qadi (2019) showed promising results with a binary rule, though it required some adjustments to improve the results. Each step and the interpolation rules are explained in depth in this section.
2.1. X-Ray CT scans reconstruction
To reconstruct the sample, it was important to find the most appropriate way to process the images. The decision was to reconstruct the scanned AC specimen using a pixel color threshold. The criterion for choosing the best threshold range was qualitative, because to the naked eye it is easy to differentiate between phases. The threshold range was therefore changed gradually, and the resulting image reconstructions were compared. Once a range was chosen, the total volume fraction of aggregates within the specimen was calculated; it was expected to fall between 60 and 70% (i.e., 60 to 70% of the total volume of the specimen consists of aggregates). In a previous study, Zelelew and Papagiannakis (2011) found that the threshold for coarse aggregate should be above 158 on a grey scale (0-255). However, such a value should not be taken as exact: many variables affect the pixel values and frequencies (i.e., the histogram) of a given image, including the image acquisition technique, lighting quality, resolution, and previous image manipulation. The overall volume fraction
calculated for the analyzed AC specimen was 61.67%, and the distribution between the scanned 2D slices
can be observed in Figure 2.
Figure 2. Average volume fraction distribution
The selected threshold range for aggregate was 165 to 215. Figure 3 shows the complete grey-level histogram for a 2D sample image. The upper limit was set to avoid white values that represent glares in the scans. Figure 4 shows the original image, followed by a color equalization that makes the phases easier to distinguish, and the reconstruction with the two material phases. Pixels in yellow are considered part of an aggregate and assigned the value 1, while all other pixels are assigned the value 0; this simplifies the later volume fraction calculation. Appendix 2 contains the full MatLab code used to adjust and reconstruct the images.
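The actual implementation is the MatLab script in Appendix 2; the thresholding itself, however, reduces to a few array operations. The sketch below (Python with NumPy, function names hypothetical) binarizes a grey-level slice using the 165-215 aggregate range selected above and computes the aggregate volume fraction of a binarized stack:

```python
import numpy as np

# Binarize a grey-level CT slice into the two phases used in this study:
# pixels inside the selected threshold range (165-215) are labelled 1
# (aggregate); everything else is labelled 0 (asphalt mortar).
def binarize_slice(img, lower=165, upper=215):
    img = np.asarray(img)
    return ((img >= lower) & (img <= upper)).astype(np.uint8)

# The overall aggregate volume fraction is the mean of the binary stack.
def aggregate_fraction(binary_stack):
    return float(np.mean(binary_stack))
```

Stacking the 70 binarized slices and taking the mean of the stack reproduces the kind of overall aggregate fraction reported above (61.67% for this specimen).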
Figure 3. 2D Image grey-level histogram
Figure 4. Image reconstruction, a) Original, b) Color fix, c) Phase division
2.2. Grid generation and volume fraction
For this step, the idea is to impose a grid on the reconstructed volume to obtain an even division into cells. Cubic cells were chosen to simplify the problem and reduce the computational cost, with each cell containing, on average, the same number of pixels. The grids were generated for each selected number of partitions (n): 2, 4, 6, 8, 10, 15, 20, 25, 30, and 40. The number of cells is n3. Notice that the larger n values are more widely spaced, both to limit the computational cost and because the results are expected to change less at higher n, while at lower n the change is expected to be more significant.

The size of each cell is calculated as the total number of pixels (359 on each edge) divided by the number of partitions. That result is rounded down and taken as the standard cell width; the remainder of the division is then distributed as additional pixels among the first cells, making them slightly wider. The same procedure is applied to the stack of 2D images to give the cells their thickness (each image is assumed to be 1 mm thick), so the initial cells are slightly thicker. The procedure is considered acceptable because a pixel is less than 0.2 mm wide. Once all the cells have been generated, the pixel values within each cell are summed and divided by the total number of pixels in the cell; since aggregate pixels carry the value 1, this yields the fraction of the cell corresponding to the aggregate phase. The coordinates of the center of each cell are also calculated and recorded. Figure 5 shows how the sample looks after reconstruction with a grid imposed (n = 2 in this case).
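The cell sizing and per-cell volume fraction computation described above can be sketched as follows (a Python/NumPy sketch of the procedure, with illustrative names; the actual code is the MatLab script in Appendix 2):

```python
import numpy as np

# Split `total` pixels into n cell widths: the base width is total // n,
# and the remainder pixels are spread one-by-one over the first cells,
# making them slightly wider (as described in Step 2).
def cell_widths(total, n):
    base, rem = divmod(total, n)
    return [base + 1 if i < rem else base for i in range(n)]

# Per-cell aggregate volume fraction of a binary 3D stack (1 = aggregate).
def cell_fractions(volume, n):
    volume = np.asarray(volume)
    edges = [np.cumsum([0] + cell_widths(s, n)) for s in volume.shape]
    fractions = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                cell = volume[edges[0][i]:edges[0][i + 1],
                              edges[1][j]:edges[1][j + 1],
                              edges[2][k]:edges[2][k + 1]]
                fractions[i, j, k] = cell.mean()
    return fractions
```

For example, 359 pixels divided into 4 partitions gives widths of 90, 90, 90, and 89 pixels, so the first cells are slightly wider, as described above.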
Figure 5. Sample grid division n = 2
A specific image should not be compared directly with a reconstructed group of cells, as the cells represent a 3D stack of images rather than a single 2D image. When the number of divisions is 2, a 2D view shows a stack of 35 original scanned images; when n is 30, a 2D view represents a stack of about two scanned images. Figure 6 shows how the 2D view of groups of cells changes with n. The earlier images, before Figure 6f, do not properly represent the microstructure. Figure 6h shows the best representation, as each cell there spans only one image rather than a stack of consecutive images.
Figure 6. Group of cell reconstruction view with different number of divisions a) n=2, b) n=4, c) n=6, d) n=8, e) n=15, f) n=30, g) n=40, h) n=70
Table 4 presents the average edge length of a cell for the divisions shown in Figure 6. As the number of divisions increases, the reconstructed model becomes clearer, because each cell better represents the internal microstructure of the AC sample. The maximum coarse aggregate size in the specimen is 1 in (25 mm), so as the cells get smaller, the aggregate shapes are better described instead of only their general locations. An additional challenge is that cells span more than one image, so it is not possible to make a strict comparison between reconstructions with different numbers of divisions, or against the actual scan.
Table 4. Cube dimensions with respect to n

n     Average cube edge length [mm]
2     35.00
4     17.50
6     11.67
8     8.75
15    4.67
30    2.33
40    1.75
70    1.00
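The relation between n, the cell count, and the average cell edge length follows directly from the 70 mm specimen edge; it can be checked with a couple of lines (illustrative Python):

```python
# Cell count and average cell edge length for a 70 mm cubic specimen:
# n divisions per axis give n**3 cells of edge length 70/n mm.
SPECIMEN_MM = 70

def grid_summary(n):
    return n**3, SPECIMEN_MM / n

for n in (2, 4, 6, 8, 10, 15, 20, 25, 30, 40):
    cells, edge = grid_summary(n)
    print(f"n={n}: {cells} cells, edge {edge:.2f} mm")
```

For instance, n = 40 gives 64,000 cells of 1.75 mm edge length, the finest grid used in this study.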
2.3. Cutting planes
The FE model will create an LxLxL cube that must be partitioned. Each partition plane is referenced by 3 points; partition planes are used to create the cubic cells in the FE model prior to the material assignment. The reference points are defined as follows:
1. The first point is on one of the axes at a multiple of the distance L/n, where L is the edge length of the cube (70 mm) and n is the number of partitions, i.e., (0, i*L/n, 0).
2. The second point lies directly above the first: it is on the same axis but offset by L along another axis, i.e., (0, i*L/n, L).
3. The third point is offset from the second point along the remaining perpendicular axis and marks the third coplanar point of the partition plane, i.e., (L, i*L/n, L).
All three points can be observed in Figure 7, where i is an integer between 1 and n-1. The generation of these points occurs within a loop, shown in Appendix 1. On each iteration, i increases, creating the next set of reference points offset by L/n from the previous set, until the desired number of sets (n-1) is reached. The loop then generates reference points for a perpendicular plane, until all three reference planes (xy, xz, and yz) have n-1 sets of reference points. A minimum of 3 partition planes is needed to create cubic cells; in Figure 5, 3 planes are used, each perpendicular to one of the reference planes (xy, xz, yz).
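The point-generation cycle described above can be sketched in a few lines (Python for illustration, with a hypothetical function name; the actual generator is the MatLab script in Appendix 1):

```python
# Generate the three coplanar reference points for every partition plane:
# for each of the three plane families, planes sit at multiples of L/n
# (i = 1 .. n-1). Point layout follows the description above: p1 on the
# axis, p2 offset by L along a second axis, p3 by L along the third.
def partition_planes(L=70.0, n=4):
    planes = []
    for axis in range(3):  # one family of planes per reference plane
        for i in range(1, n):
            d = i * L / n
            p1, p2, p3 = [0.0] * 3, [0.0] * 3, [0.0] * 3
            p1[axis] = p2[axis] = p3[axis] = d
            p2[(axis + 1) % 3] = L
            p3[(axis + 1) % 3] = L
            p3[(axis + 2) % 3] = L
            planes.append((tuple(p1), tuple(p2), tuple(p3)))
    return planes
```

Each call yields 3*(n-1) planes, i.e., n-1 planes perpendicular to each of the three reference planes.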
Figure 7. 3 points for cutting plane
2.4. FE model partitions and material assignment from interpolation rule
The FE model is a 3D 70x70x70 mm deformable cube in Abaqus that is assigned a material (aggregate or asphalt mortar) in each partition, depending on the aggregate volume fraction of each cell (calculated in Step 2). The partitions are created by cutting the cube with n-1 planes on each axis; the cutting planes are defined by the three points calculated in the previous step. Figure 8 shows the FE model: Figure 8a shows the raw partitions generating 8 cells, and Figure 8b shows the material assigned to all 8 cells (the color indicates that a material has been assigned). Note that in the FE model all cells are the same size; they do not have the additional-pixel issue mentioned in Step 2.
Figure 8. FE model a) Partitions, b) Material assignment
The cell material assignment depends on the chosen interpolation rule. For this study, two binary rules
were chosen: the 50/50 rule and the 75/25 rule. Each rule sets a volume fraction limit (0.50 and 0.75,
respectively): a cell with an aggregate fraction below that limit is assigned asphalt mortar, while a cell at
or above the limit is assigned aggregate. The aggregate volume fraction of each cell is known at this
point, as it was calculated in Step 2. These rules were chosen due to the promising results published by
Castillo and Al-Qadi (2019), who found that this type of true/false criterion gives good results when
modeling a low-density AC and acceptable results when modeling a high-density AC. This study seeks the
most appropriate limit, so it evaluates two values.
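The binary rules reduce to a single threshold comparison per cell. A minimal sketch (illustrative names; the threshold convention follows the Appendix 3 script, which assigns aggregate when the fraction is greater than or equal to the limit):

```python
def assign_phase(agg_fraction, limit):
    """Binary interpolation rule: cells at or above the volume fraction
    limit become aggregate, the rest asphalt mortar."""
    return 'aggregate' if agg_fraction >= limit else 'mortar'

assign_phase(0.25, 0.50)  # mortar under both rules
assign_phase(0.60, 0.50)  # aggregate under the 50/50 rule
assign_phase(0.60, 0.75)  # mortar under the 75/25 rule
```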
The materials used for the FE simulation are the same used by Castillo and Al-Qadi (2019). The coarse
aggregate is an elastic material with a Young's modulus of 25 GPa and a Poisson's ratio of 0.16. The
asphalt mortar has an instantaneous elastic Young's modulus of 112 MPa and a Poisson's ratio of 0.4. It
also has a viscoelastic behavior, described using a Prony series whose parameters are normalized for use
in Abaqus, as presented in Table 5. The viscoelastic properties were obtained at 25 °C. Details on the
testing and further information can be found in Kim et al. (2005).
Eq 1 shows the Prony series; however, Abaqus uses 𝑔𝑖, defined as 𝐺𝑖/𝐺0. More information about
viscoelastic materials and their modeling in Abaqus can be found in the Abaqus manual (Abaqus
documentation, 2020).
𝐺(𝑡) = 𝐺0 − ∑ᵢ₌₁ⁿ 𝐺𝑖 [1 − exp(−𝑡/𝜏𝑖)],  and  𝐺∞ = 𝐺0 − ∑ᵢ₌₁ⁿ 𝐺𝑖      (Eq 1)
Table 5. Elements of the Prony series (normalized values): mortar viscoelastic mechanical properties.
 i   𝑔𝑖 Prony    𝑘𝑖 Prony¹   𝜏𝑖 Prony
 1   0.653333    0           0.0015716
 2   0.2277331   0           0.0163883
 3   0.0968486   0           0.0994745
 4   0.0185536   0           0.7358037
 5   0.0029643   0           6.7740964
 6   0.0004768   0           65.786517
 7   7.54E-05    0           643.36493
 8   1.19E-05    0           6514.2857
 9   2.25E-06    0           80976.19
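Eq 1, with the normalized coefficients 𝑔𝑖 and 𝜏𝑖 from Table 5, can be evaluated directly. A short Python sketch (function name is illustrative) of the normalized relaxation modulus g(t) = G(t)/G0; note that g(0) = 1 and the long-term value 1 − Σ𝑔𝑖 is nearly zero for these coefficients:

```python
import math

# Normalized (g_i, tau_i) pairs from Table 5
PRONY = [(0.653333, 0.0015716), (0.2277331, 0.0163883),
         (0.0968486, 0.0994745), (0.0185536, 0.7358037),
         (0.0029643, 6.7740964), (0.0004768, 65.786517),
         (7.54e-05, 643.36493), (1.19e-05, 6514.2857),
         (2.25e-06, 80976.19)]

def g_normalized(t):
    """Normalized shear relaxation modulus g(t) = G(t)/G0 following Eq 1."""
    return 1.0 - sum(gi * (1.0 - math.exp(-t / ti)) for gi, ti in PRONY)
```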
Figure 9 displays the aggregate fraction with n = 70 for the image shown in Figure 4. The color bar on
the right describes the color code used, where 1 corresponds to complete aggregate and 0 to no aggregate
(asphalt mortar). Figure 10 shows how the interpolation rules work: Figure 10a represents the 50/50 rule
and Figure 10b the 75/25 rule. Both plots include an example with a volume fraction of 0.25 and another
with 0.6. In both cases, the 0.25 volume fraction is assigned the mortar phase, while the 0.6 volume
fraction is assigned the aggregate phase under the first rule and the mortar phase under the second rule.
Appendix 3 has the full code to generate the partitions from the points calculated in the previous step
and then assign the material to each individual cell depending on the volume fraction calculated in Step 2
and the chosen binary rule.
Figure 9. Volume fraction for a specific image (1 means aggregate, 0 means asphalt mortar)
¹ For 3D simulations it is recommended to use bulk properties (𝑘𝑖) identical to 𝑔𝑖. Nevertheless, all the results were
calculated with no bulk properties; this was tested for 8 divisions and the results showed a maximum difference under 0.0005%.
Figure 10. Interpolation rules to assign the mechanical properties of a cell: a) 50/50 rule, b) 75/25 rule
2.5. FE simulation under axial compression
Once all the previous steps are complete, the model is created in Abaqus® and ready for simulation.
Two simulations are carried out for each n, one per material assignation rule, for a total of 20
simulations. The computational mechanical test corresponds to the application of an axial compression
load under displacement-controlled conditions: a constant vertical displacement rate over 0.5 s, with a
total displacement of 0.1 mm. The base has a full displacement constraint, while the top face is free to
move only vertically, as the controlled displacement is applied on that face. A rigid body was not used to
generate a distributed load because the FE software did not allow assembling a rigid 3D plate with the
3D deformable body; a constraint was used instead, which gives the same results. In this way the
movement is controlled, and the simulation is comparable to an axial load test with the superior face
glued to the top platen and the inferior face glued to the bottom platen. Figure 11 shows the mentioned
boundary conditions for the model with n = 2. The mesh size is the same as the cell size in each case.
Figure 11. FE model constraints: a) n = 2, b) n = 10, c) n = 40
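Because the mesh size equals the cell size, the model size grows with the cube of n. A small sketch of the element and node counts (assuming one linear 8-node brick element per cell, an assumption not stated explicitly in the text):

```python
def model_size(n):
    """Cells (hexahedral elements) and corner-node count for an n x n x n
    grid meshed one element per cell: n^3 elements, (n+1)^3 nodes.
    Sketch under the stated one-element-per-cell assumption."""
    return n ** 3, (n + 1) ** 3

for n in (2, 8, 40):
    cells, nodes = model_size(n)  # e.g. n = 40 gives 64,000 cells
```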
All the results are measured at each node of the superior face. The displacement is identical for all
nodes, as it is the controlled variable, while the reaction force differs at each node.
2.6. Analysis of the AC specimen response force for the different grid divisions and material
assignation rules
Using the code in Appendix 4, the results are compiled and then opened in MatLab for processing.
The reaction forces of all nodes are summed at each time increment of the displacement interval to
obtain a single reaction force. That force is then divided by the area of the superior face (70x70 mm) to
calculate the stress on the superior face at each increment of the controlled displacement. This was done
for each grid division and binary rule. During this step, it would have been valuable to compare the
calculated stress results with laboratory testing of the specimen.
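The post-processing step described above can be sketched as follows; the nodal force values in the example are hypothetical, not the thesis results:

```python
def top_face_stress(nodal_forces, side=70.0):
    """Sum the vertical nodal reaction forces on the superior face and
    divide by its area (side x side, in mm) to get stress in N/mm^2 (MPa).
    nodal_forces: one reaction value per top-face node (hypothetical data)."""
    return sum(nodal_forces) / (side * side)

# e.g. 9 top-face nodes (n = 2 mesh) each reacting 4900 N
sigma = top_face_stress([4900.0] * 9)
```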
3. Results and analysis

The results and analysis are divided into three sections. The first section presents the Stress-Strain
curves, where the two rules are compared and the tendencies are described. The second section presents
the force and elastic modulus at the end of the simulation for both rules, where the convergence of the
end results and the change in the elastic modulus are observed. Finally, the third section presents the
computational cost, comparing the time to create the models and the time to process them. All the
measured results were taken on the nodes of the superior face.
As explained before, the mechanical simulation was done in Abaqus CAE® and consisted of applying
a controlled compressive displacement of 0.1 mm over a 0.5 s interval. The reaction force was calculated
by adding the forces at all nodes of the superior face, and the reported stress by dividing that sum by
the total area of the face. The strain is the displacement divided by the initial height of the specimen.
3.1. Stress-Strain curve
The Stress-Strain curves were constructed using as input the applied displacement and the sum of the
reaction force registered at all nodes of the superior face. The resulting curves for the first rule (50/50)
may be seen in Figure 12a, while the results for the second rule (75/25) are observed in Figure 12b.
Opposite tendencies are observed in Figure 12. As n increases under the 50/50 rule, the stress curve
tends to decrease, as seen in Figure 12a. On the contrary, as n increases under the 75/25 rule, the stress
curve tends to increase, as seen in Figure 12b. The first trend was expected: when n is low the number
of cells is low, and each cell has approximately the average aggregate fraction (61%), so all of the cells
are simulated as aggregate. Then, as the number of divisions increases, more cells take the properties of
the asphalt mortar, causing the stress to fall. The contrary happens for the second rule: a low number of
divisions makes all the cells take the properties of the mortar and, as the divisions increase, the stress
increases as more cells take the properties of aggregate.
The stress in the 50/50 rule clearly converges when the number of divisions reaches 8 (512 cells).
After this value, the stress results show less significant changes (i.e., less than 3% between consecutive
n values), which is evident as the curves get closer together. This differs from the 75/25 material
assignation rule, where results seem to converge near 6 or 8 divisions but then disperse again, with
stress results significantly different from one n to the next (i.e., more than 34% between consecutive
curves). The reason for this unexpected behavior of the 75/25 rule could be the uneven relationship
between phases: as the number of divisions increases and the cells are subdivided, individual cells can
change phase and cause the mentioned behavior.
Figure 12. Stress-Strain curves for the specimen: a) 50/50 rule, b) 75/25 rule
When comparing the results between the two rules, a significant discrepancy is observed. Figure 13
presents the results from Figure 12 within the same figure: the orders of magnitude differ, the trends
are different, and the curves do not seem to converge. It may also be seen that as the number of divisions
increases, the results of both rules tend towards each other, the second rule faster than the first, since
the first rule is already converging. In addition, the observed behavior seems more elastic than
viscoelastic: in Figure 12a all the curves appear completely elastic, while Figure 12b shows a somewhat
more viscoelastic, but still largely elastic, behavior.
Figure 13. Rule Stress-Strain comparison
If the binary interpolation rules are defined as X/(100−X), where X is the limit aggregate fraction, it
could be hypothesized that choosing an X near the mean aggregate fraction (61% for this study) would
produce very low variability and convergence with a small number of divisions. This does not mean it is
the correct interpolation rule; to decide which rule is most appropriate, it is important to test the real
specimen in the laboratory after it is scanned. This was not possible for this study, as only the CT images
were provided by Michigan State University.
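The X/(100−X) hypothesis can be illustrated with a toy calculation of how the limit shifts the phase split; the cell fractions below are synthetic, not the thesis data:

```python
def aggregate_share(cell_fractions, limit):
    """Share of cells assigned the aggregate phase under an X/(100-X)
    binary rule with limit X. Sketch with illustrative inputs."""
    agg = sum(1 for f in cell_fractions if f >= limit)
    return agg / len(cell_fractions)

fractions = [0.10, 0.45, 0.55, 0.62, 0.70, 0.80, 0.90, 0.76]  # synthetic
aggregate_share(fractions, 0.50)  # 50/50 rule: most cells become aggregate
aggregate_share(fractions, 0.75)  # 75/25 rule: far fewer aggregate cells
```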
3.2. Force and elastic modulus at the end of simulation
3.2.1. Force at the end of simulation
The convergence and trend of a simulation can be analyzed by looking at the results at the end of
the simulation; the computational trade-off can also be examined, as it shows how results change with
the number of divisions. The results in Figure 14a show that the change in force under the 50/50 material
assignation rule is very low after n = 8, and almost insignificant between 30 and 40 divisions (i.e., less
than 4%). The behavior is very similar to that in Figure 8 of the Castillo and Al-Qadi (2019) study. The
results have converged, and an increase in divisions does not significantly change the force.
Figure 14b shows a different tendency. First, it does not show a clear convergence as the previous
figure did. Second, the force tends to increase rather than decrease; there is also a peak around 8
divisions, after which the force decreases until 15 divisions and then starts increasing again. More
simulations are required to find the convergence and analyze the behavior. It is interesting to see in
Figure 15 that, as n increases, the end force of the specimen would seem to lie between the two
interpolation rules, even though convergence has not been reached for the second rule.
Figure 14. Force at the end of the simulation: a) 50/50 rule, b) 75/25 rule
Figure 15. Force at end of the simulation combined
To get an actual comparison it is important to test the real specimen under the same simulated
conditions, to understand what the behavior should look like. The test must be done at 25 °C, with a
constant displacement and with the bottom and top faces glued to the testing machine, to reproduce
the simulated constraints. The results from the simulations will be slightly more rigid, as the simulation
does not include the air voids that the real specimen has.
3.2.2. Elastic modulus at the end of simulation
The elastic modulus is calculated from the values observed in Figure 15. As expected, the modulus
results follow the same trend as the force at the end of the simulation, as they are all divided by the
same strain. Conventional AC has a Young's modulus between 2,500 and 4,000 MPa, and as observed in
Figure 16, when the number of divisions is 40 the modulus is around 5,000 MPa. The fact that no air
voids are considered could cause the increase in the rigidity of the sample. The figure also shows the
same result as Section 3.1, where the modulus with n = 2 is that of the aggregate. The modulus was
calculated because the response is almost linear, as seen in Figure 13.
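Because the response is almost linear, the modulus reduces to the end stress divided by the applied strain (0.1 mm over the 70 mm specimen height). A sketch with an illustrative stress value, not a thesis result:

```python
def youngs_modulus(stress_end, disp=0.1, height=70.0):
    """Secant Young's modulus at the end of the simulation: end stress (MPa)
    divided by the applied strain disp/height. Illustrative sketch."""
    return stress_end / (disp / height)

E = youngs_modulus(7.0)  # a hypothetical 7 MPa end stress
```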
Figure 16. Young’s modulus at end of simulation
3.3. Computational cost
An increase in computational time is expected as the number of divisions increases, and that is what
is observed in Figure 17. The increase in time is not significant until the number of divisions reaches 20;
as seen in Figure 17a, the increase from 20 to 25 is substantial (i.e., more than 300%), and each increase
after that is larger still. Figure 17b shows the same values on a logarithmic scale, which describes the
tendency much better. For that reason, it is important to find the most appropriate number of divisions,
because the computational time and resources needed increase exponentially with the number of
divisions.
For this study, three additional divisions were planned: 50, 60 and 70. However, it was not possible
to generate the models for those divisions; in the process, the RAM (Random Access Memory) used to
generate the model without a GUI (Graphical User Interface) exceeded 80 GB. The resources needed to
generate these models are beyond those of a conventional computer, and the virtual machine used had
not been able to complete the missing models at the time of writing. This is similar to the computational
challenge described by You et al. (2012).
Figure 17. Computational cost to create the models: a) Normal scale, b) Logarithmic scale
The computational cost to run the models was not significant compared to the time required to
create them. Figure 18 shows that the run time is under 20 minutes for all but one simulation. The last
simulation for the first rule had an unexpected peak slightly over one hour, while the corresponding
simulation with the second rule did not exceed 20 minutes. That increase can be attributed to a variety
of factors, such as a parallel process or an internal error of the virtual machine, which will not be
discussed here. However, it is important to note that it is an outlier (over 87% difference) and not the
expected difference (around 16%) between the two rules for the same number of divisions. This is
expected, as the procedure is the same and the model should be identical under different material
assignment rules. Once the model has been created, the computational cost to run it is not significant,
and the resources required are those of a conventional computer.
Figure 18. Computational cost to run the models
The research vice-presidency of Universidad de los Andes granted a virtual machine with 256 GB of
RAM and 24 CPU cores. The models for 50 and 60 divisions had been running for over 14 days without
finishing, while the model with 70 divisions crashed after 5 days. The CPU required is not significant
compared with the RAM needed to create the models. It would be interesting to see whether the results
change if the mesh is made much smaller than the cells instead of matching their size; theoretically, the
precision should increase, and so should the computational cost to run, which, as mentioned, turned out
to be less significant than initially hypothesized.
4. Conclusions

An AC specimen scanned using X-ray CT techniques was digitized using a threshold for the pixels and
then divided into grids to determine the mechanical response of each cell or division. Only two main
phases were considered for the AC specimen: coarse aggregate and asphalt mortar. The reconstructed
cube was divided into cubic cells and each cell was assigned a phase depending on the aggregate fraction
measured during the digitalization and the interpolation rule. The numbers of divisions used to create
the cells were (2, 4, 6, 8, 10, 15, 20, 25, 30, 40), and the number of cells is the number of divisions cubed.
The interpolation rules chosen followed a Boolean logic, with a different limit for each rule: 50% for the
first rule and 75% for the second rule.
Binary interpolation rules were chosen because they require a low computational cost to run, due to
the simplification of the material properties. The computational challenge is therefore associated with
the creation of the model and the generation of all the required partitions; this challenge is inherent to
any grid division methodology. Increasing the number of divisions produces an exponential increase in
the time to create the model, so a powerful machine is required, with over 80 GB of RAM for the program
to run. Consequently, the compromise between the results and the computational cost is very
important, as the computational cost increases rapidly with small changes in the number of divisions.
After the cells were assigned a material property, the model was implemented as an FE model and
a displacement-controlled axial compression load was applied on the surface of the specimen. The stress
measured on the top face as a result of the applied loading condition was analyzed, as well as the
reaction force measured on the top face at the end of the simulation and the computational time to
create and run each model.
The results show that the 50/50 material assignation rule converges rapidly with an increase in the
number of grid divisions, opening the possibility of making decisions on the trade-off between the results
and the computational cost. On the contrary, the 75/25 rule did not converge and showed a trend of
rapid increase in the reaction force. Due to this behavior it is not possible to make a compromise decision
between results and computational cost, so more results are required to find a convergence. It was
observed that the end reaction force of the actual specimen should be between 7 and 40 kN, as those
are the limits imposed by the two rules.
Although the two interpolation rules did not converge to the same stress or cross each other, they
showed a clear tendency in the way the curves move as the number of divisions increases and the
microstructure is better represented. It was found that the 50/50 interpolation rule does not give enough
weight to the aggregate, as the stress decreases with the increase in divisions, contrary to the 75/25
rule, which gives too much weight to the aggregate and not enough to the mortar phase, making the
stress increase as the divisions increase. The results suggest that the correct interpolation rule must lie
between the two analyzed rules to best describe the microstructure. Nevertheless, the real response of
the specimen, obtained from the laboratory or from 3D simulations using actual aggregate particles, is
necessary to accurately choose the interpolation rule.
Even though the results only show an approximation to the actual response, it is important to note
the capacity of the binary interpolations to converge and reduce the computational cost, not only by
describing the multiphase material using grid divisions but also by using a low number of divisions to get
an appropriate result. The use of binary interpolation rules facilitated the model creation, as only two
materials and two sections had to be created, and not a specific material for each cell.
Even though grid divisions and binary interpolation rules reduce computational time, the creation of
the models can take several tens of thousands of minutes and requires a highly capable computer to
successfully accomplish the job. The two most important prospects of grid divisions are:
1. A small n can be identified which represents the real/expected behavior of a specimen.
2. The aggregate distribution can be better understood, to then generate random specimens with a
small number of divisions.
Recommendations for future studies on this topic include:
1. Get the response of a real specimen in the laboratory.
2. Find the best binary interpolation rule to assign the properties.
3. Understand the aggregate distribution.
4. Implement a program that can position the material stochastically in 3D.
5. Acknowledgements

The author would like to thank Dr. Silvia Caro and Dr. Daniel Castillo for their immense support in the
development of this study and their generosity in sharing their knowledge and experience. The author
would also like to thank the Universidad de los Andes research vice-presidency for the resources granted
and all the computational power required for the study. Finally, thanks to Dr. Emin Kutay from Michigan
State University for providing the raw CT images and allowing their use.
6. References

Abaqus documentation. (2020). Time domain viscoelasticity.
Caro, S., Castillo, D., & Masad, E. (2015). Incorporating the heterogeneity of asphalt mixtures in flexible
pavements subjected to moisture diffusion. International Journal of Pavement Engineering.
Caro, S., Castillo, D., & Silva, M. S. (2014). Methodology for Modeling the Uncertainty of Material
Properties in Asphalt Pavements. Journal of Materials in Civil Engineering.
Caro, S., Castillo, D., Darabi, M., & Masad, E. (2016). Influence of different sources of microstructural
heterogeneity on the degradation of asphalt mixtures. International Journal of Pavement
Engineering.
Caro, S., Masad, E., Bhasin, A., & Little, D. (2010). Coupled Micromechanical Model of Moisture-Induced
Damage in Asphalt Mixtures. Journal of Materials in Civil Engineering.
Caro, S., Masad, E., Silva, M. S., & Little, D. (2010). Stochastic micromechanical model of the deterioration
of asphalt mixtures subject to moisture diffusion processes. International Journal for Numerical
and Analytical Methods in Geomechanics.
Castillo, D., & Al-Qadi, I. L. (2019). Mechanical modelling of asphalt concrete using grid division.
International Journal of Pavement Engineering.
Castillo, D., & Caro, S. (2014). Effects of air voids variability on the thermomechanical response of asphalt
mixtures. International Journal of Pavement Engineering.
Castillo, D., & Caro, S. (2014). Probabilistic modeling of air void variability of asphalt mixtures in flexible
pavements. Construction and Building Materials.
Castillo, D., Caro, S., Darabi, M., & Masad, E. (2015). Studying the effect of microstructural properties on
the mechanical degradation of asphalt mixtures. Construction and Building Materials.
Castillo, D., Caro, S., Darabi, M., & Masad, E. (2016). Modelling moisture-mechanical damage in asphalt
mixtures using random microstructures and a continuum damage formulation. Road Materials
and Pavement Design.
Castillo, D., Caro, S., Darabi, M., & Masad, E. (2017). Influence of aggregate morphology on the mechanical
performance of asphalt mixtures. Road Materials and Pavement Design.
Chen, J., Wang, H., & Li, L. (2017). Virtual testing of asphalt mixture with two-dimensional and three-
dimensional random aggregate structures. International Journal of Pavement Engineering.
Chen, J., Wang, H., Dan, H., & Xie, Y. (2018). Random Modeling of Three-Dimensional Heterogeneous
Microstructure of Asphalt Concrete for Mechanical Analysis. Journal of Engineering Mechanics.
Dai, Q. (2011). Two- and three-dimensional micromechanical viscoelastic finite element modeling of stone-
based materials with X-ray computed tomography images. Construction and Building Materials.
Fenton, G. A., & Griffiths, D. (2008). Risk Assessment in Geotechnical Engineering. Hoboken, N.J.: John
Wiley & Sons.
Kim, Y.-R., Allen, D. H., & Little, D. N. (2005). Damage-induced modeling of asphalt mixtures through
computational micromechanics and cohesive zone fracture. Journal of Materials in Civil
Engineering.
Manrique-Sanchez, L., & Caro, S. (2019). Numerical assessment of the structural contribution of porous
friction courses (PFC). Construction and Building Materials.
Masad, E., Muhunthan, B., Shashidhar, N., & Harman, T. (1999). Internal Structure Characterization of
Asphalt Concrete Using Image Analysis. Journal of Computing in Civil Engineering.
Wang, H., Huang, Z., You, Z., & Chen, Y. (2014). Three-dimensional modeling and simulation of asphalt
concrete mixtures based on X-ray CT microstructure images. Journal of Traffic and Transportation
Engineering.
Wills, J., Caro, S., & Braham, A. (2017). Influence of material heterogeneity in the fracture of asphalt
mixtures. International Journal of Pavement Engineering.
You, T., Al-Rub, R. K., Darabi, M. K., Masad, E. A., & Little, D. N. (2012). Three-dimensional microstructural
modeling of asphalt concrete using a unified viscoelastic–viscoplastic–viscodamage model.
Construction and Building Materials.
Zelelew, H., & Papagiannakis, A. (2011). A volumetrics thresholding algorithm for processing asphalt
concrete X-ray CT images. International Journal of Pavement Engineering.
Appendix 1. Partition plane generator (.m)

clc
clear all
l=70;                                %Cube length
ni=[2,4,6,8,10,15,20,25,30,40,50,60,70]; %Number of cuts
for i=1:13
    B=[];
    n = ni(i)
    % Planes in z
    for z=1:n-1
        p1=[0,0,z*l/n];              %First coordinate on axis
        p2=[0,l,z*l/n];              %Second coordinate, at the end of y
        p3=[l,l,z*l/n];              %Third coordinate, at the end of x and y
        B=[B;p1;p2;p3];              %Planes
    end
    % Planes in x
    for x=1:n-1
        p1=[x*l/n,0,0];
        p2=[x*l/n,l,0];
        p3=[x*l/n,l,l];
        B=[B;p1;p2;p3];
    end
    % Planes in y
    for y=1:n-1
        p1=[0,y*l/n,0];
        p2=[0,y*l/n,l];
        p3=[l,y*l/n,l];
        B=[B;p1;p2;p3];
    end
    fileName = strcat('cc',num2str(n),'.txt')
    dlmwrite(fileName,B)
end
Appendix 2. Specimen reconstruction and volume fraction calculation (.m)

clc
clear all
nii=[2,4,6,8,10,15,20,25,30,40,50,60,70]; %Cuts per axis
for ii=1:13
    n = nii(ii);
    ni=30;           %First image
    nf=100;          %Last image
    j=0;             %Counts registered images
    THsup=215;       %Upper threshold
    THinf=165;       %Lower threshold
    for i=1:150
        D = strcat(num2str(i),'.jpg');    %Image name
        if exist(D, 'file')
            j = j+1;
            I = imread(D);                %Import image
            I = I(77:435,77:435);         %Cut image into square
            for x=1:length(I)
                for y=1:length(I)
                    if I(x,y)>=THinf && I(x,y)<=THsup
                        I(x,y)=1;         %Aggregate takes number 1
                    else
                        I(x,y)=0;         %Not aggregate takes number 0
                    end
                end
            end
            S{j} = I;                     %Cell with each image
        end
    end
    %3D image and number of cuts are sent; volume fraction, coordinates and
    %fraction maps are returned (third output added so imagesc(ima{50}) works)
    [volFrac,gridCoord,ima] = VolFrac(S,n);
    vf = [gridCoord volFrac];             %Output matrix
    sum(volFrac)/n^3
    figure
    imagesc(ima{50})
    figure
    imagesc(S{50})
    fileName = strcat('cm',num2str(n),'.txt')
    %dlmwrite(fileName,vf)
end

function [volFrac,gridCoord,imagenn] = VolFrac(mat,grids)
%Function to calculate the volume fraction between two material phases within a grid
l = length(mat{1});      %Pixels wide
n = floor(l/grids);      %Pixels per grid
nr = rem(l,grids);       %Additional pixels
ni = 25;                 %First image of interest
nf = ni+69;              %Last image of interest (70mm)
k = 0;                   %Image counter
p = l/70;                %Pixel size (≈5.12)
nn = 70/grids;
% Reduce sample to 70 images
for j=ni:nf
    k=k+1;               %Images
    Im{k}=mat{j};        %Save image
end
ns = floor(k/grids);     %Grid thickness
nsr = rem(k,grids);      %Additional thickness
g=0;                     %Grid counter
gz=0;                    %Grid counter in z
for i=1:grids
    if i==1
        nss = 1;         %Initial image value
    else
        nss = z+1;       %First image
    end
    gz = gz+1;           %Grid z +1
    gx = 0;              %Restart grid x counter
    xf = 0;              %Restart final image in x
    nrx = nr;            %Restart additional pixels in x
    % Iterate in x
    for x=1:grids
        gx = gx+1;       %Grid in x +1
        gy = 0;          %Restart grid in y
        xi = xf+1;       %Lower value in grid
        %Add additional pixels if necessary
        if nrx > 0
            xf = xi+n;       %Upper grid value
            nrx = nrx-1;     %One additional pixel less
        else
            xf = xi+n-1;     %Upper pixel in the grid
        end
        xt = xf-xi;          %Grid width
        yf = 0;              %Restart final image y
        nry = nr;            %Restart additional pixels in y
        %Iterate in y
        for y=1:grids
            gy = gy+1;       %Grid count y +1
            yi = yf+1;       %Lower grid value
            %Add additional pixels if necessary
            if nry > 0
                yf = yi+n;       %Upper grid value +1
                nry = nry-1;     %Additional pixels less
            else
                yf = yi+n-1;     %Upper grid value
            end
            yt = yf-yi;          %Grid width
            g = g+1;             %Grid counter
            vol{g}=0;
            %Add additional image if necessary
            if i<=nsr
                %Pass through images
                for z=nss:nss+ns
                    vol{g} = vol{g}+sum(Im{z}(xi:xf,yi:yf)); %Accumulate pixels within grid
                end
            else
                %Pass through images
                for z=nss:nss+ns-1
                    vol{g} = vol{g}+sum(Im{z}(xi:xf,yi:yf)); %Accumulate pixels within grid
                end
            end
            zt = z-nss+1;            %Images within grid
            t = (xt+1)*(yt+1)*zt;    %Total pixels in grid
            volS = sum(vol{g});      %Pixels with aggregate
            volF{g} = volS/t;        %Volume fraction
            c = [(2*x-1)*nn/2,(2*y-1)*nn/2,(2*i-1)*nn/2]; %Coordinate of grid center
            gridC{g} = c;            %Save grid coordinates
            imagen{i}(x,y)=volF{g};
        end
    end
end
%Output format as matrix
volFrac = cell2mat(volF)';
gridC = cell2mat(gridC);
gridC = reshape(gridC,3,[])';
gridCoord = gridC;
imagenn = imagen;
end
Appendix 3. Abaqus simulation script (.py) """
@author: Simon Espinosa
"""
from abaqus import *
from abaqusConstants import *
import __main__
import section
import regionToolset
import displayGroupMdbToolset as dgm
import part
import material
import assembly
import step
import interaction
import load
import mesh
import optimization
import job
import sketch
import visualization
import xyPlot
import displayGroupOdbToolset as dgo
import connectorBehavior
import os
import time
from numpy import array # for python
# display function
def disp(line): # print in terminal and ABAQUS python
console
print line
print >> sys.__stdout__, line
l = 70.0 # Dim of cube
ni = array([2,4,6,8,10,15,20,25,30,40,50,60,70]) #Num of
grids
for i in xrange(0,1): #13
tStart = time.clock()
#Creat model
Mdb()
#Model name
n = ni[i]
nn = str(n)
modelName = 'Model-'+nn
mdb.models.changeKey(fromName='Model-1',
toName=modelName)
print >> sys.__stdout__, nn
print(modelName)
# Creat cube
s =
mdb.models[modelName].ConstrainedSketch(name='__profile__',
sheetSize=400.0)
g, v, d, c = s.geometry, s.vertices, s.dimensions,
s.constraints
s.setPrimaryObject(option=STANDALONE)
s.rectangle(point1=(0.0, 0.0), point2=(l, l))
session.viewports['Viewport:
1'].view.setValues(nearPlane=462.133,
farPlane=679.652, width=455.141, height=248.429,
cameraPosition=(
20.3919, 101.652, 570.892), cameraTarget=(20.3919,
101.652, 0))
p = mdb.models[modelName].Part(name='Cubo',
dimensionality=THREE_D,
type=DEFORMABLE_BODY)
p = mdb.models[modelName].parts['Cubo']
p.BaseSolidExtrude(sketch=s, depth=l)
s.unsetPrimaryObject()
p = mdb.models[modelName].parts['Cubo']
session.viewports['Viewport:
1'].setValues(displayedObject=p)
del mdb.models[modelName].sketches['__profile__']
# Path to data points
path = 'C:\Users\...\cc'+nn+'.txt'
datalist = []
# Import data into a matrix
with open(path, "rb") as fp:
for row in fp.readlines():
tmp = row.split(",")
try:
datalist.append((float(tmp[0]),
float(tmp[1]), float(tmp[2])))
except:pass
numcor = len(datalist) # Number of data points
for i in xrange(0,numcor):
p = mdb.models[modelName].parts['Cubo']
# Creat datumPoint from coordinates importet fro
cc.txt
p.DatumPointByCoordinate(coords=datalist[i])
for ii in xrange(2,numcor,3):
p = mdb.models[modelName].parts['Cubo']
c = p.cells
pickedCells = c
v1, e1, d2 = p.vertices, p.edges, p.datums
# Creat partitions using points from cc.txt
p.PartitionCellByPlaneThreePoints(cells=pickedCells,
point1=d2[ii],
point2=d2[ii+1],
point3=d2[ii+2])
# Create materials
p = mdb.models[modelName]
p.Material(name='Agg')
mt1 = p.materials['Agg']
mt1.Elastic(table=((25000.0, 0.16), ))
p.Material(name='Mat')
mt2 = p.materials['Mat']
mt2.Elastic(moduli=INSTANTANEOUS, table=((112.0, 0.4), ))
mt2.Viscoelastic(domain=TIME, time=PRONY, table=(
    (0.653333017, 0.0, 0.001571618), (0.22773307, 0.0, 0.016388301),
    (0.096848609, 0.0, 0.099474509), (0.018553647, 0.0, 0.735803657),
    (0.002964298, 0.0, 6.774096386), (0.000476788, 0.0, 65.78651685),
    (7.53574e-05, 0.0, 643.3649289), (1.1875e-05, 0.0, 6514.285714),
    (2.25001e-06, 0.0, 80976.19048)))
p.HomogeneousSolidSection(name='Agg', material='Agg', thickness=None)
p.HomogeneousSolidSection(name='Mat', material='Mat', thickness=None)
# Save
mdb.saveAs(pathName='C:/Users/.../'+modelName+'.cae')
elapsed = (time.clock()-tStart)
disp('Total time for creating '+modelName+', '+str(elapsed/60)+' minutes.')
# Interpolation rule
vfi = array([0.5, 0.75])
for ivf in xrange(0, 2):
    # Create model
    Mdb()
    # Open model
    openMdb(pathName='C:/Users/.../'+modelName+'.cae')
    # Model name
    vfrule = vfi[ivf]  # The volume fraction rule, e.g. 50-50
    # Path to data points
    pathcm = 'C:\Users\...\cm'+nn+'.txt'
    datalistcm = []
    datalistccm = []
    # Import data into a matrix
    with open(pathcm, "rb") as fp:
        for row in fp.readlines():
            tmp = row.split(",")
            try:
                datalistcm.append((float(tmp[0]), float(tmp[1]), float(tmp[2])))
                datalistccm.append(float(tmp[3]))
            except:
                pass
    numcorcm = len(datalistcm)  # Number of data points
    for i in xrange(numcorcm):
        # Part, cells
        p = mdb.models[modelName].parts['Cubo']
        c = p.cells
        # Select cell and apply material
        cells = c.findAt(((datalistcm[i]), ))
        names = 'Set-' + str(i)
        region = p.Set(cells=cells, name=names)
        # Check what material it is
        if datalistccm[i] >= vfrule:
            p.SectionAssignment(region=region, sectionName='Agg')
        else:
            p.SectionAssignment(region=region, sectionName='Mat')
    # Create amplitude for load
    mdb.models[modelName].TabularAmplitude(name='Amp', timeSpan=STEP,
        smooth=SOLVER_DEFAULT, data=((0.0, 0.0), (0.5, -0.1)))
    # Create the instance for the cube
    a = mdb.models[modelName].rootAssembly
    session.viewports['Viewport: 1'].setValues(displayedObject=a)
    session.viewports['Viewport: 1'].assemblyDisplay.setValues(
        optimizationTasks=OFF, geometricRestrictions=OFF, stopConditions=OFF)
    a = mdb.models[modelName].rootAssembly
    a.DatumCsysByDefault(CARTESIAN)
    p = mdb.models[modelName].parts['Cubo']
    a.Instance(name='Cubo', part=p, dependent=OFF)
    # Mesh
    a = mdb.models[modelName].rootAssembly
    partInstances = (a.instances['Cubo'], )
    a.seedPartInstance(regions=partInstances, size=l/n, minSizeValue=l/n/10)
    a = mdb.models[modelName].rootAssembly
    partInstances = (a.instances['Cubo'], )
    a.generateMesh(regions=partInstances)
    # Step
    session.viewports['Viewport: 1'].assemblyDisplay.setValues(
        adaptiveMeshConstraints=ON)
    mdb.models[modelName].ViscoStep(name='Step-1', previous='Initial',
        timePeriod=0.5, maxNumInc=10000, timeIncrementationMethod=FIXED,
        initialInc=0.01, cetol=0.0)
    session.viewports['Viewport: 1'].assemblyDisplay.setValues(step='Step-1')
    # Field output request
    mdb.models[modelName].FieldOutputRequest(name='F-Output-2',
        createStepName='Step-1', variables=('E', 'S', 'U', 'RF'))
    session.viewports['Viewport: 1'].assemblyDisplay.setValues(step='Initial')
    session.viewports['Viewport: 1'].assemblyDisplay.setValues(loads=ON, bcs=ON,
        predefinedFields=ON, connectors=ON, adaptiveMeshConstraints=OFF)
    session.viewports['Viewport: 1'].assemblyDisplay.setValues(step='Step-1')
    a = mdb.models[modelName].rootAssembly
    f1 = a.instances['Cubo'].faces
    for i in xrange(0, n):
        for j in xrange(0, n):
            datalist = ((2*i+1)*l/n/2, l, (2*j+1)*l/n/2)
            if i == 0 and j == 0:
                faces1 = f1.findAt((datalist, ), )
            else:
                faces1 += f1.findAt((datalist, ), )
    region = regionToolset.Region(faces=faces1)
    mdb.models[modelName].DisplacementBC(name='BC-3', createStepName='Step-1',
        region=region, u1=UNSET, u2=1.0, u3=UNSET, ur1=UNSET, ur2=UNSET,
        ur3=UNSET, amplitude='Amp', fixed=OFF, distributionType=UNIFORM,
        fieldName='', localCsys=None)
    # BC-Sup (top face)
    a = mdb.models[modelName].rootAssembly
    a.regenerate()
    session.viewports['Viewport: 1'].setValues(displayedObject=a)
    a = mdb.models[modelName].rootAssembly
    f1 = a.instances['Cubo'].faces
    for i in xrange(0, n):
        for j in xrange(0, n):
            datalist = ((2*i+1)*l/n/2, l, (2*j+1)*l/n/2)
            if i == 0 and j == 0:
                faces1 = f1.findAt((datalist, ), )
            else:
                faces1 += f1.findAt((datalist, ), )
    region = regionToolset.Region(faces=faces1)
    mdb.models[modelName].DisplacementBC(name='BC-Sup', createStepName='Initial',
        region=region, u1=SET, u2=UNSET, u3=SET, ur1=UNSET, ur2=UNSET,
        ur3=UNSET, amplitude=UNSET, distributionType=UNIFORM, fieldName='',
        localCsys=None)
    # BC-Inf (bottom face)
    a = mdb.models[modelName].rootAssembly
    a.regenerate()
    session.viewports['Viewport: 1'].setValues(displayedObject=a)
    a = mdb.models[modelName].rootAssembly
    f1 = a.instances['Cubo'].faces
    for i in xrange(0, n):
        for j in xrange(0, n):
            datalist = ((2*i+1)*l/n/2, 0, (2*j+1)*l/n/2)
            if i == 0 and j == 0:
                faces1 = f1.findAt((datalist, ), )
            else:
                faces1 += f1.findAt((datalist, ), )
    region = regionToolset.Region(faces=faces1)
    mdb.models[modelName].DisplacementBC(name='BC-Inf', createStepName='Initial',
        region=region, u1=SET, u2=SET, u3=SET, ur1=UNSET, ur2=UNSET,
        ur3=UNSET, amplitude=UNSET, distributionType=UNIFORM, fieldName='',
        localCsys=None)
    # Save
    mdb.saveAs(pathName='C:/Users/.../'+modelName+'-'+str(vfrule)+'.cae')
    disp(modelName+'-'+str(vfrule))
    elapsed = (time.clock()-tStart)
    # Display time to create model with material
    disp('Total time for '+modelName+'-'+str(vfrule)+', '+str(elapsed/60)+' minutes.')
    # Job
    jobName = 'Job-'+nn
    mdb.Job(name=jobName, model=modelName, description='', type=ANALYSIS,
        atTime=None, waitMinutes=0, waitHours=0, queue=None, memory=90,
        memoryUnits=PERCENTAGE, getMemoryFromAnalysis=True,
        explicitPrecision=SINGLE, nodalOutputPrecision=SINGLE, echoPrint=OFF,
        modelPrint=OFF, contactPrint=OFF, historyPrint=OFF, userSubroutine='',
        scratch='', resultsFormat=ODB, multiprocessingMode=DEFAULT, numCpus=1,
        numGPUs=0)
    # alt A - Write inp
    #disp('Submitting...')
    #mdb.jobs[jobName].writeInput(consistencyChecking=OFF)
    # alt B - Create odb
    disp('Submitting...')
    mdb.jobs[jobName].submit(consistencyChecking=OFF)
    disp('Waiting...')
    mdb.jobs[jobName].waitForCompletion()
    # print elapsed time
    elapsed = (time.clock()-tStart)
    disp('Total time for '+modelName+', '+str(elapsed/60)+' minutes.')
# Finish
Mdb()
Appendix 4. Abaqus results processor (.py)

from abaqus import *  # for abaqus
from abaqusConstants import *
import __main__
import section
import regionToolset
import displayGroupMdbToolset as dgm
import part
import material
import assembly
import optimization
import step
import interaction
import load
import mesh
import job
import sketch
import visualization
import xyPlot
import displayGroupOdbToolset as dgo
import connectorBehavior
#import abq_ExcelUtilities.excelUtilities
import sys  # needed by disp() below
from numpy import array  # for python
import time
# display function
def disp(line):  # print in the terminal and in the ABAQUS Python console
    print line
    print >> sys.__stdout__, line
### Extract and report vertical reaction force and displacement of rigid body
#nStart = 1
#nEnd = 10
#nRange = xrange(nStart,nEnd+1)
ni = array([2, 4, 6, 8, 10, 15, 20, 25, 30, 40])  # Num of grids
# 1. Create xy data from surface
print >> sys.__stdout__, ''
for i in xrange(0, 10):
    vfi = array([0.5, 0.75])
    for ivf in xrange(0, 2):
        tStart = time.clock()
        ### ODB name
        n = ni[i]
        nn = vfi[ivf]
        jobName = 'Job-'+str(n)+'-'+str(int(nn*100))
        odbName = jobName+'.odb'
        disp(odbName)
        odb = session.openOdb(name=odbName, readOnly=False)
        session.viewports['Viewport: 1'].setValues(displayedObject=odb)
        xydata = session.xyDataListFromField(odb=odb, outputPosition=NODAL,
            variable=(('RF', NODAL, ((COMPONENT, 'RF2'), )),
            ('S', INTEGRATION_POINT, ((COMPONENT, 'S22'), )), ),
            nodeSets=('_PICKEDSET5', ))
        z = session.xyDataObjects.keys()
        disp(z)
        lz = len(z)
        lz2 = len(z)/2
        disp(lz2)
        reportName = 'rf'+str(n)+'-'+str(int(nn*100))+'.txt'
        report2Name = 's'+str(n)+'-'+str(int(nn*100))+'.txt'
        x = []
        y = []
        for j in xrange(0, lz2):
            disp(j)
            x.append(session.xyDataObjects[z[j]])
            y.append(session.xyDataObjects[z[j+lz2]])
        x = tuple(x)
        y = tuple(y)
        session.writeXYReport(fileName=reportName, xyData=x)
        session.writeXYReport(fileName=report2Name, xyData=y)
        for k in xrange(0, lz2):
            disp(k)
            del session.xyDataObjects[z[k]]
            del session.xyDataObjects[z[k+lz2]]
        odb.close()
        x = []
        y = []
        z = []
        # print elapsed time
        elapsed = (time.clock()-tStart)
        disp('Total time for '+jobName+', '+str(elapsed/60)+' minutes.')