
Faculty of Engineering University of Porto

Advanced Computational Methodologies for Fetus Face

Reconstruction and Analysis from Obstetric Ultrasound

Research Planning Final Report

Doctoral Program in Informatics Engineering

Supervisor: João Manuel Tavares (FEUP)

Co-supervisor: Alexandra Matias Macedo (FMUP)

Margarida Igreja Gomes

July 2013

Index

1. Introduction
2. Project Problems
3. Research Areas
4. State of the Art
5. Research Groups
6. Conferences
7. Journals
8. Reviewers
9. Research Hypothesis
10. Methodology
11. Work Plan
12. Conclusions

1. Introduction

This final report was produced within the Research Planning course, taught by Professor Eugenio Oliveira and Professor Augusto Sousa, of the Doctoral Program in Informatics Engineering at the Faculty of Engineering of the University of Porto.

It aims to summarize all the work done to date on the thesis, with the state of the art as its focus. This work has focused on the segmentation of fetal faces in ultrasound images. To tackle this problem systematically, it was divided into sub-problems. The first was ultrasonography, with its pros and cons, its modes, and a reference to one of its characteristics, speckle noise. Then research on image segmentation was carried out, covering prior information, relevant papers, and methods applied to ultrasound in particular. A third group highlighted fetal ultrasound imaging, distinguishing the various available dimensions. Afterwards, segmentation of fetal faces in ultrasound images was analysed. The last group of papers and methods studied addressed a typical fetal face anomaly, the fetal cleft lip and palate.

This document is organized as follows. It begins with the project problem, continues with the research areas, and follows with the state of the art. Then come the research groups, conferences and reviewers. Afterwards the research hypothesis is presented, as well as the proposed methodologies and the work plan. Finally, the key findings of this work are summarized.

2. Project Problems

The main project problem lies in the following research question: How to automatically identify a cleft lip and palate from ultrasound images?

However, other intermediate questions may arise and be useful both for this particular investigation and for research in general. Assuming that this question is likely to have an answer, and given the limitations of ultrasound imaging, among others, the following intermediate tasks are presented.

One of them is the identification of all the features that make up the face, that is to say, eyes, nose, mouth, chin, etc.

Another is the semi-automatic measurement of distances typically taken during a routine pregnancy examination, for instance, the inter-orbital distance.

The computational processing and analysis of fetal faces in obstetric ultrasound images is understudied and therefore poorly documented, so the search for, retrieval of, and creation of a database able to meet the requirements will also be a problem.

Meanwhile many other questions will certainly arise.

3. Research Areas

The research areas of this investigation are image acquisition, computational vision (i.e. image processing and analysis), medical imaging, and artificial intelligence. Within the computational vision area, the main topics to be addressed are image segmentation, tracking, image registration, and object matching, together with artificial intelligence methods and techniques applied to biomedical and biometrics applications. For the second field, it is expected that image acquisition technology will be analysed, as well as core image and video processing algorithms and artificial intelligence algorithms.

4. State of the Art

Segmentation of fetal faces in ultrasound images: A survey

Margarida Igreja Gomes1, João Manuel R. S. Tavares1, and Alexandra Matias2

1Faculty of Engineering of the University of Porto

{pro002, tavares}@fe.up.pt

2 Faculty of Medicine of the University of Porto

[email protected]

Abstract. With improved techniques for medical imaging such as obstetric ultrasound, the capacity and fidelity of image diagnosis have been extended. A set of medical images can benefit from image processing and analysis techniques to classify images and allow the recognition of patterns. Segmentation is a technique used in this type of application because it allows areas of the image with common characteristics to be isolated, helping to classify them according to the structures that compose them. The purpose of this paper is to compile the objectives, methods, results, and techniques of scientific work carried out on segmentation of fetal faces in ultrasound images.

Keywords: ultrasound, segmentation, fetus face, computational vision, image analysis and processing, obstetrics

1 Introduction

Ultrasonography or diagnostic sonography is an ultrasound-based diagnostic imaging technique used for visualizing subcutaneous body structures for possible pathology or lesions. Ultrasonography is widely used in medicine. It is possible to perform both diagnosis and therapeutic procedures, using ultrasound to guide interventional procedures. Sonographers typically use a hand-held probe, a transducer, which is placed directly on and moved over the patient. Medical sonography is used in the study of many different systems, such as obstetrics. Obstetric ultrasonography is the application of medical ultrasonography to obstetrics, in which sonography is used to visualize the embryo or fetus in its mother's uterus [1].

Ultrasound image segmentation is strongly influenced by the quality of the input data. There are characteristic artifacts which make the segmentation task complicated, such as attenuation, speckle, shadows, and signal dropout due to the orientation dependence of acquisition, which can result in missing boundaries. Further complications arise as the contrast between areas of interest is often low. The non-invasive nature of ultrasound is a strong argument for its use in obstetrics and gynecology. In obstetrics, segmentation provides valuable measurements for assessing the growth of the fetus and diagnosing fetal malformation. Most analysis is based on 2-D scans. Standard measurements include the biparietal diameter, head circumference, length of the fetal femur, the abdominal circumference and the amniotic fluid volume. There is often a sharp contrast between the face of a fetus and the surrounding amniotic fluid, allowing automatic boundary detection. Hence, obstetrics is a potential field of application for volume rendering and visualization [2].

Manual extraction of contours in medical images requires expert knowledge and is a tedious and time-consuming task. In addition, manual contour extraction is influenced by the variability of the human observer, which limits its reliability and reproducibility. The development of automatic techniques for the extraction of contours of fetal anatomic structures can, in principle, eliminate the variability introduced by the human operator, contributing to reliable and reproducible measurements. In the development of such techniques, important factors determining their acceptance by clinicians are accuracy, robustness, reliability, reproducibility and applicability [3].

2 Ultrasonography

Ultrasound (US) is an oscillating sound pressure wave with a frequency

greater than the upper limit of the human hearing range. Ultrasound is used

in many different fields. Industrially, ultrasound is used for cleaning and for

mixing, and to accelerate chemical processes [4].

The creation of an image from sound is done in various levels, as can be seen in Figure 1. These levels consist of producing a sound wave, receiving echoes, forming and displaying the image.

Fig. 1. Levels of processing in ultrasonic image formation (Source: [5]).

A sound wave is typically produced by a transducer encased in a housing which can take a number of forms. Strong, short electrical pulses from the ultrasound machine make the transducer ring at the desired frequency. The frequencies can be anywhere between 2 and 18 MHz. The sound is focused either by the shape of the transducer, a lens in front of the transducer, or a complex set of control pulses from the ultrasound scanner machine. This focusing produces an arc-shaped sound wave from the face of the transducer. The wave travels into the body and comes into focus at a desired depth. The sound wave is partially reflected from the layers between different tissues. Specifically, sound is reflected anywhere there are density changes in the body.

The return of the sound wave to the transducer results in the same process that it took to send the sound wave, except in reverse. The return sound wave vibrates the transducer; the transducer turns the vibrations into electrical pulses that travel to the ultrasonic scanner, where they are processed and transformed into a digital image.

The sonographic scanner determines which pixel in the image to light up, at what intensity, and at what hue if frequency information is processed. Images from the sonographic scanner can be displayed, captured, and broadcast through a computer using a frame grabber to capture and digitize the analogue video signal. The captured signal can then be post-processed on the computer itself [5].
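The receive-side steps above (echo, envelope, displayed intensity) can be sketched as a toy pipeline. The dynamic-range value and the 8-bit mapping below are illustrative assumptions, not a vendor's actual processing chain:

```python
import math

def log_compress(echo_amplitudes, dynamic_range_db=40.0):
    """Map raw echo amplitudes to 8-bit pixel intensities via
    envelope detection and logarithmic compression (toy model)."""
    # Envelope detection: keep the magnitude of each echo sample.
    env = [abs(a) for a in echo_amplitudes]
    peak = max(env) or 1.0
    pixels = []
    for e in env:
        if e <= 0:
            pixels.append(0)
            continue
        # Express the sample in dB below the peak, clip to the
        # chosen dynamic range, then scale to 0..255.
        db = 20.0 * math.log10(e / peak)   # 0 dB at the peak
        db = max(db, -dynamic_range_db)    # clip the weakest echoes
        pixels.append(round(255 * (1.0 + db / dynamic_range_db)))
    return pixels
```

With a 40 dB dynamic range, an echo 20 dB below the strongest one maps to mid-grey, and anything 40 dB or more below it maps to black.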

2.1 Ultrasonography Pros and Cons

As with all imaging modalities, ultrasonography has its positive and negative

attributes.

Its strengths include, the possibility to images muscle, soft tissue, and bone

surfaces very well and being particularly useful for delineating the interfaces

between solid and fluid-filled spaces; it renders live images, where the oper-

ator can dynamically select the most useful section for diagnosing and doc-

umenting changes, often enabling rapid diagnoses; it has no known long-

term side effects and rarely causes any discomfort to the patient; the equip-

ment is widely available and comparatively flexible; it is small, easily carried

scanners are available; examinations can be performed at the bedside and is

relatively inexpensive compared to other modes of investigation, such as,

magnetic resonance imaging.

About their weaknesses the following are considered, sonographic devices

have trouble penetrating bone; it performs very poorly when there is a gas

between the transducer and the organ of interest, due to the extreme differ-

ences in acoustic impedance; image quality and accuracy of diagnosis is lim-

ited with obese patients, overlying subcutaneous fat attenuates the sound

beam and the method is operator-dependent. A high level of skill and expe-

rience is needed to acquire good-quality images and make accurate diagno-

ses.

2.2 Ultrasonography modes

Several modes of ultrasound are used in medical imaging. These are: A, B, C, M, Doppler, Pulse Inversion and Harmonic modes.

A-mode (amplitude mode) is the simplest type of ultrasound. A single transducer scans a line through the body with the echoes plotted on screen as a function of depth. Therapeutic ultrasound aimed at a specific tumour or calculus is also A-mode, to allow for pinpoint accurate focus of the destructive wave energy.

In B-mode (brightness mode) ultrasound, a linear array of transducers simultaneously scans a plane through the body that can be viewed as a two-dimensional image on screen [6].

A C-mode image is formed in a plane normal to a B-mode image. A gate that selects data from a specific depth from an A-mode line is used; then the transducer is moved in the 2D plane to sample the entire region at this fixed depth. When the transducer traverses the area in a spiral, an area of 100 cm2 can be scanned in around 10 seconds.

In M-mode (motion mode) ultrasound, pulses are emitted in quick succession; each time, either an A-mode or B-mode image is taken. Over time, this is analogous to recording a video in ultrasound. As the organ boundaries that produce reflections move relative to the probe, this can be used to determine the velocity of specific organ structures [7].
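As a toy illustration of extracting structure velocity from M-mode data, the sketch below differentiates the tracked depth of a reflecting boundary across successive pulses. The function and parameter names are ours, introduced only for illustration:

```python
def boundary_velocity(depths_mm, pulse_interval_ms):
    """Estimate the axial velocity (mm/s) of a reflecting boundary
    from its depth in successive M-mode lines (illustrative only)."""
    dt = pulse_interval_ms / 1000.0  # seconds between pulses
    # Finite difference of depth over time for each pulse pair.
    return [(d2 - d1) / dt for d1, d2 in zip(depths_mm, depths_mm[1:])]
```

For example, a boundary tracked at 50.0, 50.2 and 50.5 mm with a 10 ms pulse interval yields axial velocities of roughly 20 and 30 mm/s.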

Doppler mode makes use of the Doppler effect in measuring and visualizing blood flow. This mode is classified into Colour, Continuous, Pulsed Wave (PW) and Duplex Doppler. In Colour Doppler, velocity information is presented as a colour-coded overlay on top of a B-mode image. In Continuous Doppler, Doppler information is sampled along a line through the body and all velocities detected at each time point are presented. In Pulsed Wave Doppler, Doppler information is sampled from only a small sample volume and presented on a timeline. Duplex is a common name for the simultaneous presentation of 2D and PW Doppler information [6].
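The blood-flow velocities presented in these modes derive from the standard Doppler shift relation (textbook form, not taken from the cited papers; theta is the angle between the beam and the flow direction):

```latex
% Doppler shift produced by scatterers moving at velocity v:
%   f_0   : transmitted frequency
%   c     : speed of sound in tissue (approx. 1540 m/s)
%   \theta: angle between the beam and the flow direction
f_d = \frac{2 f_0 \, v \cos\theta}{c}
\qquad\Longrightarrow\qquad
v = \frac{f_d \, c}{2 f_0 \cos\theta}
```

The factor of two appears because the moving scatterer both receives and re-emits the shifted wave.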

In Pulse Inversion mode two successive pulses with opposite sign are emitted and then subtracted from each other. This implies that any linearly responding constituent will disappear while gases with non-linear compressibility stand out.

In Harmonic mode a deep penetrating fundamental frequency is emitted into the body and a harmonic overtone is detected. This way noise and artefacts due to reverberation and aberration are greatly reduced. Some also believe that penetration depth can be gained with improved lateral resolution [7].

2.3 Speckle noise

The ultrasound machine uses high-frequency sound waves to acquire pictures. The coherent nature of ultrasound imaging results in the formation of a multiplicative noise called speckle noise. Speckle noise appears as a granular pattern which varies depending upon the type of biological tissue. The interference of backscattered signals results in speckle noise, and its apparent resolution is beyond the functionalities of the imaging system. The noise content is usually stronger than the microstructure of tissue parenchyma, reduces visibility and masks the tissue under investigation. Therefore, the main challenge in despeckling is to filter the noise content without affecting the microstructures and edges.

Speckle is a form of multiplicative noise that affects the quality of ultrasound images. In ultrasound imaging the tissue under examination is a sound-absorbing medium containing scatterers. The inhomogeneity of the tissue and the smaller size of image detail relative to the wavelength of the ultrasound result in the scattering of signals and lead to the formation of a granular pattern called speckle noise. Better image quality helps in easy and accurate diagnostic decision making. The widespread use of ultrasound imaging necessitates the development of despeckling filters for reducing noise.

Despeckling can be done in two ways. It can be applied directly on the phase of the RF (Radio Frequency) signal, or it can be used as a post-processing technique applied on images. Different methods, like linear and non-linear filtering and wavelet-based despeckling, have been proposed to reduce the noise [8].

The analysis of this signal-dependent effect has been a major subject of investigation in the medical ultrasound imaging community. Speckle can be undesirable, and hence seen as noise to be reduced, or regarded as a signal carrying some information about the observed tissues. Thus, from an image segmentation perspective, one may choose to remove it or utilize it for the information it contains. Speckle has a random and deterministic nature, as it is formed from backscattered echoes of randomly or coherently distributed scatterers in the tissue. It has been shown that the statistical properties of the received signal, and thus of the echo envelope, depend on the density and the spatial distribution of the scatterers [2].

The formation process of an ultrasound image involves different types of perturbations, such as the displaying of non-structural echoes, the removal of real structural echoes, and the displacement and distortion of echoes. In addition to these types of artefacts, we must also take into account speckle, which is inherent to ultrasound images. This type of "noise" is Rayleigh-distributed (in the case of fully-developed speckle), multiplicative, and degrades the image by hiding thin structures and reducing the signal-to-noise ratio (SNR) [3].
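As a concrete instance of the non-linear filtering route mentioned above, a plain 3x3 median filter is a common baseline despeckling step. The sketch below is a minimal pure-Python version on a 2D list, not a production filter:

```python
def median_despeckle(img):
    """3x3 median filter, a classical non-linear despeckling step.
    Border pixels are left unchanged in this minimal sketch."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Collect the 3x3 neighbourhood and take its median,
            # suppressing isolated speckle spikes while keeping edges.
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out
```

An isolated bright spike surrounded by uniform tissue is replaced by the neighbourhood value, while a genuine edge (where at least five of the nine neighbours lie on one side) survives.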

3 Image segmentation

Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s). When applied to a stack of images, typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of interpolation algorithms like marching cubes [9].
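The idea of grouping pixels that share a characteristic into regions can be illustrated with a minimal connected-component labelling pass. This is a sketch using exact intensity equality as the grouping criterion, far stricter than any practical similarity measure:

```python
from collections import deque

def label_regions(img):
    """Partition an image into connected regions of equal intensity,
    returning one integer label per pixel (4-connectivity)."""
    h, w = len(img), len(img[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Breadth-first flood fill from each unlabelled pixel.
            value = img[sy][sx]
            queue = deque([(sy, sx)])
            labels[sy][sx] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and img[ny][nx] == value):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels
```

The returned labels cover the entire image, matching the definition above of segments that collectively cover the image domain.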

3.1 Ultrasound segmentation prior information

Here, we focus on reviewing papers that have developed segmentation solutions using prior information, such as image features (grey level distributions, intensity gradient, phase and texture measures), shape models and temporal models.

Regarding image features, we include first the grey level distribution. The Rayleigh model of speckle has proved a popular choice. A Rayleigh distribution was used in an anisotropic diffusion edge detection method and in statistical segmentation methods. Other grey level distribution models have also been used in the ultrasound segmentation literature: for instance, the Gaussian, exponential, Gamma, and Beta distributions [10].
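The Rayleigh speckle model above is easy to simulate. The sketch below draws Rayleigh variates by inverse-CDF sampling and applies them as multiplicative noise; it is a toy model of fully-developed speckle, not a calibrated simulator:

```python
import math
import random

def rayleigh_sample(sigma, rng=random):
    """Draw one Rayleigh-distributed value by inverse-CDF sampling:
    F(x) = 1 - exp(-x^2 / (2 sigma^2))  =>  x = sigma*sqrt(-2 ln U)."""
    u = rng.random()
    return sigma * math.sqrt(-2.0 * math.log(1.0 - u))

def add_speckle(img, sigma=1.0, seed=0):
    """Corrupt an image with multiplicative, Rayleigh-distributed
    noise, mimicking the fully-developed speckle model (toy version)."""
    rng = random.Random(seed)
    return [[pix * rayleigh_sample(sigma, rng) for pix in row]
            for row in img]
```

The mean of a Rayleigh variate is sigma*sqrt(pi/2), about 1.25 for sigma = 1, so the multiplicative corruption also brightens the image slightly unless normalised.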

The motivation for using intensity gradient as a feature comes from the computer vision literature where, based on a photometric model, high intensity gradients, or equivalently step changes/discontinuities in intensity, are frequently associated with edges of objects. In ultrasound segmentation it is, therefore, appropriate to use intensity gradient as a segmentation constraint if the goal is to find acoustic discontinuities.
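The intensity-gradient cue described above is commonly computed with a small derivative kernel. The sketch below uses the Sobel operator, one standard choice, not necessarily the one used in the cited papers:

```python
def sobel_magnitude(img):
    """Approximate the intensity gradient magnitude with the 3x3
    Sobel kernels; border pixels are left at zero in this sketch."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

High responses mark the step changes in intensity that the photometric model associates with object edges; on speckled ultrasound data, such a filter is usually applied after despeckling.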

The local phase provides an alternative way to characterize structure in an image. Measuring local phase, or rather phase congruency over spatial scales, provides a way to characterize different intensity features in terms of the shape of the intensity profile rather than the intensity derivative magnitude. Generally, phase is estimated using quadrature filter banks. Thus, there is a link between phase-based methods and other wavelet methods.

Although ultrasound texture patterns are intrinsically dependent on the imaging system, they characterize the microstructure of the tissues being imaged. The distribution (coherent and/or random) of scatterers and their sizes relative to the wavelength of the incident ultrasound pulse produce different texture patterns independent of the physics of the imaging system. Thus, in segmentation, where the aim is often to characterize the imaged object rather than necessarily its true physical properties, texture analysis methods have proved successful.

Edge cues and region information are often not sufficient for a reliable and accurate segmentation. In this case, shape constraints are often found to effectively improve results. Probably the most classical shape constraint involves boundary regularization, say as in the choice of the internal terms in an active contour. A second way to impose a shape constraint is by using a parametric shape; a preferred shape can be imposed, for example, in a probabilistic framework, using the learned distributions of the shape parameters over a set of training examples. Finally, we would like to emphasise the fact that a shape model is only as good as the training samples from which it was built and the chosen shape-space model framework. An important issue, still open to our knowledge, concerns the aptitude of shape constraints to handle disease cases.

Ultrasound is often employed because it is real time and, thus, the data available for segmentation is a temporal image sequence rather than a static frame. Therefore, it is sometimes useful to employ temporal priors in segmentation, i.e., to consider segmentation as a spatio-temporal process. The most obvious example application is cardiac segmentation, where the object moves in a periodic pattern [2].

3.2 Ultrasound segmentation selected papers

We have selected five influential papers in the ultrasound segmentation literature. These are probably not the best, but have been selected based on the criteria of the ultrasound-specific model they employed and whether evaluation was performed on a reasonable number of clinical datasets.

Abolmaesumi et al. present a 2-D contour segmentation approach where two contour models are combined in order to achieve smooth results and allow rapid changes in the boundary. This can be seen as an adaptive regularization of the boundaries, because the models have different noise/smoothness trade-offs [11].

Bosch et al. concern the application and clinical validation of active appearance and active appearance motion modelling (intensity, shape, and motion priors) [12].

Mignotte et al. present a good illustration of employing an imaging physics prior (a shifted Rayleigh distribution to model grey level statistics) and a shape prior (a deformable template) to solve boundary estimation via a multiscale minimization (grey level distribution and shape prior, probabilistic framework) [13].

Mulet-Parada et al. propose an intensity-invariant image feature (local phase) for acoustic boundary and displacement estimation as an alternative to intensity derivatives (image prior) [14].

Xie et al. demonstrate an ultrasound segmentation method that combines texture and shape prior information in a level set framework (texture and shape prior) [15].

3.3 Ultrasound Segmentation methods

Some of the applications of image segmentation in medical imaging are diagnosis, the study of anatomical structure, locating tumours and other pathologies, and measuring tissue volumes [16]. There are many methodologies to approach the image segmentation problem. Each approach presents its own advantages and drawbacks; they can be used in isolation or combined in any convenient manner to exploit the complementary properties of each method, and they can be unsupervised, without any user intervention, or interactive, as often required by medical imaging applications. These segmentation methods are often classified in three categories, namely feature-domain methods, image-domain methods, and cooperative methods that use a combination of these, as can be seen in Figure 2 [17].

Fig. 2. An overview of images segmentation approaches (Source: [17]).

With respect to the feature domain, a number of approaches to segmentation are based on finding compact clusters in some feature space. In this technique, a vector of local features is computed at each pixel and then mapped into the feature space. Features such as intensity and texture are the most commonly studied parameters. The feature space is then clustered and each pixel is labelled with the cluster that contains its feature vector. Clusters in feature space can then be used for image segmentation, typically by fitting a parametric model to each cluster and then labelling the pixels whose feature vectors lie in the cluster with its parameters. The common techniques include histogram thresholding, clustering and graphs [17].

First, consider the algorithms based on clustering techniques. Clustering is a process whereby a data set is replaced by clusters, which are collections of data points that belong together. The specific criterion to be used depends on the application. Pixels may belong together because they have the same colour, texture and so on [18].

As structures in medical images can be treated as patterns, techniques from the pattern recognition field can be used to perform the segmentation. Two main types of these techniques are supervised classification algorithms and unsupervised classification algorithms. These are supervised if samples of each area to be classified are provided, so that the system "knows" a priori what the regions are, or unsupervised, if we allow the system to try to find by itself what the different kinds of areas are. Examples of supervised classification techniques include k-nearest neighbour (kNN) classifiers, maximum likelihood (ML) algorithms, supervised artificial neural networks (ANN), support vector machines (SVM), active shape models, and active appearance models (AAM). Unsupervised clustering techniques include fuzzy K-means (FKM), ISODATA and unsupervised neural networks [19].

The simplest method of image segmentation is the thresholding method. This method is based on a clip level, or threshold value, to turn a grey-scale image into a binary image. The key of this method is to select the threshold value, or values when multiple levels are selected. Thresholds used in these algorithms can be selected manually or automatically. Manual selection needs a priori knowledge, and sometimes trial experiments, to find the proper threshold values, while automatic selection combines image information to obtain adaptive threshold values. Based on the information used to define the local threshold values, the segmentation algorithms can be classified as region-based, edge-based or hybrid [20].

With respect to the image domain, the aim of region-based techniques is to partition the image domain by progressively fitting statistical models to the intensity, colour, texture or motion in each set of regions.
These techniques rely on the assumption that adjacent pixels in the same region have similar visual features. Boundary-based methods aim to segment an image from the edges of each region, by locating the pixels where the intensity changes when compared to the pixels of its surroundings [19]. A region-based method usually proceeds as follows: the image is partitioned into connected regions by grouping neighbouring pixels of similar intensity levels. Adjacent regions are then merged under some criterion involving perhaps homogeneity or sharpness of region boundaries. Overly stringent criteria create fragmentation; lenient ones overlook blurred boundaries and over-merge [21].

Image segmentation based on deformable models has been considered one of the main successes in computational vision over the last decades, mainly in the medical imaging field. Deformable models are more flexible and can be used for more complex segmentations. These algorithms treat the structure boundary as the final state of the initially chosen contours. Deformable models are geometrically or parametrically defined curves or surfaces that move under the influence of forces, which have two components: internal and external forces [20].

An image segmentation method based on edge detection attempts to resolve image segmentation by detecting the edges, or pixels between different regions that have a rapid transition in intensity, which are extracted and linked to form closed object boundaries. The result is a binary image. In theory there are two main edge-based segmentation methods: the grey histogram method and the gradient-based method [22].

In addition to the methods included in Figure 2 there are others. Among them is the partial differential equation (PDE) based method. By solving the PDE with a numerical scheme, one can segment the image. Curve propagation is a popular technique in this category, with numerous applications to object extraction, object tracking and stereo reconstruction. The central idea is to evolve an initial curve towards the lowest potential of a cost function, whose definition reflects the task to be addressed. As for most inverse problems, the minimization of the cost functional is non-trivial and imposes certain smoothness constraints on the solution, which in the present case can be expressed as geometrical constraints on the evolving curve. The evolution of a given curve, surface or image is handled by PDEs, and the solution of these PDEs is what we seek. Various methods for image segmentation in this category are snakes (or active contour models), level sets, and the Mumford-Shah model [22].
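Returning to the thresholding method described above: automatic selection of the clip level can be done, for example, with Otsu's criterion, which picks the threshold maximising the between-class variance of the grey-level histogram. A minimal sketch:

```python
def otsu_threshold(pixels, levels=256):
    """Select a global threshold automatically by maximising the
    between-class variance (Otsu's method) over the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # number of pixels at or below the candidate threshold
    sum0 = 0.0  # intensity mass at or below the candidate threshold
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; skip this candidate
        m0 = sum0 / w0                      # mean of the lower class
        m1 = (total_sum - sum0) / w1        # mean of the upper class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels with value greater than the returned threshold form the binary foreground; on a clearly bimodal histogram the chosen clip level separates the two modes.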

4 Fetal ultrasound imaging

In obstetrics, measurements based on echographic images play a key role as an accurate means for fetal age estimation. Several parameters are used as age and development indicators, the most important being the biparietal diameter (BPD), occipital-frontal diameter (OFD), head circumference (HC) and femur length (FL). Each of these parameters provides, through a specific mathematical expression, estimates of the gestational age (GA), given in weeks (w) and days (d) [3].
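Schematically, such an expression maps one biometric measurement to a gestational age in weeks and days. The sketch below uses a HYPOTHETICAL linear relation purely for illustration; the coefficients are placeholders, not taken from any published chart:

```python
def gestational_age_from_bpd(bpd_mm, intercept_w=9.5, slope_w_per_mm=0.25):
    """Illustrative mapping from biparietal diameter (mm) to
    gestational age (weeks, days). The linear coefficients are
    HYPOTHETICAL placeholders; real charts use published
    regression formulas fitted to population data."""
    ga_weeks = intercept_w + slope_w_per_mm * bpd_mm
    weeks = int(ga_weeks)
    days = round((ga_weeks - weeks) * 7)
    if days == 7:  # carry rounding overflow into a full week
        weeks, days = weeks + 1, 0
    return weeks, days
```

The point of the sketch is only the shape of the computation: a regression expression evaluated on the measurement, then split into whole weeks and remaining days.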

Several recent papers describe new methods for segmentation of fetal anatomic structures from echographic images.

4.1 2-D Fetal ultrasound images

In the last three decades, the use of ultrasound has increased in the Maternal-Fetal Medicine field. The on-going technological evolution contributed to this, with a resulting improvement in the quality of the equipment and of the obtained images, as well as in the clinical impact of this technique. Over the last 20 years, ultrasound imaging has earned its place as a routine examination in certain periods of gestation. The effectiveness of ultrasound imaging is today established in, among others, the dating of pregnancy, screening for chromosomal abnormalities, antenatal diagnosis of birth defects and abnormalities of fetal growth, and fetal biophysical evaluation.

There are two possible avenues of approach in obstetric ultrasound imaging: the trans-vaginal route, preferred in the 1st trimester of pregnancy, and the trans-abdominal route. The use of B-mode (two-dimensional) allows obtaining the anatomical image of the fetus in two dimensions, whereas the M-mode (motion) is mainly used for fetal heart rate evaluation and the heart cavities' dimensions.

The second trimester of pregnancy has always been chosen as the ideal peri-

od for conducting a routine ultrasound imaging exam, especially in view of

the opportunity, in parallel, can be estimated with some accuracy, the time

of pregnancy and were met eco-anatomical the best conditions for the diag-

nosis of fetal structural anomalies. Usually, this exam is recommended in

between the 18 and 22 weeks [23].

We begin with studies on 2-D fetal ultrasound images.

One of these methods estimates and measures the contours of the femur and of cranial cross-sections of fetal bodies. Contour estimation is formulated as a statistical estimation problem (via a likelihood function), where the observation model relates the observed image with the underlying contour. This function is derived from a region-based image model, as explained in the previous section. The contour and the observation-model parameters are estimated according to the maximum likelihood criterion via deterministic iterative algorithms. Noble's paper describes an approach to unsupervised contour estimation in fetal ultrasound images based on a maximum likelihood formulation of deformable parametric models. Some examples of cranial cross-section contour estimation on real images can be seen in Figure 3. All the reported experiments were obtained using Matlab 6.5 R13 implementations of the algorithms [2].

Fig. 3. Left column: ultrasound image of cranial cross-section. Middle column: automatic contour extraction. Right column: manual delineation of the object (Source: [2]).
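The region-based likelihood of [2] is summarized only qualitatively above. As a hedged illustration of the maximum-likelihood ingredient, a Rayleigh model (a common choice for ultrasound envelope intensities, not necessarily the paper's exact model) admits a closed-form scale estimate per region, which an iterative contour algorithm can re-estimate at each step:

```python
import math

def rayleigh_mle(samples):
    """Closed-form maximum-likelihood estimate of the Rayleigh scale:
    sigma^2 = sum(x_i^2) / (2N)."""
    return math.sqrt(sum(x * x for x in samples) / (2 * len(samples)))

def rayleigh_loglik(x, sigma):
    """Log-density of a Rayleigh(sigma) observation."""
    return math.log(x) - 2 * math.log(sigma) - x * x / (2 * sigma * sigma)

# Pixels inside/outside a trial contour get separate scale estimates;
# the contour is then updated to increase the total log-likelihood.
# Sample values are illustrative only.
inside = [2.0, 3.0, 2.5]
outside = [0.5, 0.4, 0.6]
s_in, s_out = rayleigh_mle(inside), rayleigh_mle(outside)
```

A bright pixel then scores a higher log-likelihood under the "inside" model than under the "outside" one, which is what drives the contour update.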

Subramanian's paper explored the use of two methods, region growing and a variant of split-and-merge algorithms, for segmenting sequences of fetal ultrasound images. The authors describe an interactive system that can rapidly process and segment an arbitrary number of features. The user interface was built using the Tcl/Tk toolkit, which is publicly available. The biggest weakness of the system is the lack of effective measures to evaluate the accuracy of the segmentation. It also calls for techniques that are more tolerant of noise and artefacts, because the region-growing algorithm is highly sensitive to the local neighbourhood. Another possibility would be characterizing the boundary using a Binary Space Partitioning (BSP) tree [24].
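The region-growing step that Subramanian's system relies on can be sketched as a breadth-first flood fill with an intensity tolerance. This is a minimal illustration of the technique, not the paper's implementation:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`: accept 4-connected neighbours whose
    intensity differs from the seed value by at most `tol`."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    mask = [[False] * w for _ in range(h)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] \
                    and abs(image[nr][nc] - seed_val) <= tol:
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# Bright 2x2 patch on a dark background; the region stops at the contrast edge.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
grown = region_grow(img, (1, 1), tol=1)
```

The sensitivity to the local neighbourhood noted in [24] is visible here: a single noisy pixel inside the tolerance band can open a "leak" into the background.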

Yu's study describes a semi-automated fetal ultrasound image segmentation system developed to improve the estimation of fetal weight (EFW). Four standardized fetal parameters are measured by the proposed system: biparietal diameter, head circumference, abdominal circumference and femur length. The EFW values based on computerized and on manual measurements are compared using regression analysis, an artificial neural network and support vector regression. Ultrasound measurements were carried out using the commercial 2-D ultrasound scanner EnVisor 2540A (PHILIPS, ShenYang, China) with a 3.5 MHz trans-abdominal probe. All images were stored in Microsoft Bitmap (BMP) format with a size of 800 × 564 and 24 bits per pixel. Figure 4 summarizes the sequence of steps of the segmentation algorithms for head and abdomen measurements [25].

Fig. 4. Flowchart of segmentation algorithms for head and abdomen measurements (Source: [25]).
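The regression comparison between manual and computerized measurements can be sketched with an ordinary least-squares fit; agreement shows up as a slope near 1 and an intercept near 0. The paired values below are hypothetical, and Yu's study additionally used an artificial neural network and support vector regression:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ~ a*x + b, in closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical paired measurements (mm): manual vs. computerized.
manual = [48.0, 52.0, 55.0, 60.0]
auto   = [48.5, 51.5, 55.5, 60.5]
slope, intercept = linear_fit(manual, auto)
```

For these illustrative pairs the fit lands close to the identity line, the pattern one would expect when the automatic measurements agree with the manual ones.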

Shrimali's paper had as its main objective to obtain a time-efficient morphology-based algorithm to recognize the femur contour in fetal ultrasound images and refine its shape for automatic length measurement, thus attaining accuracy and reproducibility of the measurement. The images obtained from the subjects were initially processed using morphological operators to remove the background. Thereafter, to refine the shape of the femur, the images were metamorphosed, using morphological operators, until a single-pixel-wide skeleton of the femur was available, in the most time-effective manner. The skeleton end-points are assumed to be the femur end-points, and the femur length is calculated as the distance between them to estimate gestational age. The proposed algorithm has been tested on real clinical images, and it has been shown that the measurements made by the proposed method are consistent and in good agreement with the conventional manual method of measurement. The proposed algorithm also provides a possible time-efficient solution to the current inconsistency, difficulty and subjectivity of fetal ultrasound measurement [26].

Further studies have proposed a Conditional Random Field (CRF) based framework to handle the challenges of segmenting fetal ultrasound images. The proposed CRF framework uses wavelet-based texture features for representing the ultrasound image and Support Vector Machines (SVM) for initial label prediction. This approach requires a learning process to train the model; texture features are used to capture the stochastic nature of the images. The proposed methodology was tested on only two fetal images, so the method could be further evaluated on a larger dataset. Various forms of CRF, such as tree CRFs, or a combination of both, could also be evaluated [27].
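The skeleton-based femur measurement of Shrimali's method, taking the end-points of a one-pixel-wide skeleton and computing the distance between them, can be sketched as follows. The synthetic diagonal skeleton stands in for the morphological skeletonization step, which is not reproduced here:

```python
import math

def femur_length(skeleton):
    """Find the two end-points of a one-pixel-wide skeleton (pixels with
    exactly one 8-connected neighbour) and return the distance between them."""
    pts = {(r, c) for r, row in enumerate(skeleton)
           for c, v in enumerate(row) if v}
    ends = []
    for r, c in pts:
        nbrs = sum((r + dr, c + dc) in pts
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
        if nbrs == 1:            # end-points touch exactly one skeleton pixel
            ends.append((r, c))
    assert len(ends) == 2, "expected a simple open curve"
    (r1, c1), (r2, c2) = ends
    return math.hypot(r1 - r2, c1 - c2)

# A diagonal 5-pixel skeleton: end-points at (0, 0) and (4, 4).
skel = [[1 if r == c else 0 for c in range(5)] for r in range(5)]
length = femur_length(skel)
```

Multiplying the pixel distance by the scanner's pixel spacing yields the femur length in millimetres, which then feeds a gestational-age chart.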

4.2 3-D Fetal ultrasound images

The use of 3-D ultrasound data has several advantages over 2-D ultrasound for fetal biometric measurements, such as a considerable decrease in the examination time, the possibility of post-exam data processing by experts and the ability to produce 2-D views of the fetal anatomies in orientations that cannot be seen in common 2-D ultrasound exams [28].

The value of 3D technology for the diagnostic accuracy of ultrasound imaging is now unquestionable. Viewing images in three dimensions that correspond to three-dimensional structures, simultaneously showing the organs' anatomy and their spatial relationships, facilitates the recognition of some anomalies that can be diagnosed with a single 3D image. To achieve an approximate result with 2D technology it is necessary to make several data selections, visualize different sections and reconstruct the image in the operator's head, which depends on the individual capacity for abstraction. With 3D technology, everything becomes more obvious, both for the operators and for the pregnant women's family members who attend the examinations. Three-dimensional ultrasound imaging also has its limitations. It is based on B-mode, and hence all physical aspects that interfere with the ultrasound signals are applicable here. Furthermore, fetal movements are a source of distortion of the images, reducing their definition. On the other hand, the reconstruction of the fetal anatomy is aesthetic and the image is frozen, giving no information about the fetal dynamic changes [23].

The pixel (picture element) is the smallest element of a 2D image, and the voxel (volume element) the smallest unit of information of a 3D image. The 3D reconstruction is performed from 2D images. New probes, new digital beam-forming technology that uses the intersection of beams for greater contour detail (CrossBeam), and new real-time processing algorithms that enhance the contrast between structures and reduce the noise intrinsic to ultrasound physics (SRI, Speckle Reduction Imaging) have contributed to improving the quality of two-dimensional imaging, with repercussions on 3D ultrasound acquisitions. This implies that, before the acquisition of the search volume, the number of foci, their position, the depth, the frequency and the gain are adjusted so that the best 2D images are obtained, while minimizing the artefacts that would be transmitted to the 3D images. 3D ultrasound is the acquisition of sequences of two-dimensional planes in a controlled manner, whose overlap results in volumes containing the anatomical structures to be examined. Among the methods available for volumetric acquisition are a lens-based approach (Pseudo 3D Lens) and "freehand" acquisition (Freehand 3D) using two-dimensional probes. The equipment allows the accumulation in digital memory of the planes that result from the manual sweeping; a computer program then allows the analysis and visualization of the data [23].
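A minimal sketch of how 2-D frames accumulate into a voxel volume during such an acquisition is given below, as nearest-neighbour compounding at known (here integer) slice positions. This is a simplification of real freehand reconstruction, which must interpolate arbitrary probe poses reported by a position sensor:

```python
import numpy as np

def compound_volume(slices, positions, shape):
    """Nearest-neighbour compounding: accumulate each 2-D frame at its
    (integer) slice position in the volume and average any overlaps."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for frame, z in zip(slices, positions):
        acc[z] += frame
        cnt[z] += 1
    out = np.zeros(shape)
    np.divide(acc, cnt, out=out, where=cnt > 0)   # empty voxels stay 0
    return out

# Two frames acquired at the same position are averaged; a gap stays empty.
f0 = np.full((2, 2), 2.0)
f1 = np.full((2, 2), 4.0)
vol = compound_volume([f0, f1], [0, 0], shape=(3, 2, 2))
```

The empty slice at position 1 shows why an irregular manual sweep leaves holes that real systems must fill by interpolation.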

One of the most popular protocols is the 3D freehand system, because it is the cheapest and the most flexible, since the probe is moved by hand in an arbitrary manner. Its main drawbacks are the difficulty of avoiding motion artifacts and the need to attach a position sensor to the probe. Motion artifacts can be caused by movements of the patient or by irregular pressure of the probe during the manual sweeping. The attachment of an external position sensor to the probe is not desirable because it disturbs the clinical routine. Moreover, most position sensors are magnetic devices that are sensitive to metallic objects [6].

The multiplanar display shows three orthogonal planes simultaneously on the image display. The position of these planes is defined by the user by manipulating the coordinate axes associated with buttons on the ultrasound system console, thus allowing the study of the region of interest (ROI). Perspective reconstruction (or rendering) shows the outer surface of the structures in three-dimensional perspective and allows the assessment of surface texture and lighting direction. Different modes are available, including the radiographic or transparency mode, with variants that can enhance the hyperechogenic structures (maximum mode) or the hypoechoic ones (minimum mode) [23].
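On a reconstructed voxel grid, the multiplanar display and the maximum/minimum rendering modes map directly onto simple array operations. A toy illustration on a synthetic volume (the volume contents are random placeholders):

```python
import numpy as np

# A toy volume indexed as volume[z, y, x], filled with random intensities.
rng = np.random.default_rng(0)
volume = rng.integers(0, 255, size=(8, 16, 16))

# Multiplanar display: the three orthogonal planes through a voxel of interest.
z, y, x = 4, 8, 8
transverse = volume[z, :, :]    # axial plane
coronal    = volume[:, y, :]
sagittal   = volume[:, :, x]

# "Maximum mode" and "minimum mode" projections along the viewing axis:
max_mode = volume.max(axis=0)   # enhances hyperechogenic structures
min_mode = volume.min(axis=0)   # enhances hypoechoic structures
```

Moving the ROI amounts to changing the (z, y, x) indices; real systems additionally resample oblique planes by interpolation.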

3D medical ultrasound systems offer a number of advantages: volumes can be resliced at new planes that are normally inaccessible due to physical restrictions of the scanning process; the rendering of 3D surfaces and volumes may reveal pathologies that are hard to see in 2D imaging; it enables much more accurate quantification of volume than 2D techniques; 3D ultrasound systems are much cheaper than other 3D imaging modalities, like Computed Tomography (CT), Magnetic Resonance Imaging (MRI) or Positron Emission Tomography (PET); it is neither ionizing nor invasive, and the risk to the patient is very small; and it allows better documentation of the examination. Despite its advantages, 3D ultrasound diagnosis is not yet a common technique in routine clinical practice. There are some issues that have to be further improved, in particular: the inability to acquire large volumes; the sensitivity of the 3D sensors to metallic objects; long processing times; and demanding scanning protocols. Nevertheless, it is believed that this picture will change in the near future [6].

We continue with studies on 3-D fetal ultrasound images.

Segmentation of the fetal head from three-dimensional (3-D) ultrasound images is a critical step in the quantitative measurement of fetal craniofacial structure. Two main issues, however, complicate segmentation: fuzzy boundaries and large variations in pose and shape among different ultrasound images. Chen et al. propose a new registration-based method for automatically segmenting the fetal head from 3-D ultrasound images. It begins by detecting the eyes, based on Gabor features, to identify the pose of the fetus in the image. A reference model, constructed from a fetal phantom and containing prior knowledge of the head shape, is then aligned to the image via feature-based registration; finally, a 3-D snake deformation is used to improve the boundary fitness between the model and the image. The measurement of the craniofacial parameters is illustrated in Figure 5 [29].

Fig. 5. Illustration of measuring the craniofacial parameters; (a) inter-orbital diameter (IOD) and bilateral orbital diameter (BOD); (b) occipital frontal diameter (OFD) and bilateral parietal diameter (BPD) (Source: [29]).

Anquez et al. proposed, in 2008, a statistical variational framework for fetus and uterus segmentation in ultrasound images. Both Rayleigh and exponential distributions are used to model the pixel intensity. An energy is derived to perform an optimal partition of the 3D data into two classes corresponding to these two distributions, in a Bayesian MAP framework. Some numerical difficulties are raised by the combination of heterogeneous distributions in a variational level-set formulation; however, it is shown that assuming different distributions provides better results than the sole Rayleigh distribution [30].

In 2013, the same authors developed an original method for the segmentation of the utero-fetal unit (UFU) from 3D ultrasound volumes acquired during the first trimester of gestation. UFU segmentation is required for a number of tasks, such as precise organ delineation, 3-D modelling, quantitative measurements, and the evaluation of the clinical impact of 3-D imaging. The segmentation problem was formulated as the optimization of a partition of the image into two classes of tissues: the amniotic fluid and the fetal tissues. A Bayesian formulation of the partition problem integrates statistical models of the intensity distributions in each tissue class and regularity constraints on the contours. An energy functional is minimized using a level-set implementation of a deformable model to identify the optimal partition. This time, a combination of Rayleigh, Normal, Exponential and Gamma distribution models is used to compute the region homogeneity constraints. The implementation was in Matlab, without code optimization. The 3-D reconstructions of fetal tissues are illustrated in Figure 6; the arrows correspond to the fetus skull, torso, legs and arms [31].

Fig. 6. Illustration of the 3-D reconstructions of fetal tissues corresponding to (a) segSD 19 and (b) segGG 18 for anatomical modeling of pregnant women (Source: [30]).
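The two-distribution partition idea of [30] can be illustrated by the pixelwise MAP decision under equal priors: bright tissue echoes modelled as Rayleigh, dark amniotic fluid as exponential. The parameter values below are illustrative, and the sketch omits the level-set regularization that couples neighbouring pixels:

```python
import math

def rayleigh_pdf(x, sigma):
    """Rayleigh density, a common model for tissue echo intensities."""
    return (x / sigma ** 2) * math.exp(-x * x / (2 * sigma ** 2))

def exp_pdf(x, lam):
    """Exponential density, modelling the dark amniotic fluid."""
    return lam * math.exp(-lam * x)

def classify(x, sigma, lam):
    """MAP label under equal priors: whichever likelihood is larger wins."""
    return "tissue" if rayleigh_pdf(x, sigma) > exp_pdf(x, lam) else "fluid"

# Illustrative parameters: tissue scale sigma=2.0, fluid rate lam=2.0.
labels = [classify(x, 2.0, 2.0) for x in (0.1, 0.3, 1.5, 2.5)]
```

Low intensities fall to the fluid class and high intensities to the tissue class; the full method of [30][31] replaces this independent decision with an energy minimized over a smooth contour.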

Carneiro et al. introduced a novel principled probabilistic model that combines discriminative and generative classifiers with contextual information and sequential sampling. A system was implemented based on this model, in which the user queries consist of semantic keywords that represent anatomical structures of interest. Once queried, the system automatically displays standardized planes and produces biometric measurements of the fetal anatomies. 200 volumes were used for training and 40 for testing. This approach is capable of automatically indexing 3-D ultrasound volumes of fetal heads using semantic keywords that represent fetal anatomies. The automatic indexing involves the display of the correct standard plane for visualizing the requested anatomy and the biometric measurement according to the guidelines of the International Society of Ultrasound in Obstetrics and Gynaecology. In Figure 7, lateral ventricle detection and annotation is shown on different planes [28].

Fig. 7. Fetal ventricles detection in blue (a) and ground-truth annotations in red (b) shown on the transverse plane. The same on the sagittal (c) and coronal planes (d) (Source: [28]).

4.3 2-D and 3-D Fetal ultrasound images

We end with a combination of 2-D and 3-D images.

In addition to treating 2D and 3D ultrasound images separately, the two forms can be joined, taking what is best from each. Feng proposed a learning-based approach that combines both 3D and 2D information for automatic and fast fetal face detection from 3D ultrasound volumes. The technique uses constrained marginal space learning for 3D face mesh detection and combines it with boosting-based 2D profile detection to refine the 3D face pose. To enhance the 3D fetal face rendering, a carving algorithm is applied to remove all obstructions in front of the face based on the detection results. The experiments were performed on a 3D ultrasound data set containing 1010 fetal volumes and showed the excellent detection accuracy and fast speed of the system on a large fetus data set. Figure 8 illustrates the detected 3D mesh and the detected 2D profile [32].

Fig. 8. Illustration of the detected 3D mesh from the MSL and the detected 2D profile for refinement (Source: [32]).

4.4 4-D Fetal ultrasound images

3D and 4D ultrasound have become important tools in obstetric imaging, providing additional and more precise information about the presence and severity of facial defects. They have also allowed the establishment of a closer attachment between parents and the future child. These ultrasound modalities are complementary tools to 2DUS, rather than a replacement [33].

The 4D ultrasound imaging technique brought the possibility of performing three-dimensional ultrasound imaging in real time, allowing us to understand some of the movement morphology, such as suction, yawning or blinking. Despite all these potentials, it is important to note that 4D ultrasound is still a complement to, and not an alternative to, 2D ultrasound imaging in the field of prenatal diagnosis. 4D ultrasound is the ability to acquire and store sequentially complete volumes and to present these data in near real time, allowing the monitoring of fetal movements [23].

4.5 Segmentation of fetal face ultrasound images

The three standard measures derived from the fetal head are the biparietal diameter (BPD), the corrected biparietal diameter (DBPC) and the head perimeter (HP). The section of the fetal head to be used in the measurement should include, as echo-anatomical references, the falx (the linear midline echo), interrupted by the thalami and the cavum septi pellucidi.

The first ultrasound imaging parameter used was the BPD, which does not take into account the shape of the head. The occipito-frontal diameter (DOF) is measured along the longest axis of the head, between the frontal and occipital regions, and is used in conjunction with the biparietal diameter to calculate the corrected biparietal diameter. Since the BPD can be misleading and the DBPC calculation is more elaborate, using the head circumference parameter appears to be the ideal and more comprehensive assessment of the fetal head. It can be calculated automatically using an ellipse that combines the BPD and DOF parameters [23].
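The HC-from-ellipse computation mentioned above can be sketched by taking the BPD and DOF as the two axes of an ellipse and applying Ramanujan's perimeter approximation. The use of this particular approximation is an illustrative choice, not taken from [23]:

```python
import math

def head_circumference(bpd, ofd):
    """Approximate HC as the perimeter of an ellipse whose axes are the
    BPD and OFD, using Ramanujan's first perimeter approximation."""
    a, b = bpd / 2.0, ofd / 2.0     # semi-axes of the ellipse
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Illustrative values in centimetres; the result is in the same unit.
hc = head_circumference(bpd=9.0, ofd=11.0)
```

For a circular head (BPD = OFD) the formula reduces exactly to the circle circumference, which is a convenient sanity check.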

The fetal face is an essential source of clinical information. Its evaluation makes it possible to diagnose several fetal diseases and syndromes. For its correct assessment, the obstetrician must properly differentiate the normal face from the dysmorphic face and search for associated anomalies when a facial malformation is detected. When a defect is detected, planning the birth in a unit capable of providing early and appropriate care should be considered, in order to promote the reduction of perinatal mortality. During the postnatal period, 3DUS can be used to show the details of the anomaly to the surgical team responsible for its correction. Fetal facial features can be identified in different orthogonal planes, such as midsagittal, parasagittal, transverse and coronal. Figure 9 shows a normal fetal face profile in the midsagittal perspective [33].

Fig. 9. Midsagittal view of a normal fetal face profile in 2DUS (Source: [33]).

McGahan et al. developed a technique using a multislice display to specifically differentiate the maxilla (primary palate) from the mandible and to display the orbits in a single image, in fetuses with normal anatomy and with cleft lip/palate. Three-dimensional ultrasonographic volumes of the fetal face were acquired in 142 patients; the best interslice distance was determined and the image quality was assessed. Among the results, all cases of cleft lip with or without cleft palate were correctly identified retrospectively. They concluded that multislice 3DUS evaluation of the fetal face can be performed successfully with high image quality, and that this technique can be used to consistently and accurately differentiate the fetal primary palate and mandible [34].

5 Fetal cleft lip and/or palate

The anatomical changes of the fetus can be classified as malformations. Malformations are defects in a particular part of the body that result either from an intrinsic problem occurring during development or from the loss of integrity of a previously normal tissue. The term syndrome identifies a set of malformations associated with a single cause. About 40% of birth defects are caused by chromosomal abnormalities. Making the diagnosis of a congenital anomaly in the prenatal period, regardless of prognosis, is of paramount importance today, not least because it allows the couple to decide on the course of the pregnancy and to accept the child's situation.

The diagnosis of a congenital anomaly depends on the visualization and detection of the defect. For this, modern obstetrics uses two-dimensional real-time ultrasound imaging as the method of choice for the routine evaluation of fetal morphology, associating the three-dimensional methodology for the better characterization of some situations. The sensitivity of ultrasound imaging in the prenatal detection of congenital anomalies differs depending on the organ system and on the fetal anomaly evaluated.

The anomalies of the face are those for which the detection rate is lowest. It can be argued that the sensitivity of ultrasonography in the detection of congenital abnormalities increases with the experience of the operator, with the quality of the image obtained, with the number of tests performed, with the gestational age at which the morphological study is done and with the presence of risk factors for a given anomaly. The image quality depends on the technology of the device used, the fetal position and the maternal body mass index. Since obesity hinders the penetration of the ultrasound beam, obtaining sufficiently detailed fetal images in obese pregnant women can be very difficult. Although the vast majority of congenital anomalies have their genesis in the 1st trimester of pregnancy, often the only sonographic findings are observed in the 2nd or 3rd trimesters. Hence, the sensitivity in the detection of congenital malformations is consistently lower in ultrasound imaging exams performed in the 1st trimester (12-14 weeks) than in the 2nd trimester (20-22 weeks), and it is agreed that the routine morphological study of the fetus should be accomplished in this last stage of pregnancy [23].

5.1 Face

The prenatal diagnosis of anatomical changes of the face is important for several reasons. The changes may be associated with anomalies of other organs and systems and may indicate chromosomal syndromes or complexes. The face has a unique impact: psychological, emotional and social. Parents have time to seek counselling with pediatricians and plastic surgeons and to prepare psychologically for the birth and the subsequent treatment steps. The defects of the face diagnosed by ultrasound imaging are those involving the jaw, the mouth, the nose, the orbits and eyes, and the forehead. For the detection of facial defects, three classic planes can be used during the ultrasound examination: the coronal and axial planes, useful for the evaluation of the orbits and inter-orbital distances, the lens, the lip and palate, and the nose and nostrils; and the sagittal (or profile) plane, essential for the evaluation of the curve of the forehead, chin and nose. 3D/4D ultrasound imaging, because it allows working the images in various ways and planes, has advantages both in the identification and characterization of the extent of this type of defect by the sonographer and in the parents' understanding [23].

The assessment of facial anomalies can be grouped into clefts, facial profile,

nose, mandible, ear, orbit and bonding [33].

Cleft lip and palate.

Cleft lip and palate are the most common congenital craniofacial anomalies treated by plastic surgeons. Successful treatment of these children requires technical skill, in-depth knowledge of the abnormal anatomy and an appreciation of 3D facial aesthetics [35].

Epidemiology and etiopathogenesis.

Among the cleft lip and palate population, the most common diagnosis is cleft lip and palate (CLP) at 46%, followed by isolated cleft palate (CP) at 33%, then isolated cleft lip (CL) at 21%. Males are predominant in the cleft lip and palate population, whereas isolated cleft palate occurs more commonly in females. Both environmental teratogens and genetic factors are implicated in the genesis of cleft lip and palate. Intrauterine exposure to the anticonvulsant phenytoin is associated with a 10-fold increase in the incidence of cleft lip. Maternal smoking during pregnancy doubles the incidence of cleft lip [35].

Classification.

Ideally, the newborn infant with a cleft is evaluated by the cleft team in the first weeks of life. The increasing number of clefts detected by prenatal imaging allows early preparation of the family and introduction to the treatment plan. Patients with cleft lip and/or palate are not a homogeneous group. The cleft lip deformity is typically divided into unilateral or bilateral, and then subdivided into complete, incomplete, or microform. If a cleft palate is present, it is surgically classified as unilateral, bilateral, or submucous [35].

Isolated Cleft Palate.

The infant with isolated cleft palate is examined carefully to ascertain if there are manifestations of the Pierre Robin sequence (micrognathia, glossoptosis, and airway obstruction). The etiopathogenesis of the cleft palate in the Pierre Robin sequence is thought to be obstruction of the palatal shelves as they swing from a vertical to a horizontal orientation during palate fusion. The micrognathia and associated glossoptosis cause this obstruction, resulting in the characteristic wide "horseshoe" cleft palate associated with this sequence. If the Pierre Robin sequence is present, appropriate measures are instituted, the mainstay of which is prone positioning. In severe cases, treatment may include around-the-clock prone positioning, nasopharyngeal airway protection, gavage feedings, and apnea monitoring. Very few of these patients will require temporary endotracheal intubation or tongue-lip adhesion. In Pierre Robin patients, palatoplasty may be delayed for several months, compared with other cleft palate closures, to ensure adequacy of the airway [35].

Submucous Cleft Palate.

The submucous cleft palate is traditionally defined by a triad of deformities: a bifid uvula, a notched posterior hard palate, and muscular diastasis of the velum. Submucous clefts vary considerably, however, and muscular diastasis can occur in the absence of a bifid uvula. The majority of patients with submucous cleft palate are asymptomatic, although approximately 15% will develop velopharyngeal insufficiency (VPI). VPI correlates with short palatal length, limited mobility, and easy fatigability of the palate. Because the majority of patients with submucous cleft palate remain asymptomatic, a non-operative approach is recommended until speech can be adequately evaluated [36].

The images of Figure 10 illustrate various combinations of clefting of the lip,

alveolar ridge, and hard and soft palates.

Fig. 10. View of the hard and soft palates looking from the chin toward the nose (Source: [36]).

5.2 Fetal palate examination and diagnosis

Maarse et al. published a systematic review on the diagnostic accuracy of second-trimester transabdominal ultrasound in detecting cleft lip and palate prenatally, comparing two-dimensional (2D) with three-dimensional (3D) ultrasound techniques. Among the 451 citations identified, there was diversity in the gestational age at which the ultrasound examination was performed, and considerable variety in the diagnostic accuracy of 2D ultrasound in low-risk women, with prenatal detection rates ranging from 9% to 100% for cleft lip with or without cleft palate, 0% to 22% for cleft palate only, and 0% to 73% for all types of cleft. 3D ultrasound in high-risk women resulted in a detection rate of 100% for cleft lip, 86% to 90% for cleft lip with palate, and 0% to 89% for cleft palate only. From this survey it is possible to conclude that 2D ultrasound screening for cleft lip and palate in a low-risk population has a relatively low detection rate; 3D ultrasound can achieve a reliable diagnosis, but not of cleft palate only [37].

Faure et al. published, in 2007, a paper whose aim was to describe a novel three-dimensional (3D) ultrasound rendering technique to examine the normal fetal posterior palate and to assess its correspondence with the real fetal anatomy. The methods included a prospective longitudinal study, conducted from January to October 2005, including 100 fetuses in a low-risk population. The fetal ultrasound examinations were performed at 17, 22, 27 and 32 weeks' gestation to determine the normal 3D ultrasound view of the fetal palate. The ultrasound scans were performed using the strict anterior axial plane of the starting reconstruction volume and the underside 3D view of the fetal palate. The 3D view of the fetal palate was compared with the normal anatomical view obtained by surgical fetopathological examination of fetuses at the same gestational ages. In all cases, a 3D ultrasound view of the fetal maxilla and secondary palate was obtained at each period of gestation and corresponded well to the fetal anatomical specimens. It was concluded that this technique of anterior axial 3D view reconstruction of the fetal palate, seen in an underside view, can provide unique diagnostic information on the integrity of the secondary palate. This technique may become the reference view of the fetal palate, and should be of value in diagnosing isolated secondary cleft palate or palatal involvement when cleft lip and alveolus are diagnosed [38]. In a subsequent study, Faure et al. described a

three-dimensional (3D) ultrasound technique for assessing the fetal soft palate. Methodologically, it was also a prospective study, conducted from April to December 2006, including 87 fetuses in a low-risk population. Fetal ultrasound scans were performed between 21 and 25 weeks of gestation to determine the normal 3D ultrasound view of the fetal soft tissues of the palate. The sonographers used a 30° inclined axial 3D view of the fetal palate. Ultrasound images obtained in this view were compared with fetopathological specimens of the same gestational age by two observers, both pediatric surgeons. Each observer indicated whether they thought that the uvula or the velum could be detected, and the differences in responses between the observers were assessed. The frequencies of detection of the uvula and the velum by each observer varied between 80% and 90%. It was possible to conclude that a 30° inclined axial 3D ultrasound view seems to be effective in assessing the integrity of the fetal soft palate [39].
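Geometrically, a 30° inclined axial view amounts to rotating the sampling grid of the axial plane about a transverse axis before resampling the volume. A minimal sketch of that coordinate transform follows; the axis choice and the point layout are illustrative, not taken from [39]:

```python
import math

def incline_plane(points, angle_deg):
    """Rotate the (y, z) coordinates of axial-plane sample points about
    the x-axis, producing the sampling grid of an inclined axial view."""
    t = math.radians(angle_deg)
    out = []
    for x, y, z in points:
        out.append((x,
                    y * math.cos(t) - z * math.sin(t),
                    y * math.sin(t) + z * math.cos(t)))
    return out

# Tilting a flat axial grid (z = 0) by 30 degrees.
axial = [(x, y, 0.0) for x in range(3) for y in range(3)]
inclined = incline_plane(axial, 30.0)
```

The rotated coordinates would then be fed to the volume interpolator to extract the inclined slice.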

Platt et al. noted that both cleft lip and cleft palate remain a diagnostic challenge for the sonographer because of the variable size of the defects as well as their location. They demonstrated an improved method, called the "reverse face" view, which appears to assist in the diagnosis of clefts involving the palate. To perform this technique, the fetal face is initially examined with the fetus in the supine position. Then, using 3-dimensional sonography, a static volume is acquired. Following the acquisition of the volume, it is rotated 90° so that the cut plane is directed in a plane from the chin to the nose. The volume cut plane is then scrolled from the chin to the nose to examine, in sequential order, the lower lip, mandible and alveolar ridge; the tongue; the upper lip, maxilla and alveolar ridge; and the hard and soft palates. With this approach it was possible to identify the full length and width of the structures of the mouth and palates, allowing the examiner to identify normal anatomy as well as clefts of the hard and soft palates. In this way the fetal hard and soft palates could be assessed [36]. From Figure 11 it is possible to notice that the surface smooth and surface rendering filters provide the most detailed images of the tissues at this level in the fetal head.

Fig. 11. Rendered image at the level of the cleft of the alveolar ridge near the maxillary bone using different filters (Source: [36]).
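The volume manipulation described for this view (rotating the acquired static volume 90° and then scrolling the cut plane from the chin towards the nose) can be sketched with simple array operations. This is only an illustrative sketch: the numpy calls stand in for the scanner's rendering software, and the volume used here is synthetic.

```python
import numpy as np

# Hypothetical 3D ultrasound volume; axes assumed to be
# (axial, coronal, sagittal). A real volume would come from the scanner.
volume = np.random.rand(64, 64, 64)

# Step 1: rotate the acquired static volume 90 degrees so that the cut
# plane runs from the chin towards the nose (here: rotation about one axis).
flipped = np.rot90(volume, k=1, axes=(0, 1))

# Step 2: scroll the cut plane slice by slice, chin to nose, examining in
# order the lower lip/mandible, tongue, upper lip/maxilla, and palates.
for i in range(flipped.shape[0]):
    cut_plane = flipped[i]  # one 2D cut plane of the rotated volume
    # ...render / inspect cut_plane here...
```

The rotation only re-indexes the voxels, so the anatomy is unchanged; what changes is the direction along which the examiner scrolls.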

Rotten et al. presented a paper whose objectives were to describe the sonographic appearance of cleft lip with or without cleft palate (CLP) using two-dimensional and three-dimensional (3D) ultrasound imaging and, in addition, to evaluate the accuracy of ultrasound in delineating with precision the bony extent of facial clefts, i.e. in differentiating clefts limited to the lips from those extending to the alveolus/premaxilla or the secondary palate. The study was based on the examination of fetuses diagnosed with an isolated CLP. Clefts were characterized by their precise anatomical location and extent: the defect could include a cleft lip (CL), a cleft alveolus (CA), or a cleft of the secondary palate (CSP). The sonographic appearance of CL, CA and CSP was depicted, and strict concordance of the sonographic report with the anatomical defect was present in 87.5% of cases. It was possible to conclude that systematic screening with sonography to detect CLP prenatally requires the imaging of at least the mid-sagittal and the anterior coronal 'nose-mouth' views. Once the presence of a facial cleft is suspected, the three reference orthogonal planes are imaged in order to characterize the anatomical defect, and for each plane the serial scans are thoroughly examined. This protocol allowed precise delineation of the defect [40].

Martinez-Ten et al. determined whether systematic examination of the primary and secondary palates using three-dimensional (3D) ultrasound aids the identification of orofacial clefts in the first trimester. 3D datasets were acquired prospectively from women undergoing first-trimester ultrasound screening for aneuploidy. The multiplanar mode display was used for offline analysis of the primary palate in the coronal plane at the base of the retronasal triangle, and of the secondary palate by virtual navigation in the axial plane. Using 3D offline analysis, the primary palate was classified as intact in 95%, cleft in 4% and indeterminate in 1% of cases; the secondary palate was classified as intact in 90%, cleft in 3% and indeterminate in 7%. Clefts of the secondary palate were confirmed in all six suspected cases and missed in one, which was diagnosed at 16 weeks. In this study, all cases of clefting of the primary palate and 86% of cases involving the secondary palate were visualized using 3D ultrasound. Virtual navigation of the fetal palate using the multiplanar mode display therefore seems to be useful in the diagnosis of clefting in the first trimester [41].

References

1. Rocha, R., Campilho, A., Silva, J., Azevedo, E., & Santos, R. (2010). Segmentation of the carotid intima-media region in B-mode ultrasound images. Image and Vision Computing.

2. Noble, J. A., & Boukerroui, D. (2006). Ultrasound image segmentation: a survey. Medical Imaging, IEEE Transactions on, 25(8), 987-1010.

3. Jardim, S. M., & Figueiredo, M. A. (2005). Segmentation of fetal ultrasound images. Ultrasound in Medicine & Biology, 31(2), 243-250.

4. Novelline, R. A., & Squire, L. F. (2004). Squire's fundamentals of radiology. Belknap Press.

5. Bamber, J. C. (2002). Image formation and image processing in ultrasound. Joint Department of Physics, Institute of Cancer Research and The Royal Marsden NHS Trust, Downs Road, Sutton, Surrey, SM2 5PT, UK.

6. Rocha, R. A. H. F. (2007). Image segmentation and reconstruction of 3D surfaces from carotid ultrasound images (Doctoral dissertation, Universidade do Porto).

7. Cobbold, R. S. (2007). Foundations of biomedical ultrasound. Oxford University Press on Demand.

8. Joseph, S., Balakrishnan, K., Nair, M. B., & Varghese, R. R. (2013). Ultrasound Image Despeckling using Local Binary Pattern Weighted Linear Filtering. International Journal of Information Technology and Computer Science (IJITCS), 5(6), 1.

9. Shapiro, L., & Stockman, G. C. (2001). Computer Vision. 2001.

10. Sarti, A., Corsi, C., Mazzini, E., & Lamberti, C. (2005). Maximum likelihood segmentation of ultrasound images with Rayleigh distribution. Ultrasonics, Ferroelectrics and Frequency Control, IEEE Transactions on, 52(6), 947-960.

11. Abolmaesumi, P. and Sirouspour, M. R. (2004). "An interacting multiple model probabilistic data association filter for cavity boundary extraction from ultrasound images," IEEE Trans. Med. Imag., vol. 23, no. 6, pp. 772-784.

12. Bosch, J. G., Mitchell, S. C., Lelieveldt, B. P. F., Nijland, F., Kamp, O., Sonka, M. and Reiber, J. H. C. (2002). "Automatic segmentation of echocardiographic sequences by active appearance motion models," IEEE Trans. Med. Imag., vol. 21, no. 11, pp. 1374-1383.

13. Mignotte, M., Meunier, J. and Tardif, J.-C. (2001). "Endocardial boundary estimation and tracking in echocardiographic images using deformable template and Markov random fields," Pattern Anal. Appl., vol. 4, no. 4, pp. 256-271.

14. Mulet-Parada, M. and Noble, J. A. (2000). "2D+T acoustic boundary detection in echocardiography," Med. Image Anal., vol. 4, no. 1, pp. 21-30.

15. Xie, J., Jiang, Y. and Tsui, H.-T. (2005). "Segmentation of kidney from ultrasound images based on texture and shape priors," IEEE Trans. Med. Imag., vol. 24, no. 1, pp. 45.

16. Pham, D. L., Xu, C., & Prince, J. L. (2000). Current methods in medical image segmentation. Annual Review of Biomedical Engineering, 2(1), 315-337.

17. Monteiro, F. C. (2008). Region-based spatial and temporal image segmentation (Doctoral dissertation, Universidade do Porto).

18. Forsyth, D. A., & Ponce, J. (2011). Computer vision: a modern approach.

19. Silva, P. F., Ma, Z., & Tavares, J. M. R. (2011). Image Segmentation Algorithms on Female Pelvic Ultrasound Images. Computational Vision and Medical Image Processing: VipIMAGE 2011.

20. Ma, Z., Tavares, J. M. R., Jorge, R. N., & Mascarenhas, T. (2010). A review of algorithms for medical image segmentation and their applications to the female pelvic cavity. Computer Methods in Biomechanics and Biomedical Engineering, 13(2), 235-246.

21. Saraf, Y. (2006). Algorithms for Image Segmentation (Doctoral dissertation, Birla Institute of Technology and Science).

22. Dass, R., & Priyanka, S. D. (2012). Image segmentation techniques. IJCET, 3(1).

23. Graça, L. M. (2005). Medicina Materno-Fetal (4ª edição). LIDEL Edições Técnicas. Capítulo 43.

24. Subramanian, K. R., Lawrence, D. M., & Mostafavi, M. T. (1997, April). Interactive segmentation and analysis of fetal ultrasound images. In 8th EG Workshop on ViSC, Boulogne-sur-Mer.

25. Yu, J., Wang, Y., & Chen, P. (2008). Fetal ultrasound image segmentation system and its use in fetal weight estimation. Medical & biological engineering & computing, 46(12), 1227-1237.

26. Shrimali, V., Anand, R. S., & Kumar, V. (2009, September). Improved segmentation of ultrasound images for fetal biometry, using morphological operators. In Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE (pp. 459-462). IEEE.

27. Gupta, L., Sisodia, R. S., Pallavi, V., Firtion, C., & Ramachandran, G. (2011, August). Segmentation of 2D fetal ultrasound images by exploiting context information using conditional random fields. In Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE (pp. 7219-7222). IEEE.

28. Carneiro, G., Amat, F., Georgescu, B., Good, S., & Comaniciu, D. (2008, June). Semantic-based indexing of fetal anatomies from 3-D ultrasound data using global/semi-local context and sequential sampling. In Proc. IEEE Conf. Computer Vision and Pattern Recognition.

29. Chen, H. C., Tsai, P. Y., Huang, H. H., Shih, H. H., Wang, Y. Y., Chang, C. H., & Sun, Y. N. (2012). Registration-based segmentation of three-dimensional ultrasound images for quantitative measurement of fetal craniofacial structure. Ultrasound in Medicine & Biology, 38(5), 811-823.

30. Anquez, J., Angelini, E. D., & Bloch, I. (2008, May). Segmentation of fetal 3D ultrasound based on statistical prior and deformable model. In Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on (pp. 17-20). IEEE.

31. Anquez, J., Angelini, E., Grange, G., & Bloch, I. (2013). Automatic segmentation of antenatal 3D ultrasound images.

32. Feng, S., Zhou, S. K., Good, S., & Comaniciu, D. (2009, June). Automatic fetal face detection from ultrasound volumes via learning 3D and 2D information. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on (pp. 2488-2495). IEEE.

33. Andresen, C., Matias, A., & Merz, E. (2012). Fetal Face: The Whole Picture. Ultraschall in der Medizin-European Journal of Ultrasound, 33(05), 431-440.

34. McGahan, M. C., Ramos, G. A., Landry, C., Wolfson, T., Sowell, B. B., D'Agostini, D., ... & Pretorius, D. H. (2008). Multislice display of the fetal face using 3-dimensional ultrasonography. Journal of Ultrasound in Medicine, 27(11), 1573-1581.

35. Thorne, C. (2007). Grabb and Smith's plastic surgery (p. 297). Wolters Kluwer Health/Lippincott Williams & Wilkins.

36. Platt, L. D., DeVore, G. R., & Pretorius, D. H. (2006). Improving Cleft Palate/Cleft Lip Antenatal Diagnosis by 3-Dimensional Sonography: The "Flipped Face" View. Journal of Ultrasound in Medicine, 25(11), 1423-1430.

37. Maarse, W., Bergé, S. J., Pistorius, L., Van Barneveld, T., Kon, M., Breugem, C., & Mink van der Molen, A. B. (2010). Diagnostic accuracy of transabdominal ultrasound in detecting prenatal cleft lip and palate: a systematic review. Ultrasound in Obstetrics & Gynecology, 35(4), 495-502.

38. Faure, J. M., Captier, G., Bäumler, M., & Boulot, P. (2007). Sonographic assessment of normal fetal palate using three-dimensional imaging: a new technique. Ultrasound in Obstetrics & Gynecology, 29(2), 159-165.

39. Faure, J. M., Bäumler, M., Boulot, P., Bigorre, M., & Captier, G. (2008). Prenatal assessment of the normal fetal soft palate by three-dimensional ultrasound examination: is there an objective technique?. Ultrasound in Obstetrics & Gynecology, 31(6), 652-656.

40. Rotten, D., & Levaillant, J. M. (2004). 2D and 3D sonographic assessment of the fetal face. 2. Analysis of cleft lip, alveolus and palate. Ultrasound in Obstetrics & Gynecology.

41. Martinez-Ten, P., Adiego, B., Illescas, T., Bermejo, C., Wong, A. E., & Sepulveda, W. (2012). First-trimester diagnosis of cleft lip and palate using three-dimensional ultrasound. Ultrasound in Obstetrics & Gynecology, 40(1), 40-46.

5. Research Groups

The research groups related to image processing, computer vision and medi-

cal image analysis are shown in Table 1, grouped by institution and country.

Table 1. List of research groups

ACRONYM | TITLE | INSTITUTION | COUNTRY
Bioimaging | Biomedical Imaging and Vision Computing Group | Biomedical Engineering Institute (INEB) | PT
VCMI | Visual Computing and Machine Intelligence Group | Instituto de Engenharia de Sistemas e Computadores do Porto (INESC) | PT
VIPG | Visual Information Processing | Universidad de Granada | ES
CSPC | Communications, Signal Processing and Control | University of Southampton | UK
CVIP | Computer Vision and Image Processing | University of Dundee | UK
IPI | Image Processing and Interpretation | Ghent University | BE
HCI | Heidelberg Collaboratory for Image Processing | Universität Heidelberg | DE
FIPA | Facial Image Processing and Analysis Group | Karlsruhe Institute of Technology | DE
MIA | Mathematical Image Analysis Group | Saarland University | DE
IMAGERS | UCLA Image Processing Research Group | University of California | USA
IPAG | Image Processing and Analysis Group | Yale School of Medicine | USA

6. Conferences

The international conferences related to image processing, computer vision and medical image analysis are presented in Table 2, in alphabetical order. The six highlighted conferences are considered the most important.

Table 2. List of international conferences

ACRONYM TITLE

ACPR Asian Conference on Pattern Recognition

CVPR IEEE Conference on Computer Vision and Pattern Recognition

FG IEEE International Conference on Face and Gesture Recognition

GCPR German Conference on Pattern Recognition

IbPRIA Conference on Pattern Recognition and Image Analysis

ICCV International Conference on Computer Vision

ICIAP International Conference on Image Analysis and Processing

ICIE IAENG International Conference on Imaging Engineering

ICIIP IEEE International Conference on Image Information Processing

ICIP IEEE International Conference on Image Processing

ICPRAM International Conference on Pattern Recognition Applications and Methods

ICSIE International Conference of Signal and Image Engineering

ICSIPA IEEE International Conference on Signal and Image Processing Applications

ICSPIE International Conference on Signal Processing and Imaging Engineering

ICSIP International Conference on Signal and Image Processing

ICVS International Conference on Computer Vision Systems

IVSP International Conference on Image, Video and Signal Processing

MCPR Mexican Conference on Pattern Recognition

MIRAGE International Conference on Computer Vision / Computer Graphics

NSSMIC IEEE Nuclear Science Symposium and Medical Imaging Conference

PREMI International Conference on Pattern Recognition and Machine Intelligence

PRIA International Conference on Pattern Recognition and Image Analysis

SIGGRAPH ACM SIGGRAPH Conference

SSVM Conference on Scale Space and Variational Methods in Computer Vision

VipImage Thematic Conference on Computational Vision and Medical Image Processing

VISAPP International Conference on Computer Vision Theory and Applications

7. Journals

International journals related to image processing, computer vision and medical image analysis are shown in Table 3, in alphabetical order. The eight highlighted journals are considered the most important.

Table 3. List of international journals

TITLE

3D Research

American Journal of Obstetrics & Gynecology

European Journal of Obstetrics & Gynecology and Reproductive Biology

Computer Vision and Image Understanding

IEEE Transactions on Image Processing

IET Image Processing

IEEE Transactions on Pattern Analysis and Machine Intelligence

Image and Vision Computing

International Journal of Computer Vision

International Journal of Gynecology & Obstetrics

International Journal of Imaging

Journal of Engineering in Medicine

Journal of Mathematical Imaging and Vision

Journal of Medical Devices

Journal of Medical Ultrasound

Journal of Visualization

Medical Image Analysis

Pattern Analysis and Applications

Pattern Recognition

Pattern Recognition and Image Analysis

Taiwanese Journal of Obstetrics & Gynecology

The Visual Computer

Ultrasound Clinics

Ultrasound in Medicine & Biology

The International Journal for Numerical Methods in Biomedical Engineering

8. Reviewers

Some of the Portuguese reviewers are Professor Pedro Quelhas, from FEUP, and Professor Miguel Coimbra, from FCUP. From the University of Granada there are Professor Nicolás Capilla and Professor Rafael Soriano. From the Yale School of Medicine there are Professors James Duncan, Xenios Papademetris and Hemant Tagare.

9. Research Hypothesis

The research hypothesis is that processing 2D, 3D and 4D ultrasound images, extracting their features and classifying them enables the prenatal diagnosis of abnormalities.
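As an illustration of this hypothesis, the sketch below runs a toy version of the pipeline (image, then features, then class label). The feature choices and the nearest-centroid classifier are assumptions for demonstration only, not the methods proposed in this report.

```python
import numpy as np

def extract_features(image):
    """Toy features: mean intensity, contrast (std) and edge energy."""
    gy, gx = np.gradient(image.astype(float))
    return np.array([image.mean(), image.std(), np.mean(gx**2 + gy**2)])

def nearest_centroid(features, centroids):
    """Assign the feature vector to the closest class centroid."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(dists))

# Synthetic "normal" vs "abnormal" class centroids and a synthetic image;
# in a real system both would be learned from annotated ultrasound data.
centroids = np.array([[0.5, 0.1, 0.0], [0.9, 0.4, 2.0]])
image = np.full((32, 32), 0.5)
label = nearest_centroid(extract_features(image), centroids)
```

The point of the sketch is only the shape of the pipeline: whatever segmentation and feature extraction are ultimately chosen, the classifier sees a fixed-length feature vector per image.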

10. Methodology

The first step of the methodology is the segmentation of ultrasound images of the fetal face using different approaches, such as wavelet-based methods, fuzzy clustering or level sets. The idea is to try each method and evaluate it against a consistent dataset, in order to identify the one that yields the best results.
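As one illustration, a minimal fuzzy c-means segmentation of pixel intensities can be sketched directly in numpy. This is a toy version of one of the candidate approaches named above, run on synthetic data rather than on real ultrasound images.

```python
import numpy as np

def fcm(values, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means on a 1D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
    for _ in range(n_iter):
        w = u ** m                              # fuzzified memberships
        centers = (w @ values) / w.sum(axis=1)  # weighted cluster means
        d = np.abs(values[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))          # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Synthetic bimodal "image": dark background and a brighter region.
pixels = np.concatenate([np.full(500, 0.2), np.full(500, 0.8)])
centers, memberships = fcm(pixels)
labels = memberships.argmax(axis=0)             # hard segmentation
```

On real B-mode images the same update rules apply per pixel, usually after speckle filtering and often with spatial regularization added to the membership update.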

The next step consists of 3D reconstruction from the medical images. Since 3D and 4D imaging have become established in prenatal diagnosis, it is expected that 3D reconstruction of the face will be used not only as a complement to 2D imaging but also as a way of revealing risk factors at an early stage of pregnancy. One possibility is to use ITK or VTK for this reconstruction.

11. Work Plan

The work plan for the PhD is presented in Table 4. This plan is organized in five steps. The first is the scoping step, which contextualizes and defines the problem. The second consists of the literature review and a summary of the relevant research done so far. The third is the characterization of the methodologies to be developed. The penultimate is the implementation of the IT platform, and the final step consists of the writing of the thesis.

Table 4. Development schedule of the doctoral work

12. Conclusions

This report seeks to identify, characterize and systematize the tasks performed so far and the steps planned to accomplish this PhD work. The problem was identified and contextualized; given its scope, it was necessary to combine knowledge from different research areas.

The state of the art has been written with the purpose of being published in a scientific journal, and efforts will be made accordingly. The study of the relevant research groups, conferences, journals and reviewers was performed. The research hypothesis was raised and could be further refined. The methodologies proposed to address the problem already take into account the techniques used in this field. Finally, the work schedule has been submitted and is being met so far.

Much work lies ahead in the cooperation between the faculties of engineering and medicine. A first prototype of the pre-processing of fetal face images was carried out to test some of the available technologies, as well as the images provided by our partners.

The original contributions of this doctoral thesis will be, firstly, the survey of the scientific work carried out on the processing and analysis of facial obstetric ultrasound images; then, the creation of a dataset from scratch, with on the order of a hundred high-quality images; and, finally, the selection of image segmentation methods that have not yet been tested in this context and that yield the best results.

Briefly, the intention is to create an innovative project for the analysis and reconstruction of medical images and to develop a tool to segment fetal faces.