Let’s Talk Informatics
Translational Research in Medical Imaging Informatics in Nova Scotia with a focus on Machine Learning
Drs. S. Beyea, S. Clarke and A. Guida
November 14, 2019
Bethune Ballroom, Halifax, Nova Scotia
Please be advised that we are currently in a controlled vendor environment for the
One Person One Record project.
Please refrain from questions or discussion related to the
One Person One Record project.
Informatics… utilizes health information and health care technology to enable patients to receive the best treatment and best outcome possible.
Clinical Informatics… is the application of informatics and information technology to deliver health care. (AMIA, January 13, 2017. Retrieved from https://www.amia.org/applications-infomatics/clinical-informatics)
Objectives At the conclusion of this activity, participants will be able to…
▫ Identify what knowledge and skills health care providers will need to use information now and in the future.
▫ Prepare health care providers by introducing them to concepts and local experiences in Informatics.
▫ Acquire knowledge to remain current with new trends, terminology, studies, data and breaking news.
▫ Connect with a network of colleagues and leaders, establishing relationships that provide assistance and advice on business issues as well as best-practice and knowledge sharing.
• Session specific objective #1: Have a representative overview of medical imaging informatics research at NSHA
• Session specific objective #2: Understand how machine learning can potentially impact clinical care
• Session specific objective #3: Gain an intuition on how Convolutional Neural Networks work
Conflict of Interest Declaration
• The presenters have received investigator sponsored research funding from GE Healthcare, and in-kind contributions to research from Synaptive Medical
The Biomedical Translational
Imaging Centre (BIOTIC)
Prof. Steven Beyea, Ph.D.
An overview of imaging & informatics research
What is BIOTIC?
BIOmedical Translational Imaging Centre
✔ Research facility of the hospitals with a mandate to translate medical innovations through clinical and industry partnership
✔ Integrated leadership in both science and innovation/business development
✔ Straddles the academic, clinical and commercial worlds
✔ An open-access centre in which equipment and expertise can be utilized by researchers across the hospitals and universities
Purposefully crossing the siloes
Embedded in the region’s largest
tertiary care hospitals
Clinical Imaging Research Infrastructure
* Clinical imaging facilities provide access to non-invasive technologies for brain and body imaging in humans
* Access to pediatric and adult patient populations
* Connections with clinicians to explore clinical research opportunities
* 3T MRI (MR750), 306-channel MEG, Point of Care MRI (coming soon in 2019!)
Pre-Clinical Research Infrastructure
* Pre-clinical imaging facilities provide access to non-invasive technologies for imaging in rodent models
* A fully-equipped biological level 2 lab and an onsite animal care facility with a special quarantine area allow for longitudinal studies
* Simultaneous MRI/PET, and PET/SPECT/CT
Improving Treatment Planning
Studying Health System “Value” of New Tech
* MRI will be installed Fall 2019 - $2.1M in external funding
* Primarily dedicated to patients with new-onset neurological symptoms arriving in the ED
* Can a rapid screening MR improve diagnostic confidence, in particular for “negative” scans?
* 4-year study of “value” to the health system
Diagnostic Biomarkers
[Figure: fatty acid maps at Day 0, Day 4 and Day 11 — Fat Fraction (TR corrected) and Surrogate Unsaturation Index (UIs)]
Quantitative Mapping of Fatty Acid Composition using Free-Breathing Spectroscopic Imaging with Blind Compressed Sensing (accepted to NMR in Biomedicine)
AI/ML: Computer Assisted Diagnosis
Clinical Motivation for AI: Prostate Cancer
Sharon Clarke, MD PhD
• 2nd most frequently diagnosed cancer and the 6th leading cause of cancer death among men worldwide
• ~24,000 new cases in 2015; 25% of new cancers in men
• Diagnosis - random biopsy of the prostate gland based on clinical suspicion and/or rising PSA
• Cancers can be missed, resulting in delayed diagnosis, and in some cases, a missed chance for cure; conversely, overtreatment can occur
Scope of the Problem
What is Multi-parametric MRI?
MRI can obtain detailed images of the prostate with several different contrasts
Why Multi-parametric MRI?
• MRI can non-invasively identify suspicious lesions that can subsequently be targeted for biopsy
T2 ADC DCE
Why Multi-parametric MRI?
• Cancer Staging and Localization
Case 1 Case 2
MRI 2017 MRI 2019
Why Multi-parametric MRI?
Active Surveillance
• PROMIS trial - Lancet. 2017 Feb 25;389(10071):815-822.
▫ “.... data provide a strong argument for recommending MP-MRI to all men with an elevated serum PSA before biopsy.”
• PRECISION trial - N Engl J Med. 2018 Mar 18.
▫ “MRI, with or without targeted biopsy, led to fewer men undergoing biopsy, more clinically significant cancers being identified, less over-detection of clinically insignificant cancer… than did TRUS-guided biopsy.”
Paradigm Shift
• PROMIS trial ▫ “there was only moderate agreement of MP-MRI scores between two independent radiologists… highlights the necessity for a robust training programme for radiologists.”
• PRECISION trial ▫ “…moderate agreement between the site and the central radiologist… highlights the need for further research regarding improvements to the standardization, reproducibility, and reporting of MP-MRIs.”
Room for improvement
• Evaluation is time-consuming for radiologists, and a natural candidate for machine learning approaches
Detection of Prostate Cancer
T2
ADC
DCE
Machine Learning PCa Detection
• Develop a protocol and validate CAD system for 1.5T MRI
• Scans done elsewhere could be used to further train and improve the
model
• Improve Radiologists’ diagnostic accuracy, efficiency and inter-
observer variability; evaluate changes over time in tumour
• Application to measuring tumour volume over time in other organs
Value and Clinical Impact
Convolutional Neural Networks (CNN) to identify and localize prostate cancer
Alex Guida, Ph.D.
Feature engineering
input → feature engineering → features → classifier → prediction
A person applies domain knowledge: manual extraction, selection and mathematical modelling of features such as lines, curves, edges.
Feature engineering
With end-to-end learning (representation learning), the pipeline collapses to: input → prediction — no hand-engineered features in between.
CNN classification
Input: image
Output: object class
The “Anatomy” of an image
A black and white image is stored as an image matrix; each entry is a pixel:
0 1 1 1 0
0 1 0 0 0
0 1 1 1 0
0 1 0 0 0
0 1 1 1 0
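In code, such an image really is just a matrix of numbers. A minimal NumPy sketch (the values reproduce the slide’s binary image):

```python
import numpy as np

# The black-and-white image from the slide as a matrix of pixels
image = np.array([[0, 1, 1, 1, 0],
                  [0, 1, 0, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 1, 0, 0, 0],
                  [0, 1, 1, 1, 0]])

print(image.shape)   # (5, 5): 5 rows x 5 columns of pixels
print(image[0, 2])   # one pixel, addressed by (row, column) -> 1
```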
Convolutions
Image matrix:
1 1 1 0 0
0 1 1 1 0
0 0 1 1 1
0 0 1 1 0
0 1 1 0 0
Filter (also called kernel or receptive field):
1 0 1
0 1 0
1 0 1
Convolution summary (element-wise image and filter matrix multiplication, summed over each window):
1*1 + 1*0 + 1*1 + 0*0 + 1*1 + 1*0 + 0*1 + 0*0 + 1*1 = 4
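The sliding multiply-and-sum above can be sketched in plain NumPy. This is an illustration of the mechanics, not the production code; note that CNN “convolutions” are technically cross-correlations, which makes no difference here because this filter is symmetric:

```python
import numpy as np

# The 5x5 image matrix and 3x3 filter from the slide
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

def convolve2d(img, k):
    """Valid (no padding) 2D convolution: slide the filter over the
    image and sum the element-wise products at each position."""
    kh, kw = k.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=img.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)
# top-left entry matches the worked example: 1*1 + 1*0 + ... = 4
```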
Convolutions
Horizontal line filter:
-1 -1 -1
 2  2  2
-1 -1 -1
Laplacian operator:
 0 -1  0
-1  4 -1
 0 -1  0
[Images: original vs. convolved]
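To see why this is called a horizontal line filter, one can apply it to a toy image containing a single bright horizontal row (a made-up example, not from the deck):

```python
import numpy as np

# A tiny image with one horizontal line of bright pixels in the middle
image = np.zeros((5, 5))
image[2, :] = 1.0

# The horizontal line filter from the slide
line_filter = np.array([[-1., -1., -1.],
                        [ 2.,  2.,  2.],
                        [-1., -1., -1.]])

def convolve2d(img, k):
    """Valid 2D convolution (no padding); the filter is symmetric
    under 180-degree rotation, so flipping it changes nothing."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

response = convolve2d(image, line_filter)
print(response)
# strongest (positive) response where the filter sits on the line,
# negative response just above and below it
```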
Convolutions
? ? ?
? ? ?
? ? ?
What filters do we need for our model?
Rather than hand-crafting them via feature engineering (domain knowledge; manual extraction, selection, mathematical modelling), we let the model learn the filters that optimize the prediction.
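The idea of letting the model find the optimal filter can be illustrated with a toy experiment: generate outputs using a known “ground truth” filter, then recover that filter by gradient descent on the squared error. This is a sketch of the principle only, not the training code used for the prostate model; the data, learning rate and filter are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlate2d(img, k):
    """Valid 2D cross-correlation (what CNN layers actually compute)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

# "Ground truth" filter the model should discover: a line detector
k_true = np.array([[-1., -1., -1.],
                   [ 2.,  2.,  2.],
                   [-1., -1., -1.]])

images = [rng.normal(size=(8, 8)) for _ in range(20)]
targets = [correlate2d(x, k_true) for x in images]

k = np.zeros((3, 3))   # start from an uninformed filter
lr = 0.01
for _ in range(200):   # plain gradient descent on the squared error
    grad = np.zeros_like(k)
    for x, t in zip(images, targets):
        diff = correlate2d(x, k) - t
        for a in range(3):
            for b in range(3):
                # x[a:a+6, b:b+6] aligns pixel (i+a, j+b) with output (i, j)
                grad[a, b] += 2 * np.sum(diff * x[a:a+6, b:b+6])
    k -= lr * grad / len(images)

print(np.round(k, 2))   # close to k_true: the filter was *learned*
```

The same mechanism, scaled to millions of weights and driven by backpropagation, is what a CNN does with every filter in every layer.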
CNN classification
Common CNN deep network architectures:
● LeNet-5
● AlexNet
● VGG
● GoogLeNet
● ResNet

ImageNet Challenge: 3.2 million labelled images, 5,247 categories. In 2010, the challenge started with a 71.8% score.
● AlexNet - 2012 - 84.6%
▫ ReLU
▫ 5 convolutional layers + maxpool
▫ 3 fully connected layers
▫ Dropout
▫ Local response normalization
▫ A margin of about ~12% over the 2nd-best performing model
● VGG - 2013 - 92.7%
▫ Many small filters instead of few large filters in the first layers
● GoogLeNet - 2014 - 93.3%
● ResNet - 2016 - 96.4%
● 2017 - 97.3%
From 2010 to 2017, accuracy on the ImageNet Challenge improved from 71.8% to 97.3%.
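A side note on the VGG insight about small filters: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer weights. A back-of-envelope check (the channel count C is a made-up example):

```python
# Two stacked 3x3 conv layers see a 5x5 region of the input
# (the receptive field grows by k-1 per layer), yet use fewer
# weights than one 5x5 layer at the same channel width.
C = 64   # hypothetical number of input/output channels

params_two_3x3 = 2 * (3 * 3 * C * C)   # two layers of 3x3 filters
params_one_5x5 = 5 * 5 * C * C         # one layer of 5x5 filters

print(params_two_3x3, params_one_5x5)  # 73728 vs 102400
```

The stacked version also applies a non-linearity between the two layers, which gives the network more expressive power for the same receptive field.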
Machine learning tasks
[Figure: ground truth vs. cancer prediction heatmap — legend: cancer, non-cancer, non-prostate tissue]
● Build a predictive model capable of identifying, with voxel-level resolution, cancer regions in the prostate
● Integrate the model into the clinical routine to evaluate effectiveness
Semantic Segmentation
Car 60%
Trees 70%
INFORMATION IS ENCODED
Deconvolution… also called “upsampling”, “fractionally strided convolution”, “sub-pixel convolution” or “transposed convolutional layer”
http://warmspringwinds.github.io/tensorflow/tf-slim/2016/11/22/upsampling-and-image-segmentation-with-tensorflow-and-tf-slim/
[Figure: a filter maps a low-resolution image to a high-resolution image]
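One way to picture a fractionally strided (transposed) convolution: insert zeros between the low-resolution pixels, then run an ordinary convolution with a smoothing filter. A minimal sketch with a hypothetical bilinear-style kernel (not the layer used in the actual model):

```python
import numpy as np

def upsample_zero_insert(x, stride=2):
    """Insert (stride-1) zeros between neighbouring pixels."""
    h, w = x.shape
    out = np.zeros((h * stride - (stride - 1), w * stride - (stride - 1)))
    out[::stride, ::stride] = x
    return out

low_res = np.array([[1., 2.],
                    [3., 4.]])
expanded = upsample_zero_insert(low_res)   # shape (3, 3), zeros in gaps

# A smoothing (bilinear-like) kernel then fills in the gaps
kernel = np.array([[0.25, 0.5, 0.25],
                   [0.5,  1.0, 0.5 ],
                   [0.25, 0.5, 0.25]])

def correlate2d_same(img, k):
    """'Same' correlation with zero padding, plain-NumPy loop version."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i+k.shape[0], j:j+k.shape[1]] * k)
    return out

high_res = correlate2d_same(expanded, kernel)
print(high_res)   # original values preserved, gaps linearly interpolated
```

In a real network the kernel is learned rather than fixed, which is what distinguishes a transposed convolutional layer from plain interpolation.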
Semantic Segmentation
convolution captures the “what”; de-convolution recovers the “where”
Biomedical application
2015
Dataset
● 16 subjects
● All subjects underwent radical prostatectomy
● Contrasts: T2, ADC, Ktrans
Acquired contrasts: T2, ADC, DCE (time points t1, t2, t3, t4, …)
Transformation from 4D -> 3D: the DCE time series is reduced to a Ktrans map
Preprocessed dataset: 16 volumetric images of size (512, 512, 48) with 3 contrasts (T2, ADC, Ktrans)
Preprocessed dataset: 16 × 4D images of shape (512, 512, 48, 3)
Slicing along the axial plane: 48 axial slices × 16 patients
→ 204 images of size (512, 512, 3) of the prostate, where 3 is the dimension that encodes the T2, ADC and Ktrans contrasts
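The slicing step can be sketched with NumPy array reshaping. Toy spatial sizes are used here so it runs quickly; the slide’s real volumes are (512, 512, 48) with 3 contrasts. (Note that 16 × 48 slices would give 768 candidates while the deck reports 204 images, so some slice filtering is presumably applied; the criterion isn’t stated.)

```python
import numpy as np

# Slide shapes: 16 subjects, (512, 512, 48) volumes, 3 contrasts
# (T2, ADC, Ktrans).  Toy sizes keep this sketch fast.
n_subjects, H, W, n_slices, n_contrasts = 16, 32, 32, 48, 3

volumes = np.random.default_rng(0).normal(
    size=(n_subjects, H, W, n_slices, n_contrasts)).astype(np.float32)

# Move the slice axis next to subjects, then merge them: every axial
# slice becomes an independent (H, W, 3) multi-contrast image.
slices = volumes.transpose(0, 3, 1, 2, 4).reshape(-1, H, W, n_contrasts)
print(slices.shape)   # (16 * 48, H, W, 3)
```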
Workflow
Our CNN
convolution (“what”) → de-convolution (“where”)
Test-Time Augmentation (TTA)
[Figure: predictions without TTA vs. with TTA]
● regularizes the prediction
● improves robustness of the model
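Mechanically, test-time augmentation runs the model on several transformed copies of the input, maps each prediction back to the original frame, and averages. A minimal sketch with a stand-in model (the real CNN and augmentation set are not shown in the deck, so everything here is illustrative):

```python
import numpy as np

def model_predict(img):
    """Stand-in for the trained CNN: returns a per-pixel 'probability'
    map.  Any deterministic function suffices to show the mechanics."""
    return np.roll(img, 1, axis=1) * 0.5 + 0.25   # toy model

def predict_with_tta(img):
    """Average predictions over flipped copies of the input, undoing
    each flip on its prediction so the maps are spatially aligned."""
    augmentations = [
        (lambda x: x,          lambda p: p),           # identity
        (lambda x: x[:, ::-1], lambda p: p[:, ::-1]),  # horizontal flip
        (lambda x: x[::-1, :], lambda p: p[::-1, :]),  # vertical flip
    ]
    preds = [undo(model_predict(aug(img))) for aug, undo in augmentations]
    return np.mean(preds, axis=0)

img = np.random.default_rng(1).random((4, 4))
tta_pred = predict_with_tta(img)
print(tta_pred.shape)   # same shape as a single prediction
```

Averaging several views smooths out augmentation-sensitive fluctuations, which is the regularizing effect noted above.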
Predictions comparison between different models
[Figure: subjects S26 and S39, T2 contrast — predictions from Logistic Regression, Random Forest, and CNN + TTA + transfer learning versus ground truth; legend: cancer, non-cancer, non-prostate tissue]
Next Step - Comparing to human performance
[Chart: error (0%, 10%, 20%, 30%, 40%, …) against ground truth for a radiologist panel, a single radiologist, and the CNN — values still to be determined]
Next Step - CNN Pipeline in Production
New DICOM files arriving from the MRI on the Orthanc server trigger the launch of the pipeline.
CNN prediction pipeline: preprocessing → prostate segmentation → tumor segmentation
Tumor predictions are returned as DICOM files to the radiologist.
CNN deployed in production.
Credits
Biotic Team involved in the project
● Peter Lee - co-op student
● David Hoar - co-op student
● Alex Guida - Data Scientist
● Steven Beyea - BIOTIC Scientific Director
● Chris Bowen - BIOTIC Senior Researcher
● Sharon Clarke - Project PI
Let’s Talk Informatics has been certified for continuing education credits by:
▫ College of Family Physicians of Canada and the Nova Scotia Chapter for 1 Mainpro+ credit.
▫ Digital Health Canada for 1 CE hour for each presentation attended. Attendees can track their continuing education hours through the HIMSS online tracking certification application, which is linked to their HIMSS account.
Thank you for attending this event.