Ensemble-X: Your personal strataGEM to build Ensembled Deep Learning Models for Medical Imaging

Sat September 05, 01:00 PM–01:25 PM

This talk contains medical imagery.

In this talk, we will take a deep dive into the world of Medical Imaging, and Radiology in particular. We will survey the wide range of diseases involved and the limitations of AI under the prevalent Deep Learning architectures at our disposal. We will also delve into the progress made in integrative healthcare: the amalgamation of AI and Medicine (and Pathology).

Almost everyone in Data Science today knows the concept of Ensemble Learning in ML (ideally, the last chapter we read in the Machine Learning pedagogy). It is worth noting, however, that very little literature exists on ensembling Deep Neural architectures. This is where we step forward and propose an approach to solving (almost) any medical imaging problem by means of our Ensemble approach. Our approach does not just "solve" medical imaging problems; it helps practitioners build unique and seamless architectures that almost never go wrong (at least, not on your good days).
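To ground the idea, here is a minimal, hypothetical sketch of prediction-level (soft-voting) ensembling in Keras. The tiny stand-in CNNs, names and shapes are placeholders for illustration, not the actual Ensemble-X architecture:

```python
# A minimal sketch of soft-voting ensembling, assuming TensorFlow/Keras.
# The member models are untrained stand-ins; in practice each would be a
# CNN trained (or fine-tuned) on the same medical-imaging task.
import numpy as np
import tensorflow as tf

def tiny_cnn(seed: int) -> tf.keras.Model:
    """Stand-in for one trained member model (hypothetical architecture)."""
    tf.keras.utils.set_random_seed(seed)
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

model_a, model_b = tiny_cnn(0), tiny_cnn(1)

def ensemble_predict(batch: np.ndarray) -> np.ndarray:
    """Soft voting: average the members' softmax outputs, then take argmax."""
    probs = (model_a.predict(batch, verbose=0) +
             model_b.predict(batch, verbose=0)) / 2
    return probs.argmax(axis=1)

# A dummy batch of four 64x64 grayscale "scans".
batch = np.random.rand(4, 64, 64, 1).astype("float32")
print(ensemble_predict(batch))
```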

Why use an Ensemble at all?

In this section, I will try to anticipate the common objections one might have while reading this proposal, and address them in advance.

Of course, there are brilliant pre-trained architectures available, and building a custom CNN architecture takes seconds today, right? Why, then, take on the extra headache of combining architectures at all?

I wanted to give you one solid reason why; then I thought I'd give you three:

(1) Attaining state-of-the-art accuracy:

To elucidate, let's recall a very famous (or infamous) chapter from our old books on Elementary Statistics: the Central Limit Theorem. There are two big implications of this theorem:

(a) Sums and averages of many independent random processes/variables converge to Gaussian distributions. That's why normal distributions are everywhere.

(b) When adding together independent random numbers, the variance of the sum is the sum of the variances of those numbers. In the essence of Machine/Deep Learning, this translates to the following: when we average the outputs of n architectures, the variance of the combined prediction shrinks, so the combined architecture, let's say x, will in general perform at least as well on standard metrics as the typical individual model in the n-cluster of models. The simulation below makes this concrete.
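As a sanity check (not from the talk itself), here is a minimal NumPy simulation of the variance argument, treating each model as a noisy estimator of the same quantity; all numbers are purely illustrative:

```python
# Averaging n independent, equally noisy "model predictions" shrinks the
# variance of the combined estimate roughly as 1/n.
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0
n_models, n_trials = 10, 100_000

# i.i.d. "model predictions": the true value plus Gaussian noise (sd = 0.5).
predictions = true_value + 0.5 * rng.normal(size=(n_trials, n_models))

single_var = predictions[:, 0].var()           # one model: ~0.25
ensemble_var = predictions.mean(axis=1).var()  # 10-model average: ~0.025

print(f"single-model variance: {single_var:.4f}")
print(f"{n_models}-model ensemble variance: {ensemble_var:.4f}")
```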

A slight limitation:

By the same statistical reasoning, the returns diminish: no matter how many models one tries to ensemble, one can never quite reach an accuracy of 1 (or 100%).

But that's alright, I believe. We don't need to attain a full 100% accuracy to prove the reliability of a model, do we?
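A toy simulation (again, purely illustrative) shows why: once some fraction of cases defeats every member model, i.e. the members' errors are correlated, adding more models raises accuracy only up to a ceiling below 100%:

```python
# Majority voting over n members: accuracy climbs with n but plateaus at the
# fraction of cases that at least some members can get right. All numbers
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 50_000

# Suppose 15% of cases are "impossible": every member gets them wrong.
impossible = rng.random(n_samples) < 0.15

for n_models in (1, 3, 9, 51):
    # On the remaining cases, members are independently right 85% of the time.
    votes = rng.random((n_models, n_samples)) < 0.85
    votes[:, impossible] = False
    accuracy = (votes.sum(axis=0) > n_models / 2).mean()
    print(f"{n_models:>2} models: accuracy = {accuracy:.3f}")  # plateaus near 0.85
```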

Furthermore, we have demonstrated this hypothesis of attaining state-of-the-art accuracy in our published works on the subject. Kindly refer to them.

(2) Model Diversity:

This is arguably the most important of our three contributions, and the one that best indicates the reliability of our approach. Our experiments (also included in our papers) go on to show how well the architecture performs on external, unseen examples: images that are not even part of the dataset.

Why is this important?

Elementary. We are dealing with medical data, where there is a plethora of possibilities, complications and unique cases. Hence, we can NEVER be too sure. This was therefore one of the most instrumental steps we had to set in motion in order to know whether the approach actually works.
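One simple way to gauge diversity, sketched below with dummy outputs, is the pairwise disagreement rate between members on a held-out or external batch (the names and numbers are hypothetical, not the talk's exact protocol):

```python
# Pairwise disagreement between member models' predicted classes: higher
# disagreement on the same inputs means more diverse members. The softmax
# outputs here are random stand-ins for three trained models.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
probs = [rng.dirichlet(np.ones(3), size=500) for _ in range(3)]  # dummy softmax
labels = [p.argmax(axis=1) for p in probs]                       # predicted classes

for (i, a), (j, b) in combinations(enumerate(labels), 2):
    print(f"models {i} vs {j}: disagreement rate = {(a != b).mean():.2%}")
```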

Future tangible implementation scope: Mobile App or Web App to be used in Clinics and Hospitals.

(3) Tackling the problem of Model Over-fitting:

In this contribution, we try to mitigate the problem of over-fitting to the greatest extent possible WITHOUT employing techniques such as Cross-Validation. The reason for taking the entire dataset at once is very simple: suppose you are dealing with binary-class data that has a huge class-imbalance problem. Cross-Validation will not let you manually select how many images from each individual class go into each split you make. It is also worth mentioning that the randomisation process isn't so bad altogether; however, there is always room for improvement, and alternate doors can also lead to better destinations.
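As an illustration of the manual-split idea (a hypothetical sketch, not the exact procedure from our papers), here is one way to fix the per-class composition of a validation split on an imbalanced binary dataset:

```python
# Manually choosing how many images from each class enter the validation
# split, something a plain k-fold split does not let you control directly.
import numpy as np

rng = np.random.default_rng(42)
labels = np.array([0] * 900 + [1] * 100)  # hypothetical 9:1 class imbalance

def manual_split(labels, per_class_val):
    """Reserve a fixed number of validation samples from each class."""
    val_idx = []
    for cls, n_val in per_class_val.items():
        cls_idx = np.flatnonzero(labels == cls)
        val_idx.extend(rng.choice(cls_idx, size=n_val, replace=False))
    val_idx = np.array(sorted(val_idx))
    train_idx = np.setdiff1d(np.arange(len(labels)), val_idx)
    return train_idx, val_idx

# Force 50 images of each class into validation despite the imbalance.
train_idx, val_idx = manual_split(labels, {0: 50, 1: 50})
print(len(train_idx), len(val_idx))  # 900 100
```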

Dipam Paul (He/Him/His)

Commonly referred to as ‘The Boy from Kolkata’, Dipam is a senior-year student pursuing Electronics and Telecommunication at KIIT University, India. He has previously presented at three PyCons:

(1) PyCon USA 2019 (Cleveland, Ohio) [Speaker Profile]

(2) PyCon India 2019 (Chennai, India) [Speaker Profile]

(3) PyCon USA 2020 (Pittsburgh, Pennsylvania) [Speaker Profile]

Currently, he is an incoming Research Assistant at Stanford Medicine, where he will work on Radiology and Pain, operating from California.

He has previously worked in labs at Georgia Institute of Technology, Universidade Federal de Sao Paulo and IIT Bombay in various roles and capacities.

Having always been fascinated by the wonders one can work using Python, his sphere of interest lies in Biomedical Imaging and NLP. He spends his days toying with Machine-Learning models and fine-tuning Neural Nets when he is not eating, telling stories, or engaging in a lively debate about Geopolitics or Football!

Alankrita Tewari

I am a final-year engineering student at KIIT University, India. I have an inveterate interest in Deep Learning and have been working on it for the past year. Furthermore, I presented on the same at PyCon US 2020. In my free time, I like to read, paint and learn new things. I look forward to the conference, which I am sure will be a delight to be a part of.