Tutorials

T1: Integrating Signal Processing, Machine Learning and Deep Learning

Presented by Kirthi Devleker (MathWorks)

Thursday, November 16, 14:00 - 14:45

Location: Fontaine E

The proliferation of low-cost, high-fidelity sensors and wearable devices has enabled us to acquire and analyze vast amounts of signals. However, developing predictive analytics that use machine learning / deep learning algorithms to effectively analyze the collected data and derive meaningful outcomes can be challenging for two main reasons:

  • First, the signals are usually corrupted with noise, artifacts, etc., and classical signal processing techniques often fall short in their ability to analyze and pre-process these signals, which is an important step for feature engineering
  • Second, quickly identifying the right machine learning model for the problem and implementing it can be a daunting task given the breadth of algorithms available today

At this live technical session, we'll show how MATLAB and toolboxes such as the Signal Processing Toolbox, Wavelet Toolbox, Statistics and Machine Learning Toolbox, and Neural Network Toolbox can not only help you develop predictive models, but also build and implement smart predictive algorithms on embedded processors far more rapidly than traditional approaches allow.

Using real signals, we’ll demonstrate the following concepts:

  • Using wavelets in MATLAB to obtain insightful time-frequency analysis of signals (a brief sketch of this workflow follows the list):
      ◦ Use time-frequency analysis techniques to develop predictive models for signals using deep learning workflows
      ◦ Eliminate noise and compensate for baseline drift using the Wavelet Signal Denoiser app
      ◦ Identify signal features for classification purposes
  • Rapidly developing and testing machine learning algorithms:
      ◦ Use the Classification Learner app to quickly evaluate various machine learning models
      ◦ Deploy MATLAB algorithms as standalone code
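
To make the workflow listed above more concrete, here is a minimal, illustrative sketch in Python rather than MATLAB: it computes wavelet (CWT) scalogram energies as features and feeds them to an off-the-shelf classifier. The packages (PyWavelets, scikit-learn), the synthetic two-class signals, and all parameter values are assumptions made for illustration, not part of the tutorial material; in the tutorial itself the equivalent steps are carried out interactively with the Wavelet Signal Denoiser and Classification Learner apps.

```python
# A minimal sketch, assuming PyWavelets and scikit-learn, of a wavelet-feature
# plus classifier workflow similar in spirit to the one outlined above.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 500                                    # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)

def make_signal(label):
    """Synthetic two-class signal: different chirp rates plus noise and drift."""
    f0 = 10 if label == 0 else 25
    drift = 0.5 * t                         # slow baseline drift
    return np.sin(2 * np.pi * f0 * t * (1 + 0.2 * t)) + drift + 0.3 * rng.standard_normal(t.size)

def features(x):
    """Time-frequency features: mean energy per wavelet scale of a CWT scalogram."""
    x = x - np.polyval(np.polyfit(t, x, 1), t)          # crude drift removal
    coefs, _ = pywt.cwt(x, scales=np.arange(1, 64), wavelet="morl")
    return (np.abs(coefs) ** 2).mean(axis=1)            # one energy value per scale

X = np.array([features(make_signal(k % 2)) for k in range(200)])
y = np.array([k % 2 for k in range(200)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```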

Kirthi K. Devleker is a Product Manager at MathWorks, where he leads product and marketing strategy for the Signal Processing tools, including the Wavelet Toolbox. Kirthi specializes in helping MATLAB users see the value of advanced Signal Processing and Machine Learning techniques applied to sensor data.

Kirthi has been with MathWorks for seven years and holds a Master's degree in Electrical Engineering from San Jose State University.

T2: Deep Learning Tools and Frameworks

Presented by Hamid Palangi (Microsoft)

Thursday, November 16, 14:45 - 15:30

Location: Fontaine E

Deep Learning (DL) is the foundation for many recent breakthroughs in different research areas, including computer vision, speech recognition, natural language processing, and many more. Some fundamental ideas in deep learning, e.g., feedforward and convolutional neural networks, back-propagation, and recurrent neural networks, have been known since the 1980s. The main reasons for the recent popularity of these ideas are: (a) fast and efficient computation on Graphics Processing Units (GPUs), which makes it possible to run experiments in a reasonable amount of time, and (b) the availability of very large datasets. Recent DL frameworks with automatic differentiation make it much easier to build your own DL model and perform experiments on GPUs, without having to derive gradients by hand or write CUDA code yourself. One of the important questions to answer before building your DL model is: which DL framework should I use for my task? TensorFlow, Torch, CNTK, Theano, Caffe, MXNet, or …? In this tutorial, we address different aspects of this question.
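
As a concrete illustration of the automatic-differentiation point above, the sketch below fits a toy linear model with gradients computed by the framework rather than derived by hand. PyTorch is used only as one example of such a framework; the toy data, model, and learning rate are assumptions for illustration.

```python
# A minimal sketch of framework-provided automatic differentiation (PyTorch
# chosen as one example); the toy regression problem is assumed.
import torch

x = torch.randn(32, 10)                   # a toy batch of 32 examples
y = torch.randn(32, 1)

w = torch.randn(10, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for step in range(100):
    loss = ((x @ w + b - y) ** 2).mean()  # mean squared error of a linear model
    loss.backward()                       # gradients computed automatically
    with torch.no_grad():                 # plain gradient-descent update
        w -= 0.1 * w.grad
        b -= 0.1 * b.grad
        w.grad.zero_()
        b.grad.zero_()
```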

Hamid Palangi is a member of the Deep Learning group at Microsoft Research. His research interests are in the areas of Machine Learning (with a focus on Deep Learning), Natural Language Processing, Machine Reading Comprehension, and Linear Inverse Problems (with a focus on Compressive Sensing). Before joining the Deep Learning group, he worked at MSR on deep learning methods for Speech Recognition (2013), Sentence Modelling for Web Search and Information Retrieval (2014), and Image Captioning (2016). Dr. Palangi received his Ph.D. from the University of British Columbia, where his thesis focused on two directions: (a) deep learning methods for sequence modelling, and (b) bridging the gap between compressive sensing and deep learning.

T3: Deep Learning Tools and Examples in Video Data Analytics

Presented by Zhenyu Guo (Postmates)

Thursday, November 16, 16:00 - 16:45

Location: Fontaine E

Algorithms based on deep learning have become the go-to methods for many computer vision tasks. This tutorial covers the real-world practice of visual analytics using deep learning. We focus on how to build an efficient system that handles near-real-time applications at scale, such as object detection, tracking, segmentation, human pose estimation, and video summarization.

I will compare different deep learning platforms (PyTorch, Caffe/Caffe2, and TensorFlow) and talk about how to choose the right algorithms, algorithm structures, and system design to achieve satisfactory performance.

In the demo session, I will demonstrate some implementations based on PyTorch to show practical tricks and tips for running deep learning experiments.
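
For readers unfamiliar with what such a PyTorch-based demo might look like, here is a minimal sketch that runs a pretrained object detector from torchvision on a single image. The specific model, the confidence threshold, and the file name frame.jpg are assumptions for illustration, not the tutorial's actual demo code.

```python
# A minimal sketch of PyTorch-based object detection on one (hypothetical)
# video frame, using a pretrained torchvision model; requires a recent
# torchvision (older versions use pretrained=True instead of weights=).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()                                              # inference mode

img = to_tensor(Image.open("frame.jpg").convert("RGB"))  # hypothetical frame path
with torch.no_grad():
    pred = model([img])[0]                                # dict of boxes, labels, scores

keep = pred["scores"] > 0.8                               # keep confident detections only
print(pred["boxes"][keep], pred["labels"][keep])
```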

Zhenyu Guo is the Director of Artificial Intelligence at Postmates, where he leads research and development related to Computer Vision, Deep Learning, and Reinforcement Learning. Before joining Postmates, he was the founding director of the AI research and development department at Sengled, working on bringing Computer Vision and Deep Learning technologies to the Internet of Things, with a focus on smart lighting. His research includes object classification, detection, tracking, video understanding, signal processing, and reinforcement learning. Dr. Guo obtained his Ph.D. from the University of British Columbia in computer vision and machine learning.

T4: Machine Learning for Healthcare

Presented by Amir Tahmasebi (Philips) and Samuel Kadoury (Ecole Polytechnique)

Thursday, November 16, 16:45 - 17:30

Location: Fontaine E

In the last decade, machine learning (ML) has revolutionized a number of domains such as manufacturing, financial trading, and security. One of the domains in which ML has generated a lot of excitement is healthcare. While ML has to date demonstrated great promise in a number of applications such as predictive healthcare and personalized medicine, in other areas of healthcare the application and influence of ML are still in doubt. In this tutorial, successes and failures of the most recent ML applications in healthcare will be reviewed from both clinical and technical perspectives. The main focus of the tutorial will be on ML in Radiology, one of the most rapidly transforming areas of healthcare over the last few years. Through the course of this tutorial, attendees will learn about clinical challenges and technical opportunities in the field of Radiology.

Amir Tahmasebi is a project manager and senior research scientist in the Clinical Informatics department at Philips Healthcare Research, Cambridge, MA, USA, where he leads a number of activities in the areas of Radiology and Oncology Informatics. His current research is focused on patient context extraction and modeling, outcome analytics, and clinical decision support. Dr. Tahmasebi also served as a project leader in the Ultrasound Imaging and Intervention department, contributing to a number of products such as UroNav and PercuNav. He is also an Associate Adjunct Professor at Columbia University, NY, USA. Dr. Tahmasebi received his Ph.D. in Computer Science from the School of Computing, Queen's University, Kingston, Canada. He is the recipient of the IEEE Best PhD Thesis award and the Tanenbaum Post-doctoral Research Fellowship award. He has served as a Program Area Chair for the IPCAI conference since 2015. Dr. Tahmasebi has published and presented his work in a number of conferences and journals such as HBM, IEEE TBME, MICCAI, IPCAI, and SPIE. He has also been granted four international and US patents.

Samuel Kadoury is an associate professor in the Computer and Software Engineering Department at Polytechnique Montreal, a member of the Biomedical Engineering Institute at the University of Montreal, and a researcher at the Sainte-Justine Hospital and CHUM research centers. He is the director of the Medical Image Computing and Analysis Lab at Polytechnique Montreal and holds the Canada Research Chair in Medical Imaging and Assisted Interventions. He obtained his Bachelor's degree in Computer Engineering, his Master's in Electrical Engineering from McGill, and his Ph.D. in Biomedical Engineering from the University of Montreal in 2008, focusing on orthopaedic imaging. He completed a post-doctoral fellowship at Ecole Centrale de Paris and worked as a clinical research scientist for Philips Research North America at the National Institutes of Health, developing image-guided systems for liver and prostate cancer. He served as a Program Area Chair for the MICCAI conference in 2017. Prof. Kadoury has published over 100 peer-reviewed papers in leading journals and conferences in fields such as biomedical imaging, computer vision, radiology, urology, and neuroimaging. He holds several patents in the field of image-guided interventions and was co-recipient of the NIH Merit Award for his work on prostate cancer, as well as the Cum Laude Award from the RSNA for his work on artificial intelligence for liver cancer detection.

T5: Internet of Things

Presented by Ioannis Papapanagiotou (Netflix) and Petros Spachos (University of Guelph)

Thursday, November 16, 17:30 - 18:15

Location: Fontaine E

Micro-location, geofencing, and proximity-based services (PBS) are a set of technologies that will play a key role in the transformation of smart buildings and smart infrastructure. Micro-location is the process of locating any entity with very high accuracy (possibly within centimeters), while geofencing is the process of creating a virtual fence around a Point of Interest (PoI). PBS is a suite of techniques that can be used to personalize the experience a tenant receives from the surrounding objects. Such technologies require high detection accuracy, energy efficiency, wide reception range, low cost, and availability. In this talk, we will provide insights into various micro-location enabling technologies, techniques, and services, and discuss how they can accelerate the incorporation of the Internet of Things (IoT) into smart buildings. We will cover the challenges and propose potential solutions so that micro-location enabling technologies and services can be thoroughly integrated with an IoT-equipped smart building. We will demonstrate some simple micro-location scenarios using smartphone devices and beacons.
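
To give a flavor of one common micro-location building block behind beacon-based demos like the one described above, here is a minimal sketch that converts a beacon's received signal strength (RSSI) to an approximate distance using the log-distance path-loss model. The calibration value and path-loss exponent below are assumptions; real deployments calibrate these per environment and beacon.

```python
# A minimal sketch of RSSI-to-distance estimation with the log-distance
# path-loss model; the parameter values are illustrative assumptions.
def rssi_to_distance(rssi_dbm, measured_power=-59.0, path_loss_exponent=2.0):
    """Estimate distance (metres) to a BLE beacon from received signal strength.

    measured_power: expected RSSI at 1 m from the beacon (calibration value).
    path_loss_exponent: ~2 in free space, typically larger indoors.
    """
    return 10 ** ((measured_power - rssi_dbm) / (10 * path_loss_exponent))

# Example: a reading of -75 dBm suggests the beacon is a few metres away
# (about 6.3 m with these assumed parameters).
print(rssi_to_distance(-75))
```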

Ioannis Papapanagiotou is a senior architect at Netflix, a research assistant professor at the University of New Mexico, a graduate faculty member at Purdue University, and a mentor at the International Accelerator. He holds a dual Ph.D. degree in Computer Engineering and Operations Research. His main focus is on distributed systems, cloud computing, and the Internet of Things. In the past, Ioannis served in the faculty ranks of Purdue University (tenure-track) and NC State University, and as an engineer at IBM. He has been awarded the NetApp Faculty Fellowship and established an Nvidia CUDA Research Center at Purdue University. Ioannis also received the IBM Ph.D. Fellowship and the Academy of Athens Ph.D. Fellowship for his Ph.D. research, and best paper awards at several IEEE conferences for his academic contributions. Ioannis has authored a number of research articles and patents. He is a senior member of the ACM and IEEE.

Petros Spachos is an Assistant Professor in the School of Engineering at the University of Guelph, Canada. He received the Diploma degree in Electronic and Computer Engineering from the Technical University of Crete, Greece, in 2008, and the M.A.Sc. degree in 2010 and the Ph.D. degree in 2014, both in Electrical and Computer Engineering, from the University of Toronto, Canada. He was a post-doctoral researcher at the University of Toronto from September 2014 to July 2015. His research interests include wireless networking and network protocols, with a focus on wireless sensors, smart cities, and the Internet of Things. He is a member of the IEEE and ACM.