A16 - ASCI Winterschool on Efficient Deep Learning
Date: Nov 28 – Dec 1, 2023
ECTS: 5
Registration: Maximum number of participants reached
Slides Tuesday Nov 28, 2023
- Introduction slides Henk Corporaal; Once over lightly
- Slides Jan van Gemert; A shallow introduction to deep learning
- Assignment: reproduce a deep learning paper
- Slides Dr. A. Balatsoukas-Stimming; Advanced and Model-based Neural Networks
- Slides Federico Corradi; Neuromorphic Systems & Applications
Slides Wednesday Nov 29, 2023
- Slides Giuseppe Sarda; Hardware acceleration of Deep Learning inference
- Slides Dolly Sapra; Hardware-Aware Deep Learning Inference
- Slides Floran de Putter; Optimizing deep learning for inference
- Slides Nishant Saurabh; Inference-Serving
Slides Thursday Nov 30, 2023
- Slides Lydia Chen; Distributed and Federated Learning Systems
- Slides Lydia Chen; Federated Learning (part III)
- Slides Sander M. Bohté; Efficient learning for SNNs
Slides Friday Dec 1, 2023
Course content
Machine learning has numerous important applications in intelligent systems across many areas, such as automotive, avionics, robotics, healthcare, well-being, and security. Recent progress in machine learning, and particularly in Deep Learning (DL), has dramatically improved the state of the art in object detection, classification, and recognition, and in many other domains. Whether it is superhuman performance in object recognition or beating human players in Go, these astonishing successes of DL are achieved by deep neural networks. However, the complexity of DL networks for many practical applications can be huge, and processing them may demand high computing effort and excessive energy consumption. Training requires huge data sets, making it orders of magnitude more compute-intensive than the already very demanding inference phase. A new development is to move intelligence from the cloud to the IoT edge; this further stresses the need to tame the complexity of DL and deep neural networks.
This Winter School treats various topics addressing the complexity reduction of DL, including:
- Architectural and hardware accelerator support for DL, with emphasis on energy reduction, computational efficiency, and/or computational flexibility, for both inference and learning;
- Spiking and brain-inspired neural networks and their implementation;
- Efficient mapping of DL applications to target architectures, including many-core, GPGPU, SIMD, FPGA, and HW accelerators;
- Exploiting temporal and spatial data reuse, sparsity, quantization and approximate computing, dynamic neural networks, and other methods to decrease the complexity and energy demands of DL (see the brief sketch below);
- Efficient learning approaches, including data reduction, online learning, and quality of learning;
- Tools, frameworks, and high-level programming-language support for DL;
- Neural Architecture Search (NAS), including hardware-aware NAS;
- Advanced applications exploiting DL.
The above topics will be treated by experts from the Netherlands and abroad.
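To give a concrete flavour of one of these techniques, the sketch below shows symmetric 8-bit post-training quantization of a weight tensor in plain NumPy. It is a minimal illustration under simplifying assumptions (one scale per tensor, symmetric range), not taken from the course material; the function name and tensor shapes are chosen for the example.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform quantization of a float tensor to int8.

    Illustrative sketch: returns the int8 tensor and the scale
    needed to dequantize (one scale for the whole tensor).
    """
    # Map the largest absolute weight onto 127; guard against all-zero tensors.
    scale = max(np.max(np.abs(w)), 1e-12) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

# Example: quantize random weights and inspect the reconstruction error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantize
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Replacing 32-bit floats with 8-bit integers cuts weight storage and memory traffic by roughly a factor of four, which is one reason quantization is a key lever for energy-efficient inference.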
Required background: Basic knowledge of deep learning and computer architecture.
Assessment
ASCI students can earn 5 ECTS credits for this course. To earn these credits, they must complete a lab/research study related to one or more of the treated topics.