Lecture 1. Course introduction and motivation
This lecture introduces the course objectives, course content and structure, and course assessment. We also motivate hardware acceleration for deep learning.
Visit and review all content in the Canvas course room.
Read slides for Lecture 1 on the Canvas course room.
Lecture 2. Linear regression and logistic regression
This lecture introduces two basic statistical learning models: linear regression and logistic regression.
Pre-review slides for Lecture 2 on the Canvas course room.
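As a pre-lecture warm-up, here is a minimal sketch of logistic regression trained by stochastic gradient descent. It is pure Python; the toy data, learning rate, and function names are illustrative assumptions, not taken from the course material:

```python
import math

def sigmoid(z):
    """Logistic function mapping a real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=1000):
    """Fit 1-D logistic regression y ~ sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Gradient of the log-loss for one sample: (p - y) * input
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Toy data: points below 0 are class 0, points above 0 are class 1.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
print(sigmoid(w * 2.0 + b) > 0.5)   # positive side classified as class 1
print(sigmoid(w * -2.0 + b) < 0.5)  # negative side classified as class 0
```

The same update rule with an identity activation and squared loss recovers linear regression, which is why the two models are taught together.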
Lecture 3. Perceptron and Multi-Layer Perceptron (MLP)
This lecture discusses the general concepts of artificial neural networks (ANNs), progressing from the perceptron and the general neuron model to the multi-layer perceptron (MLP), with particular attention to network training and inference.
Pre-review slides for Lecture 3 on the Canvas course room.
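As a primer, a minimal sketch of the classic perceptron learning rule (error-driven weight updates followed by a hard-threshold inference step). The example task and all names are illustrative, not from the course material:

```python
def perceptron_train(samples, labels, lr=1.0, epochs=20):
    """Train a perceptron with the classic error-driven update rule."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Inference: weighted sum followed by a hard threshold.
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if s > 0 else 0
            # Update weights only when the sample is misclassified.
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn logical AND, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 0, 0, 1]
w, b = perceptron_train(X, Y)
print([perceptron_predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

A single perceptron cannot learn XOR; stacking such units into an MLP with nonlinear activations removes that limitation, which motivates the rest of the lecture.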
Lecture 4. CNN (Convolutional Neural Network)
This lecture presents the Convolutional Neural Network as a highly successful example of Deep Neural Networks (DNNs).
Pre-review slides for Lecture 4 on the Canvas course room.
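The core operation of a CNN layer can be previewed in a few lines. Below is a hedged pure-Python sketch of a single-channel "valid" 2-D convolution (technically cross-correlation, as in most DNN frameworks); the tiny image and edge-detecting kernel are illustrative:

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution: stride 1, no padding, single channel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1  # output size shrinks by kernel size - 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            # Slide the kernel over the image and accumulate products.
            out[r][c] = sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            )
    return out

# A vertical-difference kernel responds at the horizontal step edge.
img = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [9, 9, 9, 9],
    [9, 9, 9, 9],
]
k = [[1], [-1]]  # difference of vertically adjacent pixels
print(conv2d(img, k))  # [[0, 0, 0, 0], [-9, -9, -9, -9], [0, 0, 0, 0]]
```

The six nested loops hiding in this one operation (batch, channels, height, width, kernel rows, kernel columns) are exactly what later lectures target with hardware acceleration.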
Lab 1. Hardware Design, Implementation and Evaluation of Artificial Neuron
In this lab, the tasks are to design three alternative RTL models for implementing an N-input artificial neuron. After verifying their correct functionality, you take the designs through logic synthesis.
|
Try to finish the lab tasks.
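The lab itself is done in RTL. As a reference model, here is a hedged pure-Python sketch of what a fixed-point N-input neuron datapath computes: integer multiply-accumulate followed by an activation. The Q-format, bit widths, and ReLU choice are illustrative assumptions, not the lab specification:

```python
def neuron_fixed_point(inputs, weights, bias, frac_bits=8):
    """Bit-level model of an N-input neuron in Qx.frac_bits fixed point:
    quantize, multiply-accumulate in integer arithmetic, apply ReLU."""
    scale = 1 << frac_bits
    # Quantize real-valued operands to integers (as an RTL datapath holds them).
    xi = [round(x * scale) for x in inputs]
    wi = [round(w * scale) for w in weights]
    bi = round(bias * scale * scale)  # bias aligned to the product scale
    acc = sum(x * w for x, w in zip(xi, wi)) + bi
    acc = max(acc, 0)                 # ReLU on the accumulator
    return acc / (scale * scale)      # back to a real value for comparison

# Compare against the floating-point result 0.25 + 0.125 + 0.75 + 0.1 = 1.225.
x = [0.5, -0.25, 1.0]
w = [0.5, -0.5, 0.75]
b = 0.1
y = neuron_fixed_point(x, w, b)
print(y)  # close to 1.225, within quantization error
```

Such a software "golden model" is a common way to generate expected outputs when verifying the RTL designs before synthesis.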
Lecture 5. RNN (Recurrent Neural Network)
This lecture presents another important category of DNNs, the Recurrent Neural Network (RNN), which models neuron interactions over time through a memory effect.
Pre-review slides for Lecture 5 on the Canvas course room.
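The memory effect can be previewed with a single-unit Elman-style RNN cell unrolled over time. A minimal sketch, with illustrative weights not taken from the course material:

```python
import math

def rnn_forward(xs, w_x, w_h, b):
    """Unroll a single-unit RNN over an input sequence: the hidden state
    h_t = tanh(w_x * x_t + w_h * h_{t-1} + b) carries memory of the past."""
    h = 0.0
    states = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

# A constant input produces different states over time
# because the recurrent term w_h * h_{t-1} accumulates history.
states = rnn_forward([1.0, 1.0, 1.0], w_x=0.5, w_h=0.8, b=0.0)
print(states)
```

The sequential dependence of h_t on h_{t-1} is what makes RNNs harder to parallelize in hardware than CNNs, a point the acceleration lectures return to.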
Lecture 6. Hardware acceleration for deep learning: Challenges and Overview; Model minimization I
This lecture discusses the efficiency challenges (performance, power/energy, resource) of executing deep learning algorithms on hardware and opens up the problem space for hardware acceleration of deep learning algorithms. We discuss model minimization techniques such as network reduction, data quantization, compression, and fixed-point operations for efficient hardware implementations of neural network algorithms.
Pre-review slides for Lecture 6 on the Canvas course room.
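Data quantization, one of the minimization techniques above, can be previewed with symmetric linear quantization of weights to signed 8-bit integers. A minimal sketch with illustrative values:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of real weights to signed 8-bit ints:
    the largest magnitude maps to 127, everything else scales with it."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values; error is at most about one step."""
    return [qi * scale for qi in q]

w = [0.9, -0.45, 0.1, -0.002, 0.0]
q, s = quantize_int8(w)
print(q)                  # integers in [-128, 127]
print(dequantize(q, s))   # close to the originals
```

Replacing 32-bit floating-point weights with 8-bit integers cuts memory traffic by 4x and allows much cheaper integer multipliers, which is why quantization is central to efficient hardware implementations.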
Lecture 7. Model minimization II
This lecture continues with the latest model minimization techniques: network pruning, data quantization and approximation, and network sparsity.
Pre-review slides for Lecture 7 on the Canvas course room.
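Magnitude-based pruning, the simplest form of network pruning discussed in this lecture, can be sketched in a few lines. The weight values and the 50% sparsity target are illustrative:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights until the target sparsity
    (fraction of zero weights) is reached."""
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.08]
pruned = prune_by_magnitude(w, sparsity=0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.3, 0.0]
```

In practice pruning is interleaved with retraining to recover accuracy; the resulting zeros only pay off in hardware if the accelerator can skip them, which connects this lecture to network sparsity.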
Lecture 8. Hardware Specialization I
This lecture discusses hardware specializations for neural network algorithms, focusing on digital hardware design and compute system architecture design principles.
Pre-review slides for Lecture 8 on the Canvas course room.
Lab 2. Convolutional Neural Networks for Image Classification in PyTorch
This lab equips you with the necessary skills and knowledge for performing basic deep learning tasks in PyTorch. In particular, you will implement CNNs for image classification.
Try to finish the lab tasks.
Seminar I. Deep Learning and Minimization of Neural Network Models
A workshop in a conference setting. Each student group is both a presenter (presenting its assigned paper) and an opponent (asking questions to another group).
Read the assigned paper, prepare presentation slides as a group, and prepare questions to another group.
Lecture 9. Hardware specialization II
This lecture continues with an in-depth discussion of the latest techniques used for hardware acceleration: sparsity computing and Application-Specific Instruction-set Processors (ASIPs).
Pre-review slides for Lecture 9 on the Canvas course room.
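Sparsity computing exploits the zeros produced by pruning. A minimal sketch of the standard Compressed Sparse Row (CSR) format and a matrix-vector multiply that touches only non-zero entries; the example matrix is illustrative:

```python
def to_csr(dense):
    """Compress a dense matrix to CSR: store only the non-zero values,
    their column indices, and per-row offsets into those arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x computed from CSR, skipping every multiply-by-zero."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

A = [
    [0, 2, 0, 0],
    [1, 0, 0, 3],
    [0, 0, 0, 0],
]
vals, cols, ptr = to_csr(A)
x = [1, 1, 2, 1]
print(csr_matvec(vals, cols, ptr, x))  # [2, 4, 0] with only 3 MACs, not 12
```

Sparse accelerators implement this bookkeeping (index matching, gather of x) in hardware, trading datapath regularity for skipped work.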
Lecture 10. Model-to-Architecture Mapping and EDA
This lecture discusses model-to-architecture mapping, its optimization, and Electronic Design Automation (EDA).
Pre-review slides for Lecture 10 on the Canvas course room.
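A central mapping decision is loop tiling: restructuring a layer's computation into sub-blocks that fit an accelerator's on-chip buffers. A hedged sketch using a blocked matrix multiply as a stand-in for a layer's compute; the tile size and matrices are illustrative:

```python
def matmul_tiled(A, B, tile=2):
    """Blocked matrix multiply: iterate over tile-sized sub-blocks so that
    only small blocks of A, B, and C are live at any moment, modeling how
    a mapping keeps operands in a limited on-chip buffer."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, p, tile):
            for k0 in range(0, m, tile):
                # One tile's worth of multiply-accumulates.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, p)):
                        for k in range(k0, min(k0 + tile, m)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_tiled(A, B))  # [[19, 22], [43, 50]], same result as untiled
```

The result is independent of the tile size; what changes is the memory traffic per tile, and searching over such loop orders and tile sizes is exactly the optimization problem that mapping tools and EDA flows address.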
Lecture 11. Technology-Driven Deep Learning Acceleration and Brain-Like Computing or Invited Lecture
This lecture gives an outlook on efficient hardware acceleration of neural networks, focusing in particular on the impact of technologies such as embedded DRAM, 3D stacking, and memristors, as well as on brain-inspired computing. This lecture may be organized as an invited lecture covering the latest research and development in the field.
Pre-review slides for Lecture 11 on the Canvas course room.
Exercise (Student recitation)
This is a student recitation session. The exercise questions are collected in an exercise compendium. The questions cover all lectures.
Finish the exercise questions individually before the exercise session.
Lab 3
For this lab, you can choose one of two tasks: (1) Hardware design, implementation and evaluation of MLP; (2) Transfer Learning and Network Pruning.
Try to finish the lab tasks.
Seminar II. Case studies of deep learning hardware accelerators
A workshop in a conference setting. Each student group is both a presenter (presenting its assigned paper) and an opponent (asking questions to another group).
Read the assigned paper, prepare presentation slides as a group, and prepare questions to another group.