Week 1 |
Introductory Lecture
Introduces the module, outlining its relevance to the field and its connections to other topics, and provides an overview of the content structure, key references, and assessment details. |
Week 2 |
Neural Network Fundamentals
Build a solid foundation in neural networks to understand why they are the backbone of deep learning. Explore the fundamental concepts of artificial neurons and how they mimic biological neurons. Delve into feedforward networks to see how inputs transform through layers to produce outputs. Learn the importance of the backpropagation algorithm in training networks by minimising error. By grasping these basics, you'll appreciate why neural networks are essential for modelling complex patterns in data. |
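The artificial neuron and layer-by-layer transformation described above can be sketched in plain Python. The weights, bias, and two-input shape below are illustrative toy values, not part of the module materials:

```python
import math

def sigmoid(z):
    # Squash the weighted sum into (0, 1) -- a smooth "firing rate".
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # An artificial neuron: weighted sum of inputs plus bias, then activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

def layer(inputs, weight_rows, biases):
    # One feedforward layer: each output neuron has its own weight row,
    # so the layer maps the input vector to a new vector of activations.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Toy example: a pre-activation of exactly 0 gives 0.5 -- the neuron
# is "undecided"; stacking layers repeats this transform on the outputs.
hidden = layer([1.0, 2.0], [[0.5, -0.25], [0.1, 0.1]], [0.0, 0.0])
```

Backpropagation then adjusts each weight in proportion to how much it contributed to the output error, applying the chain rule backwards through exactly this forward computation.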
Week 3 |
Training Deep Networks
Discover why training deep feedforward networks poses unique challenges. Learn how activation functions like ReLU and Sigmoid influence the learning process and model performance. Understand issues like vanishing and exploding gradients that hinder deep network training. Explore techniques like batch normalisation and dropout to stabilise learning, prevent overfitting, and optimise models. By mastering these methods, you'll overcome the hurdles of training deep networks effectively. |
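The vanishing-gradient issue mentioned above can be made concrete with the activation derivatives themselves. The depth of 10 below is an arbitrary illustrative choice:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    # The sigmoid's derivative peaks at 0.25 (at z = 0), so backprop
    # through many sigmoid layers multiplies factors <= 0.25 together
    # and the gradient shrinks towards zero.
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    # ReLU's derivative is exactly 1 for positive inputs, so gradients
    # pass through active units without shrinking.
    return 1.0 if z > 0 else 0.0

# Gradient surviving 10 stacked activations (the chain rule multiplies
# one derivative factor per layer):
through_sigmoid = sigmoid_grad(0.0) ** 10  # 0.25**10 -- effectively vanished
through_relu = relu_grad(1.0) ** 10        # 1.0 -- fully preserved
```

This is one reason ReLU is the default in deep networks, with batch normalisation and dropout addressing the remaining stability and overfitting concerns.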
Week 4 |
Convolutional Neural Networks
Dive into convolutional neural networks to understand why they are transformative for image processing tasks. Learn how convolutional layers detect local patterns and hierarchies in data. Examine architectures like LeNet, VGG, and ResNet to appreciate their innovations and improvements in image classification tasks. By exploring CNNs, you'll see why they are indispensable for applications involving visual data. |
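How a convolutional layer detects local patterns can be shown with a minimal sketch: a hand-written 'valid' convolution (strictly, cross-correlation, as in most deep learning libraries) with a made-up vertical-edge kernel on a tiny toy image:

```python
def conv2d(image, kernel):
    # Slide the kernel over the image, summing elementwise products at
    # each position ('valid' mode: no padding, so the output shrinks).
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A dark-to-bright vertical boundary; the edge-detecting kernel responds
# strongly everywhere the boundary falls under its window.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
result = conv2d(image, kernel)  # every window straddles the edge
```

Architectures such as LeNet, VGG, and ResNet stack many such learned kernels, building hierarchies from edges up to object parts.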
Week 5 |
Recurrent Neural Networks
Understand why recurrent neural networks are essential for modelling sequential data like text or time-series. Explore how they capture temporal dependencies through feedback connections. Learn about LSTM and GRU units that address the vanishing gradient problem, enabling networks to learn long-term dependencies. By studying RNNs, you'll grasp their importance in tasks where order and context are crucial. |
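The feedback connection that lets RNNs capture order can be sketched with a single scalar recurrent unit; the weights here are arbitrary toy values:

```python
import math

def rnn(sequence, w_in=0.5, w_rec=0.8, h0=0.0):
    # A scalar recurrent unit: the hidden state h feeds back into the
    # next step, so the final state depends on the whole input history.
    h = h0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)
    return h

# The same inputs in a different order leave a different final state --
# exactly what a feedforward network (which sees an unordered vector)
# cannot distinguish.
a = rnn([1.0, 0.0, 0.0])
b = rnn([0.0, 0.0, 1.0])
```

Note that in `a` the early input's influence decays at every step it passes through; LSTM and GRU units add gating to preserve such long-range information.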
Week 6 |
Attention Mechanisms & Transformers
Discover why attention mechanisms have revolutionised natural language processing and other fields. Learn how self-attention allows models to focus on relevant parts of the input when generating outputs. Explore transformer architectures like BERT and GPT that leverage attention for parallel processing, leading to significant performance gains. By understanding attention, you'll appreciate how models capture complex relationships in data. |
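Scaled dot-product self-attention, the core operation described above, fits in a short plain-Python sketch; the two-position, two-dimensional input is a toy example:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    # Each query scores every key (scaled dot product), the scores become
    # weights via softmax, and the output is the weighted average of the
    # values -- so every position attends to every other in parallel.
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Self-attention: queries, keys, and values all come from the same input.
X = [[1.0, 0.0], [0.0, 1.0]]
attended = self_attention(X, X, X)
```

Transformers such as BERT and GPT compute many such attention "heads" at once, over every position simultaneously, which is what enables their parallel training.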
Week 7 |
Autoencoders & Representation Learning
Learn why autoencoders are powerful tools for unsupervised representation learning. Understand how they compress data into lower-dimensional codes and reconstruct inputs. Explore variational autoencoders that introduce probabilistic elements for generating new data samples. See how autoencoders are used in anomaly detection and latent space exploration, enabling models to capture underlying data structures. |
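The compress-then-reconstruct idea, and its use for anomaly detection, can be sketched with the simplest possible autoencoder: a linear projection onto one direction. The diagonal "normal data" direction is an illustrative assumption:

```python
import math

def encode(x, direction):
    # Compress a 2-D point to a single number: its projection onto
    # the learned direction (the 1-D latent code).
    return x[0] * direction[0] + x[1] * direction[1]

def decode(code, direction):
    # Map the latent code back into 2-D space along the same direction.
    return [code * direction[0], code * direction[1]]

def reconstruction_error(x, direction):
    r = decode(encode(x, direction), direction)
    return math.sqrt((x[0] - r[0]) ** 2 + (x[1] - r[1]) ** 2)

# Suppose normal data lies along the diagonal; a point off that
# direction cannot survive the 1-D bottleneck, so its reconstruction
# error is large -- the basis of autoencoder anomaly detection.
d = [1 / math.sqrt(2), 1 / math.sqrt(2)]
normal = reconstruction_error([2.0, 2.0], d)    # reconstructs perfectly
anomaly = reconstruction_error([2.0, -2.0], d)  # large error: anomalous
```

A trained autoencoder replaces these fixed linear maps with learned nonlinear networks, and a variational autoencoder makes the latent code a probability distribution so new samples can be drawn from it.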
Week 8 |
Generative Models & GANs
Investigate why generative adversarial networks are exciting developments in data generation. Understand the adversarial training process where a generator and discriminator compete, leading to realistic outputs. Explore models like DCGAN and CycleGAN that extend GANs to specific applications like image synthesis and style transfer. By studying GANs, you'll grasp the challenges and solutions in training generative models. |
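The adversarial objective driving that competition can be written down directly. The probability values below are illustrative stand-ins for discriminator outputs at different stages of training:

```python
import math

def d_loss(d_real, d_fake):
    # The discriminator wants D(real) -> 1 and D(fake) -> 0:
    # it minimises -[log D(x) + log(1 - D(G(z)))].
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    # The generator wants the discriminator fooled: the common
    # 'non-saturating' objective minimises -log D(G(z)).
    return -math.log(d_fake)

# As the generator improves, D(G(z)) rises towards 0.5: the generator's
# loss falls while the discriminator's loss rises -- the tug-of-war.
g_early, g_later = g_loss(0.1), g_loss(0.4)
d_early, d_later = d_loss(0.9, 0.1), d_loss(0.6, 0.4)
```

Training alternates gradient steps on these two losses; the difficulty of keeping the two players balanced is behind many of the training challenges (mode collapse, instability) that variants like DCGAN address architecturally.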
Week 9 |
Optimisation Techniques in Deep Learning
Grasp why optimisation is crucial for effective neural network training. Learn how optimisers like Adam and RMSProp adapt learning rates during training for better convergence. Understand learning rate scheduling to fine-tune training progress. Explore regularisation methods like weight decay to prevent overfitting. By mastering optimisation techniques, you'll enhance model performance and ensure reliable results. |
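Adam's adaptive step can be sketched in a few lines; minimising the toy function f(x) = x² (chosen purely for illustration) shows the moving averages and bias correction at work:

```python
import math

def adam_minimise(grad, x, steps, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps exponential moving averages of the gradient (m) and of
    # its square (v), giving each parameter its own adaptive step size.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)  # bias correction: the averages start
        v_hat = v / (1 - b2 ** t)  # at zero and need boosting early on
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimise f(x) = x**2 (gradient 2x) from x = 3: Adam takes roughly
# lr-sized steps regardless of the raw gradient's magnitude.
x_final = adam_minimise(lambda x: 2 * x, 3.0, steps=200)
```

RMSProp is the same idea without the first-moment average; learning-rate schedules and weight decay then shape how these steps evolve over a full training run.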
Week 10 |
Hyperparameter Tuning & Model Evaluation
Realise the importance of hyperparameter tuning in achieving optimal deep learning models. Learn systematic approaches like grid search and random search to adjust parameters. Understand how to evaluate models using appropriate metrics and validation techniques. Appreciate why rigorous testing and evaluation are essential to validate model effectiveness and ensure generalisation to new data. |
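Grid search reduces to a loop over every parameter combination. The `validation_score` function below is a made-up stand-in for "train a model and score it on a validation set", with an arbitrary peak at lr = 0.01 and batch size 32:

```python
from itertools import product

def validation_score(lr, batch_size):
    # Hypothetical proxy for real validation accuracy; in practice this
    # would train and evaluate a model, which dominates the cost.
    return -(abs(lr - 0.01) * 100 + abs(batch_size - 32) / 32)

def grid_search(lrs, batch_sizes):
    # Exhaustively try every combination and keep the best score.
    best = None
    for lr, bs in product(lrs, batch_sizes):
        score = validation_score(lr, bs)
        if best is None or score > best[0]:
            best = (score, lr, bs)
    return best

best_score, best_lr, best_bs = grid_search([0.1, 0.01, 0.001], [16, 32, 64])
```

Random search replaces `product(...)` with random draws from each parameter's range, which often finds good settings with far fewer trials when only a few hyperparameters really matter; either way, the winning configuration must be confirmed on a held-out test set to check generalisation.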