Latest Module Specifications
Current Academic Year 2025 - 2026
Description: This module is aimed at Ph.D. students who are studying topics related to artificial intelligence and machine learning and who need a deep, up-to-date understanding of the state of the art in a fast-moving field. The module will provide students with a broad understanding of the latest theoretical and experimental results in the field, the ability to formulate machine learning problems mathematically, the foundations needed to develop novel approaches, and experience with the peer-review process from the perspectives of both author and reviewer. This is an advanced module and requires that students have already taken a first course in machine learning.
Learning Outcomes:
1. Demonstrate a critical understanding of a wide variety of topics in machine learning research.
2. Formulate and analyze machine learning problems in a mathematically rigorous way.
3. Develop novel research in machine learning and document the results in a research paper of publication standard.
4. Critically appraise a research paper and engage productively in the peer-review process as both an author and a reviewer.
All module information is indicative and subject to change. For further information, students are advised to refer to the University's Marks and Standards and Programme Specific Regulations at: http://www.dcu.ie/registry/examinations/index.shtml
Indicative Content and Learning Activities
ML Foundations and Fundamentals: A formal mathematical introduction to the foundations of modern machine learning and neural networks, covering notational conventions, matrix calculus, and the principles of stochastic optimization.

Deep Neural Architectures: An in-depth treatment of the principles of deep neural architecture engineering, including vanishing gradients, skip and residual connections, separable convolutions, group convolutions, upsampling techniques, neural architecture search, and stochastic depth.

Neural Network Optimization and Generalization: Analysis of current trends in optimization and regularization, including transfer learning, domain adaptation, stochastic optimization, and the theory and practice of deep network optimization.

Kernel Methods: Discussion of kernel methods and their interaction with modern deep learning. Topics covered will include the kernel trick, Kernel Principal Component Analysis (KPCA), Kernel Support Vector Machines (SVMs), Kernel Ridge Regression, and Neural Tangent Kernels (Jacot et al. 2018).

Probabilistic Unsupervised Models: Discussion of recent developments in unsupervised probabilistic neural models. Topics covered will include autoregressive models, Masked Autoencoder for Distribution Estimation (MADE) (Germain et al. 2015), Pixel Recurrent Neural Networks (PixelRNN) (Van Den Oord et al. 2016) and PixelCNN (Salimans et al. 2016), Variational Autoencoders (Kingma et al. 2014), variational lower bounds, and Normalizing Flows (Kobyzev et al. 2016).

Generative Models: Overview of the principles of and recent developments in generative neural models. Specific topics covered will include Generative Adversarial Networks (GANs) (Goodfellow et al. 2014), Wasserstein GAN (Arjovsky et al. 2017), Least Squares GAN (Mao et al. 2017), BigGAN (Brock et al. 2019), StyleGAN (Karras et al. 2019), ProGAN (Karras et al. 2018), InfoGAN (Chen et al. 2016), conditional GANs, pix2pix (Isola et al. 2017), CycleGAN (Zhu et al. 2017), Spectral Normalization (Miyato et al. 2018), and methods and metrics for the evaluation of GANs.

Representation Learning: Introduction to recent developments in self-supervised representation learning, semi-supervised techniques, label noise, pseudo-labeling, consistency regularization, and contrastive learning.

Applications: Analysis of the state of the art in practical applications of machine learning, including image retrieval, visual saliency, semantic segmentation, object detection, machine translation, speech-to-text, and language models.
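As one illustration of the kernel methods topic above, the kernel trick can be sketched with a minimal kernel ridge regression example (NumPy only; the toy sine-regression data, RBF bandwidth gamma, and regularization strength lam are illustrative choices, not part of the module specification):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Toy 1-D regression problem: y = sin(x) plus noise
rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(50, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(50)

# Kernel ridge regression in dual form: alpha = (K + lam * I)^{-1} y
lam = 1e-2
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

# Predictions need only kernel evaluations against the training set,
# never an explicit (here infinite-dimensional) feature map
X_test = np.linspace(-3, 3, 100)[:, None]
y_pred = rbf_kernel(X_test, X_train) @ alpha
```

The point of the trick is visible in the last line: both fitting and prediction touch the data only through pairwise kernel evaluations, so the implicit feature space is never constructed.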
Indicative Reading List
Books:
Articles:
Other Resources: None