Module Specifications.
Current Academic Year 2024 - 2025
All module information is indicative, and this portal is an interim interface pending the full upgrade of Coursebuilder and subsequent integration with the new DCU Student Information System (DCU Key).
As such, this is a point-in-time view of the data, which will be refreshed periodically. Some fields/data may not yet be available pending the completion of the full Coursebuilder upgrade and integration project. We will post status updates as they become available. Thank you for your patience and understanding.
Date posted: September 2024
Repeat the module: GTE modules have no resit opportunity. Students who do not successfully complete the module may retake the module in another year.
Description
This module is aimed at PhD students who are studying topics related to artificial intelligence and machine learning and who need a deep, up-to-date understanding of the state of the art in a fast-moving field. The module will provide students with a broad understanding of the latest theoretical and experimental results in the field, the ability to mathematically formulate machine learning problems, the foundations needed to develop novel approaches, and experience with the peer-review process from the perspectives of both author and reviewer. This is an advanced module and requires that students have already taken a first course in machine learning.
Learning Outcomes
1. Demonstrate a critical understanding of a wide variety of topics in machine learning research
2. Formulate and analyze machine learning problems in a mathematically rigorous way
3. Develop novel research in machine learning and document the results in a research paper that meets publication standards
4. Critically appraise a research paper and productively engage in the peer-review process as both an author and a reviewer
All module information is indicative and subject to change. For further information, students are advised to refer to the University's Marks and Standards and Programme Specific Regulations at: http://www.dcu.ie/registry/examinations/index.shtml
Indicative Content and Learning Activities
ML Foundations and Fundamentals
A formal mathematical introduction to the foundations of modern machine learning and neural networks, covering notational conventions, matrix calculus, and the principles of stochastic optimization (a minimal gradient-descent sketch follows this list).

Deep Neural Architectures
An in-depth treatment of the principles of deep neural architecture engineering, including: vanishing gradients, skip connections, residual connections, separable convolutions, group convolutions, upsampling techniques, neural architecture search, and stochastic depth (see the residual-block sketch below).

Neural Network Optimization and Generalization
Analysis of current trends in optimization and regularization, including transfer learning, domain adaptation, stochastic optimization techniques, and deep network optimization theory and practice.

Kernel Methods
Discussion of kernel methods and their interaction with modern deep learning. Topics covered will include: the kernel trick, Kernel Principal Component Analysis (KPCA), Kernel Support Vector Machines (SVMs), Kernel Ridge Regression (see the sketch below), and Neural Tangent Kernels (Jacot et al. 2018).

Probabilistic Unsupervised Models
Discussion of recent developments in unsupervised probabilistic neural models. Topics covered will include: autoregressive models, Masked Autoencoder for Distribution Estimation (MADE) (Germain et al. 2015), Pixel Recurrent Neural Networks (PixelRNN) (van den Oord et al. 2016), PixelCNN (Salimans et al. 2016), Variational Autoencoders (Kingma et al. 2014), variational lower bounds (see the ELBO sketch below), and Normalizing Flows (Kobyzev et al. 2020).

Generative Models
Overview of the principles of, and recent developments in, generative neural models. Specific topics covered will include: Generative Adversarial Networks (GANs) (Goodfellow et al. 2014), Wasserstein GAN (Arjovsky et al. 2017), Least Squares GAN (Mao et al. 2017), BigGAN (Brock et al. 2019), StyleGAN (Karras et al. 2019), ProGAN (Karras et al. 2018), InfoGAN (Chen et al. 2016), Conditional GANs, pix2pix (Isola et al. 2017), CycleGAN (Zhu et al. 2017), Spectral Normalization (Miyato et al. 2018), and methods and metrics for the evaluation of GANs (a minimal training-loop sketch follows this list).

Representation Learning
Introduction to recent developments in self-supervised representation learning, semi-supervised techniques, label noise, pseudo-labeling, consistency regularization, and contrastive learning (see the InfoNCE sketch below).

Applications
Analysis of the state of the art in various practical applications of machine learning, including: image retrieval, visual saliency, semantic segmentation, object detection, machine translation, speech-to-text, and language models.
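As a concrete reading of the stochastic optimization principle named in the foundations topic, the sketch below runs plain mini-batch stochastic gradient descent on a least-squares problem. It is a minimal illustration, not module material; the synthetic data, learning rate, and batch size are arbitrary choices.

# A minimal sketch of mini-batch SGD on least-squares regression.
# All data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # synthetic design matrix
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
lr, batch_size = 0.1, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)   # sample a mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size     # gradient of the mean squared error
    w -= lr * grad                                   # SGD update

print(np.linalg.norm(w - w_true))   # should be close to zero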
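Below is a minimal sketch of the skip/residual connections listed under deep neural architectures, in the y = F(x) + x form of He et al. (2016). The identity path gives gradients a direct route through the block, which is why residual connections mitigate vanishing gradients. Channel counts and layer choices are illustrative, and PyTorch is used only as convenient notation.

# A minimal residual block; dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)   # skip connection: gradients also flow through the identity path

x = torch.randn(2, 16, 8, 8)
print(ResidualBlock(16)(x).shape)   # torch.Size([2, 16, 8, 8])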
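The following numpy sketch illustrates the kernel trick via kernel ridge regression: a nonlinear function is fit using only pairwise kernel evaluations, never an explicit feature map. The RBF bandwidth, regularization strength, and toy data are illustrative assumptions.

# Kernel ridge regression with an RBF kernel (minimal sketch).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), evaluated for all pairs
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients: (K + lam*I)^-1 y

X_test = np.linspace(-3, 3, 5)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha                 # predictions via kernel evaluations only
print(y_pred)

Note that the solution lives entirely in the span of the training points, so the explicit (here infinite-dimensional) RBF feature map is never materialized.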
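As a concrete reading of the variational lower bound listed above, here is a minimal negative-ELBO computation for a Gaussian-prior VAE in the style of Kingma et al. (2014): a reconstruction term plus the closed-form KL divergence between the diagonal-Gaussian posterior and the standard normal prior. The single-layer encoder and decoder are placeholders, not a recommended architecture.

# Negative ELBO for a toy VAE (minimal sketch; networks are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 32)   # outputs mean and log-variance of q(z|x)
dec = nn.Linear(32, 784)       # outputs Bernoulli logits of p(x|z)

def negative_elbo(x):
    mu, logvar = enc(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization trick
    recon = F.binary_cross_entropy_with_logits(dec(z), x, reduction="none").sum(-1)
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(-1)   # KL(q(z|x) || N(0, I)), closed form
    return (recon + kl).mean()

x = torch.rand(8, 784)          # toy batch of [0, 1]-valued inputs
print(negative_elbo(x))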
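The sketch below shows the alternating discriminator/generator update behind the GAN family listed above, using the non-saturating generator loss from Goodfellow et al. (2014). The tiny MLPs and toy Gaussian data are illustrative only; the architectures named in the list are what real systems use.

# Alternating GAN updates on toy data (minimal sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(100):
    real = torch.randn(64, 2) + 3.0            # toy "real" distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0
    d_loss = F.binary_cross_entropy_with_logits(D(real), torch.ones(64, 1)) \
           + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (non-saturating loss): push D(fake) -> 1
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()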
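Finally, a minimal version of the contrastive objective mentioned under representation learning: the InfoNCE loss, in which embeddings of two augmented views of the same example form a positive pair and the rest of the batch serves as negatives. The random tensors stand in for encoder outputs, and the temperature of 0.1 is an arbitrary choice.

# InfoNCE contrastive loss (minimal sketch; embeddings are placeholders).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature          # pairwise cosine similarities
    targets = torch.arange(len(z1))           # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z1 = torch.randn(16, 128)   # view-1 embeddings of a batch of 16 examples
z2 = torch.randn(16, 128)   # view-2 embeddings of the same examples
print(info_nce(z1, z2))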
Indicative Reading List
Other Resources
None