Royal Holloway, University of London

MSc by Research in Computer Science


Machine Learning Strand

Aims

This course provides a modern view of inductive inference, and a rapid introduction to several active research areas in machine learning and computational learning theory. At the end of this course the student should have a good theoretical understanding of inductive inference, and should be able to make practical use of several recent learning methods.

Provisional Syllabus

  1. Universal approaches to learning
    • Overview: An introduction to Solomonoff's approach to inductive inference, and the theory of Kolmogorov complexity, leading to learning with resource-bounded algorithms.
    • Topics: Solomonoff's formulation of inductive inference, Kolmogorov complexity, induction using resource-bounded algorithms, and the minimum description length principle for induction.
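    • Example: A minimal sketch of the minimum description length principle in Python (numpy assumed; the two-part code lengths below are crude illustrations, not the course's definitions). It chooses a polynomial degree by minimising the bits needed to encode the model plus the bits needed to encode the data given the model.

      import numpy as np

      def description_length(x, y, degree):
          # Fit a polynomial of the given degree by least squares.
          coeffs = np.polyfit(x, y, degree)
          residuals = y - np.polyval(coeffs, x)
          n, k = len(y), degree + 1
          # L(data | model): Gaussian negative log-likelihood in bits,
          # using the maximum-likelihood noise variance.
          var = max(float(np.mean(residuals ** 2)), 1e-12)
          data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * var)
          # L(model): roughly (1/2) log2(n) bits per real-valued parameter.
          model_bits = 0.5 * k * np.log2(n)
          return data_bits + model_bits

      x = np.linspace(0.0, 1.0, 50)
      y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(50)
      best_degree = min(range(1, 10), key=lambda d: description_length(x, y, d))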
  2. Prediction using the Aggregating Algorithm
    • Overview: The aggregating algorithm (AA) can be regarded as a tool for extending the universal learning techniques to problems with arbitrary loss functions. The AA implements perhaps the simplest and most general method of statistical prediction possible: it is a general minimax procedure that makes no statistical assumptions whatsoever about the sequence to be predicted, yet it achieves results close to those of statistical prediction methods that make the far stronger (and often unjustified) assumption that the sequence is stationary.
    • Topics: Derivation of AA, and proof of upper bounds on relative loss. Extension of AA to bandit problems. Applications of AA to least-squares regression, choice of investment portfolios, and choice of optimal hypothesis complexity.
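    • Example: A minimal sketch of prediction with expert advice by exponential weighting, the mechanism underlying the AA (Python with numpy assumed; square loss on [0, 1] and learning rate eta = 2 are illustrative choices, and the AA proper, derived in this session, refines the mixing step with a substitution function).

      import numpy as np

      def mix_experts(expert_preds, outcomes, eta=2.0):
          # expert_preds: array of shape (T, n_experts); outcomes: length T.
          log_w = np.zeros(expert_preds.shape[1])   # uniform initial weights
          forecasts = []
          for preds, y in zip(expert_preds, outcomes):
              w = np.exp(log_w - log_w.max())       # normalise for stability
              forecasts.append(float(w @ preds / w.sum()))
              log_w -= eta * (preds - y) ** 2       # penalise each expert's loss
          return np.array(forecasts)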
  3. Support Vector Machines
    • Overview: A practical and theoretical introduction to support-vector machines, with some comparisons to more conventional dual-variable methods.
    • Topics: Definition of the optimal hyperplane, derivation of the quadratic optimisation problem, extension to regression, extension to non-linear kernel functions, non-linear principal components analysis, and practical application of SV machines.
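    • Example: A minimal sketch of the practical side, using scikit-learn (an assumed library choice; any quadratic-programming solver would serve for the dual problem covered in this session).

      from sklearn.datasets import make_moons
      from sklearn.decomposition import KernelPCA
      from sklearn.svm import SVC

      X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

      # Soft-margin SVM with an RBF kernel; C trades margin width
      # against training errors.
      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
      print("training accuracy:", clf.score(X, y))

      # Non-linear principal components analysis with the same kernel.
      Z = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)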
  4. Learning in Bayesian Belief Networks
    • Overview: An introduction to the representation of independence assumptions as networks, and to the propagation of probabilities by local computations on these networks, leading to learning algorithms for finding belief networks empirically from data.
    • Topics: Probabilistic and graphical representations; algorithms on tree structures; learning algorithms; practical implementations and case studies.
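    • Example: A minimal sketch of probability propagation by local computation on a three-node chain A -> B -> C (Python with numpy assumed; the probability tables are made up for illustration).

      import numpy as np

      p_a = np.array([0.6, 0.4])              # P(A)
      p_b_given_a = np.array([[0.9, 0.1],     # P(B | A = 0)
                              [0.3, 0.7]])    # P(B | A = 1)
      p_c_given_b = np.array([[0.8, 0.2],
                              [0.25, 0.75]])

      # Each step is a local computation on one edge of the network.
      p_b = p_a @ p_b_given_a                 # P(B) = sum_a P(a) P(B | a)
      p_c = p_b @ p_c_given_b                 # P(C) = sum_b P(b) P(C | b)
      print(p_c)                              # a distribution summing to 1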
  5. Reinforcement Learning
    • Overview: Reinforcement learning is a method of learning control policies from experience. It can be regarded as a Monte-Carlo approach to dynamic programming.
    • Topics: Dynamic programming and control. Temporal difference estimation. Q-learning and real-time dynamic programming. Linear estimation of optimal value functions in optimal stopping problems. Applications of reinforcement learning to scheduling and control problems in operational research.
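    • Example: A minimal sketch of tabular Q-learning on a made-up five-state chain (Python with numpy assumed): the agent moves left or right, and reaching the final state pays reward 1 and ends the episode.

      import numpy as np

      n_states, n_actions = 5, 2
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.9, 0.1       # step size, discount, exploration
      rng = np.random.default_rng(0)

      for _ in range(1000):
          s = int(rng.integers(4))            # random start state
          while s != 4:
              greedy = int(Q[s].argmax())
              a = int(rng.integers(n_actions)) if rng.random() < eps else greedy
              s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
              r = 1.0 if s2 == 4 else 0.0
              # Temporal-difference update toward the one-step target.
              Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
              s = s2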
  6. The uniform convergence approach to statistical learning
    • Overview: This session is an introduction to Vapnik's non-Bayesian approach to statistical learning. Given a set of possible statistical models, the approach is to construct an a priori uniform bound on the rate of convergence of the empirical losses of hypotheses in the set to their true values. The Vapnik-Chervonenkis dimension, and the basic proof techniques and core theorems will be covered in some detail.
    • Topics: Vapnik-Chervonenkis dimension, uniform bounds on empirical losses, structural risk minimisation, generalisation of the VC dimension for real-valued function classes.
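    • Example: A minimal sketch evaluating one standard form of the uniform bound (Python assumed; the constants vary between statements of the theorem). With probability at least 1 - delta, every hypothesis in a class of VC dimension d has true error within the returned term of its empirical error on n samples.

      import math

      def vc_confidence_term(n, d, delta):
          assert n > d > 0 and 0 < delta < 1
          return math.sqrt((d * (math.log(2 * n / d) + 1)
                            + math.log(4 / delta)) / n)

      # Example: 10,000 samples, VC dimension 50, 95% confidence.
      print(vc_confidence_term(10000, 50, 0.05))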
  7. Data dependent generalisation and boosting
    • Overview: An introduction to bounds on generalisation error that depend on the data actually observed. A technique known as "boosting" for constructing classification rules appears remarkably effective in practice. The most convincing explanation for the effectiveness of boosting is derived from data-dependent bounds on generalisation error.
    • Topics: Data dependent bounds, boosting.
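    • Example: A minimal sketch of AdaBoost with one-dimensional threshold stumps (Python with numpy assumed; the data and base class are illustrative choices).

      import numpy as np

      def stump(x, t, s):
          # Threshold classifier: predicts s to the right of t, -s to the left.
          return np.where(x > t, s, -s)

      def adaboost(x, y, rounds=20):           # labels y in {-1, +1}
          w = np.full(len(x), 1.0 / len(x))    # distribution over examples
          ensemble = []
          for _ in range(rounds):
              # Choose the stump with the smallest weighted training error.
              t, s = min(((t, s) for t in x for s in (-1, 1)),
                         key=lambda p: w @ (stump(x, *p) != y))
              h = stump(x, t, s)
              err = float(np.clip(w @ (h != y), 1e-12, 1 - 1e-12))
              alpha = 0.5 * np.log((1 - err) / err)
              w = w * np.exp(-alpha * y * h)   # misclassified points gain mass
              w /= w.sum()
              ensemble.append((alpha, t, s))
          return ensemble

      x = np.sort(np.random.randn(40))
      y = np.where(np.abs(x) < 1, 1, -1)       # not separable by a single stump
      model = adaboost(x, y)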

Provisional Course Work

In addition to weekly question sheets, which will form the basis of the tutorial sessions, there will be two substantial assessed pieces of coursework:

  1. Implementation of the AA for a finite number of experts, and its application to a selected data set.
  2. Practical use of an SVM, including non-linear PCA and a comparison of different kernels, applied to a selected data set.

Supervisors

A. Gammerman, V. N. Vapnik, V. Vovk, C. Watkins

Main References

  1. Ming Li and Paul Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer.
  2. Vladimir Vapnik, The Nature of Statistical Learning Theory, Springer.
  3. Judea Pearl, Probabilistic Reasoning in Intelligent Systems, Morgan Kaufmann.
