Semester 2, 2024-25

Exploring the maximum cycle mean as a function and extending the tropical discrete logarithm

Cristian Alonso Baeza, Norwegian University of Science and Technology (Trondheim, Norway)

Tuesday 4 February 2025, 14:00-15:00
Nuffield, Seminar Room G19

In this talk I'll present a series of results related to an extension of the tropical discrete logarithm problem and connections with the maximum cycle mean function over finite matrices.
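For background, the maximum cycle mean of a matrix is the maximum, over all cycles of the associated weighted digraph, of the average edge weight along the cycle; in tropical (max-plus) algebra it plays the role of the principal eigenvalue. A minimal Python sketch of Karp's classical algorithm for computing it (purely illustrative, not code from the talk):

```python
import math

def max_cycle_mean(W):
    """Karp's algorithm: maximum cycle mean of the digraph whose
    edge i -> j has weight W[i][j] (-inf marks a missing edge).
    Returns -inf if the graph has no cycle."""
    n = len(W)
    NEG = -math.inf
    # D[k][v] = maximum weight of a walk of length k ending at v,
    # starting anywhere (equivalent to adding a super-source).
    D = [[0.0] * n] + [[NEG] * n for _ in range(n)]
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max((D[k - 1][u] + W[u][v]
                           for u in range(n)
                           if D[k - 1][u] > NEG and W[u][v] > NEG),
                          default=NEG)
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] > NEG))
    return best
```

For example, a two-node cycle with edge weights 1 and 3 has maximum cycle mean (1 + 3) / 2 = 2.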

Solving Multi-Objective Optimisation Problems using Evolutionary Algorithms

Miqing Li, University of Birmingham, School of Computer Science

Tuesday 11 February 2025, 14:00-15:00
Nuffield, Seminar Room G18

Multi-objective optimisation problems are frequently encountered in real-world scenarios, requiring solutions that balance multiple conflicting objectives. Evolutionary algorithms have emerged as one of the most popular approaches for tackling multi-objective optimisation problems.

In this talk, I will begin by addressing what multi-objective optimisation is and why evolutionary algorithms are well-suited for solving it. I will then illustrate their practical applications through one or two real-world examples, demonstrating how to develop effective evolutionary algorithms for solving complex optimisation problems.
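The "balance between conflicting objectives" mentioned above is usually formalised via Pareto dominance. A minimal Python sketch (illustrative, not code from the talk), assuming minimisation of every objective:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Population-based evolutionary algorithms such as NSGA-II maintain and iteratively refine an approximation of this non-dominated front.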

An invitation to domain uncertainty quantification in computational electromagnetics

Carlos Jerez Hanckes, Faculty of Engineering and Sciences, Universidad Adolfo Ibáñez, Chile.

Tuesday 25 February 2025, 14:00-15:00
Watson Building, Lecture Theatre C

In this talk, we will focus on solving time-harmonic electromagnetic (EM) wave scattering problems when the domains are not fully known but can be modeled as random perturbations departing from a nominal shape. Specifically, we will consider two novel approaches depending on the relative size of these perturbations. For small ones, we will employ sparse tensor approximations of deterministic statistical moments, whereas for larger perturbations, we will use the framework of affine-parametric shape parametrizations. Along the way, we will encounter and combine many mathematical tools, some new and some old, ranging from (i) shape derivatives and Hadamard’s structure theorem; (ii) integral representation formulas and boundary integral equations; (iii) sparse tensor equations; (iv) operator preconditioning; (v) infinite-parameter holomorphy; (vi) edge or Nédélec finite elements and new Strang-type lemmas; (vii) domain-to-solution maps; (viii) multilevel Monte Carlo methods; to (ix) multilevel sparse-grid quadrature. Fortunately, several numerical experiments will help us understand the problems at hand and confirm our error convergence and complexity bounds. Moreover, these results will show that one can beat, to a certain extent, the curse of dimensionality. Also, they pose new questions you may have the answer to!
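Of the tools listed above, the multilevel Monte Carlo idea is the easiest to isolate: write the fine-level expectation as a telescoping sum over levels, and estimate each correction term with coupled samples. A generic Python sketch, assuming a user-supplied level sampler (illustrative only; the talk applies this machinery to EM scattering):

```python
import random

def mlmc(sample, L, n_samples, seed=0):
    """Generic multilevel Monte Carlo estimator of E[P_L] via the
    telescope E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    `sample(l, u)` evaluates the level-l approximation on random input u;
    reusing the same u on levels l and l-1 couples the pair."""
    rng = random.Random(seed)
    estimate = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(n_samples[l]):
            u = rng.random()
            acc += sample(l, u) - (sample(l - 1, u) if l > 0 else 0.0)
        estimate += acc / n_samples[l]
    return estimate
```

The point of the telescope is that coarse levels take many cheap samples while fine levels need only a few expensive ones, which is how the method undercuts the cost of plain Monte Carlo at the finest discretisation.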

Exploiting geometric structure in matrix-valued optimization

Melanie Weber, Harvard University, USA

Tuesday 4 March 2025, 14:00-15:00
Location: Zoom

Matrix-valued optimization tasks arise in many machine learning applications. Often, exploiting non-Euclidean structure can yield algorithms that are computationally superior to standard nonlinear programming methods. In this talk, we consider the problem of optimizing a function on a Riemannian manifold subject to convex constraints. Several classical problems, such as barycenter problems, optimistic likelihoods, and operator scaling, can be formalized in this way. We focus on instances that exhibit additional structure, including geodesic convexity and objectives that are “difference of convex” functions. Leveraging the algebraic properties of geodesically convex functions, we introduce a disciplined programming approach for certifying geodesic convexity, which gives rise to global optimality guarantees for first-order methods. We then discuss a class of efficient, CCCP-style algorithms for unconstrained, geodesically convex “difference of convex” optimization and present global, non-asymptotic convergence results. Finally, we introduce a class of regularizers for constrained problems that induce this desirable structure, significantly expanding the range of problems to which the CCCP approach applies. Based on joint work with Suvrit Sra, Andrew Cheng, and Vaibhav Dixit.
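As a flat, one-dimensional illustration of the CCCP (convex-concave procedure) template mentioned above: for f = g - h with g and h both convex, each iteration linearises h at the current point and minimises the resulting convex surrogate. A toy Python sketch (the talk's setting is Riemannian; this is only the Euclidean analogue, with a made-up instance):

```python
import math

def cccp(x0, grad_h, solve_surrogate, iters=60):
    """Convex-concave procedure for f = g - h (g, h convex):
    x_{k+1} = argmin_x  g(x) - grad_h(x_k) * x."""
    x = x0
    for _ in range(iters):
        x = solve_surrogate(grad_h(x))
    return x

# Toy instance: f(x) = x**4 - 2*x**2, i.e. g(x) = x**4, h(x) = 2*x**2.
# The surrogate argmin_x x**4 - c*x solves 4*x**3 = c in closed form.
grad_h = lambda x: 4.0 * x
solve_surrogate = lambda c: math.copysign(abs(c / 4.0) ** (1.0 / 3.0), c)
```

Starting from any positive x0, the iterates converge monotonically to x = 1, a global minimiser of this f; each step only ever solves a convex problem.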

Semester 1, 2024-25

The SLO Hierarchy of pseudo-Boolean Functions and Runtime of Evolutionary Algorithms

Per Kristian Lehre, University of Birmingham

Tuesday 8 October 2024, 15:00-16:00
Nuffield, G13

Many real-world problems require finding optimal solutions within complex, high-dimensional spaces. Evolutionary Algorithms (EAs) are popular optimisation heuristics inspired by natural selection. The theory of evolutionary computation defines the runtime of an EA as the number of times it evaluates the fitness function until a solution of satisfactory quality is found. Runtime analysis is the mathematical study of the distribution of the runtime of EAs. For many EAs, it is fairly well understood how the runtime depends on algorithmic parameters such as population size, mutation rate, and selective pressure.

The runtime of EAs depends critically on the structure of the "fitness landscape" (optimisation problem). The impact of certain structural properties such as plateaus and gaps is now well understood. However, the more general relationship between fitness landscape structure and the runtime of EAs is still poorly understood. Recently, Dang et al. (2021) introduced a classification of pseudo-Boolean problems showing that "sparsity" of local optima and the "density" of fitness valleys can be crucial characteristics when determining the runtime of EAs. However, their approach could only classify some classes of pseudo-Boolean functions and thus defined an incomplete hierarchy.

We generalise the previous work to a novel, complete hierarchy for all pseudo-Boolean functions. The hierarchy is consistent with existing results for the runtime of EAs. The hardest part of the hierarchy consists of problems satisfying the No Free Lunch theorem. The easiest part contains well-known theoretical benchmark problems, easy for EAs. The intermediary parts contain instances of NP-hard problems. Problem classes where local optima sparsity exceeds fitness valley density are shown to have exponential black-box complexity. We study how random perturbations of a function can change its classification. For example, randomly perturbing search points in OneMax with constant probability leads to a problem class that can still be optimised efficiently with appropriately tuned non-elitist EAs. Our work provides a significant step towards a comprehensive theory of EA performance, with implications for algorithm design and the analysis of complex optimisation problems.

This is joint work with Duc-Cuong Dang and Anton Eremeev.
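For readers unfamiliar with the benchmark mentioned above: OneMax simply counts the 1-bits of a bitstring, and the classical elitist (1+1) EA optimises it in expected O(n log n) evaluations. A minimal Python sketch (illustrative only; the talk concerns non-elitist EAs and randomly perturbed variants, which this does not reproduce):

```python
import random

def onemax(x):
    """OneMax fitness: number of 1-bits; optimum is the all-ones string."""
    return sum(x)

def one_plus_one_ea(n, fitness, optimum, seed=0):
    """Elitist (1+1) EA with standard bit mutation at rate 1/n.
    Returns the number of fitness evaluations until `optimum` is reached."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = fitness(x), 1
    while fx < optimum:
        # Flip each bit independently with probability 1/n.
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        fy = fitness(y)
        evals += 1
        if fy >= fx:        # elitism: never accept a worse offspring
            x, fx = y, fy
    return evals
```

The `evals` counter is exactly the runtime notion defined in the abstract: the number of fitness evaluations until a satisfactory solution is found.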

Can We Successfully Approximate Large Sparse Hessian Matrices?

Jennifer Scott, Rutherford Appleton Laboratory

Tuesday 26 November 2024, 14:30-15:30
Arts 103

The success of large-scale optimization algorithms frequently depends on the availability of (approximate) Hessian matrices. If we are extremely fortunate, an analytic expression may be available. But this is not generally the case and it is frequently difficult to provide second derivatives. Therein lies our challenge.

Existing methods for approximating Hessian matrices either do not impose sparsity or have a computational cost that is prohibitive for large problems. To overcome these limitations, we propose two novel approaches. Both are based on using data from a sequence of previous iterates to build the approximation. The first reformulates the problem as a large-scale linear least squares problem, while the other seeks to satisfy as many componentwise secant equations as necessary to define each row of the Hessian matrix. A naive application of this idea is prohibitively expensive when the Hessian has some relatively dense rows, but by carefully taking into account the symmetry and connectivity of the Hessian matrix, we are able to devise an approximation algorithm that is fast and efficient, with scope for parallelism. Sparse Hessian matrices taken from the CUTEst test problem collection for optimization illustrate the effectiveness and robustness of the new ideas.

This is joint work with Jaroslav Fowkes and Nick Gould.
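The secant idea underlying these approaches can be shown in a toy dense form: given steps s_k and gradient differences y_k from previous iterates, fit a symmetric H satisfying H s_k ≈ y_k. A NumPy sketch that deliberately ignores the sparsity and scalability issues central to the talk (illustrative assumption: enough independent secant pairs are available):

```python
import numpy as np

def secant_hessian(S, Y):
    """Toy dense secant fit: columns of S are steps s_k, columns of Y are
    gradient differences y_k, and we seek a symmetric H with H S ~= Y.
    Here we take the least-squares solution H0 = Y S^+ and symmetrise;
    the sparse methods in the talk instead solve small componentwise
    systems per Hessian row, restricted to the sparsity pattern."""
    H0 = Y @ np.linalg.pinv(S)
    return 0.5 * (H0 + H0.T)
```

On an exactly quadratic function with Hessian A, the pairs satisfy y_k = A s_k, so with enough independent steps the sketch recovers A exactly.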

Title: TBA

Miqing Li, University of Birmingham

Tuesday 3 December 2024, 15:00-16:00
Arts 103

Abstract: TBA