Computer Science Seminars

Practical information

The seminars are usually held on Thursdays or Fridays between 12:30 and 1:30 PM, on the Campus de la Plaine.

You can receive seminar announcements by subscribing to the 'computerscience' mailing list. Help on ULB/VUB mailing lists can be found here. For more information, please contact Stefan Langerman (slanger at ulb dot ac dot be).

Thursday October 13, 2016, 12:00 noon, room: P.2NO9.06
The DOMP revisited: new formulations, properties and algorithms
Diego Ponce (Université Libre de Bruxelles, Belgium, Universidad de Sevilla, Spain)

Abstract: The Discrete Ordered Median Problem (DOMP) is a modelling tool that provides flexible representations of a large variety of problems, including most of the classical discrete location problems. In this talk we present new formulations for the DOMP, as well as a novel embedding of set packing constraints into a column generation approach built over a set formulation, which leads to a Branch & Price method. These new formulations allow us to gain insights into the polyhedral structure of the DOMP that advance the knowledge of this problem beyond what was previously known.

The speaker is hosted by GOM.

Friday September 23, 2016, 11:30AM, room: Rotule NO8
Approximation methods for a class of robust optimization problems
Marcio Costa Santos (Université de Technologie de Compiègne, France)

Abstract: Multistage optimization problems under uncertainty have attracted interest at both the theoretical and the practical level, and robust optimization stands among the most established methodologies for dealing with such problems. In robust optimization, we look for a solution that optimizes the objective function for the worst possible scenario. We address this problem through the lens of decomposition methods. These methods are based on the decomposition of the robust problem into a master problem (MP) and several adversarial separation problems (APs). The master problem contains the original robust constraints, written, however, only for a finite number of scenarios. Additional scenarios are generated on the fly by solving the APs. In this context, we propose new dynamic programming algorithms to solve the APs for robust problems involving budgeted uncertainty, which are based on the maximum number of deviations allowed and on the size of the deviations. These algorithms can be applied to lot-sizing problems and vehicle routing problems, among others.

The speaker is hosted by GOM.

Thursday September 1st, 2016, 12:00PM, room: Rotule NO8
Telling (3-)manifolds apart: an introduction to topological invariants
Jonathan Spreer (University of Queensland)

Abstract: This talk is about manifolds, i.e., surfaces and their higher-dimensional analogues. More precisely, it is about topological manifolds. This means that -- in contrast to geometry -- we do not care about the shape of an object, but rather about its properties that do not change under continuous deformation. Deciding whether two manifolds are continuous deformations of each other (i.e., deciding whether they represent the same topological manifold) is remarkably difficult in dimensions beyond two. As a result, topologists rely on what are called topological invariants, i.e., properties of manifolds which do not change under continuous deformation. In this talk I give an introduction to the field of computational topology in general and the use of topological invariants in particular, and explain how all this is connected to the fields of complexity theory and algorithms. As an example, I present some recent results concerning the efficient computation of Turaev-Viro invariants -- a family of particularly powerful invariants for 3-dimensional manifolds.

The speaker is hosted by Jean Cardinal.

Tuesday June 28, 2016, 11:30AM, room: Rotule NO8
Complexity, approximability and solving strategies for the multidimensional binary vector assignment problem.
Guillerme Duvilié (LIRMM, Montpellier, France)

Abstract: Let us consider some variants of the multidimensional assignment problem in which the weights of the hyperedges are locally encoded via node labels. In these problems, we are given an m-partite complete hypergraph H = (V^1, V^2, ..., V^m). Each set V^i contains exactly n nodes, each labeled by a p-dimensional binary vector. To each hyperedge we associate a representative p-dimensional binary vector obtained by performing the bitwise AND of the labels of the nodes in the hyperedge. The weight of the hyperedge is given by the number of zeros or the number of ones in the representative vector. The objective is then to select n disjoint hyperedges while maximizing the overall number of ones (max sum 1) or minimizing the overall number of zeros (min sum 0) in the representative vectors of the selected hyperedges.
First, I provide negative results for both variants. Then, we present ILP formulations that can be used to solve these problems in practice, together with experimental results based on these formulations.
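To make the objective concrete, here is a tiny Python sketch (an assumed toy instance, not from the talk) computing a hyperedge's representative vector and its one/zero counts:

```python
# Toy illustration of the weight computation described above:
# the representative vector of a hyperedge is the bitwise AND of
# its node labels, and its weight counts the ones (or zeros) in it.

def representative(labels):
    """Bitwise AND of the p-bit labels, one label per part of the hypergraph."""
    rep = labels[0]
    for lab in labels[1:]:
        rep &= lab
    return rep

def ones(v, p):
    """Number of ones in a p-bit vector."""
    return bin(v).count("1")

def zeros(v, p):
    """Number of zeros in a p-bit vector."""
    return p - ones(v, p)

# m = 3 parts, p = 4 bits: one node label picked from each part.
p = 4
edge = [0b1101, 0b1011, 0b1111]
rep = representative(edge)          # 0b1001
print(ones(rep, p), zeros(rep, p))  # 2 2
```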

The speaker is hosted by GOM.

Thursday June 23, 2016, 12 noon, room: Rotule NO8
Digraphs: subdivisions of cycles and chromatic number.
William Lochet (ENS Lyon, France)

Abstract: The chromatic number of a digraph is the chromatic number of the underlying graph. The Gallai-Hasse-Roy-Vitaver Theorem states that any digraph with chromatic number at least k contains a directed path (where all arcs go in the same direction) on k vertices. When looking for directed cycles, the picture is not so simple, since transitive tournaments have arbitrarily large chromatic number without containing any directed cycle. However, Bondy showed that if the digraph is strongly connected, then the chromatic number is a lower bound on the length of the longest directed cycle. We look at generalisations of this result to other orientations of the cycle. First we show that strong connectivity is again necessary, i.e., that for any oriented cycle C, there exist digraphs with arbitrarily large chromatic number and without any subdivision of C. Then we show an analogue of Bondy's theorem for oriented cycles which are the union of two directed paths (cycles of two blocks).
Joint work with Nathann Cohen, Frédéric Havet and Nicolas Nisse.

The speaker is hosted by Gwenaël Joret.

Tuesday June 21, 2016, 11:30AM, room: Rotule NO8
Zero-price Energy Offering by Multiband Robust Optimization.
Fabio D'Andreagiovanni (Zuse Institute Berlin, Germany)

Abstract: We consider the problem of a price-taking generating company that wants to select energy offering strategies for its generation units so as to maximize profit while taking into account the uncertainty of the market price. First, we review central references available in the literature on the use of Robust Optimization (RO) for price-uncertain energy offering, pointing out how they can expose the company to the risk of suboptimal and even infeasible offering. We then propose a new RO-based offering method, characterized by making offers at zero price, which overcomes the limits of the other methods. We show the effectiveness of the new method on realistic instances provided by our industrial partners, obtaining substantial increases in profit. Our method is based on Multiband Robustness (MR - Büsing, D'Andreagiovanni, 2012), an RO model that refines the classical Bertsimas-Sim model while maintaining its computational tractability and accessibility. MR is essentially based on the use of histogram-like uncertainty sets, which are particularly suitable for representing the empirical distributions commonly available in uncertain real-world optimization problems.
Essential References:
- C. Büsing, F. D'Andreagiovanni, New Results about Multi-band Uncertainty in Robust Optimization, Experimental Algorithms, Springer LNCS 7276, doi: 10.1007/978-3-642-30850-5_7
- F. D'Andreagiovanni, G. Felici, F. Lacalandra, Revisiting the use of Robust Optimization for optimal energy offering under price uncertainty, Submitted, http://arxiv.org/abs/1601.01728

The speaker is hosted by GOM.

Monday June 6, 2016, 11:30AM, room: Salle Solvay, NO5
On a class of stochastic programs with exponentially many scenarios.
Gustavo Angulo Olivares (Pontificia Universidad Católica de Chile)

Abstract: In stochastic programming, it is usually assumed that the random data of the problem follows a known distribution P. When P is either continuous or finite with a large number of atoms, sampling methods can be used to approximate the true problem with a model involving a reasonable number of scenarios. But what happens when P is “easy” to describe and still involves an enormous number of possible outcomes? A natural question to ask is whether we can solve the true problem without relying on sampling.
In this work, we propose a model where scenarios are affinely parametrized by the vertices or integral vectors within a given polytope Q. For instance, with Q being the n-dimensional unit cube, a vertex is a binary vector that might represent the availability of a set of resources in a particular scenario. Given that in general the number of vertices is exponential with respect to the size of the description of Q, a natural integer programming formulation that includes binary variables to choose which scenarios are satisfied is simply impractical. For this reason, we present a formulation that implicitly discards the k worst scenarios for a given vector x without including variables for each scenario, leading to a mixed-binary program of moderate size. We also study a model where the average cost of the k worst scenarios is considered, for which we present a compact linear formulation. These models can be interpreted in the context of VaR- and CVaR-constrained problems, respectively, exhibiting similar relationships. A key tool we exploit is a dynamic program to optimize a linear function over a set of integral matrices with lexicographically ordered rows, which might be of independent interest.
This is joint work with Mathieu Van Vyve.
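As a toy illustration of the two quantities above (assumed data, not from the paper): once a candidate x is fixed and each scenario's cost is known, the k worst scenarios are simply the k largest costs, which the VaR-like model discards and the CVaR-like model averages.

```python
# Minimal sketch of the "k worst scenarios" quantities described above,
# given the scenario costs of one fixed decision x (hypothetical data).

def worst_k(costs, k):
    """The k largest scenario costs, in decreasing order."""
    return sorted(costs, reverse=True)[:k]

costs = [5.0, 2.0, 9.0, 4.0, 7.0]
discarded = worst_k(costs, 2)           # dropped in the VaR-like model
avg_worst = sum(worst_k(costs, 2)) / 2  # averaged in the CVaR-like model
print(discarded, avg_worst)             # [9.0, 7.0] 8.0
```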

The speaker is hosted by GOM.

Wednesday June 1st, 2016, 4PM, room: Salle Solvay NO5
Finding Things.
John Iacono (New York University, USA)

Abstract: Speed matters. Faster algorithms create a better user experience and also use fewer resources, in terms of both power consumption and the number of computers needed to handle a given workload. Standard textbook algorithm development uses a simplistic model of a computer that is increasingly irrelevant: modern computational devices are not simple, but a heterogeneous mix of multiple CPUs, GPUs, disks, SSDs, and caches, each of which has radically different performance characteristics and requires different algorithms. In this talk we explore the world of modern computational hardware, abstract models of this hardware, and the different algorithm design paradigms that have been developed to extract maximal performance from different types of hardware. We examine this through the lens of one of the most fundamental classes of algorithms: search algorithms.

The speaker is hosted by Stefan Langerman.

Thursday May 12, 2016, 12 noon, room: Rotule NO8
On Proximity problems in Euclidean spaces.
Luis Barba (ULB + Carleton U.)

Abstract: In this talk, I will present the results of my PhD thesis which focuses on problems involving the proximity of geometric objects in the space.
In the first part, we turn our attention to facility location problems. In this setting, we are given a set of sites in a geometric space and we want to place a facility at a specific place in such a way that the distance between the facility and its farthest site is minimized. We first address a constrained version of the problem and then study a variant under the geodesic metric in simple polygons. In the process of solving facility location problems, we rely heavily on geometric structures called Voronoi diagrams. This leads us to study the problem of constructing Voronoi diagrams incrementally, analyzing the number of edge insertions and deletions needed to maintain their combinatorial structure as new sites are added.
Finally, we study intersection detection problems. In this setting, we are given two (or more) geometric objects and we are allowed to preprocess them. Then, the objects are translated and rotated within a geometric space, and we need to efficiently test if they intersect in these new positions. We develop representations of convex polytopes in any (constant) dimension that allow us to perform this intersection test in logarithmic time.

This seminar will be presented as part of Luis Barba's cotutelle Ph.D. requirements.

Monday May 9, 2016, 2:00PM, room: Rotule NO8
$\chi$-bounded classes of oriented graphs.
Pierre Aboulker (Université de Nice Sophia-Antipolis)

Abstract: Let $G$ be a graph. The chromatic number $\chi(G)$ is the smallest number of colors needed to color the vertices of $G$ so that no two adjacent vertices share the same color. The clique number $\omega(G)$ is the size of a maximum complete subgraph of $G$. It is clear that $\omega(G) \leq \chi(G)$, but some graphs may have small clique number and arbitrarily large chromatic number (for example, triangle-free graphs may have arbitrarily large chromatic number). A class of graphs is $\chi$-bounded if there exists a function $f$ such that for any graph $G$ in the class, $\chi(G) \leq f(\omega(G))$. Gyárfás and Sumner independently conjectured that for every tree $T$, the class of graphs that do not contain $T$ as an induced subgraph is $\chi$-bounded. We will give an overview of the known results on the Gyárfás-Sumner conjecture and discuss two ways to generalize this conjecture to digraphs. This is joint work with J. Bang-Jensen, N. Bousquet, P. Charbit, F. Havet, F. Maffray, S. Thomassé and J. Zamora.
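As a minimal illustration of the definitions above (not part of the talk), the following Python sketch computes $\chi(G)$ and $\omega(G)$ by brute force; the 5-cycle already exhibits a gap $\omega(G) < \chi(G)$:

```python
from itertools import combinations, product

# Brute-force chromatic number chi(G) and clique number omega(G)
# for a small graph given by its vertex list and edge list.

def chi(vertices, edges):
    """Smallest k such that G admits a proper k-coloring."""
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, coloring))
            if all(col[u] != col[v] for u, v in edges):
                return k

def omega(vertices, edges):
    """Size of a largest complete subgraph."""
    eset = {frozenset(e) for e in edges}
    for k in range(len(vertices), 0, -1):
        for sub in combinations(vertices, k):
            if all(frozenset(p) in eset for p in combinations(sub, 2)):
                return k

# The 5-cycle C5: clique number 2 but chromatic number 3,
# a smallest example of the gap omega(G) < chi(G).
V = list(range(5))
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(omega(V, C5), chi(V, C5))  # 2 3
```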

The speaker is hosted by Samuel Fiorini.

Monday May 9, 2016, 11:30AM, room: P.OF2070
k-Delete Recoverable Robustness.
Christina Büsing (RWTH Aachen University)

Abstract: We consider 0-1 optimization problems and their k-Delete recoverable robust (k-del RR) version. k-del RR is a two-stage process for dealing with cost uncertainties. In the first stage, a solution of the underlying problem is fixed. In the second stage, depending on the revealed costs, at most k elements of this solution are deleted to reduce the cost. In the first part of the talk, we prove that the k-del RR shortest path and minimum matching problems are strongly NP-hard, and on the other hand provide polynomially solvable cases. In the second part of the talk we focus on exact algorithms for solving these problems. We derive several different IP formulations and provide preliminary computational results on their running times.
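A toy sketch of the second-stage recovery described above (assuming nonnegative revealed costs; not the speaker's code): with the first-stage solution fixed, deleting its k costliest elements is the optimal recovery.

```python
# Second (recovery) stage of k-del RR on a fixed first-stage solution:
# with nonnegative revealed element costs, deleting the k most expensive
# elements minimizes the remaining cost.

def recovered_cost(costs, k):
    """Cost of the solution after deleting (up to) its k costliest elements."""
    return sum(sorted(costs)[:max(len(costs) - k, 0)])

print(recovered_cost([4.0, 1.0, 7.0, 3.0], 2))  # keeps 1.0 + 3.0 = 4.0
```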

The speaker is hosted by GOM.

Monday May 2, 2016, 11:00AM, room: P.2NO9.06
Modeling convex subsets of points.
Maurice Queyranne (CORE, UCL)

Abstract: Many planning and location problems in forestry, mining, political districting, farmland consolidation, as well as in data mining, entail the selection of a subset, or a partition into subsets, that should satisfy some, often ill-specified, shape constraints. A typical such shape constraint is that the subsets be convex, or “approximately convex”. Thus we consider associated optimization (sub)problems (finding a “best” convex subset) and modeling problems (representing such subsets using moderate numbers of variables and linear constraints) for convex subsets of given points. In general terms, a subset S of a given (finite) set of points in a convexity structure is convex if every given point that is in the convex hull of S is itself in S. In the associated optimization problem, each point has a given weight (of arbitrary sign) and we seek a convex subset with maximum (or minimum) total weight. We also seek polyhedral descriptions of the characteristic vectors of convex subsets, and extended formulations that are compact (i.e., polynomial-sized), and if possible ideal (i.e., with integer extreme solutions).
We first review known results for the well-studied and well-understood one-dimensional case. In contrast, we also describe hardness results for dimensions three and higher. We then consider the two-dimensional (planar) case. Convexity can be enforced by a polynomial (quartic) number of linear inequalities in the natural binary variables, but the resulting formulation is very weak. We review existing optimization algorithms and derive compact ideal extended formulations.
We also consider related notions of convexity in partially ordered sets and in networks and metric spaces.
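For intuition on the one-dimensional case mentioned above (a sketch, not from the talk): sorting the points by coordinate turns convex subsets into contiguous runs of given points, so a maximum-weight convex subset reduces to a maximum-sum contiguous subsequence, solvable by Kadane's algorithm.

```python
# One-dimensional case: a convex subset of points on a line is a
# contiguous run of the given points (in sorted order), so the
# maximum-weight convex subset is a maximum-sum contiguous subsequence.

def max_weight_convex_subset_1d(points):
    """points: list of (coordinate, weight) pairs. Returns the best total
    weight over contiguous runs (the empty subset, weight 0, is allowed)."""
    weights = [w for _, w in sorted(points)]
    best = run = 0.0
    for w in weights:
        run = max(0.0, run + w)   # extend the run or restart after it
        best = max(best, run)
    return best

pts = [(0.0, 2.0), (1.0, -1.0), (2.0, 3.0), (5.0, -4.0), (6.0, 1.0)]
print(max_weight_convex_subset_1d(pts))  # 4.0, the run at coordinates 0..2
```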

The speaker is hosted by GOM.

Friday April 29, 2016, 11:30AM, room: Rotule NO8
Network disconnection games.

Abstract: We study combinatorial games arising from removing arcs from a network in order to disconnect two distinguished nodes s and t. First we study a two-person zero-sum game where one player, the attacker, chooses every few minutes a set of arcs intercepting every path from s to t, while a second player, the inspector, chooses every few minutes an arc to inspect, trying to catch the attacker. We give polynomial algorithms to find optimal strategies for both players.
Then we study a cooperative game where a coalition corresponds to a set of arcs, and the value of a coalition S is the maximum number of disjoint st-cuts included in S. This is the number of independent ways a coalition can use to disconnect s and t. We give polynomial algorithms for testing membership in the core and for computing the nucleolus. If time permits, we also study disruption games, where a coalition wins if it disconnects s and t. We give polynomial combinatorial algorithms for finding the least-core and for computing the nucleolus of the least-core.

The speaker is hosted by GOM.

Thursday April 21, 2016, 12 noon, room: Rotule NO8
Quantum walks on regular graphs.
Krystal Guo (U. Waterloo)

Abstract: A quantum walk is a quantum process on a graph, which can be used to implement a universal model of quantum computation. In this talk, we will discuss discrete-time quantum walks. Emms, Hancock, Severini and Wilson proposed a graph isomorphism routine for the class of strongly regular graphs, based on the spectrum of a matrix related to the discrete-time quantum walk. We disprove this conjecture. Another matrix related to the discrete-time quantum walk has been independently studied as the Bass-Hashimoto edge adjacency operator, in the context of the Ihara zeta function of graphs. We find its spectrum for the class of regular graphs. We will also discuss a result about the cycle space of line digraphs of graphs, which is motivated by the previous problems. This is joint work with Chris Godsil and Tor Myklebust.

The speaker is hosted by Samuel Fiorini.

Friday March 4, 2016, 11:30AM, room: Rotule NO8
Scheduling a soccer league.
Frits Spieksma (KU Leuven)

Abstract: Soccer has become a major business involving many stakeholders, such as teams, police, fans, sponsors, and broadcasting companies. Huge amounts of money are being paid for the broadcasting rights, illustrating the economic value of soccer competitions. It also emphasizes the relevance of finding a good schedule. Each involved party has numerous (possibly conflicting) constraints and wishes, and a schedule that satisfies all constraints and wishes simply doesn't exist. Hence, developing a schedule that is considered fair, and that is acceptable to all parties, is a challenge.
We describe our experience in scheduling the Pro League over the past few seasons. We outline some of the different models we used, and we emphasize the advantages of model-based schedules: transparency, perceived fairness, or, simply put, the quality of the schedule.

The speaker is hosted by GOM.

Tuesday February 23, 2016, 11:30AM, room: Rotule NO8
The N+-Perfect Graph Conjecture for Claw-Free Graphs.
Annegret Wagler (Université Blaise Pascal, Clermont-Ferrand, France)

Abstract: The subject of this talk is the study of the Lovász-Schrijver PSD operator N+ applied to the edge relaxation ESTAB(G) of the stable set polytope STAB(G) of a graph. We are particularly interested in the problem of characterizing the graphs G for which N+(G) := N+(ESTAB(G)) equals STAB(G), called N+-perfect graphs, and in finding an appropriate polyhedral relaxation that coincides with N+(G) and STAB(G) if and only if G is N+-perfect. A corresponding conjecture has recently been formulated (the N+-Perfect Graph Conjecture); here we verify it for the well-studied class of claw-free graphs.
This is a brand-new result (we plan to submit the paper to IPCO 2016).

The speaker is hosted by GOM.

Thursday February 18, 2016, 12 noon, room: Rotule NO8
On the unique completability of low rank positive semidefinite matrices.
Shin-ichi Tanigawa (Kyoto U., Japan)

Abstract: In the rank-constrained positive semidefinite matrix completion problem, a partially filled matrix is given and the goal is to determine the missing entries so that the resulting matrix becomes positive semidefinite and has a specific rank. In this talk I will discuss how to decide the uniqueness of completions by analyzing the underlying graph determined by the positions of the known entries. I will explain connections among matrix completion, graph rigidity, and combinatorial optimization.

The speaker is hosted by Stefan Langerman.

Monday January 25, 2016, 12 noon, room: Rotule NO8
Differential Computation Analysis: Hiding your White-Box Designs is Not Enough.
Joppe Bos (NXP, Belgium)

Abstract: Although all current scientific white-box approaches of standardized cryptographic primitives are broken, there is still a large number of companies which sell "secure" white-box products. In this paper a new approach to assess the security of white-box implementations is presented which requires neither knowledge about the look-up tables used nor any reverse engineering effort. This differential computation analysis (DCA) attack is the software counterpart of the differential power analysis attack as applied by the cryptographic hardware community. We developed plugins to widely available dynamic binary instrumentation frameworks to produce software execution traces which contain information about the memory addresses being accessed. We show how DCA can extract the secret key from all publicly (non-commercial) available white-box programs implementing standardized cryptography by analyzing these traces to identify secret-key dependent correlations.

The speaker is hosted by QUALSEC.

Thursday December 17, 2015, 12:00PM, room: Rotule NO8
Obstructions for 3-colouring graphs with one forbidden induced subgraph.
Jan Goedgebeur (Ghent University)

Abstract: The complexity of colouring graphs without long induced paths is a notorious problem in algorithmic graph theory, an especially intriguing case being that of 3-colourability. So far, not much was known about certification in this context. We prove that there are only finitely many 4-critical P6-free graphs, and give the complete list, which consists of 24 graphs. In particular, we obtain a certifying algorithm for 3-colouring P6-free graphs, which solves an open problem posed by Golovach et al. Here, P6 denotes the induced path on six vertices.
Our result leads to the following dichotomy theorem: if H is a connected graph, then there are finitely many 4-critical H-free graphs if and only if H is a subgraph of P6. This answers a question of Seymour. The proof of our main result involves two distinct automatic proofs, and an extensive structural analysis by hand.
In this talk we will mainly focus on the algorithmic results by presenting a new algorithm for generating all minimal forbidden subgraphs to k-colorability for given graph classes. This algorithm (combined with new theoretical results) has been successfully applied to fully characterise all forbidden subgraphs for k-colourability for various classes of graphs without long induced paths.
This is joint work with Maria Chudnovsky, Oliver Schaudt and Mingxian Zhong.

The speaker is hosted by Jean Cardinal.

Wednesday December 9, 2015, 11:30AM, room: NO5, Solvay Room
A scalable resilient routing scheme.
Dritan Nace (Université de Technologie Compiègne, France)

Abstract: The context of this study is the total or partial replacement of a network by switches or new-generation routers. Networks based on such equipment may represent an interesting alternative to conventional router networks. Our aim is to propose a fully distributed resilient routing strategy which ensures that whenever a link or node fails, the traffic can always be (re)routed to its destination along an alternative route embedded in a preplanned tree. Hence, the routing is transparent to failures, and the routing decision in each node is made on the basis of the pair (destination, entry node). We show that under mild conditions there exists a routing scheme that routes on this basis and is resilient to single link failures. We generalize this to single node failures and provide numerical results showing the link dimensioning cost of this routing scheme. Exact formulations and heuristic approaches are proposed, and numerical results will be presented.
(Joint work with: S.T. Pham, J. Carlier, J-L. Lutton and J. Lattmann)

The speaker is hosted by GOM.

Thursday November 19, 2015, 12:00PM, room: Rotule NO8
Epidemics and diffusion of information on Human contact networks
Luis E C Rocha (Université de Namur, Belgium and Karolinska Institutet, Sweden)

Abstract: Network science provides the ideal framework to study how infectious diseases and information spread in society. Human contact networks for example are characterised by the structure of connections between individuals and by the temporal patterns of activity between these individuals. Extensive research has been done to understand how the structure of these contacts constrains dynamic processes, such as epidemics, opinion dynamics, and random walks, taking place on networks. On the other hand, the role of the timings of node and link activation remains little understood. In this talk, I will discuss some of our contributions to the field. In particular, I will present a unique dataset of sexual contacts extracted from a web forum and discuss how its evolving structure potentially affects epidemic outbreaks. This will be followed by a presentation of some recent results on the competition between time and structure to regulate diffusion processes on dynamic networks. I will conclude the talk briefly presenting our TempoRank method to identify influential nodes and our Individual-based Approximation to model epidemics on dynamic contact networks.

The speaker is hosted by the Machine Learning Group

Thursday November 5, 2015, 12:00PM, room: Rotule NO8
Flattening and Unfolding Convex Polyhedra.

Abstract: The problem of flattening or unfolding a polyhedron has applications in manufacturing 3D shapes out of metal, cardboard or plastic. I will present two results: (1) a new method to cut the surface of a convex polyhedron and unfold it into a simple non-overlapping polygon in the plane; (2) a natural method to continuously flatten the surface of any convex polyhedron to the plane without cutting (e.g. imagine flattening a paper bag).

The speaker is hosted by Stefan Langerman.

Thursday October 29, 2015, 11:30AM, room: Rotule NO8
Revealed preference tests of collectively rational consumption behavior.
Yves Crama (HEC Management School, University of Liege, Belgium)

Abstract: To verify the empirical adequacy of a particular household consumption model, it is important to develop efficient tests that can be applied to real-world data. These tests check whether the observed household behavior is "rational", in the sense that it is consistent with the predictions of the model. In this talk, we present different approaches based on revealed preferences to test collective models of household consumption. Testing collective rationality is computationally difficult (NP-hard). In order to overcome this negative result, we introduce mixed-integer programming formulations which can be used for medium-sized data sets. Next, we propose simulated annealing heuristics, which allow for efficient testing of the collective model on large data sets. We present the results of computational experiments with our approaches.
(joint work with Fabrice Talla Nobibon, Laurens Cherchye, Thomas Demuynck, Bram De Rock and Frits C.R. Spieksma)

The speaker is hosted by GOM.

Monday October 26, 2015, 12:00PM, room: Rotule NO8
Evolutionary Computation in Cryptology.
Stjepan Picek (KU Leuven)

Abstract: In this talk we discuss how to use evolutionary computation (EC) in cryptology. We start with a short introduction to EC and then briefly elaborate on similarities and differences between EC and machine learning methods. Since cryptology includes a plethora of difficult and non-deterministic problems, it is natural that evolutionary computation can be employed there. In fact, EC methods have been successfully used in cryptology for more than 20 years. However, the attempts have so far remained somewhat isolated and without clear justification. Accordingly, we present a number of EC applications in the crypto area. Finally, several interesting future research directions are given.

The speaker is hosted by QUALSEC.

Monday October 26, 2015, 11:00AM, room: Rotule NO8
AutoGraphiX-III : A new system for computer aided graph theory.

Abstract: More than fifteen years after the beginning of the development of AutoGraphiX (AGX), a third version of the software is now available. As research increasingly turns to complex networks (such as social networks), the use of vertex-related quantities indicating centrality (the importance of an actor in the network, measured as a topological indicator) naturally leads researchers toward the mathematical study of these quantities. This new paradigm implies a complete change in the optimization algorithm, which now natively handles multiobjective problems involving vertex-related measures.

The speaker is hosted by GOM.

Thursday October 22, 2015, 12:00PM, room: Rotule NO8
Optimal Binary Comparison Search Trees.
Mordecai J. Golin (HKUST, Hong Kong)

Abstract: Constructing optimal (minimum average search time) binary search trees (BSTs) is one of the canonical early uses of dynamic programming. In 1971, Knuth described how to solve this problem in O(n^2) time, with input specifying the probability of the different successful and unsuccessful searches. While the trees constructed were binary, the comparisons used were ternary. Successful searches terminated at internal nodes and unsuccessful searches at leaves.
By contrast, in binary comparison search trees (BCSTs), internal nodes perform binary comparisons; the search branches left or right depending on the comparison outcome, and all searches terminate at leaves. Polynomial algorithms exist for solving the optimal BCST problem in special cases with input restricted to successful searches. Hu and Tucker gave an O(n log n) algorithm when all comparisons are the inequality “<”; Anderson et al. developed an O(n^4) algorithm when both “<” and “=” comparisons are allowed.
In this talk we present the first polynomial-time algorithms for solving the optimal BCST problem when unsuccessful searches are included in the input and any set of comparisons is permitted. Our running times depend upon the comparisons allowed. If equality is not allowed, our algorithm runs in O(n log n) time; if equality is allowed, in O(n^4). We also demonstrate O(n) time algorithms that yield almost-optimal binary comparison trees, with tree cost within a constant additive factor of optimal.
This is joint work with Marek Chrobak, Ian Munro and Neal Young.
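For context (a textbook sketch, not the new algorithms from the talk), here is the classical dynamic program for optimal ternary-comparison BSTs over successful-search probabilities only, written as the plain O(n^3) recurrence that Knuth's observation accelerates to O(n^2):

```python
# Classical optimal-BST dynamic program (successful searches only):
# cost[i][j] is the optimal expected cost of a BST over keys i..j-1,
# obtained by trying every key r in the range as the root.

def optimal_bst_cost(p):
    """Minimum expected number of comparisons for a BST over keys with
    access probabilities p (p[i] = probability of searching for key i)."""
    n = len(p)
    cost = [[0.0] * (n + 1) for _ in range(n + 1)]
    weight = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(i + 1, n + 1):
            weight[i][j] = weight[i][j - 1] + p[j - 1]
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            # every key in i..j-1 is tried as root r; each level adds
            # weight[i][j] to the expected cost
            cost[i][j] = weight[i][j] + min(
                cost[i][r] + cost[r + 1][j] for r in range(i, j))
    return cost[0][n]

# Uniform probabilities over 3 keys: the balanced tree gives
# (1 + 2 + 2) / 3 = 5/3 expected comparisons.
print(optimal_bst_cost([1/3, 1/3, 1/3]))
```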

Bio: After receiving his doctorate from Princeton University in 1990, Dr Golin worked as a researcher in the Projet Algorithmes of the Institut National de Recherche en Informatique et en Automatique (INRIA) in Rocquencourt, France, before arriving at HKUST in 1993. Since then, he has also been a visiting researcher at the Max-Planck-Institut für Informatik in Saarbrücken, Germany, INRIA Sophia Antipolis in France, AT&T Labs-Research, and DIMACS. In addition, he served as the HKUST Associate Vice-President for Postgraduate Studies from 2011 to 2014.

The speaker is hosted by Stefan Langerman.

Monday October 19, 2015, 11:30AM, room: NO5, Solvay Room
k-Delete Recoverable Robustness.
Christina Büsing (RWTH Aachen University)

Abstract: We consider 0-1 optimization problems and their k-delete recoverable robust (k-del RR) versions. k-del RR is a two-stage process for dealing with cost uncertainty. In the first stage, a solution of the underlying problem is fixed. In the second stage, depending on the revealed costs, at most k elements of this solution are deleted to reduce its cost. In the first part of the talk we prove that the k-del RR shortest path and minimum matching problems are strongly NP-hard, and, on the other hand, identify polynomially solvable cases. In the second part of the talk we focus on exact algorithms for these problems. To this end we derive several different IP formulations and provide preliminary computational results on their run-times.
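As a small illustration of the second stage described above (a sketch, assuming nonnegative element costs, under which the optimal recovery simply drops the up to k most expensive elements of the fixed solution; the function name is ours, not from the talk):

```python
def k_del_recovered_cost(solution_costs, k):
    """Cost of a fixed first-stage solution after optimal second-stage
    recovery: delete the (at most) k most expensive elements.
    Assumes nonnegative costs, so dropping the largest ones is optimal."""
    kept = sorted(solution_costs)[:max(0, len(solution_costs) - k)]
    return sum(kept)
```

The hardness results concern the first stage: choosing the solution whose worst-case recovered cost, over all revealed cost scenarios, is smallest.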

The speaker is hosted by GOM.

Monday October 19, 2015, 10:30AM, room: NO5, Solvay Room
Redesigning Benders Decomposition for Large Scale Facility Location.
Ivana Ljubic (Essec Business School, Paris, France)

Abstract: The Uncapacitated Facility Location (UFL) problem is one of the most famous and most studied problems in the Operations Research literature. Given a set of potential facility locations, and a set of customers, the goal is to find a subset of facility locations to open, and to allocate each customer to open facilities, so that the facility opening plus customer allocation costs are minimized. In our setting, for each customer the allocation cost is assumed to be a linear or separable convex quadratic function.
Motivated by recent UFL applications in business analytics, we revisit approaches that work on a projected decision space and hence are intrinsically more scalable for large-scale input data. Our working hypothesis is that many of the exact (decomposition) approaches that were proposed decades ago, and discarded soon after, need to be reinvented and redesigned to take advantage of today's technology. To this end, we "thin out" the classical models from the literature, and use (generalized) Benders cuts to replace a huge number of allocation variables by a small number of continuous variables that model the customer allocation cost directly.
The obtained results show that Benders decomposition allows for a significant boost in the performance of a Mixed-Integer Programming solver. We report the optimal solution of a large set of previously unsolved benchmark instances widely used in the literature. In particular, dramatic speedups are achieved for UFLs with separable quadratic allocation costs, which turn out to be much easier than their linear counterparts when our approach is used.
Joint work with Matteo Fischetti and Markus Sinnl.

The speaker is hosted by GOM.

Thursday October 1, 2015, 12:00PM, room: Rotule NO8
On the cluster Voronoi diagrams.
Elena Khramtcova (USI, Switzerland)

Abstract: Voronoi diagrams are famous geometric structures that encode proximity information and have been studied in computational geometry for decades. Are all the questions about them settled? Surprisingly not, as the cluster Voronoi diagrams illustrate. They are natural generalizations of the simplest Voronoi diagrams, and have both practical (VLSI design) and theoretical [2] applications.
Given a set of points (sites) in the plane, its (simplest) nearest- and farthest-point Voronoi diagrams are the subdivisions of the plane into regions such that any two points in the same region have the same nearest and farthest site, respectively. Roughly speaking, if the sites are clusters of points rather than single points, the nearest-point Voronoi diagram becomes the Hausdorff Voronoi diagram, and the farthest-point Voronoi diagram becomes the farthest-color Voronoi diagram. These two diagrams do not satisfy the requirements of some standard construction methods, thus posing the challenge of enhancing such methods. Although interesting on their own for this reason, cluster diagrams can also serve as machinery to solve some seemingly unrelated theoretical questions.
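The two owner rules just described can be checked by brute force for a query point; a hedged sketch (function names are illustrative only, not from the talk):

```python
import math

def hausdorff_owner(p, clusters):
    """Region of p in the Hausdorff Voronoi diagram: the cluster
    minimizing the farthest distance from p to a point of the cluster."""
    return min(range(len(clusters)),
               key=lambda i: max(math.dist(p, s) for s in clusters[i]))

def farthest_color_owner(p, clusters):
    """Region of p in the farthest-color Voronoi diagram: the cluster
    maximizing the nearest distance from p to a point of the cluster."""
    return max(range(len(clusters)),
               key=lambda i: min(math.dist(p, s) for s in clusters[i]))
```

With singleton clusters both rules collapse to the ordinary nearest- and farthest-point diagrams, which is the generalization the abstract alludes to.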
In this talk I will (1) review the state of the art of the cluster Voronoi diagrams, including our recent research on the randomized incremental construction algorithms for the Hausdorff Voronoi diagrams [1,3] and the current open problems; (2) show the connection between the cluster Voronoi diagrams and the stabbing circles problem [2].
[1] Panagiotis Cheilaris, Elena Khramtcova, Stefan Langerman, Evanthia Papadopoulou: A Randomized Incremental Approach for the Hausdorff Voronoi Diagram of Non-crossing Clusters. LATIN 2014
[2] Merce Claverol, Elena Khramtcova, Evanthia Papadopoulou, Maria Saumell, Carlos Seara: Stabbing circles for sets of segments in the plane. ECG 2015
[3] Elena Khramtcova and Evanthia Papadopoulou: Randomized Incremental Construction for the Hausdorff Voronoi Diagram: CG week YRF 2015

Bio: Elena Khramtcova is a PhD student at the University of Lugano (USI), Switzerland, in the group of Evanthia Papadopoulou. She completed her Master's degree at Saint-Petersburg State University, Russia. Her research interests are in Combinatorial and Computational Geometry, with a focus on the study of generalized Voronoi diagrams.

The speaker is hosted by Stefan Langerman.

Wednesday September 16, 2015, 12:00PM, room: NO9.06
Just In Time Classifiers For Recurrent Concepts.
Giacomo Boracchi (Politecnico di Milano, Italy)

Abstract: Many machine-learning techniques assume that training and testing data are sampled from the same probability distribution. Unfortunately, in an increasing number of real-world learning scenarios data arrive in a stream, and the probabilistic properties of the data-generating process may change over time, violating this assumption. Any algorithm or model that does not account for such change is almost certainly going to fail when data are sampled from a drifting or changing distribution, namely when data are affected by concept drift. Approaches for learning under concept drift follow two main strategies: i) continuously adapting the model to match the most recent concept (passive approaches), or ii) steadily monitoring the data stream to detect concept drift and react when needed (active approaches). In this talk, I will present Just In Time (JIT) classifiers, a family of classifiers that implement an active approach to handling concept drift. In particular, JIT classifiers monitor the data-generating process by means of change-detection tests, and build representations of the encountered concepts that are then handled by suitable operators to identify recurrent concepts.
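As a toy illustration of the active strategy only: the sketch below uses a generic one-sided CUSUM-style detector, whereas the actual JIT classifiers use change-detection tests based on the intersection of confidence intervals rule (see the references that follow).

```python
def detect_drift(stream, nominal_mean, threshold=5.0, slack=0.5):
    """Generic one-sided CUSUM-style change detector: returns the index
    at which the accumulated positive deviation from the nominal mean
    exceeds the threshold, or None if no drift is flagged. An active
    learner would retrain, or recall a stored concept, at that point."""
    s = 0.0
    for t, x in enumerate(stream):
        # accumulate deviations above the nominal mean plus a slack term
        s = max(0.0, s + (x - nominal_mean - slack))
        if s > threshold:
            return t
    return None
```

A passive approach would instead keep updating the model on every sample, with no explicit detection step.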

References:
Cesare Alippi, Giacomo Boracchi and Manuel Roveri, "Just In Time Classifiers for Recurrent Concepts", IEEE Transactions on Neural Networks and Learning Systems, 2013. vol. 24, no. 4, pp. 620 - 634 doi:10.1109/TNNLS.2013.2239309
Cesare Alippi, Giacomo Boracchi, Manuel Roveri, "A just-in-time adaptive classification system based on the intersection of confidence intervals rule", Neural Networks, Elsevier vol. 24 (2011), pp. 791-800 doi: 10.1016/j.neunet.2011.05.012

BioSketch: Giacomo Boracchi received the M.S. degree in Mathematics from the Università Statale degli Studi di Milano, Italy, and the Ph.D. degree in Information Technology at Politecnico di Milano, Italy, in 2004 and 2008, respectively. He was a researcher at the Tampere International Center for Signal Processing, Finland, during 2004-2005. Currently, he is an assistant professor at the Dipartimento di Elettronica, Informazione e Bioingegneria of the Politecnico di Milano. His main research interests encompass two different areas: computational intelligence, and image analysis and enhancement; in particular, his research activity concerns learning methods for nonstationary environments, change/anomaly detection, computational imaging, and image restoration.

The speaker is hosted by the Machine Learning Group

Thursday September 3, 2015, 12:00PM, room: NO9.06
Bounds on the profitability of a durable good monopolist.
Gerardo Berbeglia (Melbourne Business School, Australia)

Abstract: A durable good is a long-lasting good that can be consumed repeatedly over time, and a duropolist is a monopolist in the market of a durable good. In 1972, Ronald Coase conjectured that a duropolist has no monopoly power at all. Specifically, a duropolist who lacks commitment power cannot sell the good above the competitive price. The Coase conjecture was first proved by Gul et al. (1986) under an infinite time horizon model with non-atomic consumers. Remarkably, the situation changes dramatically for atomic consumers and an infinite time horizon: Bagnoli et al. (1989) showed the existence of a subgame perfect Nash equilibrium where the duropolist extracts all the consumer surplus. Observe that, in these cases, duropoly profits are either arbitrarily smaller or arbitrarily larger than the corresponding static monopoly profits, i.e., the profit a monopolist for an equivalent consumable good could generate. Neither situation accords in practice with the profitability of durable good producers. We show that the results of Gul et al. (1986) and Bagnoli et al. (1989) are driven by the infinite time horizons. For finite time horizons, duropoly profits closely relate to static monopoly profits. In particular, for atomic agents, we prove that duropoly profits are always at least as large as static monopoly profits, but never exceed double the static monopoly profits. Joint work with Peter Sloan and Adrian Vetta.

The speaker is hosted by Gwenaël Joret.

Wednesday July 8, 2015, 11:30AM, room: Rotule NO8
Context-sensitive Ordinal Regression Models for Human Facial Behaviour Analysis.
Ognjen Rudovic (Imperial College London)

Abstract: Enabling computers to understand human facial behaviour has the potential to revolutionize many important areas such as clinical diagnosis, marketing, human-computer interaction, and social robotics, to mention but a few. However, achieving this is challenging, as human facial behaviour is a highly non-linear dynamic process driven by many internal and external factors, including ‘who’ the observed subject is, ‘what’ their current task is, and so on. All this makes the target problem highly context-sensitive and changes the dynamics of human facial behaviour, which, in turn, is critical for the interpretation and classification of target affective states (e.g., intensity levels of emotions or pain). In this talk, I will propose several extensions of the Conditional Ordinal Random Fields (CORF) model that are able to learn spatio-temporal and context-sensitive representations of human facial behaviour useful in various tasks of facial analysis. In particular, I will show how the proposed CORF models can be used for problems such as intensity estimation of facial expressions of emotion, of facial action units, and of facial expressions of pain. I will also demonstrate the performance of the models on the task of classifying facial expressions of persons with autism spectrum condition. Finally, I will discuss other potential applications of the proposed models and further challenges in the modelling of human facial behaviour.

Speaker: Ognjen Rudovic received his PhD from Imperial College London, Computing Dept., UK, in 2014, an MSc degree in Computer Vision and Artificial Intelligence from the Computer Vision Center (CVC), Barcelona, Spain, in 2008, and a BSc in Automatic Control Theory from the Electrical Engineering Dept., University of Belgrade, Serbia, in 2006. He is currently working as a Research Fellow at the Computing Dept., Imperial College London, UK. His research interests include computer vision and machine learning, with a particular focus on face analysis, Bayesian learning and inference methods, and their application to human sensing. He is a member of the Intelligent Behaviour Understanding Group (IBUG) at Imperial College London.

The speaker is hosted by the Machine Learning Group

Monday June 22, 2015, 11:00AM, room: Solbosch, C.5.130
On Solving Hard Quadratic 3-Dimensional Assignment Problems from Wireless Communications
Hans Mittelmann (Arizona State University, USA)

Abstract: We address the exact solution of a very challenging (and previously unsolved) instance of the quadratic 3-dimensional assignment problem, arising in digital wireless communications. The talk describes the techniques developed to solve this instance to proven optimality, from the choice of an appropriate mixed integer programming formulation, to cutting planes and symmetry handling. Using these techniques we were able to solve the target instance with moderate computational effort (2.5 million nodes and one week of computations on a standard PC). Further such problems, with different modulations and thus reduced symmetry, are solved to near-optimality using metaheuristic and ad hoc engineering approaches.

The speaker is hosted by Martine Labbé. This is a joint seminar between the CS department and IRIDIA.

Monday June 8, 2015, 12:00PM, room: Rotule NO8
Forecasting Uncertainty in Electricity Smart Meter Data.
Souhaib Ben Taieb (King Abdullah University of Science & Technology, Saudi Arabia)

Abstract: Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared to traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand, in order to undertake appropriate planning of generation and distribution. Smart meters provide the data to meet this need. Much of the existing literature has focused on forecasting the average electric load (often called point forecasting), that is, forecasting the mean of the future demand distribution, conditional on a number of predictor variables such as calendar and temperature variables. However, it has become increasingly important to forecast not only the conditional mean but the entire distribution of the future demand. In other words, a shift is occurring from point forecasting to probabilistic forecasting. The literature on probabilistic load forecasting is rather sparse, and is even more limited for smart meter data. We adopt a quantile regression approach where a different model is estimated for each quantile of the future distribution by minimizing the pinball loss. We compare different quantile regression methods in terms of forecast accuracy for different quantiles and different forecast horizons. Our experiments are based on a smart meter dataset collected from 3639 households in Ireland at 30-minute intervals over a period of 1.5 years. We discuss and present initial results together with some planned future work, including peak demand forecasting, hierarchical forecasting, and whether considering similarities in customer behaviour can improve forecast performance.
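The pinball loss driving the quantile regression models mentioned above has a one-line definition; a minimal sketch (pure Python, function name ours):

```python
def pinball_loss(y, f, tau):
    """Average pinball (quantile) loss of point forecasts f for
    observations y at quantile level tau in (0, 1): underestimates are
    weighted by tau, overestimates by (1 - tau)."""
    return sum(max(tau * (yi - fi), (tau - 1) * (yi - fi))
               for yi, fi in zip(y, f)) / len(y)
```

Over constant forecasts, this loss is minimized at an empirical tau-quantile of y, which is why estimating one model per quantile level builds up a full predictive distribution rather than just a conditional mean.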

The speaker is hosted by Gianluca Bontempi.

Thursday May 28, 2015, 12:00PM, room: Forum F
Flips in Edge-Labelled Triangulations.

Abstract: Flips in triangulations have generated a lot of interest over the past decades, but little attention has been given to the movement of individual edges during a sequence of flips. We study this question by attaching a unique label to each edge of the triangulation. We give upper and lower bounds on the diameter of the flip graph in this setting, for three kinds of triangulations: triangulations of a convex polygon, combinatorial triangulations, and pseudo-triangulations.
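A minimal sketch of the edge-labelled flip operation, with the triangulation stored as a map from diagonals to labels and the surrounding convex quadrilateral (a, b, c, d) supplied explicitly (a simplification; in the talk's settings this quadrilateral is determined by the two triangles adjacent to the flipped edge):

```python
def flip(diagonals, quad):
    """Flip the labelled diagonal inside the convex quadrilateral
    (a, b, c, d): diagonal (a, c) is replaced by (b, d) and keeps its
    label, so individual edges can be tracked across a flip sequence.
    `diagonals` maps each diagonal (a sorted vertex pair) to its label."""
    a, b, c, d = quad
    e_old, e_new = tuple(sorted((a, c))), tuple(sorted((b, d)))
    label = diagonals.pop(e_old)  # the label travels with the edge
    diagonals[e_new] = label
    return diagonals
```

Tracking labels this way is exactly what distinguishes the edge-labelled flip graph from the classical unlabelled one studied before.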

The speaker is hosted by Stefan Langerman.

Thursday May 21, 2015, 12:00PM, room: Rotule NO8
Limits of order-types.
Rémi de Joannis de Verclos (G-SCOP Grenoble, France)

Abstract: The notion of limits of dense graphs was invented, among other reasons, to attack problems in extremal graph theory. It is straightforward to define limits of order-types in analogy with limits of graphs, and this talk examines how two approaches developed to study limits of dense graphs can be adapted to this setting. We first consider flag algebras, which were used to open various questions on graphs to mechanical solving via semi-definite programming (SDP). We define flag algebras of order-types, and use them to obtain, via SDP, new lower bounds on the density of 5- or 6-tuples in convex position in arbitrary point sets, as well as some inequalities expressing the difficulty of sampling order-types uniformly. We next consider graphons, a continuous representation of limits of dense graphs that enables their study by continuous probabilistic or analytic methods. We investigate how planar measures fare as a candidate analogue of graphons for limits of order-types. Any such measure naturally defines a limit; we show that the map sending a measure to its associated limit is continuous and, if restricted to convex measures, a homeomorphism when both measures and limit spaces are equipped with the adequate topology. We give, however, two examples of limits of order-types that may only be represented, if at all, by measures that are not compact or everywhere irregular. The latter limit originates in a classical construction of Erdős and Szekeres and is analyzed via analogues of Sylvester's problem on the probability that k random points are in convex position. Joint work with Xavier Goaoc, Alfredo Hubard, Jean-Sébastien Sereni and Jan Volec.

The speaker is hosted by Gwenaël Joret.

Wednesday May 13, 2015, 12:00PM, room: Rotule NO8
Combinatorial Redundancy Detection Algorithm.
Krisztina Szedlák (ETH Zürich)

Abstract: The problem of detecting (and removing) redundant constraints is fundamental in optimization. We focus on the case where we are given a set H of n halfspaces in d-dimensional real space. The feasible solution set is the intersection of all halfspaces in H, and a halfspace is called redundant if its removal does not change the feasible solution set. The currently fastest known algorithm to detect all redundancies is the one by Clarkson. This method solves n linear programs, each of them with at most s constraints, where s is the number of nonredundant halfspaces. In this talk we study the combinatorial aspect of redundancy detection: how, and how fast, can we detect all redundant halfspaces? Instead of the linear system we only consider the finitely many signed dictionaries, i.e., matrices that can be thought of as an enriched version of an intersection point of d halfspaces of H. We show that given only this combinatorial information, there is an output-sensitive algorithm to detect all redundancies. Although our running time is worse than Clarkson's, in the case where all constraints are in general position we essentially match its running time.
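For intuition, the LP behind the classical redundancy test (maximize the tested constraint's left-hand side over the other constraints, with the tested constraint relaxed to keep the problem bounded) can be brute-forced in two dimensions by enumerating candidate vertices; this illustrates the test itself, not the talk's combinatorial algorithm:

```python
def is_redundant(A, b, i):
    """Brute-force redundancy test in 2-D (bounded feasible region):
    halfspace i (A[i]·x <= b[i]) is redundant iff maximizing A[i]·x
    over the remaining constraints (with constraint i relaxed by 1)
    never exceeds b[i]. The LP optimum is found by enumerating all
    pairwise intersections of constraint boundaries."""
    m = len(A)
    cons = [(A[j], b[j] if j != i else b[j] + 1.0) for j in range(m)]
    best = -float("inf")
    for j in range(m):
        for k in range(j + 1, m):
            (a1, b1), (a2, b2) = cons[j], cons[k]
            det = a1[0] * a2[1] - a1[1] * a2[0]
            if abs(det) < 1e-12:
                continue  # parallel boundaries: no vertex
            # Cramer's rule for the boundary intersection point
            x = (b1 * a2[1] - b2 * a1[1]) / det
            y = (a1[0] * b2 - a2[0] * b1) / det
            if all(a[0] * x + a[1] * y <= bb + 1e-9 for a, bb in cons):
                best = max(best, A[i][0] * x + A[i][1] * y)
    return best <= b[i] + 1e-9
```

Clarkson's method replaces this enumeration by an actual LP solver, and the talk replaces the LP solver by purely combinatorial information (signed dictionaries).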

The speaker is hosted by Stefan Langerman.

Tuesday May 5, 2015, 12:00PM, room: Rotule NO8
Modeling and Solving the One-to-One Multi-Commodity Pickup and Delivery Traveling Salesman Problem
Mario Ruthmair (AIT, Austria)

Abstract: We address the one-to-one multi-commodity pickup and delivery traveling salesman problem (m-PDTSP), which is a generalization of the TSP and arises in several transportation and logistics applications. The objective is to find a minimum-cost directed Hamiltonian path which starts and ends at given depot nodes, such that the demand of each given commodity is transported from its source to its destination and the vehicle capacity is never exceeded. In contrast, the many-to-many one-commodity pickup and delivery traveling salesman problem (1-PDTSP) considers just a single commodity, and each node can be a source or target for units of this commodity. We show that the m-PDTSP is equivalent to the 1-PDTSP with additional precedence constraints defined by the source-destination pairs of each commodity, and explore several models based on this equivalence. In particular, we consider a formulation based on a 3-dimensional layered graph that combines time and load together and achieves tight LP bounds, at the cost of a large model size. Especially for tightly capacitated instances with a large number of commodities, our branch-and-cut algorithms outperform the existing approaches. For the uncapacitated m-PDTSP (sequential ordering problem) we are able to solve to optimality several open instances from the TSPLIB.

The speaker is hosted by GOM.

Tuesday April 21, 2015, 12:00PM, room: Rotule NO8
A multi-stage stochastic supply network design problem with financial decisions and risk management: modeling aspects and an exact algorithm
Francisco Saldanha da Gama (University of Lisbon, Portugal)

Abstract: Structuring a global supply chain is a complex decision-making process. The complexity arises from the need to integrate several decisions, each of which makes a relevant contribution to the performance of the whole system. In such problems, the typical input includes a set of markets, a set of products to be manufactured and/or distributed, demand forecasts for the different markets, and some information about future conditions (e.g., production and transportation costs). Making use of this information, companies must decide where facilities (e.g., plants or distribution centers) should be located and operated, how to allocate procurement/production activities to the different facilities, and how to plan the transportation of products through the supply chain network in order to satisfy customer demands. Often, the objective consists of minimizing the costs for building and operating the system. Historically, researchers focused relatively early on the design of production/distribution systems. In the last decade, much research has been done to progressively develop more comprehensive (but tractable) models that better capture the essence of many supply chain network design (SCND) problems and provide useful tools for decision-making. However, many aspects of practical relevance in supply chain management are still far from being fully integrated in the models existing in the literature. Financial decisions and risk measures are among them. In corporate planning, financial decisions may strongly interact with supply chain planning. The evaluation of the investments made in a supply chain is usually based on their return rate. This fact calls for the inclusion of revenues in SCND models, which also gives the possibility of setting a target for the return on investment. Additionally, the multi-period nature of some decisions often has to be accommodated in the models.
Finally, the uncertainty associated with future conditions that may influence the input of the problem can hardly be neglected. In this talk, a modeling framework is discussed for supply network design problems capturing the above features. For the resulting multi-stage mixed-integer stochastic programming formulation an exact procedure is described which turns out to be a general algorithm for multi-stage stochastic problems involving integer variables. Computational tests performed using randomly generated instances are reported.

The speaker is hosted by GOM.

Thursday March 26, 2015, 12:00PM, room: Rotule NO8
Topological minors of cover graphs and dimension.
Piotr Micek (TU Berlin)

Abstract: We introduce new tools to tackle problems involving upper bounds on poset dimension. Using these tools we prove two results: (1) Posets of bounded height whose cover graphs exclude a fixed graph as a topological minor have bounded dimension; (2) The $(k+k)$-free posets whose cover graphs exclude a fixed graph as a topological minor contain only standard examples of size bounded in terms of $k$. The first result was already given by Walczak. Our argument is very different, elementary, and in particular does not rely on structural decomposition theorems. The second result supports a conjectured generalization of the first result for $(k+k)$-free posets. In this talk I will try to give a slow introduction to combinatorics of posets and their dimension in particular. Joint work with Veit Wiechert.

The speaker is hosted by Gwenaël Joret.

Thursday February 26, 2015, 12:00PM, room: P.2NO9.06
Facebook & Network Analytics: The dark side of Facebook, knowledge beyond appearances
Vandy Berten (Smals)

Abstract: This talk aims to raise awareness of the considerable amount of information that can be found about a person on social networks, and on Facebook in particular. We will see that what can be discovered there goes well beyond what users disclose of their own free will. Network analytics and graph analysis techniques make it possible to infer a great deal about a "target", such as the different social groups they belong to or how close they are to each of their friends. Inference methods can build on the information published by a target's friends to identify which of their social groups correspond to work, schooling, family, hobbies, and so on. Using web crawling tools based on a free Python library, it is possible to reconstruct a large part of someone's network of friends, even if they have chosen not to publish their friend list or to hide their photos. At no point do we use hacking or phishing techniques; we rely solely on public data.

The speaker is hosted by Olivier Markowitch.

Thursday February 12, 2015, 12:00PM, room: Rotule NO8
Railway Timetabling for Passengers
Peter Sels (Logically Yours/Infrabel/KU Leuven)

Abstract: The practice of railway timetabling is still mainly a manual one and takes more than a hundred man-months for Belgium. Infrabel, the Belgian railway infrastructure manager, is exploring more automated ways to tackle this problem. A Mixed Integer Linear Programming (MILP) optimisation model whose constraints are based on the Periodic Event Scheduling Problem (PESP) approach was constructed. The innovative stochastic objective is to minimize the expected passenger time in practice. Cyclic timetables that withstand comparison with the manually constructed ones are computed in about two hours.
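For readers unfamiliar with PESP: each constraint restricts the difference of two periodic event times to an interval modulo the cycle period. A minimal feasibility check of that standard constraint form (a sketch, not of Infrabel's model):

```python
def pesp_feasible(t_i, t_j, lower, upper, period):
    """Check one Periodic Event Scheduling Problem (PESP) constraint:
    the periodic time difference t_j - t_i must lie in [lower, upper]
    modulo the period (e.g. 3600 s for an hourly cyclic timetable).
    Python's % always returns a nonnegative remainder, which handles
    wrap-around across the period boundary."""
    return (t_j - t_i - lower) % period <= (upper - lower)
```

The MILP encodes such constraints with integer "modulo" variables; the stochastic passenger-time objective is then optimized over all feasible cyclic schedules.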

The speaker is hosted by GOM.

Thursday December 18, 2014, 11:00AM, room: 2NO7.07
GCG: A Generic Branch-Price-and-Cut Solver
Marco Lübbecke (Aachen University, Germany)

Abstract: We implemented GCG, a branch-price-and-cut solver based on the branch-price-and-cut framework SCIP. Given a MIP, the solver performs a Dantzig-Wolfe reformulation (based on user input, or in some cases a reformulation suggested by the solver itself), does column generation, and performs full branch-price-and-cut. GCG inherits advanced MIP-solving features from SCIP, like presolving, propagation, (combinatorial) cutting planes, pseudo-costs, etc. A number of additional plugins are implemented which specifically exploit the availability of both an original compact formulation and an extended column generation formulation, such as primal heuristics or branching rules. We report on computational experiments on a number of applications, recent developments and future plans for the code, and discuss what can be learned from a generic solver.

The speaker is hosted by GOM.

Thursday December 11, 2014, 12:30PM, room: 2NO7.08
Neil Olver (VU Amsterdam)

Abstract: We consider the following problem. We have a piece of information that we wish to disseminate throughout a population. It costs us money to directly give someone the information, but the information also spreads randomly through the population (according to a simple epidemic model). Our goal is to ensure that everyone has the information by a given deadline time, while spending the least amount of money. In particular, we are interested in the question of how much it can help to track the spread of the information and adaptively adjust our dissemination strategy based on this knowledge, as opposed to informing people only at the beginning and at the deadline. Our main result concerns a bound on this "adaptivity gap" in the homogeneous case where interactions between pairs of users all occur at the same rate. The primary (though not the only) motivation for this problem comes from "opportunistic" communication, a popular topic in the networking community. The idea is that a cellular network is augmented with an ad-hoc network formed by users who are close together, the goal being to substantially reduce the usage of the cellular network when sending information of common interest (e.g., real-time traffic data). (Joint work with Alberto Vera Azócar, Jose Correa and Marcos Kiwi.)
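A toy version of the setting can be simulated directly; the sketch below uses a crude discrete-time homogeneous-mixing model (all names and parameters are ours, not from the talk) to count how many users the epidemic alone informs by the deadline, which a non-adaptive strategy would then top up by paying:

```python
import random

def informed_at_deadline(n, initial, beta, deadline, rng=None):
    """Toy discrete-time epidemic spread under homogeneous mixing:
    start with `initial` informed users; in each of `deadline` rounds,
    every (informed, uninformed) pair shares the information with
    probability beta. Returns the number informed at the deadline."""
    rng = rng or random.Random(0)
    informed = initial
    for _ in range(deadline):
        # each uninformed user hears the information if at least one
        # of its pairwise interactions with informed users succeeds
        newly = sum(1 for _ in range(n - informed)
                    if any(rng.random() < beta for _ in range(informed)))
        informed += newly
    return informed
```

An adaptive strategy would observe `informed` over time and decide when and whom to pay; bounding how much that observation can help is the "adaptivity gap" question of the talk.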

Talk joint with the Algebra and combinatorics Seminar (Department of Mathematics). The speaker is hosted by Samuel Fiorini.

Wednesday December 3, 2014, 12:30PM, room: 2NO7.08
Virtual Network Embedding: Models, Complexity, and Computations
Arie Koster (Aachen University, Germany)

Abstract: Virtualization techniques allow for the coexistence of many virtual networks jointly sharing the resources of an underlying physical network. We focus on the service provider's point of view of deciding not only which virtual network requests to accept, but also which quantity of resources to reserve from the physical network owner so to maximize the revenue. We present Mixed-Integer Linear Programming formulations for the problem, discuss its complexity for special cases, and investigate them computationally.

The speaker is hosted by GOM.

Thursday November 27, 2014, 12:30PM, room: 2NO7.08
Incidences between points and lines in four dimensions.
Noam Solomon (Tel Aviv University)

Abstract: We show that the number of incidences between m distinct points and n distinct lines in $\mathbb R^4$ is $O\left(2^{c\sqrt{\log m}} (m^{2/5}n^{4/5}+m) + m^{1/2}n^{1/2}q^{1/4} + m^{2/3}n^{1/3}s^{1/3} + n\right)$, for a suitable absolute constant $c$, provided that no 2-plane contains more than $s$ input lines, and no hyperplane or quadric contains more than $q$ lines. The bound holds without the factor $2^{c\sqrt{\log m}}$ when $m \le n^{6/7}$ or $m \ge n^{5/3}$. Except for this factor, the bound is tight in the worst case. This result generalizes to four dimensions the classical Szemerédi-Trotter theorem for incidences in the plane, and the groundbreaking work of Guth and Katz from 2010 for incidences in $\mathbb R^3$ (using this bound they solved the Erdős distinct distances problem). The talk will include a review of techniques for deriving incidence bounds in two and three dimensions, including the polynomial partitioning method of Guth and Katz and more classical methods, and will explain the various issues that arise when extending the analysis to four dimensions. Joint work with Micha Sharir.

Talk joint with the Algebra and combinatorics Seminar (Department of Mathematics). The speaker is hosted by Stefan Langerman.

Wednesday November 26, 2014, 11:00AM, room: OF.2072
Extensions and limitations of a simple approach to revenue management
Patrice Marcotte (Université de Montréal, Canada)

Abstract: In the airline industry, and others as well, firms apply a set of "revenue management" techniques in order to maximize their profits. One of them consists in dynamically controlling the supply of available products, in order to match these products with demand. For instance, economy fares will be unavailable at dates close to departure, in order not to turn down customers with high willingness-to-pay. This problem can be fairly easily formulated as a stochastic and dynamic mixed integer program where, at each time period, one has to decide the set of products that are available. Unfortunately, the curse of dimensionality makes this approach infeasible. An alternative is to consider a static and deterministic approximation, which performs surprisingly well. In this presentation, we present variants of this approach that embed rules typically encountered in the airline industry, and show how the resulting mixed integer programs can be tackled within a column generation framework, either exact or heuristic. The efficiency of this approach is demonstrated on standard benchmarks from the literature, as well as on a real application.

The speaker is hosted by GOM.

Thursday November 13, 2014, 12:30PM, room: 2NO7.08
Sourour Elloumi (ENSIIE and CEDRIC, France)

Abstract: We consider the problem (QP) of minimizing a quadratic function subject to linear or quadratic constraints, where the variables are bounded integers. This very general problem can model many classical problems. A major difference between (QP) and integer linear programs lies in the fact that, in general, its continuous relaxation is an NP-hard optimization problem. To overcome this difficulty, the Convex Quadratic Reformulation approach transforms (QP) into an equivalent problem (QP') whose continuous relaxation is a convex problem. It then becomes possible to compute a global optimal solution to (QP) by solving (QP') with an implicit enumeration algorithm based on continuous convex quadratic optimization. We give an overview of recent developments in this approach. In particular, we show how positive semidefinite relaxations can be used to build the most interesting equivalent problems (QP'). We also present classical linearization as a special case of Quadratic Convex Reformulation.
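A minimal numeric sketch of the reformulation idea (an illustrative toy, not the speaker's construction): for a binary variable x_i we have x_i^2 = x_i, so adding u·(x_i^2 − x_i) to the objective changes nothing on feasible points, yet for u large enough it makes the quadratic form convex. Here the nonconvex objective x1·x2 is convexified at u = 1/2:

```python
def f(x, u=0.0):
    """x1*x2 perturbed by u*(xi^2 - xi), a term that vanishes on binaries."""
    x1, x2 = x
    return x1 * x2 + u * ((x1 * x1 - x1) + (x2 * x2 - x2))

def hessian_psd(u):
    """PSD test for the perturbed Hessian [[2u, 1], [1, 2u]]
    via nonnegative diagonal and determinant (2x2 case)."""
    a, b = 2.0 * u, 1.0
    return a >= 0 and a * a - b * b >= 0

# objective values on {0,1}^2 are unchanged for any perturbation u ...
for u in (0.0, 0.5, 3.0):
    for x in ((0, 0), (0, 1), (1, 0), (1, 1)):
        assert f(x, u) == f(x, 0.0)

# ... but the quadratic form becomes convex once u >= 1/2
assert not hessian_psd(0.0) and hessian_psd(0.5)
```

Choosing the best such perturbation is where the semidefinite relaxations mentioned in the abstract come in.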

The speaker is hosted by GOM.

Friday November 7, 2014, 11:00AM, room: 2NO9.06
Polyhedral Techniques for Consensus Systems.
Pierre-Yves Chevalier (UCL)

Abstract: Consensus systems are dynamical systems in which a group of agents try to reach agreement on a common value. We use invariant polyhedron techniques to address the problem of determining if a discrete time switched consensus system converges for any switching sequence, and that of determining if it converges for at least one switching sequence. For these two problems, we provide necessary and sufficient conditions that can be checked in singly exponential time. The first problem was known to be NP-hard but decidable with doubly exponential complexity while no result was known for the second problem. This is joint work with Raphaël Jungers (UCL).

Talk joint with the Algebra and combinatorics Seminar (Department of Mathematics). The speaker is hosted by Samuel Fiorini.

Thursday October 23, 2014, 12:30PM, room: 2NO7.08
Realizations of neighborly, inscribable and egg-scribable polytopes.

Abstract: Delaunay triangulations in R^d are a central object in many areas of mathematics and computational geometry. However, little is known about which combinatorial types can appear as a Delaunay triangulation. Via an inverse stereographic projection, this question translates to asking which combinatorial types of polytopes admit realizations with all vertices on a d-sphere. We will construct a large family of neighborly polytopes that are inscribable not only in the sphere, but also in every smooth strictly convex body. This provides the current best lower bound for the number of combinatorial types of Delaunay triangulations. In the second part of the talk, we will use these techniques to show that realization spaces of inscribed polytopes exhibit universality in the sense of Mnëv. For example, this allows us to construct a pair of point configurations whose Delaunay triangulations are combinatorially equivalent, yet no continuous transformation maps one to the other without changing the combinatorics of the Delaunay triangulation. This talk reports joint work with Karim Adiprasito, Bernd Gonska and Louis Theran.

Talk joint with the Algebra and combinatorics Seminar (Department of Mathematics). The speaker is hosted by Samuel Fiorini.

Wednesday October 22, 2014, 12:30PM, room: 2NO7.08
Linear kernels and single-exponential algorithms via protrusion decompositions.
Ignasi Sau (CNRS LIRMM, France)

Abstract: A t-treewidth-modulator of a graph G is a subset X of V(G) such that the treewidth of G-X is at most some constant t. In this talk, we will present an algorithm to compute a decomposition scheme for graphs G that come equipped with a t-treewidth-modulator. This decomposition, called a "protrusion decomposition", is the cornerstone in obtaining the following two main results. (We will certainly not have time in this talk to discuss both results in detail.)

We first prove that any parameterized graph problem (with parameter k) that has "finite integer index" and is "treewidth-bounding" admits a linear kernel on H-topological-minor-free graphs, where H is some arbitrary but fixed graph. A parameterized graph problem is called treewidth-bounding if all positive instances have a t-treewidth-modulator of size O(k), for some constant t. This result partially extends previous meta-theorems on the existence of linear kernels on graphs of bounded genus [Bodlaender et al., FOCS 2009] and H-minor-free graphs [Fomin et al., SODA 2010].

Our second application concerns the Planar-F-Deletion problem. Let F be a fixed finite family of graphs containing at least one planar graph. Given an n-vertex graph G and a non-negative integer k, Planar-F-Deletion asks whether G has a subset X of V(G) such that |X| ≤ k and G-X is H-minor-free for every graph H in F. An algorithm for Planar-F-Deletion with running time 2^O(k) n^O(1) (such an algorithm is called "single-exponential") was presented in [Fomin et al., FOCS 2012] under the condition that every graph in the family F is connected. Using our algorithm to construct protrusion decompositions as a building block, we get rid of this connectivity constraint and present an (optimal) single-exponential algorithm for the general Planar-F-Deletion problem.

This is joint work with EunJung Kim, Alexander Langer, Christophe Paul, Felix Reidl, Peter Rossmanith, and Somnath Sikdar.

The speaker is hosted by Gwenaël Joret.

Thursday September 25, 2014, 12:30PM, room: 2NO7.08
Nonlinear forecasting of macroeconomic variables using three automated model selection techniques
Timo Teräsvirta (Aarhus University)

Abstract: In this paper, the focus is on the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. A central problem with these models is how to specify their structure and estimate the parameters. Recently, White (2006) presented a solution that amounts to converting the specification and nonlinear estimation problem into a linear model selection and estimation problem. This leads to a situation that is somewhat atypical, at least in time series econometrics, in which the number of variables may vastly exceed the number of observations. We compare three methods of model selection capable of handling this problem. One is White's QuickNet, and the other two are the Marginal Bridge Estimator (MBE), well known to statisticians and microeconometricians, and Autometrics, popular among time series econometricians. We consider multiperiod forecasts. There are two main ways of generating them. One is to specify and estimate a single model and generate the forecasts recursively from it. It is also possible to build a separate model for each forecast horizon and use it for obtaining the forecasts directly for the horizon in question. We compare the performance of these alternatives, when the set of available models consists of linear autoregressive, neural network and nonparametric ones.

The speaker is hosted by Gianluca Bontempi.

Monday September 8, 2014, 12:30PM, room: 2NO7.08
Gas network operation and optimization under uncertainty
Jonas Schweiger (Zuse Institute Berlin, Germany)

Abstract: With the deregulation of the gas markets, the requirements on the network change rapidly and demand more flexibility from the network operators. Gas network operators therefore have to invest in their network infrastructure. As these investments are very cost-intensive and long-living, network extensions should not focus on a single bottleneck scenario, but should increase the flexibility to fulfill different demand scenarios. First, we formulate a model for network extension: we search for cost-optimal extensions such that previously infeasible demand scenarios can be realized in the extended network. Nonlinear gas physics and discrete operation decisions already make this deterministic planning model a challenging mixed-integer non-convex optimization problem. Second, we extend this model to several scenarios and propose a decomposition: a Branch&Bound algorithm that uses the single-scenario problem as a subproblem. Since the single-scenario problems are themselves challenging, we solve them to global optimality only in the leaf nodes of our Branch&Bound tree, but still use valid bounds and solutions in every node of the tree. Furthermore, we present conditions under which previously found solutions to the scenario problems can be reused to avoid recomputations.

The speaker is hosted by GOM.

Thursday June 12, 2014, 12:30PM, room: 2NO6.07
Service start time optimization in elementary shortest path problems with resource constraints
Yasemin Arda (Université de Liège)

Abstract: Besides their real-life applications, elementary shortest path problems with resource constraints arise as the pricing sub-problem when more complex problems are solved through column generation and branch-and-price. We consider an elementary shortest path problem with resource constraints where a capacitated single vehicle at the depot serves a set of customers while respecting their associated time windows. The vehicle can start serving the customers at any desired time and can be used for a fixed amount of time. The total transportation cost is defined as a function of the total distance traveled and the total amount of time the vehicle spends performing the assigned trip. Every time the vehicle visits a customer, it delivers the required load and collects a revenue in return for the delivery. The objective is to determine the trip to be performed and the service start time from the depot so as to minimize the total loss, i.e. the total transportation cost minus the collected revenues. In this study, we develop two exact dynamic programming algorithms that can deal with the infinite number of Pareto-optimal states arising from the fact that the vehicle can depart from the depot at any point in time and its cost depends on the amount of time it spends serving customers. Apart from reporting computational results for the elementary shortest path problem itself, we also embed it in a column generation framework for the corresponding capacitated vehicle routing problem with time windows. Recent results for the resulting branch-and-price algorithm are also reported.
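For intuition, here is a much-simplified labeling sketch with a single capacity resource and no time windows or variable start times (the features that make the talk's problem hard); the instance data below is made up for illustration:

```python
import heapq

def espprc(arcs, demand, capacity, source, sink):
    """Labeling sketch for the elementary shortest path problem
    with one resource (vehicle capacity).

    arcs: dict mapping (u, v) -> arc cost; demand: dict node -> load.
    A label (cost, load, visited, node) is extended along outgoing arcs;
    since 'visited' determines the load, dominance reduces to keeping
    the cheapest label per (node, visited) pair.
    """
    heap = [(0.0, 0, frozenset([source]), source)]
    best = {}
    opt = None
    while heap:
        cost, load, visited, u = heapq.heappop(heap)
        if u == sink:
            if opt is None or cost < opt:
                opt = cost
            continue
        for (a, b), c in arcs.items():
            if a != u or b in visited:
                continue  # keep the path elementary
            nload = load + demand.get(b, 0)
            if nload > capacity:
                continue  # resource constraint violated
            key = (b, visited | {b})
            ncost = cost + c
            if key in best and best[key] <= ncost:
                continue  # dominated label
            best[key] = ncost
            heapq.heappush(heap, (ncost, nload, visited | {b}, b))
    return opt
```

On a 4-node instance with arcs {(0,1): 1, (1,2): 1, (2,3): 1, (0,2): 10, (1,3): 10} and unit demands at nodes 1 and 2, capacity 2 admits the cheap path 0-1-2-3, while capacity 1 forces a detour.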

The speaker is hosted by GOM.

Thursday June 5, 2014, 12:30PM, room: 2NO7.07
Approaches to bilevel programs with interval coefficients

Abstract: Bilevel problems have been proposed for dealing with hierarchical processes involving two levels of decision making. Bilevel programs involve two optimization problems, where the constraint region of one problem is implicitly determined by the other. In this talk, we address linear bilevel programs in which the coefficients of both objective functions are interval numbers. Interval coefficients have been used in the literature to deal with decision problems whose coefficients are only approximately known. First, we focus on the optimal value range problem, which consists of computing the best and worst optimal objective function values and determining the settings of the interval coefficients which provide these values. In a published paper with H.I. Calvete, we show by examples that, in general, there is no systematic way of selecting the specific values of the interval coefficients that yield the best and worst possible optimal solutions. Taking into account the properties of linear bilevel problems, we prove that these two optimal solutions occur at extreme points of the polyhedron defined by the common constraints. Moreover, we develop two algorithms based on ranking extreme points that allow us to compute them, as well as to determine settings of the interval coefficients which provide the optimal value range. In more recent work, the focus is on the order relations between interval numbers that represent the decision makers' preferences. Taking these into account, we show that the bilevel problem with interval coefficients can be solved by transforming it into a suitably defined bicriteria bilevel problem. We are currently working on algorithms for solving this problem.

The speaker is hosted by Martine Labbé.

Thursday June 5, 2014, 11:15AM, room: TBA
A model for the BitCoin block chain that takes propagation delays into account.
Peter Taylor (U. Melbourne)

Abstract: Unlike cash transactions, most electronic transactions require the presence of a trusted authority to verify that the payer has sufficient funding to be able to make the transaction and to adjust the account balances of the payer and payee. In recent years 'BitCoin' has been proposed as an 'electronic equivalent of cash'. The general idea is that transactions are verified in a coded form in a 'block chain', which is maintained by the community of participants. Problems can arise when the block chain splits: that is, different participants have different versions of the block chain, something which can happen only when there are propagation delays, at least if all participants are behaving according to the protocol. In this talk I shall present a preliminary model for the splitting behaviour of the block chain. I shall then go on to perform a similar analysis for a situation where a group of participants has adopted a recently-proposed strategy for gaining a greater advantage from BitCoin processing than its combined computing power should warrant.

The speaker is hosted by Guy Latouche.

Thursday May 15, 2014, 12:30PM, room: 2NO6.07
The seminar of Alfredo Marin (University of Murcia, Spain) has been canceled.

Thursday March 27, 2014, 12:30PM, room: 2NO6.07
Nurse rostering models and algorithms
Greet Vanden Berghe (KUL)

Abstract: Health care is under high pressure to improve efficiency, given increasing requirements and decreasing resources. Among the activities to optimise, nurse rostering is one of the most relevant to address. The problem is computationally complex and has potential for improvement through automated decision support. Personnel rosters also have a considerable socio-economic impact. This optimisation problem has yielded ample practice-oriented operational research approaches. Despite the vast amount of academic research results, it remains hard for novice developers to profit from general insights or re-usable approaches. This 'cold start' issue can be partially explained by the complicated regulations typical of personnel environments with 24/7 duties, which differ in almost every organisation. The very same issue also persists due to the lack of a theoretical framework for nurse rostering. From an academic point of view, interesting models have been developed for varying nurse rostering problems. The approaches range from self-rostering and manual problem decompositions to different levels of automated decision support. The seminar will focus on the relevance of academic results and on the interplay between practical and theoretical nurse rostering contributions.

The speaker is hosted by GOM.

Thursday March 13, 2014, 3:00PM, room: 2NO8.08
Refinement heuristics for capacitated extended-ring network design
Alessandro Hill (Universiteit Antwerpen)

Abstract: We present two heuristic approaches to solve capacitated network design problems based on refinement techniques. The main idea is to improve a given solution iteratively by solving subproblems defined on structural neighbourhoods to optimality. In our generic approaches we use the exact solution techniques in the most efficient way: we build subproblems of the largest possible size that can still be solved with low computational effort in our framework. Especially for networks with capacity constraints and extended topologies these approaches turn out to be quite powerful. Although we focus on the minimization of the total edge costs in our work, related problem features such as prize collecting or locational installation costs could be integrated. We first consider the multi-depot ring star problem (MDRSP), which generalizes the ring star problem and therefore the classical travelling salesman problem. Several depots may serve a restricted number of rings with optional usage of Steiner vertices. Customers are either directly connected by such rings or assigned to them, leading to ring stars. Each ring star obeys a depot-dependent customer limit. Various structural local neighbourhoods are introduced and explored as subproblems of type MDRSP themselves. We propose a branch & cut method for the MDRSP which is used to carry out the refinements. For the optimization on a global level, multiple contraction techniques are proposed, leading to known vehicle routing problem variants that we solve exactly, too. The efficiency is shown by computational results improving known upper bounds for instance classes from the literature containing up to 1000 vertices. 91% of the known best objective values are improved, by up to 13%, in competitive computational time.
Secondly, we address the capacitated ring tree problem (CRTP), a recent capacitated single-depot network design model that combines the ring topology from classical capacitated vehicle routing problems with the tree structure considered in Steiner tree problem variants. In contrast to the ring star problems, a prespecified subset of customers is allowed to be assigned to rings by tree structures instead of simple links. Again, we use an exact branch & cut algorithm to carry out refinements. To increase the spectrum of neighbourhoods that can be modelled, we refine in a local branching fashion: for several structural neighbourhood ideas, a suitable set of restricting inequalities is provided to a master model representing the overall problem. This corresponds to a strategy for the partial exploration of multiple branch & bound trees on an integer programming formulation. Preliminary results are presented and compared to bounds obtained by the exact method and a local search based heuristic approach.

The speaker is hosted by GOM.

Thursday February 20, 2014, 12:30PM, room: 2NO6.07
Julien Forget (Université Lille 1)

Abstract: Usually, real-time system developers first design a system as a set of "functionalities" (blocks of code) with real-time constraints and then implement these functionalities as a set of concurrent real-time tasks (threads). Complex systems, such as the flight control system of an aircraft, are typically made up of several hundreds or thousands of functionalities. Thus, several functionalities must be clustered into the same task, to reduce context switches, memory consumption, etc. Our work aims at automating this "task clustering" process. In this talk, we will first briefly present the context of our work: the language Prelude, a high-level language for programming real-time embedded control systems. We will then emphasize how task clustering fits into the compilation of such a language and detail our first results, which consider clustering from a real-time scheduling point of view.

The speaker is hosted by Joël Goossens.

Thursday December 12, 2013, 12:30PM, room: 2NO7.07
The making of the HIPPEROS RTOS at the PARTS Research Unit
Ben Rodriguez (PARTS, ULB, Belgique)

Abstract: This talk is not the usual research talk. Our goal is to show how we bridged the gap from research to valorization by starting with real-time scheduling theory and OS design principles, then went on to create a sound RTOS implementation that meets realistic constraints and models, and finally obtained industrial and economic support to create a valorization structure. This process is far from complete. We show that there are several areas where our group could benefit from synergies with other research units, e.g. security, formal proofs, software engineering, compilers, etc.

Thursday November 28, 2013, 12:30PM, room: 2NO7.07
Identifying codes in classes of graphs closed under induced subgraphs
Aline Parreau (Université de Liège, Belgique)

Abstract: An identifying code is a subset of vertices of a graph such that each vertex is uniquely determined by the intersection of its closed neighborhood with the identifying code. Since determining the minimum size of an identifying code is an NP-complete problem, we restrict our study to classes of graphs closed under induced subgraphs. We will present precise results for specific classes of graphs, such as interval and line graphs, and give more general results using the Vapnik-Chervonenkis dimension.
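As a concrete illustration of the definition (a naive sketch, not the speaker's techniques): a candidate code is identifying when every vertex has a nonempty, distinct trace in it, and a minimum code can be found by enumeration on small graphs:

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    # N[v]: the vertex together with its neighbours
    return {v} | adj[v]

def is_identifying_code(adj, code):
    # every vertex needs a nonempty trace N[v] ∩ C, distinct from all others
    traces = {}
    for v in adj:
        t = frozenset(closed_neighborhood(adj, v) & code)
        if not t or t in traces.values():
            return False
        traces[v] = t
    return True

def min_identifying_code(adj):
    # brute force by increasing size; exponential, for tiny graphs only
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_identifying_code(adj, set(cand)):
                return set(cand)
    return None  # graphs with twin vertices admit no identifying code
```

For the path on four vertices the minimum identifying code has size 3, while a graph with two twin vertices (such as a single edge) admits none.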

The speaker is hosted by Eglantine Camby.

Thursday November 21, 2013, 12:30PM, room: 2NO7.07
Global Liner Shipping Network Design
Shahin Gelareh (University of Artois, France)

Abstract: Liner shipping companies divide their service regions into several rotations (strings) in order to operate their container vessels. A string is the ordered set of ports at which a container vessel will call. Each port is usually called at no more than twice along one string, although a single port may be called at several times across different strings. The size of a string dictates the number of vessels required to offer a given frequency of service. In order to better use their shipping capacity, groups of Liner Service Providers sometimes make a short-term agreement to merge some of their service routes (in a certain region) into one main ocean-going rotation and p feeder rotations. In order to minimize the weighted sum of transit time and fixed deployment costs, we develop a mixed integer linear programming model for the network design, together with the allocation of a proper capacity size and frequency setting for every rotation. Given that none of the existing general-purpose MIP solvers is able to solve even very small problem instances in a reasonable time, we apply a Lagrangian decomposition approach which uses a heuristic procedure and is capable of obtaining practical, high-quality solutions in reasonable time. The model has been applied to a real example, and we shall present some of the results obtained, which show how it facilitates a better use of assets and a significant reduction in fuel consumption, therefore allowing a more environmentally friendly service.

The speaker is hosted by Bernard Fortz.

Thursday November 14, 2013, 12:30PM, room: 2NO7.07
Some Math Programming and Game Theoretic approaches for the design of Robust Railway Networks
Justo Puerto (University of Seville, Spain)

Abstract: This talk discusses and extends competitive aspects of some games proposed in the literature, where a robust railway network design problem is formulated as a non-cooperative zero-sum game in normal form between a designer/operator and an attacker. Due to the importance of the order of play and the information available to the players at the moment of their decisions, we here extend those previous models by proposing formulations of this situation as dynamic games. Besides, we propose a new mathematical programming model that optimizes both the network design and the allocation of security resources over the network. This approach also proposes a model to distribute security resources over an already existing railway network in order to minimize the negative effects of an intentional attack. The models will be illustrated with examples.

The speaker is hosted by Martine Labbé.

Thursday November 7, 2013, 12:30PM, room: 2NO7.07
Large-Scale Reformulations of Combinatorial Problems: Not All Master Problems are Created Equal
Antonio Frangioni (Università di Pisa, Italy)

Abstract: Developing a "good" mixed-integer formulation is most often a fundamental step in solving hard combinatorial problems. Unfortunately, "good quality" formulations, i.e., those with low continuous relaxation gaps, are most often exceedingly large. This calls for techniques that generate them incrementally, such as polyhedral methods (row generation), Lagrangian relaxations (column generation), and their combinations. We will review some recently proposed ideas to improve the performance of incremental generation techniques, based on different ways to modify and reformulate the standard "model" (master problem). In particular, we will concentrate on stabilization techniques, describing some different but related ways in which specific structures of the problem can be algorithmically exploited. Computational results will be shown to demonstrate the potential benefits of these approaches and to point out the issues that still limit their widespread applicability.

The speaker is hosted by Martine Labbé.

Thursday October 31, 2013, 12:30PM, room: 2NO7.07
Exact algorithms for Weak Roman Domination
Mathieu Chapelle (Algorithmique, ULB, Belgium)

Abstract: We consider the Weak Roman Domination problem. Thinking of the Roman Empire, the aim is to place "legions" on some cities of the Empire so as to defend all of them in case of an enemy attack. More formally, the goal is to find a weak Roman domination function (wrd-function) of minimum cost, i.e. a function which assigns to each vertex 0, 1 or 2 legions, such that (1) each vertex is defended, that is, it has at least one legion in its closed neighborhood (which includes itself), and (2) if a vertex without a legion is attacked, a neighboring legion can move onto this vertex to protect it while all vertices of the graph remain defended. In this talk, I will present two exact algorithms that solve this problem faster than the trivial enumeration algorithm: first an algorithm running in O^*(2^n) time and exponential space, and then an O^*(2.2279^n) algorithm using polynomial space. Our results rely on structural properties of a wrd-function, as well as on the best polynomial-space algorithm for the Red-Blue Dominating Set problem.
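The definition can be made concrete with a naive enumeration over all assignments of 0, 1 or 2 legions, far slower than the algorithms of the talk but useful for checking small graphs:

```python
from itertools import product

def is_defended(adj, f):
    # every vertex has a legion in its closed neighborhood
    return all(f[v] > 0 or any(f[u] > 0 for u in adj[v]) for v in adj)

def is_wrd_function(adj, f):
    if not is_defended(adj, f):
        return False
    for u in adj:
        if f[u] == 0:
            # an attack on u must be answerable: some neighbour sends a
            # legion to u, and the graph stays defended afterwards
            ok = False
            for v in adj[u]:
                if f[v] > 0:
                    g = dict(f)
                    g[v] -= 1
                    g[u] += 1
                    if is_defended(adj, g):
                        ok = True
                        break
            if not ok:
                return False
    return True

def weak_roman_domination_number(adj):
    # trivial 3^n enumeration over all assignments
    vertices = list(adj)
    best = None
    for vals in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, vals))
        cost = sum(vals)
        if (best is None or cost < best) and is_wrd_function(adj, f):
            best = cost
    return best
```

For instance, both the path and the cycle on few vertices need two legions: one legion alone either leaves a vertex undefended or cannot move without exposing another.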

Thursday October 24, 2013, 12:30PM, room: 2NO9.06
Integer programming models for open stack problems
Luigi De Giovanni (University of Padova, Italy)

Abstract: In many applications, a suitable permutation of patterns (product orders, cutting patterns, electronic circuit nodes, etc.) has to be found in order to optimize some given objective function (minimization of the work-in-process, of the open stacks, of the circuit area, etc.), giving rise to the so-called Open Stack problems. The literature presents different Integer Programming models based on different representations, among which interval graphs and binary matrices. We will focus on a formulation that exploits the structural properties of consecutive ones matrices, presenting different families of valid inequalities derived from the facets of the Consecutive Ones polytope, and discussing the related separation procedures. The model is solved by a branch-and-cut method, initialized with a heuristic solution provided by a genetic algorithm. The procedure has been applied to literature benchmarks, as well as to real instances from the Very Large Scale Integration (VLSI) circuit industry, showing improvements with respect to previous approaches.
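The consecutive ones property underlying this formulation is easy to state: a 0/1 matrix has it if its columns can be permuted so that the ones in each row appear consecutively. A brute-force recognition sketch (illustrative only; linear-time recognition uses PQ-trees), with made-up matrices:

```python
from itertools import permutations

def row_ones_consecutive(row):
    # the positions of the ones must form one contiguous block
    ones = [j for j, v in enumerate(row) if v]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

def has_consecutive_ones_property(matrix):
    # try every column order; factorial time, small instances only
    ncols = len(matrix[0])
    for perm in permutations(range(ncols)):
        if all(row_ones_consecutive([row[j] for j in perm])
               for row in matrix):
            return True
    return False
```

For example, [[1,0,1],[0,1,1]] gains the property by swapping the last two columns, whereas the three rows {1,2}, {2,3}, {1,3} over three columns cannot all be made contiguous: a linear order of three columns has only two adjacent pairs.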

The speaker is hosted by Martine Labbé.

Tuesday June 11, 2013, 12:30PM, room: Forum H
On algebraic methods in incidence geometry
Abdul Basit (Rutgers University, NJ, USA)

Abstract: Incidence geometry deals with the number of possible incidences between a finite number of geometric objects such as points, lines, and circles. Recently, a number of breakthroughs in the subject (e.g. Dvir's solution of the finite field Kakeya conjecture, or Guth and Katz's near-optimal solution of the Erdős Distinct Distances problem) have been obtained by applying the polynomial method. One of the main ingredients of these proofs is partitioning space using a polynomial constructed via the polynomial ham-sandwich theorem and then using tools from algebraic geometry to understand the objects being studied. In this talk, we give an introduction to these methods. Specifically, we will prove the classical Szemerédi-Trotter theorem. We will then talk about incidences between points and spheres in three dimensions.

This is a joint Algebra & Combinatorics, and Computer Science seminar. The speaker is hosted by Stefan Langerman.

Thursday June 6, 2013, 12:30PM, room: Forum H
Column generation approaches for two combinatorial problems of a telecommunication company
Murat Firat (France Telecom, Sophia Antipolis. Orange Labs.)

Abstract: Problems encountered in a telecommunication company are highly complex, and most of them are NP-hard. In this talk two such problems will be discussed: the two-level FTTH network design problem and the stable workforce assignment problem. Compact formulations of both problems, convenient for the Column Generation (CG) method, will be presented. For the former problem, the complexity of the two pricing problems is analyzed; it turns out that both are NP-hard to approximate. The pricing problem of the latter, finding a team of technicians for a job, is NP-hard as well. Our CG method is embedded into a Branch-and-Bound search for real-life instances, and we find optimal solutions in reasonable time compared to the solution times obtained by using an LP solver such as CPLEX.

The speaker is hosted by Bernard Fortz.

Thursday May 16, 2013, 12:30PM, room: Forum H
Advances on Matroid Secretary Problems: Free Order Model and Laminar Case.
Jose A. Soto (TU-Berlin and University of Chile)

Abstract: In the standard matroid secretary problem we are given a matroid M in which every element has a hidden nonnegative weight. Our task is to incrementally construct an independent set of large weight in M in the setting where the weights are revealed in random order, and we must add elements to our solution as soon as they are revealed. There are constant-factor approximation algorithms for this problem on several classes of matroids. A well-known open problem is to determine if every matroid admits such an algorithm. In this talk, I will discuss this problem and present a simple constant factor approximation for the non-trivial class of laminar matroids. I will also present the first constant-factor approximation for a relaxed version of the matroid secretary problem in which we are allowed to adaptively choose the order in which elements reveal their weights. This is based on joint work with Patrick Jaillet (MIT) and Rico Zenklusen (Johns Hopkins University).
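For background (not the constant-factor algorithms of the talk): the classical secretary problem is the rank-1 uniform matroid case, where observing the first n/e elements and then taking the first record achieves the optimal constant 1/e. A minimal sketch:

```python
import math
import random

def classical_secretary(weights, rng=random):
    """Threshold rule for the classical secretary problem, i.e. the
    rank-1 uniform matroid case of the matroid secretary problem.

    Elements arrive in uniformly random order; we observe the first n/e
    of them without committing, then accept the first element beating
    everything seen so far. Returns the index of the accepted element.
    """
    n = len(weights)
    order = list(range(n))
    rng.shuffle(order)                  # random arrival order
    cutoff = max(1, int(n / math.e))    # length of the observation phase
    threshold = max(weights[i] for i in order[:cutoff])
    for i in order[cutoff:]:
        if weights[i] > threshold:
            return i
    return order[-1]                    # forced to accept the last element
```

Simulating many runs recovers the familiar success probability of roughly 1/e ≈ 0.37 for picking the single heaviest element.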

The speaker is hosted by Samuel Fiorini.

Wednesday May 15, 2013, 12:30PM, room: Rotule NO8
Tensors, colours, octahedra.
Imre Bárány (Alfréd Rényi Mathematical Institute)

Abstract: Several classical results in convexity, like the theorems of Caratheodory, Helly, and Tverberg, have colourful versions. In this talk I plan to explain how two methods, the octahedral construction and Sarkaria's tensor trick, can be used to prove further extensions and generalizations of such colourful theorems.

The speaker is hosted by Stefan Langerman.

Thursday March 28, 2013, 12:30PM, room: NO5, Solvay Room
Souhaib Ben Taieb (U.L.B, Machine Learning Group)

Abstract: We describe and analyze the approach used by Team TinTin (Souhaib Ben Taieb and Rob J. Hyndman) in the Load Forecasting track of the Kaggle Global Energy Forecasting Competition 2012. The competition involved a hierarchical load forecasting problem for a US utility with 20 geographical zones. Eight in-sample weeks and one out-of-sample week for 21 separate time series needed to be backcast and forecast, respectively. The electricity demand for the next day is forecast using a separate model for each hourly period. We use component-wise gradient boosting to estimate each hourly model, with univariate penalized regression splines as base learners. The models allow the electricity demand to change with time-of-year, day-of-week, time-of-day, and on public holidays, with the main predictors being current and past temperatures as well as past demand. Our model ranked fifth out of 105 participating teams.

Thursday February 28, 2013, 12:30PM, room: 2NO.506
Differentiating Defaulters in Credit Scoring using Data Mining and Game Theory.
Richard Weber (U. de Chile)

Abstract: In Credit Scoring we usually try to differentiate between defaulters and non-defaulters. In this work, we present a methodology for improving credit scoring models by differentiating between two forms of rational behaviour of the defaulters of loans. According to practitioners' knowledge there are two types of defaulters, those who do not pay because of cash flow problems ("Can’t Pay"), and those that do not pay because of lack of willingness to pay ("Won't Pay"). Within our approach the results of several supervised models are benchmarked, where the models deliver the probability of belonging to one of these three new classes (good payers, "Can't Pays", and "Won't Pays"). The process brings significant improvement in classification accuracy, delivers strong insights about the nature of defaulters, and could lead to improved credit scoring systems.

The speaker is hosted by Martine Labbé.

Wednesday January 9, 2013, 12:30PM, room: Rotule NO.8
Response Time bounds for Static-Priority Tasks and Arbitrary Relative Deadlines with Resource Augmentation.
Pascal Richard (LIAS/Université de Poitiers)

Abstract: We propose a parametric approximation algorithm (Fully Polynomial Time Approximation Scheme - FPTAS) that defines a compromise between the precision of computed worst-case response time upper bounds and the amount of extra processor speed required to achieve exact worst-case response times. Such a result fills the remaining gap left by our previously published work and extends it to tasks with arbitrary relative deadlines. Numerical results monitoring speedup factors will also be presented.

The speaker is hosted by Joël Goossens.

Thursday November 29, 2012, 12:30PM, room: NO7.07
Triangle-free intersection graphs of line segments with large chromatic number.
Piotr Micek (Jagiellonian University)

Abstract: In the 1970s Erdős asked whether the chromatic number of intersection graphs of line segments in the plane is bounded by a function of their clique number. We show the answer is no. Specifically, for each positive integer $k$ we construct a triangle-free family of line segments in the plane with chromatic number greater than $k$. Our construction disproves a conjecture of Scott that graphs excluding induced subdivisions of any fixed graph have chromatic number bounded by a function of their clique number.

The speaker is hosted by Jean Cardinal.

Thursday June 14, 2012, 12:30PM, room: 2NO7.07
Perfect matchings in cubic graphs.
Louis Esperet (CNRS, G-SCOP, Grenoble)

Abstract: A cubic graph is a graph in which every vertex has exactly three neighbors, and a perfect matching is a set of edges covering each vertex exactly once. We prove that 2-edge-connected cubic graphs have an exponential number of perfect matchings. This was conjectured by Lovasz and Plummer in the 70s. (joint work with F. Kardos, A. King, D. Kral, and S. Norine)
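The objects in the theorem are easy to experiment with on small instances. Below is a brute-force count of perfect matchings for two small 2-edge-connected cubic graphs, K4 and the Petersen graph; this is an illustrative sketch for tiny graphs only, not related to the proof technique of the talk.

```python
from itertools import combinations

def count_perfect_matchings(n, edges):
    """Brute force: choose n/2 edges and check they cover every vertex
    exactly once (feasible only for very small graphs)."""
    count = 0
    for subset in combinations(edges, n // 2):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == n:
            count += 1
    return count

# K4: the smallest 2-edge-connected cubic graph
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(count_perfect_matchings(4, k4))        # 3

# Petersen graph: outer 5-cycle, inner pentagram, five spokes
petersen = ([(i, (i + 1) % 5) for i in range(5)] +
            [(i + 5, (i + 2) % 5 + 5) for i in range(5)] +
            [(i, i + 5) for i in range(5)])
print(count_perfect_matchings(10, petersen))  # 6
```

The theorem says these counts grow exponentially with the number of vertices for every 2-edge-connected cubic graph.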

The speaker is hosted by Gwenaël Joret.

Thursday May 3, 2012, 12:30PM, room: 2NO7.07
Search Tree Mysteries.
Robert E. Tarjan (Princeton U. and HP Labs)

Abstract: The search tree is a classical and ubiquitous data structure, fundamental to databases and many other computer applications. The AVL tree, a type of balanced binary tree, was invented fifty years ago; since then, many different kinds of search trees have been described, analyzed, and used. Yet the design space is vast, and mysteries remain. This talk will describe recent work by the speaker and his colleagues that has produced a new framework for defining and analyzing balanced search trees, a new kind of balanced tree with especially nice properties, and a way to maintain balance by rebalancing only on insertion, not on deletion.

Short Biography: Robert E. Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University and a Senior Fellow at HP Labs. His main research areas are the design and analysis of data structures and graph and network algorithms. He has also done work in computational complexity and security. He has held positions at Cornell University, U. C. Berkeley, Stanford University, NYU, Bell Laboratories, NEC, and InterTrust Technologies. He is a member of the National Academy of Sciences, the National Academy of Engineering, the American Philosophical Society, and the American Academy of Arts and Sciences. He was awarded the Nevanlinna Prize in Informatics in 1982 and the Turing Award in 1986.

The speaker is hosted by Stefan Langerman.

Friday March 30, 2012, 12:30PM, room: NO5, Salle Solvay
Cooperative decision-making in cell regulation.
Kim van Roey (European Molecular Biology Laboratory, EMBL Heidelberg)

Abstract: Cells must continuously monitor and integrate the variety of signals they perceive in order to generate appropriate responses. This requires reliable and robust signal transduction, which is mediated by an intricate and interlinked network of pathways and processes that are tightly regulated. Assembly of the dynamic macromolecular complexes that modulate these pathways often depends on multiple transient, low-affinity interactions that are context-dependent, highly cooperative and easily tuneable. Such interactions provide the dynamic plasticity that is required for proper cell signalling and underlie the ability of proteins to act as switchable regulatory modules. This raises the question, how do proteins integrate available information to correctly make decisions? This talk addresses the role of intrinsically disordered protein regions, and more specifically short linear motifs, in cooperative decision-making and briefly introduces our current efforts to computationally describe cooperative interactions.

The speaker is hosted by Tom Lenaerts.

Thursday March 29, 2012, 12:30PM, room: 2NO7.08
Dynamic programming in sparse graphs.
Ignasi Sau (CNRS, LIRMM, Montpellier)

Abstract: A fundamental parameter in the design and analysis of graph algorithms is the branchwidth of a graph, which can be seen as a measure of the topological resemblance of a graph to a tree (similar to treewidth). Its algorithmic importance is justified by Courcelle's theorem, stating that graph problems expressible in Monadic Second Order Logic can be solved in f(bw)n steps (here bw is the branchwidth and n is the number of vertices of the input graph). As the bounds for f(bw) provided by Courcelle's theorem are huge, it is of great interest to design algorithms for which f(bw) is a relatively simple function.

In this talk we overview a general framework for the design and analysis of dynamic programming algorithms for graphs embedded in arbitrary surfaces where f(bw)=2^{O(bw)}. Our approach combines tools from topological graph theory and analytic combinatorics. In particular, we introduce a new type of branch decomposition called "surface cut decomposition", which generalizes "sphere cut decompositions" introduced by Seymour and Thomas for planar graphs. That way, we unify and improve all previous results in this direction.

If time permits, we will also outline the main ideas in order to extend our framework to deal with families of graphs excluding a fixed graph as a minor.
This is joint work with Juanjo Rué and Dimitrios M. Thilikos.

The speaker is hosted by Gwenaël Joret.

Wednesday March 14, 2012, 12:30PM, room: 2NO5.06
Abelian returns in infinite words.
Pavel Salimov (ULg)

Abstract: Finite words u and v are abelian equivalent, denoted by u ∼ v, if each symbol occurs in both of them the same number of times. For an infinite word x, the word x_i x_{i+1} ... x_{j-1} is denoted by x[i, j). A semi-abelian return to a subword u of an infinite word x is a word x[i, j) such that u ∼ x[i, i + |u|) ∼ x[j, j + |u|) and, for any k with i < k < j, we have x[k, k + |u|) ≁ u. This is a generalization of the intensively studied notion of a return word. In the talk, some properties, problems and results related to this notion are described.
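The definitions are concrete enough to compute on a finite prefix of a word. A small sketch (function names are illustrative) that lists the semi-abelian returns between consecutive abelian occurrences of a factor:

```python
from collections import Counter

def abelian_equiv(u, v):
    """u ~ v iff every symbol occurs the same number of times in both."""
    return Counter(u) == Counter(v)

def semi_abelian_returns(x, u):
    """All factors x[i:j] between consecutive starting positions i < j of
    abelian occurrences of u in the finite word x."""
    m = len(u)
    occ = [i for i in range(len(x) - m + 1) if abelian_equiv(x[i:i + m], u)]
    return [x[i:j] for i, j in zip(occ, occ[1:])]

print(semi_abelian_returns("abaababa", "ab"))   # ['a', 'ba', 'a', 'b', 'a']
```

Note how the window starting at position 1, "ba", counts as an occurrence because it is abelian equivalent to "ab", which is exactly what distinguishes this notion from ordinary return words.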

The speaker is hosted by Émilie Charlier.

Wednesday February 29, 2012, 12:00, room: NO, Salle Solvay (5th floor)
Weighted Timed Automata.
Karin Quaas (University of Leipzig)

Abstract: We present a general model of weighted timed automata that allows for the modelling of continuous resource consumption of real-time systems. A timed automaton is a finite automaton extended with finitely many real-valued clocks that measure the time. We additionally equip the transitions of a timed automaton with weights coming from a semiring. Also, the states are assigned a weight function determining the weight that arises while being in a state. A weighted timed automaton recognizes a timed series, i.e., a function that maps each timed word to a coefficient in the semiring, namely its weight. We present closure properties of recognizable timed series and investigate weighted extensions of classical decision problems like the emptiness or equivalence problem. Also we provide characterizations of recognizable timed series in terms of logic and generalizations of the well-known regular expressions.

The speaker is hosted by Jean-François Raskin.

Wednesday February 22, 2012, 16:00, room: NO, Salle Solvay (5th floor)
Structure, Unstructure and Alternative Splicing.
Philip Kim (University of Toronto)

Abstract: Many protein interactions, in particular those in signaling networks, are mediated by peptide recognition domains. These recognize short, linear amino acid stretches on the surface of their cognate partners with high specificity. Residues in these stretches are usually assumed to contribute independently to binding, which has led to a simplified understanding of protein interactions. Conversely, in large binding peptide data sets different residue positions display highly significant correlations for many domains in three distinct families (PDZ, SH3 and WW). These correlation patterns reveal a widespread occurrence of multiple binding specificities and give novel structural insights into protein interactions. For example, a new binding mode of PDZ domains can be predicted and structurally rationalized for DLG1 PDZ1. While protein structure is very important for peptide binding domains, the regions they bind are usually unstructured (intrinsically disordered). These regions are widespread, especially in proteomes of higher eukaryotes, and have been associated with a plethora of different cellular functions. Aside from general importance for signaling networks, they are also important for such diverse processes as protein folding or DNA binding. Leveraging knowledge from systems biology can help to structure the phenomenon. Strikingly, disorder can be partitioned into three biologically distinct phenomena: regions where disorder is conserved but with quickly evolving amino acid sequences (“flexible disorder”), regions of conserved disorder with also highly conserved amino acid sequence (“constrained disorder”) and, lastly, non-conserved disorder. I will also introduce new efforts to map protein interactions affected by alternative splicing.

The speaker is hosted by Tom Lenaerts.

Thursday February 16, 2012, 12:30PM, room: 2NO7.08
Obtaining a bipartite graph by contracting few edges.
Pim van 't Hof (University of Bergen, Norway)

Abstract: We initiate the study of the Bipartite Contraction problem from the perspective of parameterized complexity. In this problem we are given a graph $G$ on $n$ vertices and an integer $k$, and the task is to determine whether we can obtain a bipartite graph from $G$ by a sequence of at most $k$ edge contractions. We show that the Bipartite Contraction problem is fixed-parameter tractable when parameterized by $k$, i.e., the problem can be solved in time $f(k) n^{O(1)}$ for some function $f$ that does not depend on $n$. Our algorithm is based on a novel combination of the irrelevant vertex technique, introduced by Robertson and Seymour, and the concept of important separators. Both techniques have previously been used as key components of algorithms for fundamental problems in parameterized complexity. However, to the best of our knowledge, this is the first time the two techniques are applied in unison.
(joint work with Pinar Heggernes, Daniel Lokshtanov and Christophe Paul)
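To make the problem statement concrete, here is an exponential brute-force check, nothing like the FPT algorithm of the talk: a sequence of at most k contractions is equivalent to contracting the connected components of some subset of at most k edges, so we can simply enumerate subsets.

```python
from itertools import combinations

def is_bipartite(adj):
    """2-colour the graph by depth-first search."""
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        stack = [s]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    stack.append(w)
                elif colour[w] == colour[v]:
                    return False
    return True

def contract_then_check(n, edges, subset):
    """Contract every edge in `subset` (via union-find), discard the loops
    this creates, and test whether the quotient graph is bipartite."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in subset:
        parent[find(a)] = find(b)
    adj = {find(v): set() for v in range(n)}
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            adj[ra].add(rb)
            adj[rb].add(ra)
    return is_bipartite(adj)

def bipartite_contraction(n, edges, k):
    """Exponential brute force over all edge subsets of size at most k."""
    return any(contract_then_check(n, edges, s)
               for r in range(k + 1)
               for s in combinations(edges, r))

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(bipartite_contraction(5, c5, 0))   # False: C5 is an odd cycle
print(bipartite_contraction(5, c5, 1))   # True: one contraction gives C4
```

The point of the result above is precisely to replace this enumeration over edge subsets by an f(k) n^{O(1)} algorithm.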

The speaker is hosted by Marcin Kaminski.

Wednesday January 18, 2012, 12:30PM, room: 2NO7.07
Ensemble learning for real-world classification.
Nima Hatami (University of Cagliari)

Abstract: Most real-world classification problems are too complicated to be tackled by a single expert. An alternative approach is to use an ensemble of experts, inspired by the divide-and-conquer principle, which has proven efficient in many of these cases. A complex problem is first divided into simpler sub-problems, each of which is assigned to an expert. The final solution, obtained by consensus of the experts, proves to be more effective and efficient. This talk will cover the application of different multiple-classifier systems to real-world classification problems, e.g. gene expression cancer classification, face recognition and text categorization.

The speaker is hosted by Gianluca Bontempi.

Thursday December 22, 2011, 12:30PM, room: NO6.07
The seminar of Kim Van Roey (EMBL) has been canceled.

Wednesday December 21, 2011, 12:30PM, room: 2NO7.07
Matheuristics for combinatorial optimization problems
Fabio Salassa (D.A.I. Politecnico di Torino, Italy)

Abstract: The solution techniques for combinatorial optimization problems are traditionally split into exact algorithms (mostly based on the optimal solution of an integer programming formulation of the problem) and heuristic algorithms. Recently a new wave has rapidly grown in the research community: the hybridization of these two approaches, the so-called matheuristics, which rely on the idea of exploiting the best of both. While a wide literature exists on combining heuristic procedures, matheuristics are still in development. Several examples and results of such matheuristics on a variety of test instances will be presented, and some conclusions from the performed experiments and trajectories for future research will be drawn.

The speaker is hosted by Martine Labbé.

Monday December 12, 2011, 2PM, room NO8.08
On Minimal Representation of Rational Arrival Processes
Miklos Telek (Budapest University of Technology and Economics)

Abstract: Rational Arrival Processes (RAPs) form a general class of stochastic processes which include Markovian Arrival Processes (MAPs) as a subclass. We study RAPs and their representations of different size. We show transformation methods between different representations and we present conditions to evaluate the size of the minimal representation. By exploiting some analogies to linear system theory, a minimization approach is defined which allows one to transform a RAP into one of its minimal representations (a representation of minimal size). An algorithm for computing a minimal representation is also given. Furthermore, we extend the approach to RAPs with batches or multiple entity types that are denoted as BRAPs or MRAPs.

The speaker is hosted by Guy Latouche.

Friday December 9, 2011, 12:30PM, room 2NO7.08
RUN: Optimal Multiprocessor Real-Time Scheduling via Reduction to Uniprocessor
Paul Regnier (Federal University of Bahia Brazil)

Abstract: Optimal multiprocessor real-time schedulers incur significant overhead for preemptions and migrations. We present RUN, an efficient scheduler that reduces the multiprocessor problem to a series of uniprocessor problems. RUN significantly outperforms existing optimal algorithms with an upper bound of O(log m) average preemptions per job on m processors (less than 3 per job in all of our simulated task sets) and reduces to Partitioned EDF whenever a proper partitioning is found.

The speaker is hosted by Joël Goossens.

Thursday November 24, 2011, 12:30PM, room 2NO6.07
The seminar of Takeshi Tokuyama (Tohoku University) has been canceled.

Thursday November 17, 2011, 12:30PM, room 2NO6.07
Range Majority and Orthogonal Range Counting using Techniques from Succinct Data Structures
Ian Munro (U. of Waterloo)

Abstract: Algorithms and data structures developed for one application domain and set of constraints are often applied to other situations. Succinct data structures were developed, to a large extent, for extremely space efficient string indexing. We now seem to be in an era in which such methods are being used for applications in which space requirements are not as severe and indeed for applications outside stringology, such as geometric problems. We give a couple of examples of this trend through recent results on reporting range majority and dynamic orthogonal range counting. This is joint work with Meng He et al.

The speaker is hosted by Jean Cardinal.

Thursday November 10, 2011, 12:30PM, room 2NO6.07
Boundary properties of the satisfiability problems
Vadim V. Lozin (University of Warwick)

Abstract: The satisfiability problem is known to be NP-complete in general and for many restricted instances, such as formulas with at most 3 variables per clause and at most 3 occurrences per variable, or planar formulas. The latter example refers to graphs representing satisfiability instances. These are bipartite graphs with vertices representing clauses and variables, and edges connecting variables to the clauses containing them. Many authors point out the importance of finding the strongest possible restrictions under which a problem remains NP-complete. First, this can make it easier to establish the NP-completeness of new problems by allowing easier transformations. Second, this can help clarify the boundary between tractability and intractability. In this talk, I address the second issue and reveal the first boundary property of graphs representing satisfiability instances.
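The clause-variable incidence graph mentioned in the abstract is straightforward to construct. A small sketch using the DIMACS-style convention that literal -v denotes the negation of variable v (the conventions and names here are chosen for illustration):

```python
def incidence_graph(cnf):
    """Bipartite incidence graph of a CNF formula: one vertex per clause,
    one per variable, and an edge whenever the variable occurs in the
    clause (in either polarity)."""
    edges = set()
    for i, clause in enumerate(cnf):
        for lit in clause:
            edges.add((('c', i), ('v', abs(lit))))
    return sorted(edges)

# (x1 or not x2) and (x2 or x3)
cnf = [[1, -2], [2, 3]]
print(incidence_graph(cnf))
```

Restrictions like "planar formulas" are restrictions on this graph: the formula is planar when its incidence graph is.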

The speaker is hosted by Marcin Kaminski.

Tuesday November 8, 2011, 12:30PM, room 2NO7.07
Re-optimization problems
M.Grazia Speranza (University of Brescia)

Abstract: In most optimization models one of the basic underlying assumptions is that all the information needed is available at the time the model is solved and that the optimal solution found is implemented. A common situation is that a problem instance has been optimally solved and then an event takes place that slightly modifies the instance before the solution is implemented. In general, we say that we face a re-optimization problem whenever an optimal solution of a problem instance has been found, a (minor) change takes place in the problem instance, and a new (slightly different) instance has to be solved. Re-optimizing means taking into account the work done on the previous instance and taking advantage of the knowledge of the previously optimal solution. Often the time available to find a solution to a re-optimization problem is limited. In the case of a polynomially solvable problem, a re-optimization problem may be solved exactly taking advantage of the computational work done on the original instance. In the case of an NP-hard problem, an approximation algorithm for a re-optimization problem may take advantage of the optimal solution of the original instance by finding a solution of better quality or a solution of the same quality in a smaller computational time. We present approximation algorithms for the re-optimization of the Traveling Salesman Problem, of the Knapsack Problem and of the Rural Postman Problem. We also present simple procedures for the re-optimization of two basic polynomially solvable problems, the minimum spanning tree and the maximum flow problem.
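For the minimum spanning tree, the kind of simple re-optimization procedure alluded to can be sketched for one common event: the weight of a single edge decreases. This is an illustrative sketch under that assumption, not the speaker's procedure: insert the cheapened edge and delete the heaviest edge on the cycle it closes.

```python
def reoptimize_mst(n, mst, u, v, w):
    """Given an MST as a list of (a, b, weight) edges and an edge (u, v)
    whose weight has dropped to w, restore optimality via the cycle rule:
    add (u, v) and remove the heaviest edge on the tree path u..v."""
    adj = {i: [] for i in range(n)}
    for i, (a, b, wt) in enumerate(mst):
        adj[a].append((b, i))
        adj[b].append((a, i))
    # find the tree path u -> v by depth-first search
    stack, seen, via = [u], {u}, {}
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y, i in adj[x]:
            if y not in seen:
                seen.add(y)
                via[y] = (x, i)
                stack.append(y)
    path = []
    x = v
    while x != u:
        x, i = via[x]
        path.append(i)
    heavy = max(path, key=lambda i: mst[i][2])
    if mst[heavy][2] > w:                    # the swap improves the tree
        mst = [e for i, e in enumerate(mst) if i != heavy] + [(u, v, w)]
    return mst

# path tree 0-1-2-3 with weights 1, 5, 2; edge (0, 3) drops to weight 3
tree = [(0, 1, 1), (1, 2, 5), (2, 3, 2)]
new = reoptimize_mst(4, tree, 0, 3, 3)
print(sorted(e[2] for e in new))   # [1, 2, 3]: the weight-5 edge is replaced
```

This runs in time linear in the tree size, illustrating how re-optimization can be much cheaper than re-solving from scratch.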

The speaker is hosted by Martine Labbé.

Tuesday October 18, 2011, 1PM, rotule 2NO8
Switching on and off the full capacity of an M/M/∞ queue
Eugene Feinberg (State University of New York at Stony Brook)

Abstract: This paper studies optimal switching on and off the capacity of an M/M/∞ queue with holding, running and switching costs. The goal is to minimize average costs per unit time. The main result is that an optimal policy either always runs the system or is defined by two thresholds M and N, such that the system is switched on upon an arrival epoch when the number of customers in the system accumulates to N and it is switched off upon a departure epoch when the number of customers in the system decreases to M.
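The optimal policy described above is a hysteresis rule, and its mechanics are easy to trace on a stream of arrivals ('a') and departures ('d'). The sketch below only illustrates the two-threshold structure; the paper's contribution is proving this structure optimal, not implementing it.

```python
def threshold_policy(events, M, N):
    """Two-threshold hysteresis: switch the system on when an arrival
    brings the queue length up to N; switch it off when a departure
    brings it down to M. Returns (queue length, on/off) after each event."""
    on, q, log = False, 0, []
    for e in events:
        q += 1 if e == 'a' else -1
        if e == 'a' and q >= N:
            on = True
        elif e == 'd' and q <= M:
            on = False
        log.append((q, on))
    return log

print(threshold_policy("aaaddd", M=1, N=3))
# [(1, False), (2, False), (3, True), (2, True), (1, False), (0, False)]
```

Note the hysteresis: with M=1 and N=3, the system stays on at queue length 2 on the way down, even though it was still off at length 2 on the way up.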

The speaker is hosted by Giang Nguyen.

Friday October 7, 2011, 12:30PM, room 2NO7.08
Lightpath Topology Design of Wavelength-Routed Networks to Serve Shortest Path-based IP Networks and impact on virtualization
Deep Medhi (University of Missouri)

Abstract: In this talk, we address the problem of designing lightpath topologies for wavelength-routed networks to serve shortest path-based IP networks. The objective is to design a lightpath topology that remains connected in the event of any single physical link failure while providing the IP network with unique shortest paths for all node-pairs. We present an optimization model and a heuristic to solve this problem. We consider a number of different performance measures to evaluate our solution approach and to discuss impact on survivable topology design, especially in terms of the number of transceivers deployed. Our approach encompasses the three cross-layers of IP, lightpaths, and the physical topology in an integrated design framework. We will then discuss impact on network virtualization.

The speaker is hosted by Bernard Fortz.

Thursday September 29, 2011, 12:30PM, room 2NO6.07
Algorithmic Graph Minors: Theorems and Techniques
Dimitrios M. Thilikos (University of Athens)

Abstract: The main mathematical achievement of the Graph Minors Theory (GMT), developed by Robertson and Seymour, was the proof of Wagner's conjecture, now known as the Robertson & Seymour Theorem, stating that graphs are well quasi ordered under the minor containment relation. Besides its purely theoretical importance, GMT induced a series of powerful algorithmic results and techniques that had a deep influence in theoretical computer science. GMT offered the theoretical base for the understanding and the resolution of some of the most prominent graph-algorithmic problems in parameterized complexity. In this talk we give a brief presentation of the main results and techniques in this direction.

The speaker is hosted by Samuel Fiorini and Marcin Kaminski.

Wednesday September 21, 2011, 12:30PM, room 2NO7.07
On the Hazmat Transport Network Design Problem
Maurizio Bruglieri (Politecnico di Milano)

Abstract: We consider the problem of designing a network for hazardous material transportation where the government can decide which roads have to be forbidden to hazmats and the carriers choose the routes on the network. We assume that the government is interested in minimizing the overall risk of the shipments whereas the carriers minimize the route costs. In spite of the rich literature on hazmat transportation and of the relevance of this network design problem, it has received attention only quite recently. In this work we prove that the version of the hazmat transport network design problem where a subset of arcs can be forbidden is NP-hard even when a single commodity has to be shipped. We propose a bilevel integer programming formulation that guarantees solution stability and we derive a (single-level) mixed integer linear programming formulation that can be solved in reasonable time with a state-of-the-art solver.

The speaker is hosted by Bernard Fortz.

Friday September 2, 2011, 2:00PM, room: NO7.07
Predicting structured-output from protein sequence.
Andrea Passerini (Università degli Studi di Trento)

Abstract: Recent advances in high-throughput sequencing techniques are drastically increasing the amount of biological sequences available for further study. On the other hand, experimentally determining their three-dimensional structure is an expensive and time-consuming process. In this scenario, automatic approaches to sequence analysis are crucial in order to fill this gap and derive information about their biological function. I will present machine learning techniques for predicting protein structural features from sequence. The talk will focus on challenging problems where the desired output is a discrete structure, e.g. a graph connecting certain residues in the sequence. I will first discuss the prediction of disulphide bridges, i.e. covalent bonds between pairs of cysteines, which help stabilizing protein 3D structure and have a relevant structural and functional role. This task can be effectively addressed with a nearest-neighbour approach in the space of candidate configurations. I will then introduce the problem of metal binding site prediction, whose characteristics prevent the application of this method. I will present a search-based structured-output technique relying on an online strategy learning to discriminate between correct and incorrect moves. The advantages and drawbacks of these algorithms will be discussed together with their applicability to other structured-output problems.

The speaker is hosted by Gianluca Bontempi and Tom Lenaerts.

Thursday August 25, 2011, 12:30PM, room: NO7.07
Modeling and optimizing of drinking water supply networks.
Derek Verleye (Universiteit Gent)

Abstract: The mathematical formulation of an operational planning model for daily operations at a water supply network results in a non-convex MINLP. In this model, which is based on the existing network of VMW (Vlaamse Maatschappij voor Watervoorziening), there is a large variety of complex hydraulic constraints. Nonsmooth functions are needed to describe friction losses in pipes, whereas the relation between pressure and flow in pumps leads to a non-convex power term. We add binary variables to describe the possibility of free inflow or re-injection of water at reservoirs. We develop and test different formulations and propose convexifications or smooth functions where applicable. Since optimization is done over several periods, the computation time is very large. The MINLP solver Bonmin is used to generate a locally optimal solution. Decomposition techniques from the literature, such as Feasibility Pump, are tested on the model and results are discussed.

The speaker is hosted by Michael Poss.

Thursday June 23, 2011, 12:30PM, room: NO7.08
The construction of multiscale kernel smoothing decompositions.
Maarten Jansen (ULB)

Abstract: Continuation of the seminar of April 28.

Wednesday June 22, 2011, 3:00PM, room: Forum G
Integrating Database Systems and Data Mining Algorithms.
Carlos Ordonez (University of Houston)

Abstract: Data mining remains an important research area in database systems and a major challenge in computer science. We present a review of processing alternatives, storage mechanisms, algorithms, data structures and optimizations that enable data mining on large data sets. We focus on the computation of well-known multidimensional statistical and machine learning models. We pay particular attention to SQL (together with UDFs) and MapReduce as two competing technologies for large scale processing, especially with parallel computing. We conclude with a summary of solved major problems and open research issues.
Short Biography: Carlos Ordonez received a degree in applied mathematics and an M.S. degree in computer science from UNAM University, Mexico, in 1992 and 1996, respectively. He received a Ph.D. degree in Computer Science from the Georgia Institute of Technology in 2000. Dr Ordonez worked six years extending the Teradata DBMS with data mining algorithms. Carlos had the opportunity to collaborate in more than 20 data mining projects with many companies with large data warehouses. He is currently an Assistant Professor at the University of Houston. His research is centered on the integration of statistical and data mining techniques into database systems and their application to scientific problems.

The speaker is hosted by Esteban Zimanyi.

Thursday June 16, 2011, 12:30PM, room: NO7.08
Cooperation, Reputation & Gossiping.
Vincent Traag (Université Catholique de Louvain, Belgium)

Abstract: Explaining how cooperation can emerge, and persist over time in various species is a prime challenge for both biologists and social scientists. Whereas cooperation in non-human species might be explained through mechanisms such as kinship selection or reciprocity, this is usually regarded as insufficient to explain the extent of cooperation observed in humans. It has been theorized that indirect reciprocity---I help you, and someone else later helps me---could explain the breadth of human cooperation. Reputation is central to this idea, since it conveys important information to third parties whether to cooperate or not with a focal player. In this talk I analyze a model for reputation dynamics through gossiping, and pay specific attention to the possible cooperation networks that may arise. We show that gossiping agents can organize into cooperative clusters, i.e. cooperate within clusters, and defect between them, which can be regarded as socially balanced. Surprisingly then, more social influence can lead to a more fragmented network of cooperation. We also investigate the infinite population dynamics, and show that individualistic prejudiced agents do particularly well in a "friendly" environment, whereas socially trusting agents do well in "tough" environments.

The speaker is hosted by Tom Lenaerts.

Thursday June 9, 2011, 12:30PM, room: NO7.08
Approximating Petri Net Reachability Along Context-free Traces.
Pierre Ganty (IMDEA Software Institute, Madrid, Spain)

Abstract: We investigate the issue of determining whether the intersection of a context-free language (CFL) and a Petri net language (PNL) is empty. Our contribution to solve this long-standing problem which relates, for instance, to the reachability analysis of recursive programs over unbounded data domain, is to identify a class of CFLs called the finite-index CFLs for which the problem is decidable. The k-index approximation of a CFL can be obtained by discarding all the words that cannot be derived within a budget k on the number of non-terminal symbols. A finite-index CFL is thus a CFL which coincides with its k-index approximation for some k. We decide whether the intersection of a finite-index CFL and a PNL is empty by reducing it to the reachability problem of Petri nets with ordered inhibitor arcs, a class of systems with infinitely many states for which reachability is known to be decidable. Conversely, we show that the reachability problem for a Petri net with ordered inhibitor arcs reduces to the emptiness problem of a finite-index CFL intersected with a PNL.
This is a joint work with Mohamed-Faouzi Atig.

The speaker is hosted by Jean-François Raskin.

Thursday May 19, 2011, 12:30PM, room: NO7.08
Applications of Machine Learning in Side Channel Analysis.
Benedikt Gierlichs and Gabriel Hospodar (Katholieke Universiteit Leuven, ESAT-SCD-COSIC & IBBT, Belgium)

Abstract: We discuss the application of supervised machine learning techniques in side channel analysis, with a focus on power analysis. We first recap the necessary basics of power analysis with a view towards machine learning. Then we explain two state-of-the-art attacks that are based on Bayesian classification. In the second part of the talk we present our work on attacks that use (LS-)SVMs as classifiers and report some preliminary results.

The speakers are hosted by Gianluca Bontempi and Olivier Markowitch.

Wednesday May 4, 2011, 12:30PM, room: 2NO7.07
Triangulating the Square and Squaring the Triangle.
Wolfgang Mulzer (Freie Universität Berlin, Germany)

Abstract: We show that Delaunay triangulations and compressed quadtrees are equivalent structures. More precisely, we give two algorithms: the first computes a compressed quadtree for a planar point set, given the Delaunay triangulation; the second finds the Delaunay triangulation, given a compressed quadtree. Both algorithms run in deterministic linear time on a pointer machine. Our work builds on and extends previous results by Krznaric and Levcopolous and by Buchin and Mulzer.
As a corollary, we obtain deterministic versions of many previous algorithms related to Delaunay triangulations, such as splitting planar Delaunay triangulations, preprocessing imprecise points for faster Delaunay computation, and transdichotomous Delaunay triangulations.
Joint work with Maarten Löffler.

The speaker is hosted by Stefan Langerman.

Thursday April 28, 2011, 12:30PM, room: NO7.08
Constructing and processing multiscale sparse data representations.
Maarten Jansen (ULB)

Abstract: In this talk, I discuss the two main topics of my research: the construction of multiscale sparse data decompositions, and the use of these sparse representations in statistical signal processing. In the introduction, I explain that sparsity leads to non-linear processing and that non-linearity is especially useful for intermittent data, i.e., data with non-stationary behavior. The second part, on the construction of multiscale sparse representations, focuses on the so-called lifting scheme for the construction of wavelets and similar decompositions on data with arbitrary configurations, including nonequidistant time series and 2d scattered data, but also data on large molecules or networks. Current research concerns multiscale kernel smoothing within this lifting scheme. The third part is about sparse data processing, with a focus on variable selection under sparsity and structured sparsity. I discuss the minimization of information criteria for the selection of significant components in high-dimensional data. The proposed procedure is compared to other methods that are based on type-I-error control (false discovery rate). This comparison is situated in the perspective of structured variable selection, which is the subject of future research.
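As a minimal illustration of the lifting scheme mentioned in the abstract (here on equidistant data with the Haar wavelet; the talk's constructions extend the idea to arbitrary configurations), one lifting level splits the signal into even and odd samples, predicts each odd sample from its even neighbour, and updates the even samples so that the coarse signal preserves the mean:

```python
def haar_lift(signal):
    """One level of the Haar lifting scheme: split, predict, update.

    Returns (approximation, detail) coefficients. Assumes even length.
    """
    # Split: even- and odd-indexed samples
    even = signal[0::2]
    odd = signal[1::2]
    # Predict: each odd sample is predicted by its even neighbour;
    # the detail coefficient is the prediction error.
    detail = [o - e for o, e in zip(odd, even)]
    # Update: adjust the even samples so the coarse signal keeps the mean.
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def haar_unlift(approx, detail):
    """Invert the lifting steps in reverse order (exact reconstruction)."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because every lifting step is trivially invertible, the same scheme carries over to irregular data by changing only the predict and update operators.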

Wednesday March 23, 2011, 12:30PM, room: NO6.07
Small cases of Scott's Conjecture.
Nicolas Trotignon (CNRS, Université Denis Diderot - Paris 7, LIAFA, France)

Abstract: It is an old and well-known result that there exist triangle-free graphs of arbitrarily large chromatic number. So, it is impossible in general to bound the chromatic number of graphs from above by a function of the size of their largest cliques (a clique is a set of pairwise adjacent vertices). However, this can obviously be done for some classes of graphs (trees, bipartite graphs, perfect graphs, ...). When H is a graph, we denote by Forb*(H) the class of all graphs that do not contain any subdivision of H as an induced subgraph. Let chi(G) denote the chromatic number of G, and omega(G) the size of a largest clique in G. Alex Scott conjectured that for every graph H, there exists a function f, depending on H, such that all graphs G from Forb*(H) satisfy chi(G) <= f(omega(G)).
In this talk, I will survey the proof of Scott's Conjecture for several small graphs. In particular, we will show that the conjecture holds when:

• H is the diamond (that is the complete graph on 4 vertices with one edge deleted), obtained jointly with Kristina Vuskovic;
• H is the complete graph on 4 vertices (Scott observed that this follows easily from a theorem due to Kühn and Osthus);
• H is the bull (that is a triangle with two pending non-adjacent edges), obtained jointly with Maria Chudnovsky, Irena Penev and Alex Scott.

The speaker is hosted by Marcin Kaminski.

Thursday March 10, 2011, 12:30PM, room: NO7.08
Disjoint-Path Facility Location: Theory and Practice
Howard Karloff (AT&T Labs, NJ, USA)

Abstract: Internet service providers hope to provide their customers with superior Internet connectivity, but do they always do so? How can an ISP even know what quality of service it's providing to its customers? To this end, researchers recently proposed a new scheme an ISP could use in order to estimate the packet loss rates experienced by its customers.
To implement the new scheme, one has to approximately solve an interesting NP-hard optimization problem on the ISP's network. Specifically, one must choose a small set of network nodes such that from each customer node there are arc-disjoint paths to *two* of the selected nodes. I will discuss recent work, mostly at AT&T, attacking this problem, and its surprisingly good results in light of the problem's provable worst-case inapproximability.
This is joint work with many people.

The speaker is hosted by Samuel Fiorini.

Friday February 25, 2011, 2:00PM, room: A2.122
Beyond Space For Spatial Networks
Renaud Lambiotte (Imperial College, London, UK)

Abstract: Many complex systems are organized in the form of a network embedded in space. Important examples include the physical Internet infrastructure, road networks, flight connections, brain functional networks and social networks. The effect of space on network topology has recently come under the spotlight because of the emergence of pervasive technologies based on geo-localization, which constantly fill databases with people's movements and thus reveal their trajectories and spatial behaviour. Extracting patterns and regularities from the resulting massive amount of human mobility data requires the development of appropriate tools for uncovering information in spatially-embedded networks. In contrast with most works that tend to apply standard network metrics to any type of network, we argue for a careful treatment of the constraints imposed by space on network topology. In particular, we focus on the problem of community detection and propose a modularity function adapted to spatial networks. We show that it is possible to factor out the effect of space in order to reveal more clearly hidden structural similarities between the nodes. Methods are tested on a large mobile phone network and on computer-generated benchmarks where the effect of space has been incorporated.

The speaker is hosted by Tom Lenaerts.

Thursday February 24, 2011, 12:30PM, room: NO7.08
Coloring graphs without a fixed induced linear forest
Daniël Paulusma (Durham University, UK)

Abstract: The k-Coloring problem asks whether a graph can be colored with at most k colors; the Vertex Coloring problem asks for the chromatic number of a graph. Kral, Kratochvil, Tuza, and Woeginger completely characterized the complexity of Vertex Coloring for graphs with one forbidden induced subgraph H in their WG 2001 paper. A similar computational complexity classification of the k-Coloring problem is still open. We discuss partial results that extend known work. In particular, we consider the case where H is a fixed linear forest (a disjoint union of paths).
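For concreteness, the k-Coloring decision problem can be stated as a small backtracking procedure; this is a naive exponential-time sketch for intuition, unrelated to the complexity classification discussed in the talk:

```python
def k_colorable(n, edges, k):
    """Decide by backtracking whether the graph on vertices 0..n-1
    with the given edge list admits a proper k-coloring."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = [None] * n

    def extend(v):
        if v == n:                       # all vertices colored
            return True
        for c in range(k):
            # try color c if no already-colored neighbour uses it
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if extend(v + 1):
                    return True
        color[v] = None                  # backtrack
        return False

    return extend(0)
```

For example, a triangle is 3-colorable but not 2-colorable, while any path is 2-colorable.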

The speaker is hosted by Marcin Kaminski.

Thursday January 27, 2011, 12:30PM, room: NO7.08
Predictive Network Inference in Colon Cancer
Catharina Olsen (Machine Learning Group, ULB)

Abstract: In bioinformatics, the key question concerns the understanding of how genes interact with each other in a given context, i.e., which genes take part in the regulation of a target gene. Given the characteristics of data generated in this field, usually containing a very high number of genes but only a low number of samples, network inference algorithms have in the past helped to build models explaining these gene-gene relationships. In our work, we tackled two problems: 1) not knowing the underlying true interaction network makes the assessment of inferred networks difficult; 2) how can results from the research community be integrated into the model building process? For the first problem, we used the network's predictive ability as a measure of its similarity to the underlying truth, i.e., the higher the prediction accuracy, the better the network. For the second problem, we present a network inference approach based on information theory and regression modeling. Using publicly available microarray data, we show experimentally that using prior knowledge helps to identify better networks.

Thursday November 18, 2010, 12:30PM, room: NO7.08
Satellite communications for disaster management
Laurent Franck (Télécom Bretagne, France)

Abstract: Every year, we are reminded of the challenges of disaster management when a catastrophe strikes and causes hundreds of casualties. Every year, we experience the frustrating feeling that coping effectively with disaster management is a never-ending story and that everything still has to be done.
This is obviously inaccurate, but it reflects the complexity of the topic. Among the various factors that make disaster management so challenging and each situation unique, let us highlight a few:

• Disasters strike anywhere, anytime but call for a fast response;
• The transport, power and telecommunication infrastructures are often impaired if not destroyed;
• Disaster management implies the co-operation of various organisations and authorities from different countries;
• The sovereignty and laws of the country where the disaster takes place must not be infringed, despite the gravity of the situation.
In this context, it is not surprising that the first hours or days after the disaster are chaotic. One important tool for working against the rising entropy is telecommunications, and one might notice that in the first hours after the disaster, the media teams are the ones likely to be equipped with the most powerful telecommunication means. Indeed, media teams face similar challenges: transmitting data from anywhere, at any time, without a priori knowledge of available telecommunication facilities. Satellites offer them mission-critical services. This leads us to two conclusions: (a) satellite technology deserves specific care when addressing emergency telecommunications; (b) which satellite technologies to use, and how to use them, is not straightforward in disaster situations.
The presentation will start with an overview of disaster management and how telecommunications may help mitigate the death toll. We will then go through the existing and possible roles of satellite communications. The presentation will conclude with a highlight of current research activities on satellite communications for disaster management.

Note: this talk is intended to be accessible to everyone, even those with almost no telecommunications background. We will also show how disaster management is peculiar and must not be addressed through traditional engineering and business practices.

The speaker is hosted by Olivier Markowitch.

Friday October 22, 2010, 2:00PM, room: NO8 Rotule
Statistical issues in the development of clinically useful biomarkers in oncology from microarrays
Stefan Michiels (Bordet, ULB)

Abstract: The recent revolution in genomics and the advent of molecularly targeted therapies have increased the interest in biomarker-based prediction models for the outcome of cancer patients. Multigene classifiers, also known as "gene signatures", can be developed from microarray-based expression profiling or other high-dimensional arrays as potentially useful prognostic tools. This talk focuses on developing biomarkers that inform the prognosis of patients, biomarkers that indicate how patients will respond to specific treatments, and, to a lesser extent, surrogate endpoints for drug development. I will illustrate the statistical techniques, criteria and study designs appropriate for the identification and validation of these various types of biomarkers, and their use in clinical trials. For gene signatures, the role of gene or feature selection, multiple testing, sample size and classification methods, as well as the need for internal and external validation, will be discussed. Using previous work, mainly on breast cancer, as an example, I shall review some of the problems in interpreting the results, and discuss the validity and the clinical usefulness of the findings.

Biosketch Stefan Michiels: Stefan Michiels has been at the Breast Cancer Translational Research Laboratory (Faculté de Médecine, Université libre de Bruxelles, ULB290) of the Institut Jules Bordet since May 2010. His areas of expertise are the statistical analysis of biomarker and genomic array data, the development and validation of prognostic models, clinical trials and meta-analyses in oncology. Stefan holds a PhD in Biostatistics from the School of Public Health at Paris XI University and Master's degrees in Statistics and in Applied Mathematics from the University of Leuven. His previous positions include the Institut Gustave Roussy (France), the National Cancer Institute (France) and the University of Leuven. He has authored about 50 peer-reviewed publications in journals such as Lancet, JAMA, Lancet Oncology, Journal of Clinical Oncology, Journal of Clinical Epidemiology, Statistics in Medicine, Bioinformatics and Statistical Methods in Medical Research. He is a member of the editorial boards of Biometrics and Cancer Prevention Research.

The speaker is hosted by Gianluca Bontempi.

Friday September 10, 2010, 12:30PM, room NO7.07
Title: Higher order asymptotics from multivariate generating functions
Mark C. Wilson, University of Auckland

Abstract: For the past decade I have been working on a project with Robin Pemantle to systematize coefficient extraction from multivariate generating functions. I will give a sketchy overview of results so far and then concentrate on recent work on higher order terms. Emphasis is placed on concrete computation and interesting examples. There are many unsolved problems!

The speaker is hosted by Jean Cardinal.

Thursday June 24, 2010, 12:30PM, room NO7.08
Title: Approximability of Precedence Constraint and Robust Scheduling Problems
Nikolaus Mutsanas (IDSIA, Switzerland)

Abstract: We study the approximability of a classical single-machine scheduling problem in different contexts. First, we consider the problem of scheduling precedence-constrained jobs with the objective of minimising the sum of weighted completion times. This problem was already known to be NP-hard in the seventies. Several 2-approximation algorithms exist for the general case, as well as better-than-2 approximations -- up to exact algorithms -- for special classes of precedence constraints. We present a framework that unifies, and in some cases improves on, the best known algorithms for all previously considered special classes of precedence constraints. This framework is based on recent advances which established the problem to be a special case of the vertex cover problem. The resulting graph turns out to be well known to combinatorialists, which allows us to tap the rich dimension theory of partial orders to devise better-than-2 approximation algorithms whenever the so-called fractional dimension of the partial order is bounded. We then study the above scheduling problem in the presence of uncertainty. We show that this problem cannot be approximated within a better-than-logarithmic factor whenever the number of different scenarios, i.e., different configurations of the numerical parameters, is unbounded, even in the absence of precedence constraints. This result contrasts with the difficulty of proving inapproximability results for the classical, non-robust problem, and hints at the increase in complexity caused by the uncertain environment. We therefore find it surprising that we were able to develop a polynomial-time 2-approximation algorithm for the case when only one of the two parameters is affected by uncertainty.
The fact that our result holds in the presence of precedence constraints implies that it cannot be improved without improving upon the 2-approximation algorithm for the classical precedence constrained scheduling problem, a long standing open problem in scheduling theory. We further prove inapproximability results for the unweighted case and give a polynomial time algorithm for the case when both the number of scenarios and processing times / weights are bounded by some constant.

The speaker is hosted by Samuel Fiorini.

Thursday June 3, 2010, 12:30PM, room NO7.08
Title: Formulations for the Center Facility Location Network Design Problem with Budget Constraint
Elena Fernández (Universitat Politècnica de Catalunya, Spain)

Abstract: The "Center Facility Location Network Design Problem with Budget Constraint" is presented. It is a new model for integrated facility location and network design, which simultaneously considers the location of service facilities and the design of the underlying transportation network. The problem includes a budget constraint on the overall cost of locating the facilities plus constructing the network. The objective is to minimize the maximum distance between any demand point and its allocated service facility. The evaluation of the objective is involved, as the paths connecting the nodes and their allocated facilities must be identified. Indeed, such paths relate the selected arcs, the selected hubs and the allocation of non-hubs to hubs. The complexity of the problem is established, and two alternative integer programming formulations are presented and compared. A multicommodity-based formulation is clearly outperformed by a smaller formulation that exploits the structure of the problem. This smaller formulation is taken as the basis for further improvements and is successively strengthened by introducing further classes of valid inequalities. The effect of employing tight upper bounds to reinforce some constraints of the formulation and to derive simple reduction tests is discussed. Numerical results from a series of computational experiments on instances with up to 100 nodes are presented and analyzed.

The speaker is hosted by Martine Labbé.

Thursday May 6, 2010, 12:30PM, room NO7.08
Title: Research on Stochastic Local Search Algorithms at IRIDIA
Thomas Stuetzle (IRIDIA, ULB)

Abstract: Stochastic local search (SLS) algorithms are among the most powerful techniques for solving computationally hard problems in many areas of computer science, operations research and engineering. SLS techniques range from rather simple constructive and iterative improvement algorithms to general-purpose SLS methods (usually called metaheuristics) such as ant colony optimization, iterated local search, memetic algorithms or tabu search. In this talk, we first give a concise introduction into SLS and argue that the development of effective SLS algorithms requires a sound algorithm engineering methodology. Next, we give an overview of the research directions in SLS that are currently followed at the IRIDIA laboratory. In the remainder of the talk, we review developments and results in three areas:

• the development and application of tools for the automatic configuration of SLS algorithms;
• the design and implementation of new state-of-the-art algorithms for the probabilistic traveling salesman problem;
• the design and implementation of new state-of-the-art algorithms for a number of bi-objective flow shop scheduling problems.

The speaker is hosted by Bernard Fortz.

Thursday April 22, 2010, 12:30PM, room NO7.08
Title: Introduction to Constraint Programming using Comet

Abstract: After introducing basic concepts of Constraint Programming (CP) we will cover:

• satisfaction and optimization problems
• global constraints and filtering algorithms
• search and heuristics
• large neighborhood search
• visualization
• scheduling
• hybridization methods using CP
All concepts will be illustrated with small examples and visualization using the Comet solver. No specific knowledge will be assumed. Interested people can already download Comet at www.dynadec.com
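Comet itself is a commercial system; as a minimal stand-in, the following Python sketch illustrates two of the listed concepts, search and a simple form of constraint propagation (forward checking), on the classic n-queens satisfaction problem. This is an illustrative toy, not Comet code:

```python
def n_queens(n):
    """Place n non-attacking queens (one per row) by depth-first search
    with forward checking: after each assignment, prune the domains of
    the remaining rows -- a simple form of constraint propagation."""
    def search(assignment, domains):
        row = len(assignment)
        if row == n:
            return assignment
        for col in domains[row]:
            # Forward checking: remove attacked values from later rows.
            pruned = []
            ok = True
            for later in range(row + 1, n):
                keep = {c for c in domains[later]
                        if c != col and abs(c - col) != later - row}
                if not keep:          # a future row lost its whole domain
                    ok = False
                pruned.append(keep)
            if ok:
                result = search(assignment + [col],
                                domains[:row + 1] + pruned)
                if result is not None:
                    return result
        return None                   # dead end: backtrack

    return search([], [set(range(n)) for _ in range(n)])
```

A real CP solver adds global constraints (here, three alldifferent constraints would do), stronger filtering, and search heuristics on top of this basic scheme.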

The speaker is hosted by Bernard Fortz.

Thursday April 1, 2010, 12:30PM, room NO7.08
Title: Counting on Graphs (and playing with pebbles)
Rudi Penne (Karel de Grote-Hogeschool and University of Antwerp)

Abstract: How many measurements do we need for a configuration of points to be determined in the plane or in space?
These measurements can be distances, angles, collinearity tests, etc... Most often we do not require the determination in the strict sense (the exact position of the points), but up to congruency or up to similarity or up to a more general geometric transformation. Applications can be found in CAD (defining a complicated design by a partial set of point specifications), Structural Mechanics (deciding what joint pairs must be linked by bars to ensure rigidity), Chemistry (determination of the molecular structure by measuring atomic distances), Sensor Networks (localization of nodes by direction, bearing or distance information between "agents"), etc...
In all these different applications, the main questions are universal:

• realizability: can a given subset of data (distances, measurements,...) be represented by a geometric configuration?
• uniqueness: is the given set of data sufficient to determine the configuration, and is this determination global or local?
• independence: are all given measurements independent from each other, or do we have redundant data?
In this talk we will restrict ourselves to the local determination of a planar point configuration by given distances (up to a rigid motion). If we represent the given data by a graph, then the second and third questions above can be addressed for generic configurations in a purely combinatorial way: the "Laman criterion". At the end of the talk we show a class of elegant algorithms for testing this counting criterion, the so-called "pebble games".
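The Laman criterion says that a graph on n >= 2 vertices is generically minimally rigid in the plane if and only if it has exactly 2n - 3 edges and every subset of k >= 2 vertices spans at most 2k - 3 edges. A naive Python check of this count (exponential in n, so only for small graphs; the pebble games perform the same test in polynomial time) might look like:

```python
from itertools import combinations

def is_laman(n, edges):
    """Naive check of Laman's counting condition for a graph on
    vertices 0..n-1: |E| = 2n - 3, and every subset of k >= 2
    vertices induces at most 2k - 3 edges."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            induced = sum(1 for u, v in edges if u in s and v in s)
            if induced > 2 * k - 3:   # an over-braced (redundant) part
                return False
    return True
```

A triangle passes the test, while K4 fails it (6 edges instead of the required 5).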

The speaker is hosted by Perouz Taslakian.

Thursday March 25, 2010, 12:30PM, room: NO9
Title: Stochastic Dominance Relations in Stochastic Integer Programming
Ruediger Schultz (Universität Duisburg, Germany)

Abstract: Stochastic dominance is a proven tool in decision theory. Recently it has made its way into stochastic programming, offering modeling alternatives for risk aversion. In the talk we review this development with emphasis on two-stage stochastic integer programs. We study basic structural properties, discuss algorithmic possibilities involving decomposition, and provide illustrations at decision problems from power production and trading.

The speaker is hosted by Martine Labbé.

Thursday March 18, 2010, 12:30PM, room NO7.08
Title: The MCF-Separator -- Detecting and Exploiting Multi-Commodity Flow Structures in MIPs
Christian Raack (ZIB, Berlin)

Abstract: In this talk, the new separator "MCF", which has recently been added to the mixed integer programming (MIP) solvers CPLEX and SCIP, will be introduced. This is joint work with Tobias Achterberg from IBM.
Given a MIP, we automatically detect block structures in the constraint matrix together with the coupling by capacity constraints arising from multi-commodity flow (MCF) formulations. We identify the underlying graph and generate cutting planes based on cuts in the detected network. We make use of the complemented mixed integer rounding framework (c-MIR) but provide a special purpose aggregation heuristic that exploits the identified network structure.
We show that the proposed separation scheme speeds up the computation for a large set of network design problems by a factor of two on average. Moreover, in roughly 10% of general MIP instances we found consistent embedded networks; for these instances the computation time decreases by 18% on average. There is almost no degradation for the remaining (non-network) instances.

The speaker is hosted by Michael Poss.

Friday March 5, 2010, 12:30PM, room: OF2070
Title: Reasoning about Collections with Cardinality Constraints
Ruzica Piskac (EPFL, Switzerland)

Abstract: Applications in software verification and interactive theorem proving often involve reasoning about sets of objects, and cardinality constraints on such collections also arise in these scenarios. Multisets arise for reasons analogous to sets: abstracting the content of a linked data structure with duplicate elements leads to multisets. Interactive theorem provers such as Isabelle specify theories of multisets and prove a number of theorems about them to enable their use in interactive verification. However, the decidability and complexity of constraints on multisets are much less understood than for constraints on sets.
In this talk we will show that the satisfiability problem for a logic of sets, multisets (bags), and cardinality constraints is decidable, and we will establish its optimal complexity. The result relies on an extension of integer linear arithmetic with a "star" operator, which takes the closure under vector addition of the solution set of a linear arithmetic formula. We show that the satisfiability problem for this extended language remains in NP (and is therefore NP-complete). Our proof uses the semilinear-set characterization of solutions of integer linear arithmetic formulas, as well as a generalization of a recent result on sparse solutions of integer linear programming problems.
We also mention several extensions of these results: function image operators, collections with fractional multiplicity, and a result on combination of non-disjoint theories which share set symbols and operations. In addition to verification, we also report on the use of these decision procedures in synthesis.

The speaker is hosted by Gilles Geeraerts.

Thursday March 4, 2010, 12:30PM, room: NO7.08
Solving Non-Convex Lasso Type Problems With DC Programming
Romain Herault (INSA de Rouen, France)

Abstract: We propose a novel algorithm for the variable selection (or sparsity recovery) problem using non-convex penalties. A generic framework based on DC programming is presented and yields an iterative, weighted lasso-type problem. We then show that many existing approaches for solving such a non-convex problem are particular cases of our algorithm. We also provide empirical evidence that our algorithm outperforms existing ones.
Based on the article:
G. Gasso, A. Rakotomamonjy, S. Canu, Recovering sparse signals with non-convex penalties and DC programming, IEEE Trans. Signal Processing, Vol 57, no.12, pp 4686-4698, 2009.
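As a toy illustration of the DC idea behind the cited paper: a concave penalty (here, log) is linearized at the current iterate, so each step becomes a weighted lasso solved in closed form by soft-thresholding. The sketch below treats the one-dimensional (orthogonal design) case and is an assumption-laden simplification, not the authors' implementation:

```python
import math

def soft(z, t):
    """Soft-thresholding operator: the 1-D lasso solution."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def dc_log_penalty(z, lam, eps=0.1, iters=50):
    """Minimize 0.5*(b - z)^2 + lam*log(eps + |b|) coordinate-wise by
    DC programming: at each step the concave log penalty is linearized
    at the current iterate, turning the subproblem into a weighted
    lasso solved by soft-thresholding."""
    b = [soft(zi, lam) for zi in z]              # convex (lasso) start
    for _ in range(iters):
        # weight = derivative of the penalty at the current estimate
        w = [lam / (eps + abs(bi)) for bi in b]
        b = [soft(zi, wi) for zi, wi in zip(z, w)]
    return b
```

Large coefficients end up less shrunk than under the plain lasso (less bias), while small ones are still driven exactly to zero, which is the practical appeal of such non-convex penalties.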

The speaker is hosted by Gianluca Bontempi.

Friday February 26, 2010, 12:30PM, room: NO9, DATE CHANGED
Title: Exact cost of preemption in monoprocessor real-time scheduling with multiple constraints
Patrick Meumeu Yomsi (ULB)

Abstract: We are interested in the scheduling of periodic tasks in critical (hard) real-time systems. In the literature, approximating the preemption cost, which is the variable part of the Operating System (OS) cost, in the WCET of tasks leads to a compromise between waste and safety of the schedulability analysis that is not satisfactory. Some works have been proposed to solve this problem, but the results amount to taking into account either a minimum or a maximum number of preemptions. In this talk, we introduce a new model to solve the general scheduling problem of hard real-time systems with multiple constraints such as precedence, strict periodicity, latency and jitter, while taking into account the exact cost of preemption for all first-activation scenarios of the tasks (simultaneous or not). We have developed a software tool called SAS (Simulation Analysis of Scheduling) to make the results we have achieved available to users. The main contribution of SAS, compared to other commercial and academic tools of the same type, is that it takes into account the exact cost of preemption during the schedulability analysis.

The speaker is hosted by Joël Goossens.

Thursday February 4, 2010, 12:30PM, room NO7.08
Title: Cryptographic hash functions, sponge functions and the SHA-3 contest
Gilles Van Assche (STmicroelectronics)

Abstract: A cryptographic hash function takes as input a message of any length and produces a digest, usually a few dozen bytes long. Such functions are used ubiquitously in cryptographic applications, e.g., in digital signatures or secret-key derivation. Recently, many popular and standardized hash functions (e.g., MD4, MD5, SHA-1) have been broken, triggering renewed interest in research in this field. While a short-term migration to SHA-2 is taking place, long-term solutions are needed. The SHA family, standardized by NIST, is designed along the same lines as the MD4 and MD5 family. Fresh new ideas are thus welcome, currently making this a lively and exciting research area.
To encourage fresh ideas, NIST has organized a competition to define the future standard hash function, called SHA-3. In October 2008, 64 entries were received by NIST, 51 of which were valid; these were analyzed (and sometimes broken) by the community during almost a year. In July 2009, NIST selected 14 candidates for the second round of the process. After further cryptanalysis and short-listing, a winner is expected to be announced in 2012.
Sponge functions, meanwhile, are a specific construction useful for hashing. Among the SHA-3 candidates, some are actually sponge functions and many borrow at least some ideas from them. Sponge functions provide a particular way to generalize hash functions to more general functions whose output length is arbitrary. Based on the sponge construction, they model in a very simple way the finite memory that any concrete construction has access to. We have shown that sponge functions can be used as an alternative to the random oracle model for expressing security claims, and that the sponge construction can lead to a concrete design method, e.g., for our candidate Keccak, which inherits many interesting properties of sponge functions.
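As a purely illustrative sketch of the sponge mode (absorb, then squeeze), here is a toy Python version with a made-up mixing permutation in place of Keccak-f. The sizes and the permutation are arbitrary choices for the example; the code has no cryptographic value:

```python
RATE, CAPACITY = 8, 8          # toy sizes: state = rate + capacity bytes

def toy_permutation(state):
    """A made-up byte permutation standing in for Keccak-f.
    It only mixes the state; it offers no cryptographic security."""
    out = bytearray(state)
    for round_ in range(8):
        for i in range(len(out)):
            out[i] = (out[i] * 31 + out[(i + 1) % len(out)] + round_) % 256
    return bytes(out)

def sponge_hash(message, digest_len=16):
    """Sponge mode: pad, absorb rate-sized blocks into the state, then
    squeeze out as many rate-sized blocks as needed.  The capacity part
    of the state is never directly exposed to the attacker."""
    # 10*1-style padding up to a multiple of the rate
    padded = bytearray(message) + b"\x01"
    while len(padded) % RATE:
        padded.append(0)
    padded[-1] |= 0x80
    state = bytes(RATE + CAPACITY)
    # Absorbing phase: XOR each block into the rate part, then permute.
    for off in range(0, len(padded), RATE):
        block = padded[off:off + RATE]
        state = toy_permutation(
            bytes(s ^ b for s, b in zip(state, block)) + state[RATE:])
    # Squeezing phase: the output length is arbitrary.
    out = bytearray()
    while len(out) < digest_len:
        out += state[:RATE]
        state = toy_permutation(state)
    return bytes(out[:digest_len])
```

The arbitrary-length squeezing phase is exactly the feature that lets the sponge generalize a fixed-output hash function.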

The speaker is hosted by Olivier Markowitch.

Thursday January 21, 2010, 12:30PM, room NO7.08
Title: Approximate Bayesian inference and model selection in systems biology
Tina Toni (Imperial College London)

Abstract: Mathematical modelling has become an important tool in the biological sciences. Due to the overwhelming complexity of biological systems, it is not straightforward to determine the structure of realistic and predictive computer models. Moreover, the majority of parameter values are unknown and despite technological advances, these parameters are often difficult to measure experimentally. Therefore, statistical and computational techniques are needed to distinguish the good models from the unsuitable ones and to estimate unknown parameters. In this talk we present a novel algorithm for elucidating computational models for dynamical systems, which can be applied for both parameter estimation and model selection. The algorithm belongs to the class of Approximate Bayesian computation methods, which can infer posterior distributions without having to calculate likelihoods, and is based on a Sequential Monte Carlo framework. The algorithm is applied to a variety of stochastic and deterministic biological models. We employ stochastic Petri net modelling techniques to study the phage-shock-protein stress response in bacteria, and demonstrate how we can employ Petri net models for qualitative and semi-quantitative inferences in light of available steady state data. We also apply the model selection algorithm to study differential equation models of MAP kinase phosphorylation dynamics and the JAK-STAT signalling pathway.
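The simplest member of the ABC family the talk builds on is rejection sampling; the Sequential Monte Carlo variant refines it with a sequence of decreasing tolerances. A minimal Python sketch on a hypothetical toy model (inferring the mean of a Gaussian under a flat prior; all names and numbers here are illustrative assumptions) might look like:

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance,
                  eps, n_accept, seed=0):
    """Approximate Bayesian computation by rejection: draw a parameter
    from the prior, simulate data from the model, and accept the draw
    whenever the simulated summary falls within eps of the observed
    one.  The likelihood is never evaluated."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy model: five observations from N(theta, 1), summarized by their
# mean; flat prior on [-5, 5].
obs_mean = statistics.mean([1.8, 2.1, 2.4, 1.9, 2.3])   # about 2.1
posterior = abc_rejection(
    observed=obs_mean,
    prior_sample=lambda rng: rng.uniform(-5, 5),
    simulate=lambda theta, rng: statistics.mean(
        rng.gauss(theta, 1) for _ in range(5)),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
    n_accept=200)
```

The accepted draws approximate the posterior; their spread shrinks as eps decreases, at the price of a lower acceptance rate, which is what motivates the SMC scheme.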

The speaker is hosted by Guy Latouche.

Thursday December 3, 2009, 12:30PM
Title: Probabilistic constraints in combinatorial optimization
Olivier Klopfenstein (France Télécom)

Abstract: When facing uncertainty in input data, it is natural to look for solutions with controllable feasibility. Hence, given a target risk probability ε>0, we aim at finding the best solution that is feasible with probability at least 1-ε. Such a model is known as "chance-constrained programming". The first part of the presentation will be devoted to probabilistic constraints on 0-1 linear programs. A theoretical analysis of the model will be presented, and a solution process will be described, together with numerical tests.
The second part will focus on an application: the routing of stochastic flows in a network. Traffic stochasticity is a central feature of telecommunication networks. We look for the "best" routing, with the goal of ensuring that network link capacities are respected with controlled probability. Hence, we are led to chance-constrained programs. Specific attention will be devoted to single-path routing, because of its practical importance in network management.
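A minimal illustration of a chance constraint on a 0-1 program, on a hypothetical toy instance: approximate the probability by sampled scenarios and accept a solution only if it is feasible in at least a (1-ε) fraction of them. This brute-force sketch is for intuition only, not the solution process described in the talk:

```python
import random
from itertools import product

def chance_constrained_knapsack(values, weight_scenarios, capacity, eps):
    """Brute-force chance-constrained 0-1 knapsack: maximize total value
    over selections whose total weight stays within capacity in at
    least a (1 - eps) fraction of the sampled weight scenarios."""
    n = len(values)
    best_value, best_x = -1, None
    for x in product((0, 1), repeat=n):
        violations = sum(
            1 for w in weight_scenarios
            if sum(wi for wi, xi in zip(w, x) if xi) > capacity)
        if violations <= eps * len(weight_scenarios):
            value = sum(v for v, xi in zip(values, x) if xi)
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x

# Hypothetical instance: three items with uncertain weights,
# approximated by 500 sampled weight scenarios.
rng = random.Random(1)
scenarios = [[rng.uniform(2, 4), rng.uniform(1, 5), rng.uniform(3, 3.5)]
             for _ in range(500)]
best, selection = chance_constrained_knapsack(
    [10, 8, 6], scenarios, capacity=8.0, eps=0.1)
```

Here the highest-value selection that is feasible often enough is picked; tightening ε forces more conservative (lower-value) selections, which is the waste/safety trade-off inherent to chance constraints.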

The speaker is hosted by Martine Labbé.

Thursday November 26, 2009, 12:30PM
Exponential-time approximation of hardest NP-complete problems
Łukasz Kowalik (University of Warsaw)

Abstract: We study optimization problems that are neither approximable in polynomial time (at least with a constant factor) nor fixed-parameter tractable, under widely believed complexity assumptions, like Maximum Independent Set, Vertex Coloring, Set Cover, and Bandwidth. In recent years, many researchers have designed exact exponential-time algorithms for these and other hard problems. The goal is to keep the time complexity of order O(c^n), with the constant c as small as possible. We extend this line of research and investigate whether the constant c can be made even smaller when constant-factor approximation is allowed. In fact, we describe a kind of approximation scheme -- trade-offs between approximation factor and time complexity.
We study two natural approaches. The first approach uses general transformations from exponential-time exact algorithms to approximations that are faster but still exponential-time. For example, we show that for any reduction rate r, one can transform any O^*(c^n)-time algorithm for (Weighted) Set Cover into a (1+ln r)-approximation algorithm running in time O^*(c^{n/r}). The second approach consists of designing a backtracking algorithm with a small search tree. An example is a (4r-1)-approximation of Bandwidth in time O^*(2^{n/r}), for any positive integer r.
This is joint work with Marek Cygan, Marcin Pilipczuk and Mateusz Wykurz.
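One simple way to realize such a trade-off for Set Cover, consistent with the usual greedy analysis but not necessarily the authors' exact construction, is to run the greedy algorithm until only about n/r elements remain uncovered and then finish that residual instance exactly:

```python
from itertools import combinations

def greedy_then_exact_cover(universe, sets, r):
    """Hybrid Set Cover solver: a greedy phase shrinks the uncovered part to
    about |universe| / r elements, then an exact brute-force phase covers the
    rest.  Larger r speeds up the exact phase at the price of a roughly
    (1 + ln r) approximation factor incurred by the greedy phase."""
    uncovered = set(universe)
    cover = []
    # Greedy phase: always take the set covering the most uncovered elements.
    while len(uncovered) > len(universe) / r:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        cover.append(best)
        uncovered -= sets[best]
    # Exact phase: smallest sub-collection covering what is left.
    for size in range(len(sets) + 1):
        for combo in combinations(range(len(sets)), size):
            covered = set()
            for i in combo:
                covered |= sets[i]
            if uncovered <= covered:
                return cover + list(combo)

sets = [{1, 2, 3}, {4, 5}, {5, 6}, {1, 4}]
print(greedy_then_exact_cover({1, 2, 3, 4, 5, 6}, sets, r=2))  # [0, 1, 2]
```

The brute-force exact phase stands in for the O*(c^{n/r}) exact algorithm of the theorem; the point is only that its input has shrunk from n to roughly n/r elements.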

The speaker is hosted by Marcin Kaminski.

Friday November 13, 2009, EXCEPTIONALLY ON FRIDAY AT 2:30PM, ROTULE NO8
Biomarker selection from microarray data: a transfer learning approach
Pierre Dupont (UCL, Belgium)

Abstract: Classification of microarray data is a challenging problem as it typically relies on a few tens of samples but several thousand dimensions (genes). Feature selection techniques are commonly used in this context, both to increase the interpretability of the predictive model and possibly to reduce its cost. Feature selection aims at finding a small subset of the original covariates that best predicts the outcome. In the case of clinical studies, the selected genes are considered to be biomarkers forming a signature of a patient's status or their expected response to a treatment. A good signature is also ideally stable with respect to sampling variation, under the assumption that the biological process modelled is (mostly) common across patients. We focus here on embedded methods, for which a multivariate feature selection is performed jointly with the classifier estimation. We study in particular regularized (or penalized) linear models, such as extensions to linear support vector machines (SVM) or variants of the LASSO, since they offer state-of-the-art predictive performance for high-dimensional and sparse data. In this context, we describe two original contributions. Firstly, some prior knowledge may be available to bias the selection towards genes a priori assumed to be more relevant. We present a novel optimization algorithm to make use of such partial supervision as a soft constraint. A practical approximation of this technique reduces to standard SVM learning with iterative rescaling of the inputs. The scaling factors depend on the prior knowledge, but the final selection may depart from it if necessary to optimize the classification objective. Secondly, we show how to adapt the above algorithm to a transfer learning setting: a preliminary selection is performed on one or several source dataset(s) and is subsequently used to bias the selection on a target dataset.
This is particularly relevant for microarray data, for which each individual dataset is typically very small but a fast-growing collection of related datasets is produced and made publicly available. Experimental results illustrate that both approaches improve the stability and classification performance of the resulting models. We conclude this talk by sketching some open issues, from both a theoretical and a practical viewpoint.

The speaker is hosted by Gianluca Bontempi.

Thursday November 5, 2009, 12:30PM
Network Inference based on Information Theory Applied to Microarray Data
Patrick E. Meyer (ULB, Machine Learning Group)

Abstract: An important issue in computational biology is the extent to which it is possible to learn transcriptional interactions from measured expression data. The reverse engineering of transcriptional regulatory networks from expression data alone is challenging because of the combinatorial nature of the problem and of the limited amount of (noisy) samples available in expression datasets. This talk will focus on information-theoretic approaches to network inference, which typically rely on the estimation of mutual information and/or conditional mutual information from data in order to measure the statistical dependence between gene expressions. The adoption of mutual information in network inference can be traced back to Chow and Liu's tree algorithm. Nowadays, two main categories of information-theoretic network inference methods hold the attention of the bioinformatics community: i) methods based on pairwise mutual information that infer undirected networks of up to thousands of genes thanks to their low algorithmic complexity, and ii) methods based on conditional mutual information that are able to infer a larger set of relationships between genes, but at the price of a higher algorithmic complexity. The strengths and weaknesses of these information-theoretic methods for inferring transcriptional networks will be detailed in this talk.
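The pairwise category (i) can be sketched in a few lines: estimate the mutual information between every pair of discretized expression profiles and keep the high-MI pairs as edges. The toy profiles and threshold below are invented for illustration; real methods such as ARACNE, CLR or MRNET add pruning on top of this baseline:

```python
from math import log
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete profiles."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def relevance_network(profiles, threshold):
    """Infer an undirected gene network: connect two genes whenever the
    pairwise MI of their (discretized) expression profiles exceeds the
    threshold.  This is the simplest pairwise scheme."""
    genes = list(profiles)
    return [(g, h) for g, h in combinations(genes, 2)
            if mutual_information(profiles[g], profiles[h]) > threshold]

profiles = {           # toy discretized expression data (hypothetical)
    "geneA": [0, 0, 1, 1, 0, 1, 1, 0],
    "geneB": [0, 0, 1, 1, 0, 1, 1, 0],   # copies geneA -> high MI
    "geneC": [0, 1, 0, 1, 0, 1, 0, 1],   # independent pattern -> MI 0
}
print(relevance_network(profiles, threshold=0.5))  # [('geneA', 'geneB')]
```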

Thursday October 29, 2009, 12:30PM
The Replacement Paths Problem for Planar Graphs
Christian Wulff-Nilsen (University of Copenhagen, Denmark)

Abstract: Communication networks are in general not static but may change due to link failures. In such cases, alternative lines of communication need to be established and it may be of interest to determine the "quality" of such lines. This motivates the replacement paths problem: given two vertices s and t in a graph G with non-negative edge lengths and given a shortest path P (the line of communication) in G from s to t, compute, for each edge e on P, the length of a shortest path from s to t in G that avoids e. We give an O(n log n) time algorithm with linear space for n-vertex planar directed graphs. The previous best time bound was O(n(log n)^2).
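For contrast with the O(n log n) planar algorithm, the naive baseline simply reruns Dijkstra once per path edge; the graph encoding and names below are illustrative:

```python
import heapq

def dijkstra(adj, s, skip_edge=None):
    """Shortest-path distances from s; optionally ignore one directed edge."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            if (u, v) == skip_edge:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def replacement_paths(adj, path, t):
    """For each edge e on the shortest path `path` (vertex list ending at t),
    return the length of a shortest s-t path avoiding e.  One full Dijkstra
    per path edge: simple, but roughly a factor n slower than the planar
    O(n log n) algorithm discussed in the abstract."""
    s = path[0]
    return [dijkstra(adj, s, skip_edge=(u, v)).get(t, float("inf"))
            for u, v in zip(path, path[1:])]

adj = {"s": [("a", 1), ("t", 3)], "a": [("t", 1)]}
print(replacement_paths(adj, ["s", "a", "t"], "t"))  # [3.0, 3.0]
```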

The speaker is hosted by Stefan Langerman.

Thursday October 22, 2009, 12:30PM
A New Optimal Scheduling Algorithm for Multiprocessors
Shelby Funk (U. Georgia, USA)

Abstract: Online scheduling on multiprocessors has proven to be a particularly challenging problem. Algorithms that perform well on uniprocessors, such as Earliest Deadline First (EDF), do not perform as well on multiprocessors. Optimal online scheduling algorithms, such as Pfair and LLREF, have high scheduling overhead and incur many preemptions and migrations. This talk will introduce the NQ-fair scheduling policy, a general scheduling scheme that can be used to design online multiprocessor scheduling algorithms for periodic and sporadic tasks with unconstrained deadlines. Any algorithm adhering to the NQ-fair policy can schedule any task set on m identical processors provided the task set's total density is at most m and its maximum density is at most 1. Thus, all NQ-fair algorithms are optimal for tasks with deadlines equal to periods. We then present a new NQ-fair scheduling algorithm, NQ-Wrap, which is significantly more efficient than other optimal online multiprocessor scheduling algorithms. Finally, we extend NQ-Wrap in two ways: first, we reduce the number of preemptions when periods and execution times are all integers; then, we show how to implement NQ-Wrap on uniform multiprocessors.
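The schedulability condition quoted above (total density at most m, maximum density at most 1) is easy to state in code; the task-tuple layout below is our own encoding, chosen for illustration:

```python
def nq_fair_schedulable(tasks, m):
    """Condition from the abstract: an NQ-fair algorithm can schedule a task
    set on m identical processors when the total density is <= m and every
    individual density is <= 1.  Tasks are (wcet, deadline, period) tuples;
    the density of a task is wcet / min(deadline, period)."""
    densities = [c / min(d, t) for c, d, t in tasks]
    return sum(densities) <= m and max(densities) <= 1

print(nq_fair_schedulable([(1, 2, 4), (2, 4, 4), (1, 4, 8)], m=2))  # True
print(nq_fair_schedulable([(3, 2, 4)], m=2))                        # False
```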

The speaker is hosted by Joël Goossens.

October 15, 2009
Executing Code in the Past: Efficient In-Memory Object Graph Versioning
Frédéric Pluquet (ULB)

Abstract: Object versioning refers to how an application can have access to previous states of its objects. Implementing this mechanism is hard because it needs to be efficient in space and time, and well integrated with the programming language. We present HistOOry, an object versioning system that uses an efficient data structure to store and retrieve past states. It needs only three primitives, and existing code does not need to be modified to be versioned. It provides fine-grained control over what parts of objects are versioned and when. It stores all states, past and present, in memory. Code can be executed in the past of the system and will see the complete system at that point in time. We have implemented our model in Smalltalk and used it for three applications that need versioning: checked postconditions, stateful execution tracing and a planar point location implementation.
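A toy version of the idea, with explicit set/get calls instead of the language integration HistOOry provides (the real system needs only three primitives and versions existing code unchanged), might look like this:

```python
import itertools

_clock = itertools.count()  # global logical clock shared by all objects

class Versioned:
    """Minimal in-memory object versioning sketch: every write is
    timestamped, and `at` reads the state a field had at any past time."""
    def __init__(self):
        self._history = {}          # field name -> [(time, value), ...]
    def set(self, field, value):
        t = next(_clock)
        self._history.setdefault(field, []).append((t, value))
        return t                    # handle for later time travel
    def at(self, field, time):
        past = [v for t, v in self._history[field] if t <= time]
        return past[-1]
    def get(self, field):
        return self._history[field][-1][1]

p = Versioned()
t0 = p.set("x", 1)
p.set("x", 2)
print(p.get("x"), p.at("x", t0))   # 2 1 -- current value, then past value
```

"Executing code in the past" then amounts to routing every read through `at` with a fixed timestamp, which the real system does transparently.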

June 25, 2009
Arc-time formulations and algorithms for Scheduling Problems
Marcus Poggi (PUC-Rio, Brazil)

Abstract: This talk presents formulations and algorithms for single or parallel identical machine scheduling problems. While the overall algorithm can be viewed as a branch-cut-and-price over a very large extended formulation, a number of auxiliary techniques are necessary to make the column generation effective. Those techniques include a powerful fixing by reduced costs and a dual stabilization method. We tested the algorithms on instances of the classical weighted tardiness problem. All the 375 single machine instances from the OR-Library, with up to 100 jobs, are solved to optimality in reasonable times and with minimal branching, since the duality gaps are almost always reduced to zero in the root node. Many multi-machine instances with 40 and 50 jobs are also solved to optimality for the first time. The proposed algorithms and techniques are quite generic and can be used on several related scheduling problems.

June 4, 2009
Dynamic Observers for the Synthesis of Opaque Systems
Hervé Marchand (IRISA, Rennes)

Abstract: In this presentation, we address the problem of synthesizing opaque systems. A secret predicate S over the runs of a system G is opaque to an external user having partial observability over G if s/he can never infer from the observation of a run of G that the run belongs to S. We first investigate the case of static partial observability, where the set of events the user can observe is fixed a priori. In this context, we show that checking whether a system is opaque is PSPACE-complete, which implies that computing an optimal static observer ensuring opacity is also a PSPACE-complete problem. Next, we introduce dynamic partial observability, where the set of events the user can observe changes over time. We show how to check that a system is opaque with respect to a dynamic observer and also address the corresponding synthesis problem: given a system G and secret states S, compute the set of dynamic observers under which S is opaque. Our main result is that the set of such observers can be finitely represented and can be computed in EXPTIME.

May 7, 2009
Environment Assumptions for Synthesis
Barbara Jobstmann (EPFL)

Abstract: In synthesis, we aim to construct a finite-state reactive system from a given omega-regular specification. Initial specifications are often unrealizable, which means that there is no system that implements the specification. A common reason for unrealizability is that assumptions on the environment of the system are incomplete. We study the problem of correcting an unrealizable specification G by computing an environment assumption A such that the new specification A -> G is realizable.
In this talk, I will give a short introduction to the synthesis problem and the underlying game theory, discuss desired properties of assumptions, and present a two-step algorithm to compute useful environment assumptions. Our algorithm operates on the game graph that is used to answer the realizability question. First, it computes a safety assumption that removes a minimal set of environment edges from the graph. Second, it computes a liveness assumption that puts fairness conditions on some of the remaining environment edges. We use probabilistic games to compute the liveness assumptions.
This is joint work with Krishnendu Chatterjee and Tom Henzinger.

April 30, 2009 (EXCEPTIONALLY AT 3:30PM, 2NO800G (rotule))
The role of cooperative sensor networks in smart grids
Alfredo Vaccaro (U. Sannio, Italy)

Abstract: A smart grid delivers electricity from suppliers to consumers using digital technology to save energy, reduce cost and increase reliability. Such a modernized electricity network is being promoted by many governments as a way of addressing energy independence and global warming. Modern trends in smart grid operation call for an increasing pervasion of distributed sensing systems. This process is hindered by the low scalability of existing monitoring systems, traditionally based on client-server paradigms. This has stimulated the power systems research community to define new monitoring architectures that move away from the older centralized paradigm to systems distributed in the field, with an increasing pervasion of smart sensors. Accordingly, this talk contributes toward the definition of a fully decentralized monitoring architecture by proposing the use of self-organizing sensor networks in which information spreads through local coupling between adjacent nodes, which act as mutually coupled adaptive oscillators. In this paradigm, each node knows both the performance of the monitored site, computed from locally acquired information, and the global performance of the monitored grid section, computed by local exchanges of information with its neighboring nodes. Thanks to this feature, each node can automatically detect local anomalies. Moreover, the system operator can assess the performance of each grid section by querying any node of the corresponding sensor network, without the need for a central fusion center acquiring and processing all node measurements. This makes the overall monitoring architecture highly scalable, self-organizing and distributed.

January 15, 2009
The Disconnected Cut problem
Daniel Paulusma (Durham U., UK)

Abstract: Let G=(V,E) be a finite connected undirected graph without multiple edges or self-loops. Let G[U] denote the subgraph induced by a subset U of V. Then U is a cut if G[V\U] is disconnected. We call a cut U a disconnected cut if G[U] is disconnected as well. This leads to the problem:
DISCONNECTED CUT
Instance: A graph G
Question: Does G have a disconnected cut?
As far as we know, the computational complexity of this problem is open. We pinpoint relationships with other graph-theoretic concepts, such as graph homomorphisms and graph contractions, discuss partial results and mention other open (sub)problems.
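Since no efficient algorithm is known, small instances can at least be decided by brute force over all vertex subsets; a sketch:

```python
from itertools import combinations

def connected(vertices, edges):
    """Is the subgraph induced by `vertices` connected (and non-empty)?"""
    vertices = set(vertices)
    if not vertices:
        return False
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for a, b in edges for u2, v in ((a, b), (b, a))
                     if u2 == u and v in vertices)
    return seen == vertices

def has_disconnected_cut(V, E):
    """Brute force over all vertex subsets U: U is a disconnected cut when
    both G[U] and G[V \\ U] are disconnected.  Exponential, so only usable
    on tiny instances -- consistent with the problem's open status."""
    V = list(V)
    for k in range(1, len(V)):
        for U in combinations(V, k):
            U = set(U)
            if not connected(U, E) and not connected(set(V) - U, E):
                return True
    return False

C4 = [(1, 2), (2, 3), (3, 4), (4, 1)]       # 4-cycle: {1, 3} is a disconnected cut
print(has_disconnected_cut([1, 2, 3, 4], C4))              # True
print(has_disconnected_cut([1, 2, 3], [(1, 2), (2, 3)]))   # False
```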

December 18, 2008
A Betweenness Approach for Solving the Linear Arrangement Problem
Marcus Oswald (Heidelberg, Germany)

Abstract: In the Linear Arrangement Problem we look for a permutation of objects such that a linear function of the differences of the objects' positions is minimized. We present a new approach based on betweenness variables and give computational results showing that the lower bounds known in the literature so far can be improved significantly.
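The objective being permuted over can be stated concretely; the brute-force solver below is only for sanity-checking bounds on toy instances, and has nothing to do with the betweenness relaxation itself:

```python
from itertools import permutations

def arrangement_cost(order, weighted_edges):
    """Cost of one linear arrangement: each edge pays its weight times the
    distance between the positions of its endpoints."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(w * abs(pos[u] - pos[v]) for u, v, w in weighted_edges)

def min_linear_arrangement(vertices, weighted_edges):
    """Exact Minimum Linear Arrangement by enumerating all n! orders."""
    return min(permutations(vertices),
               key=lambda order: arrangement_cost(order, weighted_edges))

edges = [(1, 2, 1), (2, 3, 1)]   # path 1-2-3 with unit weights
best = min_linear_arrangement([1, 2, 3], edges)
print(arrangement_cost(best, edges))  # 2
```

Lower-bounding techniques such as the betweenness formulation aim to certify values close to this optimum without the factorial enumeration.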

December 16, 2008 (EXCEPTIONALLY ON TUESDAY AT 2PM)
Exploratory Analysis of Functional Data via Clustering and Segmentation
Fabrice Rossi (Telecom ParisTech)

Abstract: Functional data arise in numerous practical contexts. Spectrometry is a well-known example in which each observation is described by a spectrum: a function that maps wavelengths to absorbance values. A more general example is given by time series data: each object is described by the evolution through time of several quantities, represented by a function that maps time to the associated values. Applications of this paradigm range from online monitoring to joint time series prediction.
This talk focuses on the exploratory analysis of a set of functions (or time series). The main idea is to provide the analyst with a summary of the set of manageable complexity. We propose to combine two well-known methods: clustering and segmentation. The first groups similar functions, while the second approximates prototype functions by more manageable versions, in general piecewise constant functions. We show how the segmentation part can be embedded into prototype-based algorithms optimally with respect to two objectives. Firstly, given a homogeneous (sub)set of functions, we follow Bellman's lead and show how several types of optimal representations can be given in terms of a user-chosen minimal distortion (such as the mean squared error). Secondly, given a partition of a function set, we show how the representation quality of the clusters can be balanced automatically in an optimal way, again via dynamic programming. The proposed algorithm is illustrated on a real-world example.

December 1st, 2008
Signature schemes based on error-correcting codes
Pierre-Louis Cayrel (Université Paris VIII)

Abstract: I will first present cryptography based on error-correcting codes (the McEliece and Niederreiter schemes), then detail the Stern identification and signature scheme presented at CRYPTO in 1993 and the Courtois-Finiasz-Sendrier signature scheme (ASIACRYPT 2001). I will then present three recent works on this subject:
- a construction of the Stern scheme secured against side-channel attacks (DPA);
- an identity-based signature scheme, proved secure, combining the Stern scheme and the Courtois-Finiasz-Sendrier signature scheme;
- a t-out-of-n threshold ring signature scheme, built as a generalization of the Stern scheme, which gives the best known results for this type of signature.
This talk is based on a series of three works carried out with Philippe Gaborit and Emmanuel Prouff (for the first), Philippe Gaborit, David Galindo and Marc Girault (for the second), and Carlos Aguilar Melchor and Philippe Gaborit (for the third).

November 27, 2008
The communication complexity of non-signaling distributions
Jérémie Roland (NEC, Princeton)

Abstract: We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. Our techniques apply to any communication problem that can be reduced to a non-signaling distribution, including non-Boolean functions, most relations, partial (promise) problems, and multipartite settings.
The proofs are surprisingly easy and give very intuitive interpretations of the recent lower bounds of Linial and Shraibman, which are shown to be a special case of our method for Boolean functions. Our lower bounds can be expressed as linear programs (or SDPs for quantum communication), and the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are shown to be closely related to the winning probability of XOR games.
Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.
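The non-signaling condition itself is a simple linear check on the conditional distribution; the PR box below is the standard example of a non-signaling distribution that no quantum state can realize:

```python
def is_non_signaling(p, X, Y, A, B, tol=1e-9):
    """p maps (a, b, x, y) -> p(a, b | x, y).  Non-signaling means Alice's
    marginal p(a | x) is independent of Bob's input y, and vice versa."""
    for x in X:
        for a in A:
            m = [sum(p[(a, b, x, y)] for b in B) for y in Y]
            if max(m) - min(m) > tol:      # marginal shifts with y: signaling
                return False
    for y in Y:
        for b in B:
            m = [sum(p[(a, b, x, y)] for a in A) for x in X]
            if max(m) - min(m) > tol:
                return False
    return True

# PR box: outputs satisfy a XOR b = x AND y, each marginal uniform.
pr_box = {(a, b, x, y): 0.5 if (a ^ b) == (x & y) else 0.0
          for a in (0, 1) for b in (0, 1) for x in (0, 1) for y in (0, 1)}
print(is_non_signaling(pr_box, (0, 1), (0, 1), (0, 1), (0, 1)))  # True
```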

October 2, 2008
Evaluating monotone Boolean functions and game trees in the priced information model
Martin Milanič (Universität Bielefeld, Germany)

Abstract: We study the fundamental problem of evaluating a function by sequentially selecting a subset of variables whose values uniquely identify the function's value. This basic problem arises in several domains of computer science, e.g., automatic diagnosis, applied game theory, database query optimization, to mention just a few. We address the variant of the function evaluation problem, introduced by Charikar et al. (2002), where different variables can incur different reading costs and competitive analysis is employed to measure the performance of an evaluation algorithm. Based on a linear programming approach, Cicalese and Laber (2008) developed a framework for the design of algorithms for evaluating functions. We extend this framework to the case where the cost of querying a variable can depend on the variable's value. In this model, we determine the tight extremal competitive ratio for monotone Boolean functions. We also obtain the tight extremal competitive ratio for game trees. For game trees, as well as for certain subclasses of monotone Boolean functions, optimal competitiveness can be achieved by polynomial (or pseudo-polynomial) time algorithms.
The talk is based on joint work with Ferdinando Cicalese.

September 25, 2008
Observe, track, and localize : Matrix problems for autonomous agents detection
Raphaël Jungers (ULB / UCL)

Abstract: Who has never been in this situation? You're at work, trying to progress in your Ph.D. thesis, and you absolutely have to find your advisor for some technical problem that you do not understand at all. But here is the problem: our advisors are very busy people, and it is extremely difficult to find them whenever you need them.
In this talk, we will see how graph theory and linear algebra could help you to finish your thesis sooner. We provide efficient ways to localize an agent on a network. We present several concepts such as trackable networks, observable networks, and local automata, and we relate them. We present efficient algorithms and/or complexity results for problems arising in practical applications. We put this in relation with some celebrated longstanding open questions such as Černý's conjecture and the finiteness conjecture.
This is joint work with Vincent Blondel (UCL) and Vladimir Protasov (Moscow State University).

September 18, 2008
Parikh-equivalent bounded under-approximations
Pierre Ganty (UCLA)

Abstract: Many problems in the verification of concurrent software systems reduce to checking the non-emptiness of the intersection of two context-free languages, an undecidable problem. We propose a decidable under-approximation, and a semi-algorithm based on the under-approximation, for this problem through bounded languages, which are context-free subsets of a regular language of the form w_1* w_2* ... w_k* for some w_1,...,w_k in Sigma*. Bounded languages have nice structural properties; in particular, the non-emptiness of the intersection of a bounded language and a context-free language is decidable.
Our main theoretical result is a constructive proof of the following: for any context-free language L, there is a bounded language L' included in L which has the same Parikh image as L. Along the way, we show an iterative construction that associates with each context-free language a family of linear languages and linear substitutions that preserve its Parikh image. We show two applications of this result: to under-approximate the reachable state space of multi-threaded procedural programs, and to under-approximate the reachable state space of counter automata with context-free constraints.
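The Parikh image simply counts letters, forgetting their order. The tiny check below illustrates the theorem's statement on a chosen example (not one from the talk): the context-free language of words with equally many a's and b's, whose bounded subset {a^n b^n} already realizes every Parikh vector:

```python
from collections import Counter
from itertools import product

def parikh(word, alphabet="ab"):
    """Parikh image of a word: per-letter occurrence counts, order forgotten."""
    c = Counter(word)
    return tuple(c[a] for a in alphabet)

# CFL L: words over {a, b} with equally many a's and b's (up to length 6).
cfl = {"".join(w) for n in range(7)
       for w in product("ab", repeat=n)
       if w.count("a") == w.count("b")}
# Bounded language L' = {a^n b^n}, a subset of a*b* and of L.
bounded = {"a" * n + "b" * n for n in range(4)}

assert {parikh(w) for w in cfl} == {parikh(w) for w in bounded}
print(sorted({parikh(w) for w in bounded}))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```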

September 12, 2008
Comparison of string distances for complete genome phylogeny
Alain Guénoche (Institut de Mathématiques de Luminy, CNRS)

Abstract: We compare three string distances between complete genomes according to their ability to recover correct phylogenies. They are based on common words between the raw genomic sequences and do not require preliminary processing steps such as gene matching or sequence alignment.
The first one is a new distance based on Maximum Significant Matches (Guénoche, Guyon 2007). The second one is the k-word distance, computed from the frequencies of all words of length k (Ji Qi, 2004). The third one, the ACS distance, is based on the average length of maximum common substrings (Ulitsky 2006).
A simulation process of evolution shows that the MSM distance outperforms the KW and ACS distances, which nevertheless also provide accurate results for studying intra-phylum relationships.
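A k-word distance can be sketched as a distance between k-mer frequency vectors; the cosine form below is a simplification of the cited KW measure, which uses a related correlation statistic:

```python
from collections import Counter
from math import sqrt

def kword_distance(seq1, seq2, k=3):
    """Alignment-free k-word distance: compare the frequency vectors of all
    length-k substrings (k-mers) of the two sequences via cosine distance.
    Assumes both sequences have length >= k."""
    def freqs(s):
        c = Counter(s[i:i + k] for i in range(len(s) - k + 1))
        total = sum(c.values())
        return {w: n / total for w, n in c.items()}
    f1, f2 = freqs(seq1), freqs(seq2)
    dot = sum(f1.get(w, 0) * f2.get(w, 0) for w in set(f1) | set(f2))
    n1 = sqrt(sum(v * v for v in f1.values()))
    n2 = sqrt(sum(v * v for v in f2.values()))
    return 1 - dot / (n1 * n2)

print(kword_distance("AAAAAA", "CCCCCC"))  # 1.0: no 3-mers in common
```

No gene matching or alignment is involved: the whole genome is reduced to a k-mer frequency vector, which is what makes such distances usable on complete genomes.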

July 17, 2008
Distributed Indexing and Querying in Sensor Networks using Statistical Models
Arnab Bhattacharya (IIT Kanpur, India)

Abstract: As sensors become more inexpensive and more easily deployed, individual measurements will pave the way to high-level, semantically rich events, directly mined from the raw and noisy sensor readings. Techniques to summarize, index and query these semantic events will then become an essential part of sensor network applications. The transformation of readings into semantics may take place in a central node; however, that entails huge communication costs. In this talk, I will explore an alternative path: first transforming the readings into symbolic states locally at each sensor, and then performing in-network indexing and aggregation of the resulting semantic models. I will use two statistical models to capture the behaviour at each sensor: Markov Chains and Hidden Markov Models. I will then describe two algorithms to aggregate individual models into a composite model. Finally, I will show how these composite models can be organised into a distributed index structure to answer semantic queries on the network with low communication overheads.

June 19, 2008
Model checking memoryful logics over one-counter automata
Arnaud Sangnier (ENS Cachan)

Abstract: We study complexity issues related to the model-checking problem for LTL with registers (a.k.a. freeze LTL) over one-counter automata. We consider several classes of one-counter automata (mainly deterministic vs. nondeterministic) and several syntactic fragments (restrictions on the number of registers and on the use of propositional variables for control locations). The logic has the ability to store a counter value and to test it later against the current counter value. We show that model checking LTL with registers over deterministic one-counter automata is PSPACE-complete with infinite accepting runs. By contrast, we prove that model checking LTL with registers over nondeterministic one-counter automata is undecidable in the infinitary and finitary cases, even if only one register is used and with no propositional variable. This contrasts with the facts that several verification problems for one-counter automata are known to be decidable with relatively low complexity, and that finitary satisfiability for LTL with a unique register is decidable. Finally, we explain how these results can be adapted to similar problems for its sister logic, first-order logic with data equality test.
This is joint work with Stéphane Demri and Ranko Lazic (University of Warwick).

June 18, 2008
Chance Constrained Network Design problem
Fausto Pascali (Università di Pisa)

Abstract: Given a communication network, the Robust Network Design problem asks for a minimum-cost capacity allocation that supports a set of non-simultaneous demand matrices. Results in the literature highlight which kinds of uncertainty sets lead to tractable robust optimization problems, but it is not clear which sets best describe real traffic fluctuations. When a probability distribution on traffic matrices is given, it is possible to estimate the probability that a future traffic matrix belongs to a chosen uncertainty set; however, the task of maximizing, for a given budget, the probability that a capacity allocation supports a future traffic demand turns out to be, in general, a difficult non-convex optimization problem belonging to the class of chance-constrained problems. We use recent results from the literature to build uncertainty sets that approximate chance constraints well, while the computational tractability of the model is preserved and a predefined level of network reliability is achieved.

June 12, 2008
Abstract lattices and infinite alphabets
Tristan Le Gall (IRISA, Rennes, France)

Abstract: This talk proposes to view regular languages over a finite or infinite alphabet as a lattice. This lattice is used, within the framework of abstract interpretation, to verify systems with unbounded queues or stacks.
While regular languages over a finite alphabet are well understood, representation and manipulation problems arise when the alphabet is infinite. I will introduce the notion of lattice automata, which are similar to finite automata but whose transitions are labelled by elements of an infinite atomic lattice.
I will detail the determinization and minimization operations on this type of automaton and explain how a widening operator can be defined. I will also present some experimental results, both for the analysis of FIFO systems and for interprocedural analysis.

May 8, 2008
Search via quantum walk
Jérémie Roland (ULB / UC Berkeley)

Abstract: By suppressing details, any search problem may be cast as the problem of finding a "marked" element from a set X with n elements. Let M be the set of marked elements. One approach to finding an element of M, if it is not empty, is to repeatedly sample from X uniformly until a marked element is picked. A more cost-effective approach re-uses resources expended in generating the first sample (time, random bits, black-box queries, etc.) by simulating the steps of a Markov chain with state space X to generate the next sample. This approach often takes advantage of some structure present in the ground set X and the Markov chain, and leads to a more efficient algorithm. In this talk, we study quantum analogues of this randomized scheme. In particular, we propose a new method for designing quantum search algorithms for finding a "marked" element in the state space of a classical Markov chain.
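The classical scheme being quantized can be sketched directly: instead of repeated independent uniform samples, run the chain from the current state and stop at the first marked element. The 4-cycle chain below is a toy state space of our own choosing:

```python
import random

def walk_until_marked(transitions, marked, start, rng, max_steps=10**6):
    """Classical search by Markov chain simulation: take steps of the chain
    from `start` until a marked state is hit.  Returns (state, steps)."""
    state, steps = start, 0
    while state not in marked and steps < max_steps:
        states, probs = zip(*transitions[state].items())
        state = rng.choices(states, weights=probs)[0]
        steps += 1
    return state, steps

# Symmetric random walk on a 4-cycle with one marked vertex.
chain = {i: {(i - 1) % 4: 0.5, (i + 1) % 4: 0.5} for i in range(4)}
print(walk_until_marked(chain, marked={2}, start=0, rng=random.Random(1)))
```

Quantum walk search replaces these classical steps with a unitary walk operator, typically hitting a marked element in roughly the square root of the classical number of steps.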

March 20, 2008
Towards Interactive Belief, Knowledge, and Provability: Possible Application to Zero-Knowledge Proofs
Simon Kramer (LIX, Paris)

Abstract: We argue that modal operators of interactive belief, knowledge, and provability are definable as natural generalisations of their non-interactive counterparts, and that zero-knowledge proofs (from cryptography) have a natural (modal) formulation in terms of interactive individual knowledge, non-interactive propositional knowledge and interactive provability.
Our work is motivated by van Benthem's investigation into rational agency and dialogue and our attempt to redefine modern cryptography in terms of modal logic.

March 13, 2008
Is Lazy Abstraction a Decision Procedure for Broadcast Protocols?
Rayna Dimitrova (Saarland University, Germany)

Abstract: Lazy abstraction is an interesting verification method based on the scheme coined counterexample-guided abstraction refinement (CEGAR). It builds up an abstract reachability tree by locally refining abstractions in order to eliminate spurious counterexamples in smaller and smaller subtrees. The method has demonstrated its practical usefulness for example for verification of systems code. It is still open whether its practical performance is matched by its theoretical qualities, i.e., whether the method terminates for already known decidable verification problems. In this talk, we answer the question positively for broadcast protocols and other infinite-state models in the class of so-called well-structured systems. This extends an existing result on systems with a finite bisimulation quotient.

March 6, 2008
Multi-triangulations as complexes of star polygons
Vincent Pilaud (ENS, Paris)
Abstract: Let n and k be two integers with n > 2k+1. A k-triangulation of a convex n-gon is a maximal set of diagonals such that no k+1 of them mutually cross. Several properties of k-triangulations nicely generalize what is known for triangulations. Among them:
1) all the k-triangulations of a convex n-gon have the same number of edges, namely k(2n-2k-1).
2) every edge of a k-triangulation (of length at least k+1) can be flipped in a unique way. The graph of flips is regular and connected.
3) the set of k-triangulations of a convex n-gon is counted by the same Catalan determinant that counts families of k mutually non-crossing Dyck paths.
Although these results have been proved before (with recursive arguments), we show how to obtain (1) and (2) directly using the new concept of stars in multi-triangulations, which generalize triangles in triangulations. We hope this tool will ease the analysis of the further topics and open questions that we will briefly discuss.
(Joint work with Francisco Santos, Universidad de Cantabria, Santander, Spain)
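Property (1) can be checked by brute force on a tiny case. The following sketch (our own toy verification, not the paper's argument) enumerates all maximal crossing-free chord sets of the pentagon, i.e., the case k = 1, n = 5:

```python
from itertools import combinations

# For k = 1 and n = 5, every 1-triangulation should have
# k(2n - 2k - 1) = 7 edges (5 boundary edges plus 2 non-crossing
# diagonals), and there are Catalan(3) = 5 of them, matching property (3).
n, k = 5, 1
edges = list(combinations(range(n), 2))   # all 10 chords of the 5-gon

def cross(e, f):
    """Chords (a,b), (c,d) with a<b, c<d cross iff endpoints interleave."""
    (a, b), (c, d) = e, f
    return a < c < b < d or c < a < d < b

def crossing_free(es):
    return not any(cross(e, f) for e, f in combinations(es, 2))

maximal = []
for size in range(len(edges), 0, -1):     # largest candidate sets first
    for es in combinations(edges, size):
        s = set(es)
        if crossing_free(es) and not any(s < t for t in maximal):
            maximal.append(s)             # not contained in a bigger one

assert all(len(t) == k * (2 * n - 2 * k - 1) for t in maximal)  # 7 edges
assert len(maximal) == 5                                        # Catalan C_3
```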

February 19, 2008 (TUESDAY)
Stackelberg Network Pricing Games
Martin Hoefer (RWTH Aachen)
Abstract: We study a multi-player one-round game in which a leader can set prices for a subset of m pricable edges in a graph. The other edges have a fixed cost. Based on the leader's decision, one or more followers optimize a polynomial-time solvable combinatorial minimization problem and choose a minimum-cost solution satisfying their requirements. The leader receives as revenue the total amount of prices paid by the followers for pricable edges in their solutions. Our first result is a tight analysis of a single-price algorithm for revenue maximization with a single follower, which provides a (1+eps)log m approximation for any constant eps > 0. This can be extended to a (1+eps)(log k + log m)-approximation for k followers. For the latter we show almost matching hardness results. Our second result is that in the case of a single follower in Stackelberg bipartite vertex cover, there is an efficient algorithm to compute optimum prices using LP-duality techniques. It can be extended to provide constant-factor approximations for any constant number of followers.
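The single-price idea can be illustrated on a minimal toy instance (entirely our own; the actual algorithm works on general graphs and derives its candidate prices more carefully):

```python
# One follower must connect s to t; two parallel links are available:
#   - a fixed-cost edge of cost 10 (not owned by the leader)
#   - one pricable edge owned by the leader.
# The single-price heuristic tries the same price p on every pricable
# edge and keeps the price maximizing revenue.
FIXED_ALTERNATIVE = 10.0

def follower_choice(price):
    """The follower picks the cheaper option; ties go to the leader."""
    return "pricable" if price <= FIXED_ALTERNATIVE else "fixed"

def single_price_revenue(candidates):
    best_p, best_rev = None, 0.0
    for p in candidates:
        rev = p if follower_choice(p) == "pricable" else 0.0
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

# Candidate prices: thresholds where the follower's choice can change.
p, rev = single_price_revenue([1.0, 5.0, 10.0, 12.0])
assert (p, rev) == (10.0, 10.0)   # price out the alternative exactly
```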

February 14, 2008
Accelerated Data-flow Analysis
Grégoire Sutre (LABRI, France)
Abstract: Acceleration in symbolic verification consists in computing the exact effect of some control-flow loops in order to speed up the iterative fix-point computation of reachable states. Even if no termination guarantee is provided in theory, successful results were obtained in practice by different tools implementing this framework. In this talk, we show how to extend the acceleration framework to data-flow analysis. Compared to a classical widening/narrowing-based abstract interpretation, the loss of precision is controlled here by the choice of the abstract domain and does not depend on the way the abstract value is computed. We illustrate our approach on convex polyhedral data-flow analysis, and we provide a cubic-time acceleration-based algorithm for solving interval constraints with full multiplication.
Joint work with Jérôme Leroux.
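On a one-variable loop, the exact effect that acceleration computes has a simple closed form; the example below is our own toy, in the spirit of the interval analysis described above:

```python
# For the loop   while x < BOUND: x += STEP   the exact set of values of
# x at the loop head admits a closed form, with no widening needed.
BOUND, STEP, X0 = 10, 3, 0

# Reference result: concrete simulation of the loop.
x, seen = X0, set()
while True:
    seen.add(x)
    if x >= BOUND:
        break
    x += STEP

# Accelerated (closed-form) interval: x takes X0, X0+STEP, ... up to the
# first value >= BOUND, reached after ceil((BOUND - X0)/STEP) iterations.
n_iters = max(0, -(-(BOUND - X0) // STEP))       # ceiling division
accelerated = (X0, X0 + n_iters * STEP)

assert min(seen) == accelerated[0] and max(seen) == accelerated[1]
```

A widening-based analysis would typically first over-approximate to [0, +inf) and then try to recover precision by narrowing; acceleration gets the exact interval [0, 12] directly.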

December 20, 2007
Adaptive Convex Hull of Convex Hulls
Abstract: Computing the convex hull of a set of points has applications in many domains, and in particular in computer vision. It can be computed efficiently, in time proportional to the size of the produced result. We consider the particular case where the set of points is composed of simpler objects from a library, for which the convex hull has been precomputed. We show that for many instances an adaptive algorithm is a better choice.

December 13, 2007
Robust capacity assignment in telecommunications networks under demand uncertainty
Abstract: In telecommunications network design, one of the key parameters is the set of requirements, which in the past were demands based on historical data and/or demographic predictions. Because of the development of new technologies and the movement of customers due to competition, the requirements now present considerable uncertainty. We will discuss some models and methods for the capacity assignment problem that take this uncertainty into account.

December 6, 2007
Randomized shortest-path problems
Marco Saerens (IAG - UCL)
Abstract: This work addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread in the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance by minimizing the total travel cost. Nothing particular up to now -- you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid purely deterministic routing policies in order, for instance, to allow some continual exploration of the network, to avoid congestion, or to avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness/unpredictability in the routing policy, i.e., the routing policy is randomized. This problem, which we call the randomized shortest-path problem (RSP), is investigated. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread into the network, and is provided a priori by the designer. Then, necessary conditions allowing one to compute the optimal randomized policy (minimizing the expected routing cost) are derived. Iterating these necessary conditions, reminiscent of Bellman's value-iteration equations, allows one to compute the optimal policy, that is, the set of transition probabilities in each node, although no convergence proof is provided. Interestingly and surprisingly enough, the proposed model, while formulated in a totally different framework, is equivalent to Akamatsu's model (Akamatsu 1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a statistical-physics formalism that allows one to easily derive all the quantities of interest in an elegant, unified way.
For instance, it is shown that the optimal policy can be obtained by solving a simple linear system of equations. Finally, simulation results obtained on simple, illustrative, examples show that the model behaves as expected.

November 29, 2007
Probabilistic and topological semantics for timed automata
Thomas Brihaye (UMH)
Abstract: Like most models used in model-checking, timed automata are an idealized mathematical model for representing systems with strong timing requirements. In such mathematical models, properties can be violated due to unlikely events. Following ideas of Varacca and Völzer in their LICS'06 paper, we aim at defining a notion of 'fair' correctness for timed systems. For this purpose, we introduce a probabilistic semantics for timed automata, which ignores unlikely events and naturally gives rise to a notion of almost-sure satisfaction of properties in timed systems. We prove that almost-sure satisfaction has a corresponding topological interpretation in terms of largeness of the set of paths satisfying the property. Moreover, we show PSPACE-completeness of the LTL model-checking problem over finite computations. We will also discuss the more complicated case of infinite computations.
Joint work with Christel Baier, Nathalie Bertrand, Patricia Bouyer and Marcus Größer.

November 22, 2007
Modular Development of Hybrid Systems for Verification in Coq
Abstract: In this talk I will present an account of a modular formalisation of hybrid automata based on predicate abstraction. We will give algorithms for building trajectory trees that can be used in reachability analysis, and show a formalisation of the theory of hybrid automata and its extensions using module types and functors in Coq. Our approach paves the way for the use of the Coq theorem prover (a proof assistant based on constructive type theory) in the formalisation and verification of hybrid systems.
Joint work with Olga Tveretina.

November 8, 2007
Hub location problems in public transport planning: New mathematical model and solution approaches
Shahin Gelareh (ITWM - Kaiserslautern)
Abstract: In this talk, we will propose a new mixed integer programming model for hub location problems arising in public transport. Due to the NP-hardness of the problem, even small instances cannot be solved to optimality in a reasonable amount of time. Therefore, decomposition methods may be used to solve larger instances of the problem. We will present a Benders decomposition approach and a simple neighborhood search approach to solve the problems for single-period and multi-period planning. Computational results on the CAB and AP instances confirm the superiority of our model over the available models in the literature. In addition, they show that our Benders approach always dominates the available standard solvers.

October 18, 2007
Farey Sequences and Counting Primitive Lattice Points
Mihai Patrascu (MIT, USA)
Abstract: A primitive lattice point is a point (x,y) with x,y integers and gcd(x,y)=1. "Counting" primitive lattice points inside various planar shapes has been an active area of mathematics in the past decades. But because an elementary formula is sought, the mathematical definition of "counting" there is "approximating asymptotically". We now present the first results on the informatics view of this problem: an efficient algorithm to count primitive lattice points exactly.
The problem is related to another entertaining mathematical problem that has crossed the border into informatics. The Farey sequence of order N is the sorted sequence of irreducible fractions a/b, with a<=b<=N. There are well-known algorithms for generating this sequence in its entirety. We now describe algorithms for finding just one value of the sequence (say the K-th value) significantly faster.
Joint work with Jakub Pawlewicz.
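For background, the classical next-term recurrence that generates the entire Farey sequence in order (well known, and distinct from the talk's faster selection algorithm) can be stated in a few lines:

```python
from math import gcd

# Given two consecutive fractions a/b < c/d in the Farey sequence F_N,
# the next one is p/q with k = (N + b) // d, p = k*c - a, q = k*d - b.
def farey(N):
    a, b, c, d = 0, 1, 1, N
    seq = [(a, b)]
    while c <= N:
        seq.append((c, d))
        k = (N + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
    return seq

F5 = farey(5)
assert F5[0] == (0, 1) and F5[-1] == (1, 1)
assert all(gcd(a, b) == 1 for a, b in F5)   # all fractions irreducible
assert len(F5) == 11                        # |F_5| = 1 + sum of phi(b), b<=5
```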

October 11, 2007
Wireless Multi-hop Networks: Opportunities and Challenges
Tinku Rasheed (Create-Net, Trento, Italy)
Abstract: Wireless multi-hop networking is gaining practical importance with the continuous proliferation of wireless devices and pervasive communication technologies. In such a network, mobile devices operate not only as hosts but also as routers, forwarding packets in a multi-hop fashion for other mobile nodes that may not be within direct wireless transmission range of each other. Multi-hopping has important implications for several wireless networking domains, including wireless sensor networks, wireless mesh networks, personal area networks, etc. Routing of information is an important challenge in multi-hop wireless networks, grappling with the twin requirements of connectivity and scalability. In this talk, we discuss the issues and challenges related to scalable routing in wireless networks and introduce an adaptive and scalable routing architecture for multi-hop networking applications. We shall also explore the impact of the scalable routing framework on different wireless network application scenarios.

October 1, 2007 (ON MONDAY)
Colorful colorings of graphs
Mario Valencia-Pabon (LIPN, Paris, France)
Abstract: A proper k-coloring of a simple graph G=(V,E) is a partition P = {V_1,...,V_k} of the vertex set V such that each V_i (called a color class) is an independent set of G. A vertex v in a color class V_i is called colorful if it is adjacent to at least one vertex in each color class V_j, with j not equal to i. A color class V_i is called colorful if it contains at least one colorful vertex, and a coloring P is called colorful if every color class is colorful. The maximum order of a colorful coloring of a graph G is called the b-chromatic number; this parameter was introduced by Irving and Manlove (1999). In this talk, I will survey the most recent results concerning maximum colorful colorings of graphs. In particular, I will show that the problem of computing the b-chromatic number of a graph does not admit a Polynomial-Time Approximation Scheme.
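The definitions above translate directly into a small checker; the graph and colorings below are our own illustrative example on the path P_5:

```python
# True iff `color` is a proper coloring in which every color class
# contains a vertex adjacent to all the other classes (a colorful, or
# b-, coloring in the terminology above).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # path 0-1-2-3-4

def is_colorful(adj, color):
    if any(color[u] == color[v] for u in adj for v in adj[u]):
        return False                              # not a proper coloring
    classes = set(color.values())
    return all(
        any(classes - {c} <= {color[v] for v in adj[u]}
            for u in adj if color[u] == c)        # some colorful vertex
        for c in classes)

# A colorful 3-coloring of P_5 (so its b-chromatic number is at least 3):
assert is_colorful(adj, {0: 3, 1: 1, 2: 2, 3: 3, 4: 1})
# A proper 3-coloring that is NOT colorful (class 1 has no colorful vertex):
assert not is_colorful(adj, {0: 1, 1: 2, 2: 3, 3: 2, 4: 1})
```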

September 13, 2007
On discretization: an application to location models
Luis Gouveia (U. Lisbon, Portugal)
Abstract: Consider an ILP model with two sets of variables for each "object" (for instance, an "arc" or "node" if the problem is associated to network design) of the problem: one binary variable Ui indicating whether the "object" i is included in the solution and an integer variable indicating a quantity associated to the object i (flow, for instance). We discuss alternative models that are obtained from the previous one, by replacing these two variables by one set of binary variables Ziq indicating whether object i is in the solution with value "q". We discuss equivalence results between the two classes of models and focus our talk on location models with modular link costs.
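The variable substitution described above can be checked by brute force on a single "object"; the toy model below is our own, assuming the quantity is between 1 and Q whenever the object is used:

```python
from itertools import product

# Replace the pair (U, y), with U binary and y an integer quantity in
# {0..Q} that is positive iff U = 1, by binaries Z[q] meaning "the
# object is used with value q", linked by
#     sum_q Z[q] = U    and    y = sum_q q * Z[q].
Q = 3

# Feasible (U, y) pairs in the original two-variable model:
original = {(U, y) for U in (0, 1) for y in range(Q + 1)
            if (U == 1 and y >= 1) or (U == 0 and y == 0)}

# Feasible (U, y) pairs induced by the discretized model:
discretized = set()
for Z in product((0, 1), repeat=Q):   # Z[q-1] <=> "used with value q"
    if sum(Z) <= 1:                   # at most one value is chosen
        U = sum(Z)
        y = sum(q * z for q, z in zip(range(1, Q + 1), Z))
        discretized.add((U, y))

assert original == discretized        # the two encodings agree
```

The interest of the Ziq formulation is not this equivalence itself but the stronger linear relaxations it permits, e.g., when costs are modular in q.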

May 16, 2007 (ON WEDNESDAY - room 2NO800G ("rotule"))
On Geometric Spanners
Abstract: A geometric graph is a graph whose vertices are points in the plane and whose edges are segments weighted by their length. A geometric graph G is a t-spanner (for a constant t>=1) when the weight of the shortest path in G between every pair of points a,b does not exceed t|ab|, where |ab| is the Euclidean distance between a and b. The smallest constant t having this property is the spanning ratio or stretch factor of the graph. The goal in this area is to construct spanners with small stretch factor that possess additional properties, such as a linear number of edges, bounded degree, and so on. In this talk, we will review some of the standard techniques used to construct spanners, highlight some of the difficulties involved in the different construction techniques, and then outline some of our recent results.
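The stretch factor can be computed directly from the definition; the unit-square instance below is a toy example of ours:

```python
from itertools import combinations
from math import dist, inf, sqrt

# All-pairs shortest paths (Floyd-Warshall) divided by Euclidean
# distances give the stretch factor of a geometric graph.
points = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # the four sides only

n = len(points)
d = [[inf] * n for _ in range(n)]
for i in range(n):
    d[i][i] = 0.0
for i, j in edges:
    d[i][j] = d[j][i] = dist(points[i], points[j])
for k in range(n):                          # Floyd-Warshall
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

stretch = max(d[i][j] / dist(points[i], points[j])
              for i, j in combinations(range(n), 2))
assert abs(stretch - sqrt(2)) < 1e-9  # diagonal: path length 2 vs sqrt(2)
```

So the 4-cycle on the square is a sqrt(2)-spanner of its vertex set; adding one diagonal would lower the stretch at the cost of an extra edge, which is exactly the trade-off spanner constructions optimize.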

May 14, 2007 (EXCEPTIONALLY ON MONDAY - room 2NO7.08)
Issues and Solutions for Stream Data Processing and Management
Sharma Chakravarthy (University of Texas at Arlington)
Abstract: Stream data processing poses many challenges. Two important characteristics of stream data processing - bursty arrival rates and the need for near real-time performance - strain the allocation of limited resources in the system. Operator/query modeling, scheduling, and load shedding are major issues that differ from traditional database management systems (DBMSs).
In this presentation, we discuss the above issues and solutions. We elaborate on novel scheduling strategies to minimize tuple latency as well as total memory requirement. We first introduce a path capacity strategy (PCS) with the goal of minimizing tuple latency. We then compare the PCS and the Chain strategy to identify their limitations and propose additional scheduling strategies that improve upon them. Specifically, we introduce a segment strategy (SS) with the goal of minimizing the memory requirement, and its simplified version. In addition, we introduce a hybrid strategy, termed the threshold strategy (TS), to address the combined optimization of both tuple latency and memory requirement. Finally, we present the results of a wide range of experiments conducted to evaluate the efficiency and the effectiveness of the proposed scheduling strategies.
Currently, we are synergistically integrating stream and complex event processing. We will conclude the talk with some results from this effort.
This is joint work with my students: D. Jiang, R. Adaikkalavan, V. Garg.

May 3, 2007
Scheduling messages with offsets in automotive networks - a major performance boost
Nicolas Navet and Mathieu Grenier (LORIA, France)
Abstract: With the increasing use of electronics, making the best use of bandwidth is of primary importance in automotive networks. One element of a solution is to schedule messages with offsets, which leads to a desynchronization of the message streams that is very beneficial in terms of worst-case response times. Two problems, difficult from an algorithmic point of view, must be solved: choosing the offsets and computing the worst-case response times. The talk will discuss solutions in the case of Controller Area Network (CAN), by far the most widely used automotive network. Experiments demonstrate that offsets actually provide a major performance boost in terms of worst-case response time and may lengthen the lifespan of CAN, despite the availability of FlexRay.

April 24, 2007 (EXCEPTIONALLY ON TUESDAY 14:30 - FORUM F)
Conditional Independence for Prediction
François Fleuret (EPFL)
Abstract: In this talk I will present the simple and powerful idea of using conditional independence for prediction.
Modeling high-dimensional signals explicitly or through learning is often impossible due to a lack of either training data or computational power. However, a simple assumption of conditional independence of the signal components given the true state of interest or other hidden state variables often leads to a tractable form that can be handled with adequate algorithmic tricks.
I will first introduce this idea on a toy example, and then illustrate it with three different problems: face detection with a coarse-to-fine representation, multi-camera tracking with a probabilistic occupancy map, and learning from one sample with transfer learning.
In all these problems, I will also show the importance of algorithmic efficiency and how a tight integration between statistics and algorithmic design is the key to efficient and usable schemes.
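The conditional-independence assumption in its simplest form is the naive-Bayes factorization; the numbers and labels below are a made-up toy of ours, not from the talk:

```python
# With components X1, X2 assumed independent given the hidden state Y,
# the posterior factorizes:
#   P(Y | X1, X2) proportional to P(Y) * P(X1 | Y) * P(X2 | Y)
prior = {"face": 0.1, "background": 0.9}
# Per-component likelihoods P(Xi = 1 | Y) (made-up numbers):
lik = {"face": (0.8, 0.7), "background": (0.2, 0.3)}

def posterior(x1, x2):
    def comp_lik(p, x):        # Bernoulli likelihood of one component
        return p if x == 1 else 1 - p
    joint = {y: prior[y] * comp_lik(lik[y][0], x1) * comp_lik(lik[y][1], x2)
             for y in prior}
    z = sum(joint.values())
    return {y: v / z for y, v in joint.items()}

post = posterior(1, 1)
assert abs(sum(post.values()) - 1.0) < 1e-12
assert post["face"] > prior["face"]   # two positive detections raise belief
```

The tractability gain is that only per-component likelihoods need to be modeled, never the full joint distribution of the signal.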

April 19, 2007
A Symbolic Decision Procedure for Robust Safety of Timed systems
Mani Swaminathan (Uni. Oldenburg, Germany)
Abstract: We present a symbolic algorithm for deciding safety (reachability) of timed systems modelled as Timed Automata (TA), under the notion of robustness w.r.t. infinitesimal clock drift. The algorithm uses a "neighbourhood" operator on zones that is efficient to compute. In contrast to other approaches for robustly deciding reachability of TA, which are either region-based (Puri, 2000), (DeWulf, Doyen, Markey, and Raskin, 2004), or involve a combination of forward and backward analyses when zone-based (Daws and Kordy, 2006), ours is a zone-based, fully forward algorithm that is guaranteed to terminate and requires no special treatment of cycles in TA.

March 30, 2007 (EXCEPTIONALLY ON FRIDAY - room 2NO6.07)
Computational Intelligence in the Chemical Industry
Elsa Jordaan (Core R&D, Engineering & Process Sciences, Dow Benelux B.V.)
Abstract: Computational Intelligence has revolutionized the way engineers in the process industry solve highly complex problems. In the past, modeling and optimization of chemical processes and material characteristics were done by developing a fundamental model or a statistical model. Fundamental models often required years of research. Furthermore, the calculations of these models were often too time-consuming to be used for online optimization and control. Statistical models, in turn, required the availability of good data that could be linearised. Many industrial data sets turned out to be too noisy and high-dimensional to be handled with statistical techniques.
The introduction of Neural Networks (NN) as a new tool to quickly model highly nonlinear processes marked a clear turning point in the chemical industry. Since then, computational intelligence methods, like NN, Support Vector Machines (SVM) and Genetic Programming (GP), have been applied to a wide variety of problems in the process industry. These methods have not only become essential in the set of tools available to solve industrial problems, but also generated millions of dollars in profit due to improved process operability.
The list of successful applications at the Dow Chemical Company includes:
- A NN-application to predict NOx-emissions.
- Outlier detection using SVM.
- A GP-model to predict the biomass concentration in a batch reactor.
- Using GP to help developing new rheological insights.
- Particle Swarm Optimization (PSO) for optimizing properties of polymers.

March 27, 2007 (EXCEPTIONALLY ON TUESDAY 12:30 in room 2NO7.08)
Compact representations of Geometric Data Structures
Luca Castelli Aleardi (Polytechnique, Paris)
Abstract: We consider the problem of designing compact and succinct representations for geometric data structures. As opposed to raw compression, the focus here is on designing data structures that preserve the possibility of answering local incidence queries in constant time, while using as little memory as possible. One of the main contributions of this work is a general algorithmic framework for designing compact representations of structures such as planar graphs and 3D meshes. As an application, we propose solutions for compactly representing the connectivity (combinatorial information) of some classes of planar graphs: in particular, we design the first optimal succinct representations for planar triangulations and 3-connected planar graphs.
Keywords: graph encoding, succinct and compact representations, triangulations, geometric data structures, planar graphs

March 22, 2007
Curves in the sand: algorithmic drawing
Abstract: Sand drawings form a part of many cultural artistic traditions. Depending on the part of the world in which they occur, such drawings have different names such as sona, kolam, and Malekula drawings. Gaussian graphs are mathematical objects studied in the disciplines of graph theory and topology. We uncover a bridge between sand drawings and Gaussian graphs, leading to a variety of new mathematical problems related to sand drawings. In particular, we analyze sand drawings from combinatorial and geometric points of view, and describe algorithms that generate sona drawings under a variety of different models and constraints.

March 21, 2007 (EXCEPTIONALLY ON WEDNESDAY 11:00 in room 2NO7.07)
The cost of punctuality
Joël Ouaknine (Oxford, UK)
Abstract: In an influential paper titled "The Benefits of Relaxing Punctuality" (JACM 1996), Alur, Feder, and Henzinger introduced Metric Interval Temporal Logic (MITL) as a fragment of the real-time logic Metric Temporal Logic (MTL) in which exact, or punctual, timing constraints are banned. Their main result showed that model checking and satisfiability for MITL are both EXPSPACE-complete.
Until recently, it was widely believed that admitting even the simplest punctual specifications in any linear-time temporal logic would automatically lead to undecidability. Although this was recently disproved, until now no punctual fragment of MTL was known to have even primitive recursive complexity (with certain decidable fragments having provably non-primitive recursive complexity).
In this talk, we will present a 'co-flat' subset of MTL that is capable of expressing a large class of punctual specifications and for which model checking (although not satisfiability) has no complexity cost over MITL. Our logic is moreover qualitatively different from MITL in that it can express properties that are not timed-regular. Correspondingly, our decision procedures do not involve translating formulas into finite-state automata, but rather into certain kinds of reversal-bounded Turing machines.
This is joint work with Patricia Bouyer, Nicolas Markey, and James Worrell.

March 8, 2007
Modes and Cuts in Metabolic networks: Complexity and Algorithms
Leen Stougie (TU Eindhoven and CWI Amsterdam, Netherlands)
Abstract: Constraint-based approaches recently brought new insight into our understanding of metabolism. By making very simple assumptions such as that the system is at steady-state and some reactions are irreversible, and without requiring kinetic parameters, general properties of the system can be derived. A central concept in this methodology is the notion of an elementary mode (EM for short). The computation of EMs still constitutes a limiting step in metabolic studies and several algorithms have been proposed to address this problem leading to increasingly faster methods.
First, results regarding network consistency will be presented. Most consistency problems can be solved in polynomial time (are easy). Then the complexity of finding and counting elementary modes is presented, showing in particular that finding one elementary mode is easy but that this task becomes hard when a specific EM (i.e., an EM containing some specified reactions) is sought. A number of EM-related problems will also be considered.
Then models and results on a closely related task are presented: the computation of so-called minimal reaction cut sets. This problem is also hard. Two positive results will be presented, both of which avoid computing EMs prior to the computation of reaction cuts. The first is a polynomial-time approximation algorithm for finding a minimum reaction cut set. The second is a test for verifying whether a set of reactions constitutes a reaction cut; this test could be readily included in existing algorithms to improve their performance. Finally, we discuss the complexity of other cut-related problems.
Joint work with Vincent Lacroix, Alberto Marchetti-Spaccamela, Marie-France Sagot and Flavio Chierichetti.
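The steady-state assumption at the heart of the constraint-based approach is just the linear condition S·v = 0 (stoichiometric matrix times flux vector); the tiny pathway below is a made-up example of ours:

```python
# Toy network:  R1: -> A,   R2: A -> B,   R3: B ->   (a linear pathway)
S = [
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
]
v = [2.0, 2.0, 2.0]   # equal flux through the whole pathway

residual = [sum(S[i][j] * v[j] for j in range(3)) for i in range(2)]
assert residual == [0.0, 0.0]   # internal metabolites at steady state

# Up to scaling, this v is the unique minimal steady-state flux with
# support {R1, R2, R3}, i.e., the single elementary mode of this toy
# network; any imbalance breaks steady state:
bad = [sum(S[i][j] * w for j, w in enumerate([2.0, 1.0, 1.0]))
       for i in range(2)]
assert bad != [0.0, 0.0]
```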

March 1, 2007
Expressiveness of Separation Logic
Etienne Lozes (LSV, ENS Cachan)
Abstract: Separation logic was introduced by Reynolds and O'Hearn to ease Hoare-Floyd-style annotation of programs in an imperative language manipulating pointers. To this end, classical logic is extended with a separating conjunction and a contextual implication, (distant) analogues of the tensor and the linear implication of linear logic. After introducing this logic, I will give a few examples illustrating its usefulness for program proofs, in particular the associated precondition and postcondition calculi. I will then present several expressiveness results for various fragments, as well as decidability and complexity results for the validity and model-checking problems. Finally, I will discuss extensions to the setting of shared-memory programming.

February 22, 2007
On frequency assignment to cellular antennas, RFID protocols and geometry
Shakhar Smorodinsky (Courant Institute for Mathematical Sciences, NY, USA)
Abstract: The problem of frequency assignment to cellular antennas was traditionally treated as a graph coloring problem, where the vertices of the graph are the given antennas and the edges are those pairs of antennas that overlap in their reception range. Thus, if we color the vertices of the graph such that no two vertices connected by an edge have the same color, we guarantee that there will be no conflicting antennas. However, this model is too restrictive. In this model, if a client lies within the reception range of, say, k antennas, then every pair of these antennas is conflicting, and therefore the antennas must be assigned k distinct colors (i.e., frequencies). However, if one of these antennas is assigned a color (say 1) that no other antenna is assigned (even if all the other antennas are assigned the same color, say 2), then we use a total of two colors and this client can still be served.
We present a new model for frequency assignment in which far fewer frequencies are used while every possible client is still served. This model is also related to RFID protocols. We will survey recent developments in this area and present challenges for both theoretical and practical future research.
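The relaxed requirement is what is known as a conflict-free coloring: every covered client must see some antenna whose color is unique among the antennas covering it. The interval instance below is a toy example of ours:

```python
# Antennas modelled as intervals on a line, each with a color (frequency).
antennas = [((0.0, 4.0), 1), ((1.0, 3.0), 2), ((2.0, 2.5), 1)]

def conflict_free(antennas, clients):
    """Every client covered by >= 1 antenna must see a unique color."""
    for x in clients:
        colors = [c for (lo, hi), c in antennas if lo <= x <= hi]
        if colors and not any(colors.count(c) == 1 for c in colors):
            return False
    return True

# Sample clients, including one covered by all three antennas:
clients = [0.5, 1.5, 2.2, 3.5, 5.0]
assert conflict_free(antennas, clients)
```

In the classical graph-coloring model these three pairwise-overlapping antennas would form a clique and need 3 colors; here 2 colors serve every sampled client, since the client covered by all three still sees color 2 exactly once.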